WorldWideScience

Sample records for constraining emission models

  1. Constraining Methane Emissions from Natural Gas Production in Northeastern Pennsylvania Using Aircraft Observations and Mesoscale Modeling

    Science.gov (United States)

    Barkley, Z.; Davis, K.; Lauvaux, T.; Miles, N.; Richardson, S.; Martins, D. K.; Deng, A.; Cao, Y.; Sweeney, C.; Karion, A.; Smith, M. L.; Kort, E. A.; Schwietzke, S.

    2015-12-01

    Leaks in natural gas infrastructure release methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated fugitive emission rate associated with the production phase varies greatly between studies, hindering assessment of the net efficiency of natural gas as an energy source. This study presents a new application of inverse methodology for estimating regional fugitive emission rates from natural gas production. Methane observations across the Marcellus region in northeastern Pennsylvania were obtained during a three-week flight campaign in May 2015 performed by a team from the National Oceanic and Atmospheric Administration (NOAA) Global Monitoring Division and the University of Michigan. In addition to these data, CH4 observations were obtained from automobile campaigns during various periods from 2013 to 2015. An inventory of CH4 emissions was then created for various sources in Pennsylvania, including coal mines, enteric fermentation, industry, waste management, and unconventional and conventional wells. As a first-guess emission rate for natural gas activity, a leakage rate equal to 2% of the natural gas production was applied at the locations of unconventional wells across PA. These emission rates were coupled to the Weather Research and Forecasting model with the chemistry module (WRF-Chem), and atmospheric CH4 concentration fields at 1 km resolution were generated. Projected atmospheric enhancements from WRF-Chem were compared to observations, and the emission rate from unconventional wells was adjusted to minimize errors between observations and simulation. We show that the modeled CH4 plume structures match observed plumes downwind of unconventional wells, providing confidence in the methodology. In all cases, the fugitive emission rate was found to be lower than our first guess. 
In this initial emission configuration, each well has been assigned the same fugitive emission rate, which can potentially impair our ability to match the observed spatial variability
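The adjustment step described above can be sketched in a few lines: if atmospheric enhancements scale linearly with the emission rate, the simulated plume from the 2% first-guess leak rate can be rescaled by a single least-squares factor to best match the observations. The transect values below are hypothetical, not the campaign's data.

```python
# Single-factor inversion sketch (hypothetical data). Assumes enhancements
# respond linearly to the emission rate, so one multiplicative factor
# applied to the first-guess simulation minimizes the model-obs mismatch.

def best_scale_factor(simulated, observed):
    """Least-squares multiplier a minimizing sum((obs - a*sim)^2)."""
    num = sum(o * s for o, s in zip(observed, simulated))
    den = sum(s * s for s in simulated)
    return num / den

# Hypothetical CH4 enhancements (ppb) along a flight transect:
sim_2pct = [12.0, 30.0, 18.0, 5.0]   # simulated with a 2% leak rate
obs      = [6.5, 15.2, 9.1, 2.4]     # aircraft observations

alpha = best_scale_factor(sim_2pct, obs)
leak_rate = 0.02 * alpha             # implied fugitive emission rate
```

Here `alpha` below one reproduces the paper's qualitative finding that the optimized leak rate falls below the 2% first guess.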

  2. Constraining the uncertainty in emissions over India with a regional air quality model evaluation

    Science.gov (United States)

    Karambelas, Alexandra; Holloway, Tracey; Kiesewetter, Gregor; Heyes, Chris

    2018-02-01

    To evaluate uncertainty in the spatial distribution of air emissions over India, we compare satellite and surface observations with simulations from the U.S. Environmental Protection Agency (EPA) Community Multi-Scale Air Quality (CMAQ) model. Seasonally representative simulations were completed for January, April, July, and October 2010 at 36 km × 36 km using anthropogenic emissions from the Greenhouse Gas-Air Pollution Interaction and Synergies (GAINS) model following version 5a of the Evaluating the Climate and Air Quality Impacts of Short-Lived Pollutants project (ECLIPSE v5a). We use both tropospheric columns from the Ozone Monitoring Instrument (OMI) and surface observations from the Central Pollution Control Board (CPCB) to closely examine modeled nitrogen dioxide (NO2) biases in urban and rural regions across India. Spatially averaged evaluation against satellite retrievals indicates a low bias in the modeled tropospheric column (-63.3%), reflecting broad low biases in mostly non-urban regions (-70.1% in rural areas) across the subcontinent and somewhat smaller low biases in semi-urban areas (-44.7%), with the threshold between semi-urban and rural defined as 400 people per km2. In contrast, modeled surface NO2 concentrations exhibit a slight high bias of +15.6% when compared to surface CPCB observations, which are predominantly located in urban areas. Conversely, in examining extremely population-dense urban regions with more than 5000 people per km2 (dense-urban), we find model overestimates in both the column (+57.8%) and at the surface (+131.2%) compared to observations. Based on these results, we find that existing emission fields for India may overestimate urban emissions in densely populated regions and underestimate rural emissions. However, if we rely on model evaluation with predominantly urban surface observations from the CPCB, comparisons reflect model high biases, contradicting the knowledge gained using satellite observations. 
Satellites thus
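The evaluation metric behind the quoted percentages is the normalized mean bias, grouped here by the 400 people/km2 rural/semi-urban cutoff named in the abstract. The grid-cell values are invented for illustration.

```python
# Normalized mean bias (NMB) by population-density class, as in the
# evaluation above. Cell values are hypothetical; the 400 people/km^2
# threshold follows the text.

def normalized_mean_bias(model, obs):
    """NMB (%) = 100 * (sum(model) - sum(obs)) / sum(obs)."""
    return 100.0 * (sum(model) - sum(obs)) / sum(obs)

cells = [  # (population density, modeled NO2 column, observed column)
    (150, 0.4, 1.5), (300, 0.5, 1.6),    # rural (< 400 people/km^2)
    (900, 1.1, 1.9), (2000, 1.3, 2.1),   # semi-urban
]
rural = [(m, o) for d, m, o in cells if d < 400]
semi  = [(m, o) for d, m, o in cells if d >= 400]

nmb_rural = normalized_mean_bias([m for m, _ in rural], [o for _, o in rural])
nmb_semi  = normalized_mean_bias([m for m, _ in semi],  [o for _, o in semi])
```

With these made-up values both classes are biased low, with the rural bias more negative, mirroring the pattern the study reports.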

  3. Refined Use of Satellite Aerosol Optical Depth Snapshots to Constrain Biomass Burning Emissions in the GOCART Model

    Science.gov (United States)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James

    2017-10-01

    Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide location and strength for BB aerosol sources. Our previous work shows that to first order, satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD, and effectively, the smoke source strength. We now refine the satellite-snapshot method and investigate where applying simple multiplicative emission adjustment factors alone to the widely used Global Fire Emission Database version 3 emission inventory can achieve regional-scale consistency between Moderate Resolution Imaging Spectroradiometer (MODIS) AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. The model and satellite AOD are compared globally, over a set of BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. Regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. We refine our approach to address physically based limitations of our earlier work (1) by expanding the number of fire cases from 124 to almost 900, (2) by using scaled reanalysis-model simulations to fill missing AOD retrievals in the MODIS observations, (3) by distinguishing the BB components of the total aerosol load from background aerosol in the near-source regions, and (4) by including emissions from fires too small to be identified explicitly in the satellite observations. The small-fire emission adjustment shows the complementary nature of correcting for source strength and adding geographically distinct missing sources. Our analysis indicates that the method works best for fire cases where the BB fraction of total AOD is high, primarily evergreen or deciduous forests. 
In heavily polluted or agricultural burning regions, where smoke and background AOD values tend to be comparable, this approach
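The multiplicative adjustment reduces to a ratio once the background aerosol has been separated out, assuming modeled BB AOD responds linearly to source strength. The AOD values below are illustrative.

```python
# Multiplicative emission adjustment sketch: scale the inventory's source
# strength by the ratio of observed to simulated biomass-burning (BB) AOD
# after removing the background component. Values are hypothetical.

def bb_adjustment_factor(aod_obs, aod_model, aod_background):
    """Factor by which to scale the BB emission so model matches MODIS."""
    bb_obs = aod_obs - aod_background
    bb_model = aod_model - aod_background
    return bb_obs / bb_model

f = bb_adjustment_factor(aod_obs=0.85, aod_model=0.45, aod_background=0.10)
# f > 1 means the inventory underestimates this fire's source strength.
```

The ratio also makes the stated limitation visible: when background AOD is comparable to the smoke AOD, both differences are small and the factor becomes unstable, which is why the approach degrades in heavily polluted or agricultural burning regions.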

  4. CONSTRAINING RADIO EMISSION FROM MAGNETARS

    Energy Technology Data Exchange (ETDEWEB)

    Lazarus, P.; Kaspi, V. M.; Dib, R. [Department of Physics, Rutherford Physics Building, McGill University, 3600 University Street, Montreal, Quebec H3A 2T8 (Canada); Champion, D. J. [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn (Germany); Hessels, J. W. T., E-mail: plazar@physics.mcgill.ca [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands)

    2012-01-10

    We report on radio observations of five magnetars and two magnetar candidates carried out at 1950 MHz with the Green Bank Telescope in 2006-2007. The data from these observations were searched for periodic emission and bright single pulses. Also, monitoring observations of magnetar 4U 0142+61 following its 2006 X-ray bursts were obtained. No radio emission was detected for any of our targets. The non-detections allow us to place luminosity upper limits of L_1950 ≲ 1.60 mJy kpc^2 for periodic emission and L_1950,single ≲ 7.6 Jy kpc^2 for single-pulse emission. These are the most stringent limits yet for the magnetars observed. The resulting luminosity upper limits together with previous results are discussed, as is the importance of further radio observations of radio-loud and radio-quiet magnetars.
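The quoted limits are pseudo-luminosities: a flux-density upper limit scaled by distance squared, which is why they carry units of mJy kpc^2. The numbers below are illustrative, not the paper's.

```python
# Pseudo-luminosity upper limit from a non-detection: L = S_min * d^2,
# with S_min the flux-density sensitivity limit (mJy) and d the source
# distance (kpc). Example values are hypothetical.

def luminosity_limit(flux_limit_mjy, distance_kpc):
    """Pseudo-luminosity upper limit in mJy kpc^2."""
    return flux_limit_mjy * distance_kpc ** 2

# e.g. a 0.05 mJy sensitivity limit for a magnetar at 4 kpc:
L_per = luminosity_limit(0.05, 4.0)
```

The scaling makes clear why the same survey sensitivity yields different luminosity limits for magnetars at different distances.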

  5. Emission constrained secure economic dispatch

    International Nuclear Information System (INIS)

    Arya, L.D.; Choube, S.C.; Kothari, D.P.

    1997-01-01

    This paper describes a methodology for secure economic operation of a power system that accounts for emission constraints both areawise and in total. The Davidon-Fletcher-Powell method of optimization has been used. Inequality constraints are accounted for by a penalty function. Sensitivity coefficients have been used to evaluate the gradient vector as well as to calculate the incremental transmission loss (ITL). AC load flow results are required only at the outset. The algorithm has been tested on the IEEE 14- and 25-bus test systems. (Author)
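As a minimal illustration of emission-constrained dispatch (not the paper's Davidon-Fletcher-Powell search with penalty terms), the two-unit case with both the power balance and the emission cap binding reduces to two linear equations in the unit outputs. All unit data below are hypothetical and losses are ignored.

```python
# Two-unit emission-constrained dispatch sketch for the case where both
# constraints are binding: solve P1 + P2 = demand and
# e1*P1 + e2*P2 = emis_cap. Emission rates e1, e2 are hypothetical.

def dispatch_two_units(demand, emis_cap, e1, e2):
    """Outputs (P1, P2) satisfying power balance and the emission cap."""
    p1 = (emis_cap - e2 * demand) / (e1 - e2)
    return p1, demand - p1

P1, P2 = dispatch_two_units(demand=400.0, emis_cap=2.6, e1=0.010, e2=0.004)
```

As expected, the cap shifts load onto the cleaner unit (`e2`); a penalty-function method like the paper's converges to the same point while also trading off generation cost when the cap is slack.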

  6. Lightning NOx emissions over the USA constrained by TES ozone observations and the GEOS-Chem model

    Science.gov (United States)

    Jourdain, L.; Kulawik, S. S.; Worden, H. M.; Pickering, K. E.; Worden, J.; Thompson, A. M.

    2010-01-01

    Improved estimates of NOx from lightning sources are required to understand tropospheric NOx and ozone distributions, the oxidising capacity of the troposphere and corresponding feedbacks between chemistry and climate change. In this paper, we report new satellite ozone observations from the Tropospheric Emission Spectrometer (TES) instrument that can be used to test and constrain the parameterization of the lightning source of NOx in global models. Using the National Lightning Detection Network (NLDN) and the Long Range Lightning Detection Network (LRLDN) data as well as the HYSPLIT transport and dispersion model, we show that TES provides direct observations of ozone-enhanced layers downwind of convective events over the USA in July 2006. We find that the GEOS-Chem global chemistry-transport model with a parameterization based on cloud top height, scaled regionally and monthly to the OTD/LIS (Optical Transient Detector/Lightning Imaging Sensor) climatology, captures the ozone enhancements seen by TES. We show that the model's ability to reproduce the location of the enhancements is due to the fact that it reproduces the daily pattern of convective event occurrence during the summer of 2006 over the USA, even though it does not represent the relative distribution of lightning intensities well. However, this model with a value of 6 Tg N/yr for the lightning source (i.e., with a mean production of 260 moles NO/flash over the USA in summer) underestimates the intensities of the ozone enhancements seen by TES. By imposing a production of 520 moles NO/flash for lightning occurring in midlatitudes, which agrees better with the values proposed by the most recent studies, we decrease the bias between TES and GEOS-Chem ozone over the USA in July 2006 by 40%. However, our conclusion on the strength of the lightning source of NOx is limited by the fact that the contribution from the stratosphere is underestimated in the GEOS-Chem simulations.

  7. Using Dynamic Contrast-Enhanced Magnetic Resonance Imaging Data to Constrain a Positron Emission Tomography Kinetic Model: Theory and Simulations

    Directory of Open Access Journals (Sweden)

    Jacob U. Fluckiger

    2013-01-01

    We show how dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data can constrain a compartmental model for analyzing dynamic positron emission tomography (PET) data. We first develop the theory that enables the use of DCE-MRI data to separate whole tissue time activity curves (TACs) available from dynamic PET data into individual TACs associated with the blood space, the extravascular-extracellular space (EES), and the extravascular-intracellular space (EIS). Then we simulate whole tissue TACs over a range of physiologically relevant kinetic parameter values and show that using appropriate DCE-MRI data can separate the PET TAC into the three components with accuracy that is noise dependent. The simulations show that accurate blood, EES, and EIS TACs can be obtained as evidenced by concordance correlation coefficients >0.9 between the true and estimated TACs. Additionally, provided that the estimated DCE-MRI parameters are within 10% of their true values, the errors in the PET kinetic parameters are within approximately 20% of their true values. The parameters returned by this approach may provide new information on the transport of a tracer in a variety of dynamic PET studies.
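The separation rests on the whole-tissue TAC being a volume-weighted sum of the three compartment curves, with the volume fractions supplied by DCE-MRI. A sketch of that forward relation, with made-up curves and fractions:

```python
# Whole-tissue PET time activity curve (TAC) as a volume-weighted sum of
# blood, EES and EIS components: C_t(t) = vb*Cb(t) + ve*Ce(t) + vi*Ci(t),
# with vi = 1 - vb - ve. Curves and fractions below are hypothetical.

def whole_tissue_tac(c_blood, c_ees, c_eis, vb, ve):
    """Combine compartment TACs using DCE-MRI volume fractions vb, ve."""
    vi = 1.0 - vb - ve
    return [vb * b + ve * e + vi * i
            for b, e, i in zip(c_blood, c_ees, c_eis)]

tac = whole_tissue_tac(
    c_blood=[10.0, 6.0, 3.0],   # arterial activity at three time points
    c_ees=[1.0, 4.0, 5.0],
    c_eis=[0.2, 1.5, 4.0],
    vb=0.05, ve=0.30,           # volume fractions from DCE-MRI
)
```

Fixing `vb` and `ve` from DCE-MRI is what removes two degrees of freedom from the PET fit, which is the leverage the paper's theory exploits.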

  8. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    Science.gov (United States)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing processes (e.g. freeze-thaw cycling) or incompletely represented processes (e.g. respiration rates after harvest). It further aids the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and of uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
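The Latin-hypercube screening step can be sketched for one parameter: draw stratified samples, run the model, and keep the "behavioural" sets whose RMSE against observations falls below a cutoff. The one-parameter toy model stands in for LandscapeDNDC; all numbers are invented.

```python
# Latin hypercube sampling with RMSE-based behavioural filtering, as in
# the uncertainty framework above. The linear "model" and the 0.5 RMSE
# cutoff are illustrative stand-ins.
import math
import random

def latin_hypercube(n, low, high, rng):
    """n stratified samples of one parameter on [low, high]."""
    width = (high - low) / n
    pts = [low + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(pts)
    return pts

def rmse(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

rng = random.Random(42)
obs = [2.0, 4.0, 6.0]                         # observed fluxes (made up)
model = lambda k: [k * x for x in (1, 2, 3)]  # toy model, true k = 2

samples = latin_hypercube(200, 0.0, 5.0, rng)
behavioural = [k for k in samples if rmse(model(k), obs) < 0.5]
```

The surviving `behavioural` values cluster around the true parameter, which is how the framework "constrains model parameter uncertainty" rather than producing a single best fit.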

  9. Constraining surface emissions of air pollutants using inverse modelling: method intercomparison and a new two-step two-scale regularization approach

    Energy Technology Data Exchange (ETDEWEB)

    Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Université Paris-Est, CEREA Joint Laboratory École des Ponts ParisTech and EDF R&D, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniería Matemática, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matemático, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matemático, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofísica, Universidad de Chile, Santiago (Chile))

    2011-07-15

    When constraining surface emissions of air pollutants using inverse modelling, one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. An intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, the detailed information can change greatly according to the method used, ranging from smooth, isotropic, short-range modifications to rougher, anisotropic, long-range ones. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but in the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution.
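One of the remedies listed above, penalizing the spatial derivative of the correction, can be shown on a two-cell toy problem: a single observation sees only cell 1, so without smoothing the entire correction lands there, while the derivative penalty spreads it to the neighbour. All numbers are illustrative and the setup is far simpler than the paper's city-scale CO inversion.

```python
# Two-cell colocalization sketch: minimize
#   (x1 - y_obs)^2 + beta*sum_i (x_i - prior_i)^2 + lam*(x1 - x2)^2
# where only cell 1 is observed. lam = 0 reproduces the colocalization
# problem; lam > 0 adds the spatial-derivative penalty.

def invert_two_cells(y_obs, prior, beta, lam):
    p1, p2 = prior
    # Normal equations (2x2 linear system), solved by Cramer's rule:
    #   (1+beta+lam) x1 - lam x2 = y_obs + beta*p1
    #   -lam x1 + (beta+lam) x2 = beta*p2
    a11, a12, b1 = 1 + beta + lam, -lam, y_obs + beta * p1
    a21, a22, b2 = -lam, beta + lam, beta * p2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det)

no_smooth = invert_two_cells(2.0, (1.0, 1.0), beta=0.5, lam=0.0)
smooth    = invert_two_cells(2.0, (1.0, 1.0), beta=0.5, lam=0.5)
```

With `lam=0` the unobserved cell keeps its prior value exactly; with `lam>0` part of the correction migrates to it, which is the intended smoothing effect.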

  10. Fire emissions constrained by the synergistic use of formaldehyde and glyoxal SCIAMACHY columns in a two-compound inverse modelling framework

    Science.gov (United States)

    Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.

    2008-12-01

    Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005 and provided by the BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that the information gained from measurements of one species constrains the sources of both compounds, due to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.

  11. Constraining the Dynamics of Periodic Behavior at Mt. Semeru, Indonesia, Combining Numerical Modeling and Field Measurements of Gas emission

    Science.gov (United States)

    Smekens, J.; Clarke, A. B.; De'Michieli Vitturi, M.; Moore, G. M.

    2012-12-01

    Mt. Semeru is one of the most active explosive volcanoes on the island of Java in Indonesia. The current eruption style consists of small but frequent explosions and/or gas releases (several times a day) accompanied by continuous lava effusion that sporadically produces block-and-ash flows down the SE flank of the volcano. Semeru presents a unique opportunity to investigate the magma ascent conditions that produce this kind of persistent periodic behavior and the coexistence of explosive and effusive eruptions. In this work we use DOMEFLOW, a 1.5D transient isothermal numerical model, to investigate the dynamics of lava extrusion at Semeru. Petrologic observations from tephra and ballistic samples collected at the summit help us constrain the initial conditions of the system. Preliminary model runs produced periodic lava extrusion and pulses of gas release at the vent, with a cycle period on the order of hours, even though a steady magma supply rate was prescribed at the bottom of the conduit. Enhanced shallow permeability implemented in the model appears to create a dense plug in the shallow subsurface, which in turn plays a critical role in creating and controlling the observed periodic behavior. We measured SO2 fluxes just above the vent, using a custom UV imaging system. The device consists of two high-sensitivity CCD cameras with narrow UV filters centered at 310 and 330 nm, and a USB2000+ spectrometer for calibration and distance correction. The method produces high-frequency flux series with an accurate determination of the wind speed and plume geometry. The model results, when combined with gas measurements, and measurements of sulfur in both the groundmass and melt inclusions in eruptive products, could be used to create a volatile budget of the system. Furthermore, a well-calibrated model of the system will ultimately allow the characteristic periodicity and corresponding gas flux to be used as a proxy for magma supply rate.

  12. Constrained CPn models

    International Nuclear Information System (INIS)

    Latorre, J.I.; Luetken, C.A.

    1988-11-01

    We construct a large new class of two-dimensional sigma models with Kähler target spaces which are algebraic manifolds realized as complete intersections in weighted CP^n spaces. They are N=2 superconformally symmetric and particular choices of constraints give Calabi-Yau target spaces which are nontrivial string vacua. (orig.)

  13. Fast Emission Estimates in China Constrained by Satellite Observations (Invited)

    Science.gov (United States)

    Mijling, B.; van der A, R.

    2013-12-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for an emerging economy such as China, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. Constraining emissions from concentration measurements is, however, computationally challenging. Within the GlobEmission project of the European Space Agency (ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm for NOx emission estimates in East China, using the CHIMERE model together with tropospheric NO2 column retrievals from the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are poorly known or unavailable (e.g. shipping emissions). The new emission estimates result in a better
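The Kalman-filter inverse step can be illustrated in scalar form: given the sensitivity h of the observed column to the emission (from the single forward model run), each new observation updates the emission estimate and shrinks its variance. The real algorithm works per grid cell with trajectory-based sensitivities; the numbers here are hypothetical.

```python
# Scalar Kalman filter update for a single grid cell's emission estimate,
# illustrating the inverse step described above. h = d(column)/d(emission)
# from one forward model run; r is the observation error variance.

def kalman_update(x_prior, p_prior, y_obs, h, r):
    """Return posterior emission estimate and its variance."""
    k = p_prior * h / (h * h * p_prior + r)   # Kalman gain
    x_post = x_prior + k * (y_obs - h * x_prior)
    p_post = (1.0 - k * h) * p_prior
    return x_post, p_post

x, p = 10.0, 4.0                  # prior NOx emission and its variance
for y in (13.0, 12.5, 12.8):      # successive daily observations
    x, p = kalman_update(x, p, y, h=1.0, r=1.0)
```

Each assimilated day pulls the estimate toward the observations while the posterior variance drops, which is how the scheme blends the a priori inventory with newly observed data.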

  14. Joint Optimal Production Planning for Complex Supply Chains Constrained by Carbon Emission Abatement Policies

    OpenAIRE

    He, Longfei; Xu, Zhaoguang; Niu, Zhanwen

    2014-01-01

    We focus on the joint production planning of complex supply chains facing stochastic demands and being constrained by carbon emission reduction policies. We pick two typical carbon emission reduction policies to research how emission regulation influences the profit and carbon footprint of a typical supply chain. We use the input-output model to capture the interrelated demand link between an arbitrary pair of two nodes in scenarios without or with carbon emission constraints. We design optim...

  15. Constraining global methane emissions and uptake by ecosystems

    International Nuclear Information System (INIS)

    Spahni, R.; Wania, R.; Neef, L.; Van Weele, M.; Van Velthoven, P.; Pison, I.; Bousquet, P.

    2011-01-01

    Natural methane (CH4) emissions from wet ecosystems are an important part of today's global CH4 budget. Climate affects the exchange of CH4 between ecosystems and the atmosphere by influencing CH4 production, oxidation, and transport in the soil. The net CH4 exchange depends on ecosystem hydrology, soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH4 emissions for different ecosystems: northern peatlands (45° N-90° N), naturally inundated wetlands (60° S-45° N), rice agriculture and wet mineral soils. Mineral soils are a potential CH4 sink, but can also be a source, with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003-2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH4 observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe, the inversions result in a significant reduction in the emissions from northern peatlands and suggest that LPJ-WHyMe maximum annual emissions peak about one month late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we diagnose model parameters in LPJ-WHyMe and simulate the surface exchange of CH4 over the period 1990-2008. Over the whole period we infer an increase of global ecosystem CH4 emissions of +1.11 Tg CH4 yr^-1, not considering potential additional changes in wetland extent. The increase in simulated CH4 emissions is attributed to enhanced soil respiration resulting from the observed rise in land temperature and in atmospheric carbon dioxide that were used as input. 
    The long-term decline of the atmospheric CH4 growth rate from 1990

  16. How will greenhouse gas emissions from motor vehicles be constrained in China around 2030?

    International Nuclear Information System (INIS)

    Zheng, Bo; Zhang, Qiang; Borken-Kleefeld, Jens; Huo, Hong; Guan, Dabo; Klimont, Zbigniew; Peters, Glen P.; He, Kebin

    2015-01-01

    Highlights: • We build a projection model to predict vehicular GHG emissions on a provincial basis. • Fuel efficiency gains cannot constrain vehicle GHGs in major southern provinces. • We propose an integrated policy set through sensitivity analysis of policy options. • The policy set will peak GHG emissions of 90% of provinces and whole China by 2030. - Abstract: Increasing emissions from road transportation endanger China’s objective to reduce national greenhouse gas (GHG) emissions. The unconstrained growth of vehicle GHG emissions is mainly caused by the insufficient improvement of energy efficiency (kilometers traveled per unit energy use) under current policies, which cannot offset the explosion of vehicle activity in China, especially in the major southern provinces. More stringent policies are required to reduce GHG emissions in these provinces and thereby help to constrain total national emissions. In this work, we make a provincial-level projection for vehicle growth, energy demand and GHG emissions to evaluate vehicle GHG emission trends under various policy options in China and determine the way to constrain national emissions. Through sensitivity analysis of various single policies, we propose an integrated policy set to ensure that national vehicle GHG emissions peak around 2030. The integrated policy involves decreasing the use of urban light-duty vehicles by 25%, improving fuel economy by 25% by 2035 compared with 2020, and promoting electric vehicles and biofuels. The stringent new policies would allow China to constrain GHG emissions from the road transport sector around 2030. This work provides a perspective to understand vehicle GHG emission growth patterns in China’s provinces, and proposes a strong policy combination to constrain national GHG emissions, which can support the achievement of peak GHG emissions by 2030 promised by the Chinese government.
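The arithmetic behind such projections is a simple decomposition: emissions as activity times energy intensity times fuel carbon content, so the 25% activity cut and 25% fuel-economy gain compound multiplicatively. The figures below are illustrative, not the paper's provincial data.

```python
# Activity x intensity x carbon-content decomposition of vehicle GHG
# emissions, showing how two 25% measures compound. Values hypothetical.

def vehicle_ghg(km_travelled, energy_per_km, co2_per_energy):
    """Annual emissions = km * MJ/km * kgCO2/MJ."""
    return km_travelled * energy_per_km * co2_per_energy

base   = vehicle_ghg(1.0e12, 2.5, 0.07)                 # baseline
policy = vehicle_ghg(1.0e12 * 0.75, 2.5 * 0.75, 0.07)   # both measures
reduction = 1.0 - policy / base                          # 0.75^2 effect
```

Two independent 25% cuts leave 0.75 × 0.75 ≈ 56% of baseline emissions, i.e. roughly a 44% reduction, which is why the paper combines several policy levers rather than relying on fuel economy alone.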

  17. Coherent and incoherent giant dipole resonance γ-ray emission induced by heavy ion collisions: Study of the 40Ca+48Ca system by means of the constrained molecular dynamics model

    International Nuclear Information System (INIS)

    Papa, Massimo; Cardella, Giuseppe; Bonanno, Antonio; Pappalardo, Giuseppe; Rizzo, Francesca; Amorini, Francesca; Bonasera, Aldo; Di Pietro, Alessia; Figuera, Pier Paolo; Tudisco, Salvatore; Maruyama, Toshiki

    2003-01-01

    Coherent and incoherent dipolar γ-ray emission is studied in a fully dynamical approach by means of the constrained molecular dynamics model. The study is focused on the system 40Ca+48Ca, for which experimental data have recently been collected at 25 MeV/nucleon. The approach allows us to explain the experimental results in a self-consistent way without using statistical or hybrid models. Moreover, calculations performed at higher energy show interesting correlations between the fragment formation process, the degree of collectivity, and the coherence degree of the γ-ray emission process

  18. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  19. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  20. Fast emission estimates in China and South Africa constrained by satellite observations

    Science.gov (United States)

    Mijling, Bas; van der A, Ronald

    2013-04-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for emerging economies such as China and South Africa, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. However, constraining emissions from observations of concentrations is computationally challenging. Within the GlobEmission project (part of the Data User Element programme of ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm for NOx emission estimates in East China and South Africa, using the CHIMERE chemical transport model together with tropospheric NO2 column retrievals from the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are poorly known or unavailable (e.g. shipping emissions).

  1. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
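The item-selection idea can be sketched as follows. Note that the paper assembles a full "shadow test" by integer programming at each step; the greedy rule below, with hypothetical item parameters and content quotas, is only an illustrative simplification of constrained maximum-information selection.

```python
import math

def select_item(items, administered, content_counts, content_max, theta):
    """Greedy stand-in for constrained item selection: pick the
    unadministered item with maximum Fisher information at the current
    ability estimate theta, skipping items whose content area has
    already reached its quota."""
    best, best_info = None, -1.0
    for item in items:
        if item["id"] in administered:
            continue
        if content_counts.get(item["area"], 0) >= content_max[item["area"]]:
            continue  # content constraint already satisfied
        # Fisher information of a 2PL item: a^2 * p * (1 - p)
        p = 1.0 / (1.0 + math.exp(-item["a"] * (theta - item["b"])))
        info = item["a"] ** 2 * p * (1.0 - p)
        if info > best_info:
            best, best_info = item, info
    return best

items = [
    {"id": 1, "area": "algebra", "a": 1.5, "b": 0.0},
    {"id": 2, "area": "algebra", "a": 2.0, "b": 0.1},
    {"id": 3, "area": "geometry", "a": 0.8, "b": 0.0},
]
# With the algebra quota already filled, the geometry item is forced
# even though the algebra items are more informative.
chosen = select_item(items, set(), {"algebra": 2},
                     {"algebra": 2, "geometry": 1}, 0.0)
```

Without the quota, item 2 (highest information at theta = 0) would be selected instead.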

  2. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  3. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    tribpo

    J. Astrophys. Astr. (2000) 21, 299–302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam* & V. Krishan†, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. *e-mail: mangalam@iiap.ernet.in. †e-mail: vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...

  4. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Science.gov (United States)

    Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

    In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ~7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ~4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1. The average enthalpy of vaporization of secondary organic aerosol
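The gas-particle partitioning that underlies any volatility-basis-set box model can be sketched as a fixed-point iteration: the particle-phase fraction of each volatility bin depends on the total organic aerosol mass, which in turn depends on those fractions. The bin values below are illustrative assumptions, not the wood-burning volatility distribution or the CAMx VBS code.

```python
def partition(total_mass, c_star, seed=0.0, iterations=50):
    """Iteratively compute gas-particle partitioning for VBS bins.
    total_mass[i]: total (gas + particle) organic mass in bin i (ug/m3);
    c_star[i]: its effective saturation concentration (ug/m3).
    The particle-phase fraction is xi_i = 1 / (1 + C*_i / C_OA),
    where C_OA (total organic aerosol, plus any non-volatile seed)
    itself depends on the xi_i -- hence the fixed-point iteration."""
    c_oa = max(seed + sum(total_mass) * 0.5, 1e-9)  # initial guess
    for _ in range(iterations):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        c_oa = max(seed + sum(m * x for m, x in zip(total_mass, xi)), 1e-9)
    return xi, c_oa

# Two bins: one semi-volatile (C* = 1 ug/m3) and one volatile (C* = 100)
xi, c_oa = partition([5.0, 5.0], [1.0, 100.0])
```

As expected, the low-volatility bin partitions much more strongly to the particle phase than the volatile bin.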

  5. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2017-06-01

    Full Text Available In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ∼7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol–chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC ∕ OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ∼4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1

  6. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has recently been shown that they can also be constrained by the signatures of the accretion disk around constant-Ricci-curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams reunite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat-space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak-field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong-field accretion disk phenomenon.
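For orientation, the flat-space value that the paper's expansion corrects can be written down directly; the correction structure below is shown only schematically, since the explicit coefficients depend on the f(R) prescription and are derived in the paper itself.

```latex
% Leading-order Sagnac delay for counter-propagating light beams on a
% circular path of radius R around a source rotating with angular
% velocity \Omega (flat spacetime):
\Delta t_{\mathrm{flat}} \;=\; \frac{4\pi R^{2}\Omega}{c^{2}}
% Mass M, spin parameter a and constant Ricci curvature R_0 of the
% gravitating source enter as small corrections (schematic form only):
\Delta t \;\simeq\; \Delta t_{\mathrm{flat}}
\left[\,1
\;+\;\mathcal{O}\!\left(\frac{GM}{c^{2}R}\right)
\;+\;\mathcal{O}\!\left(\frac{a\,\Omega}{c}\right)
\;+\;\mathcal{O}\!\left(R_{0}R^{2}\right)\right]
```

Comparing the correction terms with the residual uncertainty of the delay measurement is what yields the allowed interval for R_0.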

  7. Constraining East Asian CO2 emissions with GOSAT retrievals: methods and policy implications

    Science.gov (United States)

    Shim, C.; Henze, D. K.; Deng, F.

    2017-12-01

    The world's largest CO2 emissions come from East Asia. However, there are large uncertainties in CO2 emission inventories, mainly because of imperfections in bottom-up statistics and a lack of observations for validating emission fluxes, particularly over China. Here we constrain East Asian CO2 emissions with GOSAT retrievals, applying 4D-Var GEOS-Chem and its adjoint model. We applied the inversion only to the cold season (November - February) of 2009 - 2010, since the summer monsoon and the greater transboundary impacts in spring and fall greatly reduce the usable GOSAT retrievals. In the cold season, the a posteriori CO2 emissions over East Asia are generally higher by 5 - 20%; in particular, northeastern China, where the Chinese government is currently focusing on mitigating air pollutants, shows markedly higher a posteriori emissions (~20%). On the other hand, a posteriori emissions from southern China are lower by 10 - 25%. A posteriori emissions in Korea and Japan are mostly higher by ~10%, except over the Kyushu region. With our top-down 4D-Var CO2 inversion estimates, we evaluate the current regional CO2 emission inventories and potential uncertainties in the sectoral emissions. This study provides quantitative information on anthropogenic CO2 emissions over East Asia and offers policy implications for mitigation targets.
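The 4D-Var inversion minimizes the standard Bayesian cost function; the generic form is given below, where x is the vector of emission scaling factors, x_a the a priori, H the model's observation operator, and B, R the a priori and observation error covariances. This is the textbook formulation, not the study's specific configuration.

```latex
J(\mathbf{x}) \;=\;
\tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_a)^{\mathsf T}\,
\mathbf{B}^{-1}\,(\mathbf{x}-\mathbf{x}_a)
\;+\;
\tfrac{1}{2}\sum_{i}
\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)^{\mathsf T}\,
\mathbf{R}_i^{-1}\,
\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)
```

The adjoint model supplies the gradient of J with respect to x efficiently, which is what makes the iterative minimization over a large emission field feasible.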

  8. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  9. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure-proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  10. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Full Text Available. Online Constrained Model-based Reinforcement Learning. Benjamin van Niekerk (School of Computer Science, University of the Witwatersrand, South Africa); Andreas Damianou (Amazon.com, Cambridge, UK); Benjamin Rosman (Council for Scientific and Industrial Research, and School...). Multiple shooting: using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured non-linear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...

  11. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays g̃ → qq̄χ, g̃ → gχ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e+e− colliders in the search for supersymmetry can be directly compared. (orig.)

  12. Joint Optimal Production Planning for Complex Supply Chains Constrained by Carbon Emission Abatement Policies

    Directory of Open Access Journals (Sweden)

    Longfei He

    2014-01-01

    Full Text Available We focus on the joint production planning of complex supply chains facing stochastic demands and constrained by carbon emission reduction policies. We pick two typical carbon emission reduction policies to investigate how emission regulation influences the profit and carbon footprint of a typical supply chain. We use the input-output model to capture the interrelated demand link between an arbitrary pair of nodes in scenarios with and without carbon emission constraints. We design an optimization algorithm to obtain the joint optimal production-quantity combination that maximizes overall profit under each regulatory policy. Furthermore, numerical studies featuring exponentially distributed demand compare systemwide performance in the various scenarios. We build the "carbon emission elasticity of profit" (CEEP) index as a metric to evaluate the impact of regulatory policies on both chainwide emissions and profit. Our results show that by installing the mandatory emission cap properly within the network, one can balance effective emission reduction against an acceptable profit loss. The finding that the CEEP index is elastic when implementing a carbon emission tax implies that the scale of the profit loss is greater than that of the emission reduction, which shows that this policy is less effective than a mandatory cap, at least from an industry standpoint.
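The CEEP index can be read as an elasticity of profit with respect to emissions under a policy change. The ratio-of-relative-changes form and the numbers below are an illustrative reading of the index, not the paper's exact definition.

```python
def ceep(profit_base, profit_reg, emis_base, emis_reg):
    """Carbon emission elasticity of profit (CEEP), read here as the
    percentage change in chainwide profit per percentage change in
    chainwide emissions when a regulatory policy is imposed.
    |CEEP| > 1 ("elastic") means profit falls faster than emissions."""
    d_profit = (profit_reg - profit_base) / profit_base
    d_emis = (emis_reg - emis_base) / emis_base
    return d_profit / d_emis

# A hypothetical policy that cuts emissions by 10% while cutting
# profit by 15%: elastic, as the abstract reports for an emission tax.
e = ceep(100.0, 85.0, 50.0, 45.0)
```

Here e = 1.5 > 1: the relative profit loss exceeds the relative emission reduction, which is the paper's argument against the tax from the industry standpoint.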

  13. Constraining CO2 tower measurements in an inhomogeneous area with anthropogenic emissions using a combination of car-mounted instrument campaigns, aircraft profiles, transport modeling and neural networks

    Science.gov (United States)

    Schmidt, A.; Rella, C.; Conley, S. A.; Goeckede, M.; Law, B. E.

    2013-12-01

    The NOAA CO2 observation network in Oregon was enhanced by 3 new towers in 2012. The tallest tower in the network (270 m), located at Silverton in the Willamette Valley, is affected by anthropogenic emissions from Oregon's busiest traffic routes and urban centers. In summer 2012, we conducted a measurement campaign using a car-mounted Picarro CRDS CO2/CO analyzer. Over 3 days, the instrument was driven over 1000 miles throughout the northwestern portion of Oregon, measuring the CO/CO2 ratios on main highways, back roads in forests, agricultural sites, and Oregon's biggest urban centers. Through geospatial analyses we obtained ratios of CO/CO2 over distinct land cover types divided into the 10 classes represented in the study area. Using the coupled WRF-STILT transport model, we calculated the footprints of nearby CO/CO2 observation towers for the corresponding days of mobile road measurements. Spatiotemporally assigned source areas, in combination with the land use classification, were then used to calculate specific ratios of CO (of anthropogenic origin) to CO2 and to separate the anthropogenic portion of CO2 from the mixing ratio time series measured at the tower in Silverton. The WRF-modeled boundary layer heights used in our study showed some differences from the boundary layer heights derived from profile data of wind, temperature, and humidity measured with an airplane in August, September, and November 2012, repeatedly over 5 tower locations. A Bayesian Regularized Artificial Neural Network (BRANN) was used to correct the boundary layer height calculated with WRF, with a temporal resolution of 20 minutes and a horizontal resolution of 4 km. For that purpose the BRANN was trained using height profile data from the flight campaigns and spatiotemporally corresponding meteorological data from WRF.
Our analyses provide the information needed to run inverse modeling of CO2 exchange in an area that is affected by sources that cannot easily be represented by biospheric models.
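The CO-based separation step can be sketched as a simple ratio calculation: CO enhancements above background are attributed to anthropogenic sources, and a land-cover-specific CO/CO2 emission ratio converts them into an implied anthropogenic CO2 enhancement. The numbers below are illustrative, not campaign data.

```python
def anthropogenic_co2(co_obs_ppb, co_background_ppb, co_per_co2):
    """Estimate the anthropogenic CO2 enhancement (ppm) at a tower
    from the observed CO enhancement (ppb), using a footprint-weighted
    CO/CO2 emission ratio (ppb CO per ppm CO2). CO is treated as
    almost purely anthropogenic, as assumed in the study."""
    delta_co = co_obs_ppb - co_background_ppb  # anthropogenic CO (ppb)
    return delta_co / co_per_co2               # implied CO2 (ppm)

# 160 ppb CO observed over a 120 ppb background, with an assumed
# emission ratio of 8 ppb CO per ppm CO2 for the footprint's
# land-cover mix:
co2_anthro_ppm = anthropogenic_co2(160.0, 120.0, 8.0)
```

Subtracting this anthropogenic portion from the tower's CO2 time series leaves the biospheric signal needed for the inverse modeling mentioned above.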

  14. Constraining estimates of methane emissions from Arctic permafrost regions with CARVE

    Science.gov (United States)

    Chang, R. Y.; Karion, A.; Sweeney, C.; Henderson, J.; Mountain, M.; Eluszkiewicz, J.; Luus, K. A.; Lin, J. C.; Dinardo, S.; Miller, C. E.; Wofsy, S. C.

    2013-12-01

    Permafrost in the Arctic contains large carbon pools that are currently non-labile, but can be released to the atmosphere as polar regions warm. In order to predict future climate scenarios, we need to understand the emissions of these greenhouse gases under varying environmental conditions. This study presents in-situ measurements of methane made on board an aircraft during the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE), which sampled over the permafrost regions of Alaska. Using measurements from May to September 2012, seasonal emission rate estimates of methane from tundra are constrained using the Stochastic Time-Inverted Lagrangian Transport model, a Lagrangian particle dispersion model driven by custom polar-WRF fields. Preliminary results suggest that methane emission rates have not greatly increased since the Arctic Boundary Layer Experiment conducted in southwest Alaska in 1988.

  15. Land-use and land-cover change carbon emissions between 1901 and 2012 constrained by biomass observations

    Directory of Open Access Journals (Sweden)

    W. Li

    2017-11-01

    Full Text Available The use of dynamic global vegetation models (DGVMs) to estimate CO2 emissions from land-use and land-cover change (LULCC) offers a new window to account for spatial and temporal details of emissions and for ecosystem processes affected by LULCC. One drawback of LULCC emissions from DGVMs, however, is the lack of observational constraint. Here, we propose a new method of using satellite- and inventory-based biomass observations to constrain historical cumulative LULCC emissions (ELUCc) from an ensemble of nine DGVMs, based on emerging relationships between simulated vegetation biomass and ELUCc. This method is applicable on the global and regional scale. The original DGVM estimates of ELUCc range from 94 to 273 PgC during 1901–2012. After constraining by current biomass observations, we derive a best estimate of 155 ± 50 PgC (1σ Gaussian error). The constrained LULCC emissions are higher than prior DGVM values in tropical regions but significantly lower in North America. Our emergent constraint approach independently verifies the median model estimate by biomass observations, giving support to the use of this estimate in carbon budget assessments. The uncertainty in the constrained ELUCc is still relatively large because of the uncertainty in the biomass observations, and thus reduced uncertainty, in addition to increased accuracy, in biomass observations in the future will help improve the constraint. This constraint method can also be applied to evaluate the impact of land-based mitigation activities.
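An emergent constraint reduces to a regression across the model ensemble, evaluated at the observed value of the predictor. The sketch below uses a toy nine-member ensemble with made-up numbers; the paper's method additionally propagates observational and regression uncertainty into the constrained estimate.

```python
def emergent_constraint(x_models, y_models, x_obs):
    """Ordinary least-squares fit of an across-ensemble relationship
    y = a*x + b (here: cumulative LULCC emissions vs simulated
    present-day biomass), evaluated at the observed x to give a
    constrained estimate of y."""
    n = len(x_models)
    mx = sum(x_models) / n
    my = sum(y_models) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(x_models, y_models))
    var = sum((x - mx) ** 2 for x in x_models)
    a = cov / var
    b = my - a * mx
    return a * x_obs + b

# Toy nine-model ensemble: simulated biomass (PgC) vs cumulative
# LULCC emissions (PgC); models with higher biomass emit less.
biomass = [400, 420, 450, 470, 500, 520, 550, 580, 600]
eluc    = [270, 260, 245, 235, 220, 210, 195, 180, 170]
constrained = emergent_constraint(biomass, eluc, 480)  # "observed" biomass
```

The observed biomass picks out a single point on the across-model relationship, turning an ensemble spread into one constrained value.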

  16. Economic impact assessment and operational decision making in emission and transmission constrained electricity markets

    International Nuclear Information System (INIS)

    Nanduri, Vishnu; Kazemzadeh, Narges

    2012-01-01

    Highlights: ► We develop a bilevel game-theoretic model for allowance and electricity markets. ► We solve the model using a reinforcement learning algorithm. ► Model accounts for transmission constraints, cap-and-trade constraints. ► Study demonstrated on 9-bus electric power network. ► Obtain insights about supply shares, impact of transmission constraints, and cost pass through. -- Abstract: Carbon constrained electricity markets are a reality in 10 northeastern states and California in the US, as well as the European Union. Close to a billion US dollars have been spent by entities (mainly generators) in the Regional Greenhouse Gas Initiative in procuring CO2 allowances to meet binding emissions restrictions. In the near future, there are expected to be significant impacts due to the cap-and-trade program, especially when the cap stringency increases. In this research we develop a bilevel, complete-information, matrix game-theoretic model to assess the economic impact and make operational decisions in carbon-constrained restructured electricity markets. Our model is solved using a reinforcement learning approach, which takes into account the learning and adaptive nature of market participants. Our model also accounts for all the power systems constraints via a DC-OPF problem. We demonstrate the working of the model and compute various economic impact indicators such as supply shares, cost pass-through, social welfare, profits, allowance prices, and electricity prices. Results from a 9-bus power network are presented.

  17. High-energy gamma-ray emission from solar flares: Constraining the accelerated proton spectrum

    Science.gov (United States)

    Alexander, David; Dunphy, Philip P.; Mackinnon, Alexander L.

    1994-01-01

    Using a multi-component model to describe the gamma-ray emission, we investigate the flares of December 16, 1988 and March 6, 1989 which exhibited unambiguous evidence of neutral pion decay. The observations are then combined with theoretical calculations of pion production to constrain the accelerated proton spectra. The detection of π0 emission alone can indicate much about the energy distribution and spectral variation of the protons accelerated to pion producing energies. Here both the intensity and detailed spectral shape of the Doppler-broadened π0 decay feature are used to determine the spectral form of the accelerated proton energy distribution. The Doppler width of this gamma-ray emission provides a unique diagnostic of the spectral shape at high energies, independent of any normalisation. To our knowledge, this is the first time that this diagnostic has been used to constrain the proton spectra. The form of the energetic proton distribution is found to be severely limited by the observed intensity and Doppler width of the π0 decay emission, demonstrating effectively the diagnostic capabilities of the π0 decay gamma-rays. The spectral index derived from the gamma-ray intensity is found to be much harder than that derived from the Doppler width. To reconcile this apparent discrepancy we investigate the effects of introducing a high-energy cut-off in the accelerated proton distribution. With cut-off energies of around 0.5-0.8 GeV and relatively hard spectra, the observed intensities and broadening can be reproduced with a single energetic proton distribution above the pion production threshold.

  18. An inexact fuzzy-chance-constrained air quality management model.

    Science.gov (United States)

    Xu, Ye; Huang, Guohe; Qin, Xiaosheng

    2010-07-01

    Regional air pollution is a major concern for almost every country because it not only directly relates to economic development, but also poses significant threats to environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated through integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and could deal with uncertainties expressed as not only possibilistic distributions but also discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables could be satisfied at different confidence levels such that various solutions with different risk and cost considerations could be obtained. The developed model was applied to a hypothetical case of regional air quality management. Six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty were taken into consideration. The results demonstrated that IFAMM could help decision-makers generate cost-effective air quality management patterns, gain in-depth insights into effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme could achieve lower total abatement cost than a nontrading one.

  19. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
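The basic construction can be sketched in one dimension: simulate an ordinary movement SDE with Euler-Maruyama and fold any excursion past a barrier back inside the domain. This reflection rule, the Ornstein-Uhlenbeck drift, and all parameter values are illustrative stand-ins, not the paper's inference machinery.

```python
import random

def reflected_ou_path(x0, mu, theta, sigma, lower, upper, dt, n, seed=42):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck movement
    process dX = theta*(mu - X) dt + sigma dW, reflected at spatial
    barriers [lower, upper] -- a 1-D stand-in for movement bounded
    by a shoreline. Each step's excursion past a barrier is mirrored
    back into the domain."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        drift = theta * (mu - x) * dt
        diffusion = sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        x = x + drift + diffusion
        # reflect at the barriers until the point lies inside the domain
        while x < lower or x > upper:
            if x < lower:
                x = 2 * lower - x
            else:
                x = 2 * upper - x
        path.append(x)
    return path

path = reflected_ou_path(0.0, 0.0, 0.5, 1.0, -1.0, 1.0, 0.1, 500)
```

Every simulated position stays within the barriers, mirroring how a reflected SDE keeps an animal's modeled track on the feasible side of a shoreline.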

  20. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization, where decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. This is followed by a review of the different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  1. Emissions Modeling Clearinghouse

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Emissions Modeling Clearinghouse (EMCH) supports and promotes emissions modeling activities both internal and external to the EPA. Through this site, the EPA...

  2. Constraining statistical-model parameters using fusion and spallation reactions

    Directory of Open Access Journals (Sweden)

    Charity Robert J.

    2011-10-01

    Full Text Available The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy. The present work focuses on fission and intermediate-mass-fragment emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

  3. Risk-constrained self-scheduling of a fuel and emission constrained power producer using rolling window procedure

    International Nuclear Information System (INIS)

    Kazempour, S. Jalal; Moghaddam, Mohsen Parsa

    2011-01-01

    This work presents a methodology for the self-scheduling of a price-taker, fuel- and emission-constrained power producer in day-ahead correlated energy, spinning reserve and fuel markets, achieving a trade-off between expected profit and risk at different risk levels, based on Markowitz's seminal work in the area of portfolio selection. Here, a set of uncertainties, including price forecasting errors and available-fuel uncertainty, is considered. The latter uncertainty arises from uncertainty in being called for reserve deployment in the spinning reserve market and in the availability of the power plant. To tackle the price forecasting errors, the variances of energy, spinning reserve and fuel prices, along with their covariances due to market correlation, are taken into account using relevant historical data. To tackle the available-fuel uncertainty, a self-scheduling framework referred to as the rolling window is proposed. This risk-constrained self-scheduling framework is formulated and solved as a mixed-integer non-linear programming problem. Furthermore, numerical results for a case study are discussed. (author)

  4. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  5. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted by cKP_{K+1,M}) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. © 1997 American Institute of Physics

  6. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  7. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

    The interpretation of experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Model Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful for overcoming this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.

  8. Constraining composite Higgs models using LHC data

    Science.gov (United States)

    Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar

    2018-03-01

    We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in the 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5_L-5_R representation, to what we call here the 'extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the 'extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, as compared to f ≥ 1 TeV for the minimal 5_L-5_R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong sector coset SO(6)/SO(5) for fermion embedding up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.

  9. Constraining star formation through redshifted CO and CII emission in archival CMB data

    Science.gov (United States)

    Switzer, Eric

    cross-power with galaxy surveys directly constrains the redshifted line emission. Residual foregrounds and interlopers increase errors but do not add bias. There are 300 resolution elements of the 7 degree FIRAS top-hat inside the BOSS quasar survey, spanning 66 spectral pixels to z ~ 2. While FIRAS noise per voxel is 200 times brighter than the expected peak cosmological CII emission, √N averaging of the spatial and spectral modes above results in a gain of 140. Intensity mapping is in its infancy, with predictions for the surface brightness of line emission ranging over an order of magnitude, and limited knowledge of the intensity-weighted bias. Even if only upper bounds are possible, they complement existing measurements of individual galaxies, which constitute a lower bound because they measure only a portion of the luminosity function. FIRAS and Planck provide unique opportunities to pursue CII and CO intensity mapping with well-characterized instruments that overlap with galaxy surveys in angular coverage and redshift. We will re-analyze the FIRAS data to optimize sensitivity and robustness, developing a spectral line response model, splitting the data into sub-missions to isolate noise properties, and re-evaluating data cuts. The tools and results here will support future survey concepts with significantly lower noise, such as PIXIE, PRISM, SPHEREx and proposed suborbital experiments designed specifically for intensity mapping. There is a growing appreciation that many phenomena could lie just below the published FIRAS bounds. The proposed work is an early step toward this new science.
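The quoted sensitivity gain of 140 follows from noise averaging down as the square root of the number of independent modes. A quick arithmetic check, using only the mode counts given in the abstract (300 spatial resolution elements times 66 spectral pixels):

```python
import math

# sqrt-N averaging: with N independent (spatial x spectral) modes, the
# noise on the average is reduced by a factor of sqrt(N).
n_spatial = 300   # FIRAS resolution elements inside the BOSS quasar survey
n_spectral = 66   # spectral pixels out to z ~ 2

gain = math.sqrt(n_spatial * n_spectral)
print(round(gain))  # 141, consistent with the quoted gain of ~140
```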

  10. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  11. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)
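The structural constraint behind these models, that the quadratic (energy-conserving) interactions produce no net energy and hence cannot by themselves drive finite-time blow-up, can be checked numerically. The following is a minimal sketch, not the authors' code; antisymmetrizing the interaction tensor in its first two indices is one simple way to enforce the constraint:

```python
import numpy as np

# Sketch of an energy-conserving quadratic interaction B(u, u):
# the cubic energy-production term sum_i u_i * B_i(u, u) must vanish.
rng = np.random.default_rng(0)
n = 5

# Antisymmetry in the first two indices (B_ijk = -B_jik) makes
# sum_ijk B_ijk u_i u_j u_k cancel pairwise for every u.
B = rng.standard_normal((n, n, n))
B = B - np.transpose(B, (1, 0, 2))

u = rng.standard_normal(n)
dudt_quadratic = np.einsum('ijk,j,k->i', B, u, u)  # (B(u, u))_i
energy_production = float(u @ dudt_quadratic)
print(abs(energy_production) < 1e-10)  # True: conserving by construction
```

Because the quadratic term neither creates nor destroys energy, any growth in the model's statistical solutions must come from the linear and noise terms, which is what rules out the pathological blow-up of ad hoc quadratic regression models.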

  12. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

    The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model, denoted ShiftCP, has proven useful for the modeling of latency changes in trial-based neuroimaging data [17]. In order to facilitate component interpretation we presently extend the ShiftCP model such that the extracted components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the ShiftCP model...

  13. Modeling constrained sintering of bi-layered tubular structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei

    2015-01-01

    Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making up the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe the densi... and thermo-mechanical analysis. Results from the analytical model are found to agree well with finite element simulations as well as measurements from the sintering experiment...

  14. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation, such as models with B-L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  15. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of dark matter scenarios in a constrained model with Dirac gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of the gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches, and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  16. Instantaneous wave emission model

    International Nuclear Information System (INIS)

    Kruer, W.L.

    1970-12-01

    A useful treatment of electrostatic wave emission by fast particles in a plasma is given. First, the potential due to a fast particle is expressed as a simple integration over the particle orbit; several interesting results readily follow. The potential in the wake of an accelerating particle is shown to be essentially that produced through local excitation of the plasma by the particle free-streaming about its instantaneous orbit. Application is made to one dimension, and it is shown that the wave emission and absorption synchronize to the instantaneous velocity distribution function. Guided by these calculations, we then formulate a test particle model for computing the instantaneous wave emission by fast particles in a Vlasov plasma. This model lends itself to physical interpretation and provides a direct approach to many problems. By adopting a Fokker-Planck description for the particle dynamics, we calculate the broadening of the wave-particle resonance due to velocity diffusion and drag.

  17. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  18. Simulations of atmospheric methane for Cape Grim, Tasmania, to constrain southeastern Australian methane emissions

    Directory of Open Access Journals (Sweden)

    Z. M. Loh

    2015-01-01

    This study uses two climate models and six scenarios of prescribed methane emissions to compare modelled and observed atmospheric methane between 1994 and 2007 for Cape Grim, Australia (40.7° S, 144.7° E). The model simulations follow the TransCom-CH4 protocol and use the Australian Community Climate and Earth System Simulator (ACCESS) and the CSIRO Conformal-Cubic Atmospheric Model (CCAM). Radon is also simulated and used to reduce the impact of transport differences between the models and observations. Comparisons are made for air samples that have traversed the Australian continent. All six emission scenarios give modelled concentrations that are broadly consistent with those observed. There are three notable mismatches, however. Firstly, scenarios that incorporate interannually varying biomass burning emissions produce anomalously high methane concentrations at Cape Grim at times of large fire events in southeastern Australia, most likely due to the fire methane emissions being unrealistically input into the lowest model level. Secondly, scenarios with wetland methane emissions in the austral winter overestimate methane concentrations at Cape Grim during wintertime, while scenarios without winter wetland emissions perform better. Finally, all scenarios fail to represent a methane source in austral spring implied by the observations. It is possible that the timing of wetland emissions in the scenarios is incorrect, with recent satellite measurements suggesting an austral spring (September-October-November), rather than winter, maximum for wetland emissions.

  19. High estimates of supply constrained emissions scenarios for long-term climate risk assessment

    International Nuclear Information System (INIS)

    Ward, James D.; Mohr, Steve H.; Myers, Baden R.; Nel, Willem P.

    2012-01-01

    The simulated effects of anthropogenic global warming have become important in many fields and most models agree that significant impacts are becoming unavoidable in the face of slow action. Improvements to model accuracy rely primarily on the refinement of parameter sensitivities and on plausible future carbon emissions trajectories. Carbon emissions are the leading cause of global warming, yet current considerations of future emissions do not consider structural limits to fossil fuel supply, invoking a wide range of uncertainty. Moreover, outdated assumptions regarding the future abundance of fossil energy could contribute to misleading projections of both economic growth and climate change vulnerability. Here we present an easily replicable mathematical model that considers fundamental supply-side constraints and demonstrate its use in a stochastic analysis to produce a theoretical upper limit to future emissions. The results show a significant reduction in prior uncertainty around projected long term emissions, and even assuming high estimates of all fossil fuel resources and high growth of unconventional production, cumulative emissions tend to align to the current medium emissions scenarios in the second half of this century. This significant finding provides much-needed guidance on developing relevant emissions scenarios for long term climate change impact studies. - Highlights: ► GHG emissions from conventional and unconventional fossil fuels modelled nationally. ► Assuming worst-case: large resource, high growth, rapid uptake of unconventional. ► Long-term cumulative emissions align well with the SRES medium emissions scenario. ► High emissions are unlikely to be sustained through the second half of this century. ► Model designed to be easily extended to test other scenarios e.g. energy shortages.
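The supply-constrained approach described above can be sketched as a small Monte Carlo: sample high-end resource estimates for each fuel, let production follow a logistic (supply-limited) curve, and take an upper percentile of cumulative emissions as the theoretical upper limit. All resource figures, peak years, and curve widths below are hypothetical placeholders for illustration, not the paper's values:

```python
import math
import random

# Illustrative sketch (hypothetical numbers, not the paper's model).
def logistic_cumulative_production(urr, peak_year, width, year):
    """Cumulative production (same units as urr) under a logistic profile."""
    return urr / (1.0 + math.exp(-(year - peak_year) / width))

def cumulative_emissions_2100(rng):
    total = 0.0
    # (URR range in GtC, peak-year range, curve width): purely illustrative.
    for lo, hi, peak_lo, peak_hi, width in [
        (300, 500, 2025, 2045, 15.0),   # "coal"
        (150, 250, 2020, 2035, 12.0),   # "oil"
        (120, 220, 2030, 2050, 14.0),   # "gas + unconventional"
    ]:
        urr = rng.uniform(lo, hi)       # sample a high-end resource estimate
        peak = rng.uniform(peak_lo, peak_hi)
        total += (logistic_cumulative_production(urr, peak, width, 2100)
                  - logistic_cumulative_production(urr, peak, width, 2010))
    return total

rng = random.Random(42)
draws = sorted(cumulative_emissions_2100(rng) for _ in range(2000))
# An upper percentile of the sampled distribution serves as the
# supply-constrained upper limit on cumulative 2010-2100 emissions.
print(f"upper limit (95th pct): {draws[int(0.95 * len(draws))]:.0f} GtC")
```

Because production cannot outrun the logistic supply curve, even the high tail of the sampled distribution is bounded, which is the mechanism by which the paper narrows the prior emissions uncertainty.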

  20. Toward Cognitively Constrained Models of Language Processing: A Review

    Directory of Open Access Journals (Sweden)

    Margreet Vogelzang

    2017-09-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, such a model generates new predictions that can be tested in experiments, thus generating new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought-Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we...

  1. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy and the dispersive long wave (DLW) hierarchy as well as the TB hierarchy are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. Especially, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and t_n-constrained flows whose adjoint representations and the Lax pairs are given. (general)

  2. Constraining atmospheric ammonia emissions through new observations with an open-path, laser-based sensor

    Science.gov (United States)

    Sun, Kang

    emission estimates. Finally, NH3 observations from the TES instrument on the NASA Aura satellite were validated with mobile measurements and aircraft observations. Improved validations will help to constrain NH3 emissions at continental to global scales. Ultimately, these efforts will improve the understanding of NH3 emissions at all scales, with implications for the global nitrogen cycle and atmospheric chemistry-climate interactions.

  3. Constraining viscous dark energy models with the latest cosmological data

    Science.gov (United States)

    Wang, Deng; Yan, Yang-Jie; Meng, Xin-He

    2017-10-01

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and that the scale invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models, as in the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios.

  4. Constraining viscous dark energy models with the latest cosmological data

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Yan, Yang-Jie; Meng, Xin-He [Nankai University, Department of Physics, Tianjin (China)

    2017-10-15

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H{sub 0} tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and that the scale invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models, as in the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios. (orig.)

  5. Multi-Sensor Constrained Time Varying Emissions Estimation of Black Carbon: Attributing Urban and Fire Sources Globally

    Science.gov (United States)

    Cohen, J. B.

    2015-12-01

    The short lifetime and heterogeneous distribution of Black Carbon (BC) in the atmosphere lead to complex impacts on radiative forcing, climate, and health, and complicate analysis of its atmospheric processing and emissions. Two recent papers have estimated the global and regional emissions of BC using advanced statistical and computational methods. One used a Kalman filter, including data from AERONET, NOAA, and other ground-based sources, to estimate global emissions of 17.8+/-5.6 Tg BC/year (with the increase attributable to East Asia, South Asia, Southeast Asia, and Eastern Europe - all regions which have had rapid urban, industrial, and economic expansion). The second additionally used remotely sensed measurements from MISR and a variance-maximizing technique, uniquely quantifying fire and urban sources in Southeast Asia, as well as their large year-to-year variability over the past 12 years, leading to increases from 10% to 150%. These new emissions products, when run through our state-of-the-art modelling system of chemistry, physics, transport, removal, radiation, and climate, match 140 ground stations and satellites better in both an absolute and a temporal sense. New work now further includes trace species measurements from OMI, which are used with the variance-maximizing technique to constrain the types of emissions sources. Furthermore, land-use change and fire estimation products from MODIS are also included, which provide other constraints on the temporal and spatial nature of the variations of intermittent sources like fires or new permanent sources like expanded urbanization. This talk will introduce a new, top-down constrained, weekly varying BC emissions dataset, show that it produces a better fit with observations, and draw conclusions about the sources and impacts of urbanization on the one hand, and fires on the other. Results specific to Southeast and East Asia will demonstrate inter- and intra-annual variations, such as the function of

  6. Marine N2O Emissions From Nitrification and Denitrification Constrained by Modern Observations and Projected in Multimillennial Global Warming Simulations

    Science.gov (United States)

    Battaglia, G.; Joos, F.

    2018-01-01

    Nitrous oxide (N2O) is a potent greenhouse gas (GHG) and ozone destructing agent; yet global estimates of N2O emissions are uncertain. Marine N2O stems from nitrification and denitrification processes which depend on organic matter cycling and dissolved oxygen (O2). We introduce N2O as an obligate intermediate product of denitrification and as an O2-dependent by-product from nitrification in the Bern3D ocean model. A large model ensemble is used to probabilistically constrain modern and to project marine N2O production for a low (Representative Concentration Pathway (RCP)2.6) and high GHG (RCP8.5) scenario extended to A.D. 10,000. Water column N2O and surface ocean partial pressure N2O data serve as constraints in this Bayesian framework. The constrained median for modern N2O production is 4.5 (±1σ range: 3.0 to 6.1) Tg N yr-1, where 4.5% stems from denitrification. Modeled denitrification is 65.1 (40.9 to 91.6) Tg N yr-1, well within current estimates. For high GHG forcing, N2O production decreases by 7.7% over this century due to decreasing organic matter export and remineralization. Thereafter, production increases slowly by 21% due to widespread deoxygenation and high remineralization. Deoxygenation peaks in two millennia, and the global O2 inventory is reduced by a factor of 2 compared to today. Net denitrification is responsible for 7.8% of the long-term increase in N2O production. On millennial timescales, marine N2O emissions constitute a small, positive feedback to climate change. Our simulations reveal tight coupling between the marine carbon cycle, O2, N2O, and climate.
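A schematic of the two marine N2O pathways described above: nitrification yields N2O as an O2-dependent by-product (more per unit of nitrogen remineralized as O2 drops), while denitrification produces it as an obligate intermediate only in suboxic water. All parameter values and functional forms below are hypothetical illustrations, not the Bern3D parameterization:

```python
# Schematic sketch of O2-dependent marine N2O production
# (hypothetical parameter values, not the Bern3D model code).
def n2o_production(remineralization, o2_umol_kg,
                   base_yield=0.004, low_o2_boost=0.06,
                   k_o2=10.0, denit_threshold=5.0):
    """N2O produced per unit N remineralized at a given dissolved O2."""
    # Nitrification by-product: yield rises as dissolved O2 drops.
    nitrif = remineralization * (
        base_yield + low_o2_boost * k_o2 / (k_o2 + o2_umol_kg))
    # Denitrification intermediate: active only in suboxic water.
    denit = 0.02 * remineralization if o2_umol_kg < denit_threshold else 0.0
    return nitrif + denit

# Same remineralization flux, well-oxygenated vs. suboxic water:
print(n2o_production(1.0, 200.0))  # small by-product yield only
print(n2o_production(1.0, 2.0))    # enhanced yield plus denitrification
```

This O2 dependence is what couples the projected deoxygenation to the long-term rise in N2O production in the scenarios above: the same remineralization produces more N2O as the O2 inventory shrinks.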

  7. A multiwavelength study of Swift GRB 060111B constraining the origin of its prompt optical emission

    Science.gov (United States)

    Stratta, G.; Pozanenko, A.; Atteia, J.-L.; Klotz, A.; Basa, S.; Gendre, B.; Verrecchia, F.; Boër, M.; Cutini, S.; Henze, M.; Holland, S.; Ibrahimov, M.; Ienna, F.; Khamitov, I.; Klose, S.; Rumyantsev, V.; Biryukov, V.; Sharapov, D.; Vachier, F.; Arnouts, S.; Perley, D. A.

    2009-09-01

    Context: The detection of bright optical emission measured with good temporal resolution during the prompt phase of GRB 060111B makes this GRB a rare event that is especially useful for constraining theories of the prompt emission. Aims: For this reason an extended multi-wavelength campaign was performed to further constrain the physical interpretation of the observations. Methods: In this work, we present the results obtained from our multi-wavelength campaign, as well as from the public Swift/BAT, XRT, and UVOT data. Results: We identified the host galaxy at R ~ 25 mag from deep R-band exposures taken 5 months after the trigger. Its featureless spectrum and brightness, as well as the non-detection of any associated supernova 16 days after the trigger, enabled us to constrain the distance scale of GRB 060111B to within 0.4 ≤ z ≤ 3 in the most conservative case. The host galaxy spectral continuum is best fit with a redshift of z ~ 2, and other independent estimates converge to z ~ 1-2. From the analysis of the early afterglow SED, we find that non-negligible host galaxy dust extinction, in addition to the Galactic one, affects the observed flux in the optical regime. The extinction-corrected optical-to-gamma-ray SED during the prompt emission shows a flux density ratio F_γ/F_opt = 10^-2 to 10^-4 with spectral index β_γ,opt > β_γ, strongly suggesting a separate origin of the optical and gamma-ray components. This result is supported by the lack of correlated behavior in the prompt emission light curves observed in the two energy domains. The temporal properties of the prompt optical emission observed during GRB 060111B and their similarities to other rapidly observed events favor interpretation of this optical light as radiation from the reverse shock. Observations are in good agreement with theoretical expectations for the thick shell limit in the slow cooling regime. The expected peak flux is consistent with the observed one corrected for the host extinction, likely

  8. Charged particle emission: the Child-Langmuir model

    International Nuclear Information System (INIS)

    Degond, P.; Raviart, P.A.

    1993-01-01

    The recent mathematical results concerning boundary emission modelling are reviewed from a synthetic viewpoint. The plane diode case is first studied; the Child-Langmuir model is then characterized as the limit of a nonstandard singular perturbation problem and is associated with approximate models (constrained and penalized models) which may be easily generalized to more realistic cases; an iterative solution method for the penalized problem is studied. The derived Child-Langmuir model is extended to the cylindrical diode case and to the case of arbitrary geometry: constrained and penalized models related to the stationary Vlasov-Poisson equations are studied and extended to the general case of the Vlasov-Maxwell evolution equations.

  9. NORTRIP emission model user guide

    Energy Technology Data Exchange (ETDEWEB)

    Denby, Rolstad Bruce

    2012-07-01

    The NORTRIP emission model has been developed at NILU, in conjunction with other Nordic institutes, to model non-exhaust traffic induced emissions. This short summary document explains how to run the NORTRIP model from the MATLAB environment or by using the executable user interface version. It also provides brief information on input files and the model architecture.(Author)

  10. PANCHROMATIC OBSERVATIONS OF THE TEXTBOOK GRB 110205A: CONSTRAINING PHYSICAL MECHANISMS OF PROMPT EMISSION AND AFTERGLOW

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, W. [Department of Physics, University of Michigan, 450 Church Street, Ann Arbor, MI 48109 (United States); Shen, R. F. [Department of Astronomy and Astrophysics, University of Toronto, Toronto, Ontario M5S 3H4 (Canada); Sakamoto, T. [Center for Research and Exploration in Space Science and Technology (CRESST), NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Beardmore, A. P. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); De Pasquale, M. [Mullard Space Science Laboratory, University College London, Holmbury Road, Holmbury St. Mary, Dorking RH5 6NT (United Kingdom); Wu, X. F.; Zhang, B. [Department of Physics and Astronomy, University of Nevada Las Vegas, Las Vegas, NV 89154 (United States); Gorosabel, J. [Instituto de Astrofisica de Andalucia (IAA-CSIC), 18008 Granada (Spain); Urata, Y. [Institute of Astronomy, National Central University, Chung-Li 32054, Taiwan (China); Sugita, S. [EcoTopia Science Institute, Nagoya University, Furo-cho, chikusa, Nagoya 464-8603 (Japan); Pozanenko, A. [Space Research Institute (IKI), 84/32 Profsoyuznaya St., Moscow 117997 (Russian Federation); Nissinen, M. [Taurus Hill Observatory, Haerkaemaeentie 88, 79480 Kangaslampi (Finland); Sahu, D. K. [CREST, Indian Institute of Astrophysics, Koramangala, Bangalore 560034 (India); Im, M. [Center for the Exploration of the Origin of the Universe, Department of Physics and Astronomy, FPRD, Seoul National University, Shillim-dong, San 56-1, Kwanak-gu, Seoul (Korea, Republic of); Ukwatta, T. N. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Andreev, M. 
[Terskol Branch of Institute of Astronomy of RAS, Kabardino-Balkaria Republic 361605 (Russian Federation); Klunko, E., E-mail: zwk@umich.edu, E-mail: rfshen@astro.utoronto.ca, E-mail: zhang@physics.unlv.edu [Institute of Solar-Terrestrial Physics, Lermontov St., 126a, Irkutsk 664033 (Russian Federation); and others

    2012-06-01

    We present a comprehensive analysis of a bright, long-duration (T_90 ≈ 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by the Swift/UVOT, ROTSE-IIIb, and BOOTES telescopes while the gamma-ray burst (GRB) was still radiating in the γ-ray band, with the optical light curve correlated with the γ-ray data. Nearly 200 s of simultaneous observations were obtained from the optical through X-ray to γ-ray bands (1 eV to 5 MeV), making this one of the exceptional cases for studying the broadband spectral energy distribution during the prompt emission phase. In particular, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard synchrotron emission model in the fast-cooling regime. Shortly after the prompt emission (≈1100 s), a bright (R = 14.0) optical emission hump with a very steep rise (α ≈ 5.5) was observed, which we interpret as reverse shock (RS) emission. This is the first time the rising phase of an RS component has been closely observed. The full optical and X-ray afterglow light curves can be interpreted within the standard reverse shock (RS) plus forward shock (FS) model. In general, the high-quality prompt and afterglow data allow us to apply the standard fireball model to extract valuable information, including the radiation mechanism (synchrotron), the radius of the prompt emission (R_GRB ≈ 3 × 10^13 cm), the initial Lorentz factor of the outflow (Γ_0 ≈ 250), the composition of the ejecta (mildly magnetized), the collimation angle, and the total energy budget.

  11. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). PRF are 2-D causal finite-context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard-square constraint, given by forbidding neighbouring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized with respect to the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no-uniform-squares constraint.
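
    The transfer-matrix idea behind such capacity calculations can be sketched numerically. The following is a minimal illustration for the hard-square constraint (no two adjacent 1s), not the Pickard-random-field machinery of the paper; the strip entropies it prints approach the known hard-square capacity of about 0.588 bits/symbol as the width grows.

```python
import numpy as np
from itertools import product

def hard_square_capacity(width):
    """Bits/symbol of the hard-square constraint on a strip of the given
    width (free boundaries), via the largest transfer-matrix eigenvalue."""
    # All rows with no two horizontally adjacent 1s.
    rows = [r for r in product((0, 1), repeat=width)
            if all(not (a and b) for a, b in zip(r, r[1:]))]
    # T[i, j] = 1 if row j may sit directly below row i (no vertical 1-1 pair).
    T = np.array([[int(all(not (a and b) for a, b in zip(r1, r2)))
                   for r2 in rows] for r1 in rows], dtype=float)
    lam = max(abs(np.linalg.eigvals(T)))
    return np.log2(lam) / width

for w in (4, 6, 8):
    print(w, round(hard_square_capacity(w), 4))
```

    The same construction handles the no-uniform-2 × 2-squares constraint by changing only the row and stacking predicates.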

  12. Gluon field strength correlation functions within a constrained instanton model

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Esaibegyan, S.V.; Maximov, A.E.; Mikhailov, S.V.

    2000-01-01

    We suggest a constrained instanton (CI) solution in the physical QCD vacuum, which is described by large-scale vacuum field fluctuations. This solution decays exponentially at large distances. It is stable only if the interaction of the instanton with the background vacuum field is small and additional constraints are introduced. The CI solution is explicitly constructed in ansatz form, and the two-point vacuum correlator of the gluon field strengths is calculated in the framework of the effective instanton vacuum model. At small distances the results are qualitatively similar to the single-instanton case; in particular, the D_1 invariant structure is small, in agreement with lattice calculations. (orig.)

  13. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements, and the Becker & Bolton ionizing emissivity data at z ≈ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array, in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and the ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all of these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.

  14. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge is to improve profit by determining the mix of full-time and part-time doctors and by optimally allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department). In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full-time and part-time doctors to maximize total profit; each department, the follower in the hierarchy, makes its appointment-scheduling decision to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wages and demand are treated as fuzzy variables to better describe the real-life situation. We then use a chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single-level programming problem, in order to make it solvable. Finally, numerical experiments were conducted to demonstrate the efficiency and effectiveness of the proposed approaches.

  15. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model, and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties, etc. This thesis discusses how to generate realizations constrained by production data, that is to say, realizations that reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Upscaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of rejection sampling and the Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.
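
    Several of the sampling schemes compared in the thesis build on the Metropolis-Hastings algorithm. Below is a minimal, self-contained sketch of random-walk Metropolis-Hastings conditioning a single hypothetical reservoir parameter on one noisy "production" observation; the forward model and all numbers are illustrative stand-ins for a real flow simulator, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar "forward model": production rate as a function of one
# reservoir parameter k (a stand-in for a full fluid-flow simulator).
def forward(k):
    return 10.0 * np.log1p(k)

k_true = 2.0
obs = forward(k_true) + rng.normal(0.0, 0.5)       # one noisy observation

def log_post(k):
    if k <= 0:
        return -np.inf
    log_prior = -0.5 * np.log(k) ** 2              # log-normal-shaped prior
    log_like = -0.5 * ((obs - forward(k)) / 0.5) ** 2
    return log_prior + log_like

# Random-walk Metropolis-Hastings.
k, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20000):
    k_prop = k + rng.normal(0.0, 0.3)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
        k, lp = k_prop, lp_prop
    samples.append(k)

post = np.array(samples[5000:])                    # drop burn-in
print(f"posterior mean {post.mean():.2f} +/- {post.std():.2f}")
```

    Rejection sampling, by contrast, would draw from the prior and keep realizations whose simulated production matches the data, which becomes prohibitively inefficient as the data tighten.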

  16. Constraining Swiss Methane Emissions from Atmospheric Observations: Sensitivities and Temporal Development

    Science.gov (United States)

    Henne, Stephan; Leuenberger, Markus; Steinbacher, Martin; Eugster, Werner; Meinhardt, Frank; Bergamaschi, Peter; Emmenegger, Lukas; Brunner, Dominik

    2017-04-01

    As in other Western European countries, agricultural sources dominate the methane (CH4) emission budget in Switzerland. 'Bottom-up' estimates of these emissions still carry relatively large uncertainties due to considerable variability and uncertainty in the observed emission factors of the underlying processes (e.g., enteric fermentation, manure management). Here, we present a regional-scale (≈300 × 200 km²) atmospheric inversion study of CH4 emissions in Switzerland making use of the recently established CarboCount-CH network of four stations on the Swiss Plateau, as well as the neighbouring mountain-top sites Jungfraujoch and Schauinsland (Germany). Continuous observations from all CarboCount-CH sites have been available since 2013. We use a high-resolution (7 × 7 km²) Lagrangian particle dispersion model (FLEXPART-COSMO) in combination with two different inversion systems (Bayesian and extended Kalman filter) to estimate spatially and temporally resolved CH4 emissions for the Swiss domain in the period 2013 to 2016. An extensive set of sensitivity inversions is used to assess the overall uncertainty of our inverse approach. In general, we find good agreement between our 'top-down' estimate of total Swiss CH4 emissions and the national 'bottom-up' reporting. In addition, a robust emission seasonality, with reduced wintertime values, is seen in all years. No significant trend or year-to-year variability was observed for the analysed four-year period, again in agreement with a very small downward trend in the national 'bottom-up' reporting. Special attention is given to the influence of boundary conditions as taken from different global-scale model simulations (TM5, FLEXPART) and remote observations. We find that uncertainties in the boundary conditions can induce large offsets in the national total emissions. However, spatial emission patterns are less sensitive to the choice of boundary condition. Furthermore and in order to demonstrate the …

  17. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    Science.gov (United States)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another, with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
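
    The core LSQI computation, minimizing ||Ax − b|| subject to a quadratic inequality ||x|| ≤ α, can be sketched by bisecting on the Lagrange multiplier of the regularized normal equations. This is a generic sketch of the LSQI idea, not the flight-dynamics implementation described in the abstract.

```python
import numpy as np

def lsqi(A, b, alpha, tol=1e-10):
    """Minimize ||A x - b|| subject to ||x|| <= alpha.

    If the unconstrained solution is feasible it is returned; otherwise the
    multiplier mu of the active constraint is found by bisection on the
    regularized normal equations (A^T A + mu I) x = A^T b.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    if np.linalg.norm(x) <= alpha:                 # constraint inactive
        return x
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(AtA + hi * np.eye(n), Atb)) > alpha:
        hi *= 2.0                                  # grow until feasible
    while hi - lo > tol * (1.0 + hi):
        mu = 0.5 * (lo + hi)
        if np.linalg.norm(np.linalg.solve(AtA + mu * np.eye(n), Atb)) > alpha:
            lo = mu
        else:
            hi = mu
    return np.linalg.solve(AtA + hi * np.eye(n), Atb)  # feasible side

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x = lsqi(A, b, alpha=0.1)
print(round(float(np.linalg.norm(x)), 4))          # when active, norm is ~alpha
```
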

  18. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
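
    The generalization described above amounts to pruning, during Viterbi decoding, any transition that would violate a side constraint. A minimal sketch with a toy two-state HMM and a user-supplied constraint predicate (not the paper's constraint-solving framework):

```python
import numpy as np

# Toy 2-state HMM (states 0/1, observation symbols 0/1).
log_init = np.log([0.5, 0.5])
log_trans = np.log([[0.7, 0.3], [0.4, 0.6]])
log_emit = np.log([[0.9, 0.1], [0.2, 0.8]])    # rows: state, cols: symbol

def viterbi(obs, allowed=lambda t, prev, s: True):
    """Viterbi decoding; `allowed` prunes transitions violating a constraint."""
    n_states = 2
    delta = log_init + log_emit[:, obs[0]]
    back = []
    for t in range(1, len(obs)):
        nd = np.full(n_states, -np.inf)
        bp = np.zeros(n_states, dtype=int)
        for s in range(n_states):
            for p in range(n_states):
                if not allowed(t, p, s):
                    continue
                score = delta[p] + log_trans[p, s] + log_emit[s, obs[t]]
                if score > nd[s]:
                    nd[s], bp[s] = score, p
        back.append(bp)
        delta = nd
    path = [int(np.argmax(delta))]
    for bp in reversed(back):                  # backtrack
        path.append(int(bp[path[-1]]))
    return path[::-1]

obs = [0, 1, 1, 0]
print(viterbi(obs))                                            # unconstrained
print(viterbi(obs, lambda t, p, s: not (p == 1 and s == 1)))   # forbid 1->1 runs
```

    Richer constraints (e.g. on whole prefixes) need more state than this per-transition predicate carries, which is where the paper's constraint-solving formulation comes in.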

  19. Dark matter in a constrained E6 inspired SUSY model

    International Nuclear Information System (INIS)

    Athron, P.; Harries, D.; Nevzorov, R.; Williams, A.G.

    2016-01-01

    We investigate dark matter in a constrained E6-inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of E6 leads to an additional U(1)_N symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However, we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless, we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time, we also show that there is a huge volume of parameter space, with a predominantly Higgsino dark matter candidate that explains all the relic abundance, that will be discoverable with XENON1T. Finally, we demonstrate that for the E6-inspired model the exotic leptoquarks could still be light and within range of future LHC searches.

  20. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes, or even at the global scale, has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence for the ongoing research on the patterns of T:ET and the factors influencing its magnitude across biomes.

  1. Fast optimization of statistical potentials for structurally constrained phylogenetic models

    Directory of Open Access Journals (Sweden)

    Rodrigue Nicolas

    2009-09-01

    Background: Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC) models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimation using computationally heavy Markov chain Monte Carlo sampling algorithms. Results: Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We find that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time for the optimization procedure). Conclusion: Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.

  2. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
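
    The numerical point, that fixed-point iteration can only reach stable solutions while a bracketing (shooting-style) search recovers all of them, can be illustrated on a one-dimensional toy problem, which is of course far simpler than the CMSSM boundary-value problem:

```python
import numpy as np

# Fixed-point form x = g(x) = x**2 has two solutions:
# x = 0 (stable, |g'| < 1) and x = 1 (unstable, |g'| > 1).
def g(x):
    return x ** 2

# Fixed-point iteration: from almost any start it only ever reaches x = 0.
x = 0.99
for _ in range(100):
    x = g(x)
print(round(x, 6))   # the unstable solution at x = 1 is never found

# Scanning for sign changes of f(x) = g(x) - x and bisecting finds BOTH.
def bisect(f, a, b, it=60):
    for _ in range(it):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

f = lambda x: g(x) - x
roots = []
grid = np.linspace(-0.51, 1.49, 41)            # offset so roots are not grid points
for a, b in zip(grid, grid[1:]):
    if f(a) * f(b) < 0:
        roots.append(round(bisect(f, a, b), 6))
print(roots)
```
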

  3. Natural gas fugitive emissions rates constrained by global atmospheric methane and ethane.

    Science.gov (United States)

    Schwietzke, Stefan; Griffin, W Michael; Matthews, H Scott; Bruhwiler, Lori M P

    2014-07-15

    The amount of methane emissions released by the natural gas (NG) industry is a critical and uncertain value for various industry and policy decisions, such as determining the climate implications of using NG over coal. Previous studies have estimated fugitive emissions rates (FER), the fraction of produced NG (mainly methane and ethane) that escapes to the atmosphere, at between 1 and 9%. Most of these studies rely on few and outdated measurements, and some may represent only temporal/regional NG industry snapshots. This study estimates an NG-industry-representative FER using global atmospheric methane and ethane measurements over three decades, together with literature ranges of (i) tracer gas atmospheric lifetimes, (ii) non-NG source estimates, and (iii) fossil fuel fugitive gas hydrocarbon compositions. The modeling suggests an upper-bound global average FER of 5% during 2006-2011, and a most likely FER of 2-4% since 2000, trending downward. These results do not account for highly uncertain natural hydrocarbon seepage, which could lower the FER. Further emissions reductions by the NG industry may be needed to ensure climate benefits over coal during the next few decades.
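
    The budget logic can be illustrated with a one-box model: burden growth plus the first-order sink fixes the total source, and subtracting independently estimated non-NG sources leaves the NG fugitive term. This is a schematic of the idea only; all numbers below are illustrative placeholders, not the paper's values, and the paper's multi-decade, two-tracer treatment is far more involved.

```python
# One-box global methane budget: dB/dt = E_total - B/tau.
TAU = 9.1            # atmospheric CH4 lifetime, years (illustrative)
BURDEN = 4950.0      # global CH4 burden, Tg (illustrative)
GROWTH = 15.0        # observed burden growth, Tg/yr (illustrative)
NON_NG = 480.0       # sum of all non-natural-gas sources, Tg/yr (illustrative)
PRODUCTION = 2500.0  # NG production as CH4-equivalent mass, Tg/yr (illustrative)

sink = BURDEN / TAU                  # first-order loss, Tg/yr
e_total = GROWTH + sink              # total source needed to close the budget
e_ng = e_total - NON_NG              # residual attributed to the NG industry
fer = e_ng / PRODUCTION              # fugitive emission rate
print(f"FER = {100 * fer:.1f}%")
```
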

  4. A HARDCORE model for constraining an exoplanet's core size

    Science.gov (United States)

    Suissa, Gabrielle; Chen, Jingjing; Kipping, David

    2018-05-01

    The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences of the interior structure are impeded by a fundamental degeneracy that exists between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fraction, CRFmin and CRFmax. These hold true even for planets with light volatile envelopes, but require that the planet be fully differentiated and that no layer be denser than iron. With both bounds in hand, a marginal CRF can also be inferred by sampling in between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints reach about a per cent, providing a clear target for observers.
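
    The degeneracy and its bounds are easiest to see in a two-layer toy model with constant layer densities, where the bulk density alone pins down the core radius fraction; varying the assumed layer densities between physical extremes is what yields bounds like CRFmin and CRFmax. The densities below are illustrative, and constant-density layers are a strong simplification of the actual HARDCORE model.

```python
# Two-layer toy: iron core of density RHO_FE inside a silicate mantle of
# density RHO_SI. Bulk density then fixes the core radius fraction (CRF) via
#   rho_bulk = rho_core * CRF**3 + rho_mantle * (1 - CRF**3).
RHO_FE = 8300.0   # kg/m^3, compressed iron (illustrative)
RHO_SI = 4100.0   # kg/m^3, silicate mantle (illustrative)

def core_radius_fraction(rho_bulk, rho_core=RHO_FE, rho_mantle=RHO_SI):
    """Invert the two-layer mass balance for CRF."""
    x = (rho_bulk - rho_mantle) / (rho_core - rho_mantle)
    if not 0.0 <= x <= 1.0:
        raise ValueError("bulk density outside the two-layer range")
    return x ** (1.0 / 3.0)

print(round(core_radius_fraction(5500.0), 2))  # Earth-like bulk density
```
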

  5. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product...
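
    The possibility-based crisp transformation used by such fuzzy chance-constrained models can be sketched for a triangular fuzzy parameter. This is a generic textbook form, not necessarily the paper's exact formulation.

```python
# For a triangular fuzzy demand d = (a, b, c) with peak b,
#   Pos(d <= x) = 0             if x < a
#               = (x - a)/(b - a)  if a <= x <= b
#               = 1             if x > b
# so the chance constraint Pos(d <= x) >= alpha is equivalent to the
# crisp constraint x >= a + alpha * (b - a).

def min_capacity(a, b, c, alpha):
    """Smallest capacity x satisfying Pos(demand <= x) >= alpha."""
    assert a <= b <= c and 0 < alpha <= 1
    return a + alpha * (b - a)

# Demand "around 100, between 80 and 130"; require possibility 0.9.
print(min_capacity(80, 100, 130, 0.9))   # -> 98.0
```
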

  6. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All available traffic data are employed, and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources with constant traffic parameters over the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the density of traffic feeding the local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
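
    The area-source step described above can be sketched in a few lines: a known total of non-line-source emissions is distributed over grid cells in proportion to population density times a traffic-access weight. All numbers are illustrative, and the weighting is a guess at the spirit of the model, not its actual formulation.

```python
import numpy as np

pop = np.array([[120.0, 300.0],
                [80.0, 500.0]])       # inhabitants per grid cell (illustrative)
access = np.array([[0.5, 1.0],
                   [0.2, 1.0]])       # traffic-density weight toward arteries
total_area_emissions = 1000.0         # kg/h attributed to non-line sources

w = pop * access                      # combined allocation weight
area_emis = total_area_emissions * w / w.sum()
print(area_emis.round(1))             # per-cell emissions, summing to the total
```
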

  7. Toward cognitively constrained models of language processing : A review

    NARCIS (Netherlands)

    Vogelzang, Margreet; Mills, Anne C.; Reitter, David; van Rij, Jacolien; Hendriks, Petra; van Rijn, Hedderik

    2017-01-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained

  8. Land-use and land-cover change carbon emissions between 1901 and 2012 constrained by biomass observations

    Science.gov (United States)

    Wei Li; Philippe Ciais; Shushi Peng; Chao Yue; Yilong Wang; Martin Thurner; Sassan S. Saatchi; Almut Arneth; Valerio Avitabile; Nuno Carvalhais; Anna B. Harper; Etsushi Kato; Charles Koven; Yi Y. Liu; Julia E. M. S. Nabel; Yude Pan; Julia Pongratz; Benjamin Poulter; Thomas A. M. Pugh; Maurizio Santoro; Stephen Sitch; Benjamin D. Stocker; Nicolas Viovy; Andy Wiltshire; Rasoul Yousefpour; Sönke Zaehle

    2017-01-01

    The use of dynamic global vegetation models (DGVMs) to estimate CO2 emissions from land-use and land-cover change (LULCC) offers a new window to account for spatial and temporal details of emissions and for ecosystem processes affected by LULCC. One drawback of LULCC emissions from DGVMs, however, is the lack of observational constraint. Here, we...

  9. CONSTRAINING GAMMA-RAY BURST EMISSION PHYSICS WITH EXTENSIVE EARLY-TIME, MULTIBAND FOLLOW-UP

    International Nuclear Information System (INIS)

    Cucchiara, A.; Cenko, S. B.; Bloom, J. S.; Morgan, A.; Perley, D. A.; Li, W.; Butler, N. R.; Filippenko, A. V.; Melandri, A.; Kobayashi, S.; Smith, R. J.; Mundell, C. G.; Steele, I. A.; Hora, J. L.; Da Silva, R. L.; Prochaska, J. X.; Worseck, G.; Fumagalli, M.; Milne, P. A.; Cobb, B.

    2011-01-01

    Understanding the origin and diversity of emission processes responsible for gamma-ray bursts (GRBs) remains a pressing challenge. While prompt and contemporaneous panchromatic observations have the potential to test predictions of the internal-external shock model, extensive multiband imaging has been conducted for only a few GRBs. We present rich, early-time, multiband data sets for two Swift events, GRB 110205A and GRB 110213A. The former shows optical emission from the early stages of the prompt phase, followed by a steep rise in flux up to ∼1000 s after the burst (t^−α with α = −6.13 ± 0.75). We discuss this feature in the context of the reverse-shock scenario and interpret the subsequent single power-law decay as forward-shock dominated. Polarization measurements, obtained with the RINGO2 instrument mounted on the Liverpool Telescope, also provide hints about the nature of the emitting ejecta. The latter event, instead, displays a very peculiar optical to near-infrared light curve, with two achromatic peaks. In this case, while the first peak is probably due to the onset of the afterglow, we interpret the second peak as produced by newly injected material, signifying late-time activity of the central engine.

  10. CONSTRAINING GAMMA-RAY BURST EMISSION PHYSICS WITH EXTENSIVE EARLY-TIME, MULTIBAND FOLLOW-UP

    Energy Technology Data Exchange (ETDEWEB)

    Cucchiara, A.; Cenko, S. B.; Bloom, J. S.; Morgan, A.; Perley, D. A.; Li, W.; Butler, N. R.; Filippenko, A. V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Melandri, A. [INAF, Osservatorio Astronomico di Brera, via E. Bianchi 46, I-23807 Merate (LC) (Italy); Kobayashi, S.; Smith, R. J.; Mundell, C. G.; Steele, I. A. [Astrophysics Research Institute, Liverpool John Moores University, Twelve Quays House, Egerton Wharf, Birkenhead, CH41 1LD (United Kingdom); Hora, J. L. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Da Silva, R. L.; Prochaska, J. X.; Worseck, G.; Fumagalli, M. [Department of Astronomy and Astrophysics, UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Milne, P. A. [Steward Observatory, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85719 (United States); Cobb, B., E-mail: acucchia@ucolick.org [Department of Physics, George Washington University, Corcoran 105, 725 21st St, NW, Washington, DC 20052 (United States); and others

    2011-12-20

    Understanding the origin and diversity of emission processes responsible for gamma-ray bursts (GRBs) remains a pressing challenge. While prompt and contemporaneous panchromatic observations have the potential to test predictions of the internal-external shock model, extensive multiband imaging has been conducted for only a few GRBs. We present rich, early-time, multiband data sets for two Swift events, GRB 110205A and GRB 110213A. The former shows optical emission from the early stages of the prompt phase, followed by a steep rise in flux up to ∼1000 s after the burst (t^−α with α = −6.13 ± 0.75). We discuss this feature in the context of the reverse-shock scenario and interpret the subsequent single power-law decay as forward-shock dominated. Polarization measurements, obtained with the RINGO2 instrument mounted on the Liverpool Telescope, also provide hints about the nature of the emitting ejecta. The latter event, instead, displays a very peculiar optical to near-infrared light curve, with two achromatic peaks. In this case, while the first peak is probably due to the onset of the afterglow, we interpret the second peak as produced by newly injected material, signifying late-time activity of the central engine.

  11. CONSTRAINING VERY HIGH MASS POPULATION III STARS THROUGH He II EMISSION IN GALAXY BDF-521 AT z = 7.01

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Zheng; Fan, Xiaohui; Davé, Romeel; Zabludoff, Ann [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Jiang, Linhua [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China); Oh, S. Peng [Department of Physics, University of California, Broida Hall, Santa Barbara, CA 93106-9530 (United States); Yang, Yujin, E-mail: caiz@email.arizona.edu [Argelander-Institut fuer Astronomie, Auf dem Huegel 71, D-53121 Bonn (Germany)

    2015-01-30

    Numerous theoretical models have long proposed that a strong He II λ1640 emission line is the most prominent and unique feature of massive Population III (Pop III) stars in high-redshift galaxies. The He II λ1640 line strength can constrain the mass and initial mass function (IMF) of Pop III stars. We use the F132N narrowband filter on the Hubble Space Telescope's (HST) Wide Field Camera 3 to look for strong He II λ1640 emission in the galaxy BDF-521 at z = 7.01, one of the most distant spectroscopically confirmed galaxies to date. Using deep F132N narrowband imaging, together with our broadband imaging in the F125W and F160W filters, we do not detect He II emission from this galaxy, but place a 2σ upper limit on the flux of 5.3 × 10^−19 erg s^−1 cm^−2. This measurement corresponds to a 2σ upper limit on the Pop III star formation rate (SFR_PopIII) of ∼0.2 M_⊙ yr^−1, assuming a Salpeter IMF with 50 ≲ M/M_⊙ ≲ 1000. From the high signal-to-noise broadband measurements in F125W and F160W, we fit the UV continuum of BDF-521. The spectral flux density is ∼3.6 × 10^−11 λ^−2.32 erg s^−1 cm^−2 Å^−1, which corresponds to an overall unobscured SFR of ∼5 M_⊙ yr^−1. Our upper limit on SFR_PopIII suggests that massive Pop III stars represent ≲4% of the total star formation. Furthermore, the HST high-resolution imaging suggests that BDF-521 is an extremely compact galaxy, with a half-light radius of 0.6 kpc.

  12. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    International Nuclear Information System (INIS)

    Chen Ximing; Wang Bin; Pan Nana; Gong Yungui

    2011-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.

  13. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
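
    The conjunctive DINA item-response rule the abstract refers to can be written in a few lines; the skill patterns and Q-matrix row below are invented toy values, not data from the paper.

```python
# The DINA item-response function discussed above: with guessing parameter g
# and slip parameter s, P(X = 1 | alpha) = g when the respondent lacks any
# skill the item requires (per its Q-matrix row), and 1 - s otherwise.  The
# skill patterns and Q-matrix row are invented toy values.
def dina_p(alpha, q_row, g, s):
    has_all = all(a >= q for a, q in zip(alpha, q_row))   # conjunctive rule
    return 1.0 - s if has_all else g

print(dina_p([1, 1, 0], [1, 1, 0], g=0.2, s=0.1))   # masters the item -> 0.9
print(dina_p([1, 0, 0], [1, 1, 0], g=0.2, s=0.1))   # missing a skill  -> 0.2
```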

  14. CP properties of symmetry-constrained two-Higgs-doublet models

    CERN Document Server

    Ferreira, P M; Nachtmann, O; Silva, Joao P

    2010-01-01

    The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.

  15. Chance-constrained/stochastic linear programming model for acid rain abatement. I. Complete colinearity and noncolinearity

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J H; McBean, E A; Farquhar, G J

    1985-01-01

    A Linear Programming model is presented for development of acid rain abatement strategies in eastern North America. For a system comprised of 235 large controllable point sources and 83 uncontrolled area sources, it determines the least-cost method of reducing SO₂ emissions to satisfy maximum wet sulfur deposition limits at 20 sensitive receptor locations. In this paper, the purely deterministic model is extended to a probabilistic form by incorporating the effects of meteorologic variability on the long-range pollutant transport processes. These processes are represented by source-receptor-specific transfer coefficients. Experiments for quantifying the spatial variability of transfer coefficients showed their distributions to be approximately lognormal with logarithmic standard deviations consistently about unity. Three methods of incorporating second-moment random variable uncertainty into the deterministic LP framework are described: Two-Stage Programming Under Uncertainty, Chance-Constrained Programming and Stochastic Linear Programming. A composite CCP-SLP model is developed which embodies the two-dimensional characteristics of transfer coefficient uncertainty. Two probabilistic formulations are described involving complete colinearity and complete noncolinearity for the transfer coefficient covariance-correlation structure. The completely colinear and noncolinear formulations are considered extreme bounds in a meteorologic sense and yield abatement strategies of largely didactic value. Such strategies can be characterized as having excessive costs and undesirable deposition results in the completely colinear case and absence of a clearly defined system risk level (other than expected-value) in the noncolinear formulation.
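
    The chance-constrained step described above can be sketched with a tiny deterministic-equivalent LP. The 2-source, 2-receptor numbers below are invented, not the paper's 235-source system, and the colinear tightening (spreads adding) is the assumption being illustrated.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Hedged toy sketch: a chance constraint P(sum_j t_ij*(E_j - r_j) <= D_i) >= alpha
# is replaced by its deterministic equivalent.  Under complete colinearity the
# coefficient spreads add, giving tightened coefficients mu_ij + z_alpha*sigma_ij.
cost = np.array([3.0, 5.0])        # $ per tonne of SO2 removed at each source
E = np.array([10.0, 10.0])         # uncontrolled emissions (tonnes)
mu = np.array([[0.4, 0.2],         # mean transfer coefficients
               [0.1, 0.5]])        # (receptor x source)
sigma = 0.5 * mu                   # assumed coefficient spread
D = np.array([6.0, 7.0])           # wet deposition limits at the receptors
z = norm.ppf(0.95)                 # alpha = 0.95 reliability level

A = mu + z * sigma                 # colinear deterministic-equivalent matrix
# Minimise removal cost subject to A @ (E - r) <= D, i.e. -A @ r <= D - A @ E.
res = linprog(cost, A_ub=-A, b_ub=D - A @ E, bounds=[(0.0, e) for e in E])
print(res.x)                       # least-cost removals at the two sources
```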

  16. Emissions Models and Other Methods to Produce Emission Inventories

    Science.gov (United States)

    An emissions inventory is a summary or forecast of the emissions produced by a group of sources in a given time period. Inventories of air pollution from mobile sources are often produced by models such as the MOtor Vehicle Emission Simulator (MOVES).
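
    At its core an inventory sums activity data times emission factors. A minimal illustration (not the MOVES model itself; the source groups and factors are invented):

```python
# Minimal inventory illustration: total emissions = sum over sources of
# activity * emission factor.  The fleet groups and factors are hypothetical.
fleet = {
    "passenger_cars": {"vehicle_miles": 1.2e9, "g_per_mile": 0.35},
    "heavy_trucks":   {"vehicle_miles": 2.0e8, "g_per_mile": 1.10},
}

def inventory_tonnes(sources):
    """Total emissions in metric tonnes for one pollutant."""
    grams = sum(s["vehicle_miles"] * s["g_per_mile"] for s in sources.values())
    return grams / 1e6

print(f"{inventory_tonnes(fleet):.0f} t")
```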

  17. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)

    2012-10-15

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. Modelling the rotating part as a plate improves both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous models, this finite element treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material so that it works as both a self-sensing sensor and an actuator. A proportional control strategy is then implemented to effectively control the displacement of the tip of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  18. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    International Nuclear Information System (INIS)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian

    2012-01-01

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. Modelling the rotating part as a plate improves both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous models, this finite element treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material so that it works as both a self-sensing sensor and an actuator. A proportional control strategy is then implemented to effectively control the displacement of the tip of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  19. Government control or low carbon lifestyle? – Analysis and application of a novel selective-constrained energy-saving and emission-reduction dynamic evolution system

    International Nuclear Information System (INIS)

    Fang, Guochang; Tian, Lixin; Fu, Min; Sun, Mei

    2014-01-01

    This paper explores a novel selective-constrained energy-saving and emission-reduction (ESER) dynamic evolution system, analyzing the impact of the cost of conserved energy (CCE), government control, low carbon lifestyle and investment in new ESER technology on energy intensity and economic growth. Based on an artificial neural network, the quantitative coefficients of the actual system are identified. Taking the real situation in China as an example, an empirical study is undertaken by adjusting the parameters of the actual system. The dynamic evolution of energy intensity and economic growth is observed, with the results in perfect agreement with the actual situation. The research shows that introducing CCE into the ESER system has a certain restrictive effect on energy intensity in the earlier period. However, with the further development of the actual system, carbon emissions could be better controlled and energy intensity would decline. In the long run, the impact of CCE on economic growth is positive. Government control and low carbon lifestyle play a decisive role in controlling the ESER system and reducing energy intensity. However, the influence of government control on economic growth should be considered at the same time, and the controlling effect of low carbon lifestyle on energy intensity should be strengthened gradually, while the investment in new ESER technology can be neglected. Two different cases of ESER are proposed after a comprehensive analysis. The relations between variables and constraint conditions in the ESER system are harmonized remarkably. A better solution for carrying out ESER is put forward at last, with numerical simulations demonstrating the results. - Highlights: • Use of a nonlinear dynamical method to model the selective-constrained ESER system. • Monotonic evolution curves of energy intensity and economic growth are obtained. • Detailed analysis of the game between government control and low

  20. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combining models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential for predicting future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters via probabilistic inversion. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost ineffective for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or measurement errors. Overall, our results suggest that flux- and biometric-based data, reflecting different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also
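
    The probabilistic-inversion step described above can be sketched with a toy one-parameter Metropolis sampler. The decay model, noise level, and all numbers are invented for illustration, not the paper's 23-parameter carbon model.

```python
import math, random

# Hedged sketch of probabilistic inversion: Metropolis sampling of a single
# rate parameter k from noisy observations of a toy carbon-pool decay curve.
random.seed(0)

def model(k, t):
    return 100.0 * math.exp(-k * t)             # toy pool size over time

obs = [(t, model(0.3, t) + random.gauss(0.0, 2.0)) for t in range(10)]

def log_like(k):                                # Gaussian likelihood, sigma = 2
    return -0.5 * sum((y - model(k, t)) ** 2 / 4.0 for t, y in obs)

samples, k = [], 0.5                            # start away from the truth
for _ in range(5000):
    kp = k + random.gauss(0.0, 0.05)            # symmetric random-walk proposal
    if kp > 0.0 and math.log(max(random.random(), 1e-300)) < log_like(kp) - log_like(k):
        k = kp
    samples.append(k)

post = samples[1000:]                           # discard burn-in
print(sum(post) / len(post))                    # posterior mean, near 0.3
```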

  1. Constraining Microwave Emission from Extensive Air Showers via the MIDAS Experiment

    Science.gov (United States)

    Richardson, Matthew; Privitera, Paolo

    2017-01-01

    Ultra high energy cosmic rays (UHECRs) are accelerated by the most energetic processes in the universe. Upon entering Earth’s atmosphere they produce particle showers known as extensive air showers (EASs). Observatories like the Pierre Auger Observatory sample the particles and light produced by the EASs through large particle detector arrays or nitrogen fluorescence detectors to ascertain the fundamental properties of UHECRs. The large sample of high quality data provided by the Pierre Auger Observatory can be attributed to the hybrid technique which utilizes the two aforementioned techniques simultaneously; however, the limitation of only being able to observe nitrogen fluorescence from EASs on clear moonless nights yields a limited 10% duty cycle for the hybrid technique. One proposal for providing high quality data at increased statistics is the observation of isotropic microwave emission from EASs, as such emission would be observed with a 100% duty cycle. Measurements of microwave emission from laboratory air plasmas conducted by Gorham et al. (2008) produced promising results indicating that the microwave emission should be observable using inexpensive detectors. The Microwave Detection of Air Showers (MIDAS) experiment was built at the University of Chicago to characterize the isotropic microwave emission from EASs and has collected 359 days of observational data at the location of the Pierre Auger experiment. We have performed a time coincidence analysis between this data and data from Pierre Auger and we report a null result. This result places stringent limits on microwave emission from EASs and demonstrates that the laboratory measurements of Gorham et al. (2008) are not applicable to EASs, thus diminishing the feasibility of using isotropic microwave emission to detect EASs.

  2. MOVES (MOTOR VEHICLE EMISSION SIMULATOR) MODEL ...

    Science.gov (United States)

    A computer model, intended eventually to replace the MOBILE model and to incorporate the NONROAD model, that will provide the ability to estimate criteria and toxic air pollutant emission factors and emission inventories specific to the areas and time periods of interest, at scales ranging from local to national. The goal is the development of a new emission factor and inventory model for mobile-source emissions. The model will be used by air pollution modelers within EPA and at the state and local levels.

  3. Top ten models constrained by b → sγ

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, J.L. [Stanford Univ., CA (United States)

    1994-12-01

    The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.

  4. Emission Constrained Multiple-Pulse Fuel Injection Optimisation and Control for Fuel-Efficient Diesel Engines

    NARCIS (Netherlands)

    Luo, X.; Jager, B. de; Willems, F.P.T.

    2015-01-01

    With the application of multiple-pulse fuel injection profiles, the performance of diesel engines is enhanced in terms of low fuel consumption and low engine-out emission levels. However, the calibration effort increases due to a larger number of injection timing parameters. The difficulty of

  5. Emission constrained multiple-pulse fuel injection optimisation and control for fuel-efficient diesel engines

    NARCIS (Netherlands)

    Luo, X.; Jager, de A.G.; Willems, F.P.T.

    2015-01-01

    With the application of multiple-pulse fuel injection profiles, the performance of diesel engines is enhanced in terms of low fuel consumption and low engine-out emission levels. However, the calibration effort increases due to a larger number of injection timing parameters. The difficulty of

  6. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

    Constrained Optimum Path (COP) problems appear in many real-life applications, especially in communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of constraint-based local search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added to the model. Computational results show the significance of the approach.
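
    The "repair violated constraints by local moves" idea behind constraint-based local search can be sketched with min-conflicts on 8-queens; this is a generic illustration of the paradigm, not the paper's edge-disjoint-paths model.

```python
import random

# Generic constraint-based local-search sketch: min-conflicts on 8-queens.
# Repeatedly pick a violated row and move its queen to a least-conflicted
# column, with random tie-breaking to escape plateaus.
random.seed(1)
N = 8
col = [random.randrange(N) for _ in range(N)]     # queen column in each row

def conflicts(c, row, value):
    """Number of queens attacking a queen placed at (row, value)."""
    return sum(1 for r in range(N) if r != row and
               (c[r] == value or abs(c[r] - value) == abs(r - row)))

for _ in range(20000):
    bad = [r for r in range(N) if conflicts(col, r, col[r]) > 0]
    if not bad:
        break                                     # all constraints satisfied
    r = random.choice(bad)                        # pick a violated row
    best = min(conflicts(col, r, v) for v in range(N))
    col[r] = random.choice([v for v in range(N)
                            if conflicts(col, r, v) == best])

print(col, all(conflicts(col, r, col[r]) == 0 for r in range(N)))
```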

  7. A constrained Rasch model of trace redintegration in serial recall.

    Science.gov (United States)

    Roodenrys, Steven; Miller, Leonie M

    2008-04-01

    The notion that verbal short-term memory tasks, such as serial recall, make use of information in long-term as well as in short-term memory is instantiated in many models of these tasks. Such models incorporate a process in which degraded traces retrieved from a short-term store are reconstructed, or redintegrated (Schweickert, 1993), through the use of information in long-term memory. This article presents a conceptual and mathematical model of this process based on a class of item-response theory models. It is demonstrated that this model provides a better fit to three sets of data than does the multinomial processing tree model of redintegration (Schweickert, 1993) and that a number of conceptual accounts of serial recall can be related to the parameters of the model.
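
    The item-response-theory form such models build on is the Rasch (1PL) logistic function: recall probability depends on the gap between a trace-quality parameter and an item-difficulty parameter. A minimal sketch with invented numbers:

```python
import math

# Rasch / 1PL form underlying the redintegration account above: probability
# of correctly reconstructing an item as a logistic function of the gap
# between trace quality (theta) and item difficulty (b).  Values are invented.
def p_redintegrate(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

for theta in (-1.0, 0.0, 1.0):
    print(theta, round(p_redintegrate(theta, 0.0), 3))
```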

  8. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  9. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
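
    The core construction, an ODE giving each subpopulation's mean and a mixture density around those means, can be sketched in a few lines. All rates, weights, and noise levels below are invented, not fitted values from the paper.

```python
import math, random

# Hedged sketch of an ODE-constrained mixture: each subpopulation's mean
# response follows an ODE (here dx/dt = -k*x, solved analytically) and the
# population is a two-component Gaussian mixture around those means.
random.seed(2)

def ode_mean(k, t, x0=1.0):
    return x0 * math.exp(-k * t)            # closed-form ODE solution

def mixture_loglik(data, t, w, k1, k2, sd=0.05):
    m1, m2 = ode_mean(k1, t), ode_mean(k2, t)
    def norm_pdf(x, m):
        return math.exp(-0.5 * ((x - m) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return sum(math.log(w * norm_pdf(x, m1) + (1 - w) * norm_pdf(x, m2))
               for x in data)

# synthetic single-cell readouts at t = 1 from a 30/70 mix of fast/slow cells
data = [ode_mean(2.0, 1.0) + random.gauss(0, 0.05) if random.random() < 0.3
        else ode_mean(0.5, 1.0) + random.gauss(0, 0.05) for _ in range(200)]
print(mixture_loglik(data, 1.0, w=0.3, k1=2.0, k2=0.5))
```

    In a real fit the log-likelihood would be maximised (or sampled) over w, k1, k2; here it is simply evaluated at the generating parameters.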

  10. Constraining The Abundance Of Massive Black Hole Binaries By Spectroscopic Monitoring Of Quasars With Offset Broad Emission Lines

    Science.gov (United States)

    Liu, Xin; Shen, Y.

    2012-05-01

    A fraction of quasars have long been known to show significant bulk velocity offsets (of a few hundred to thousands of km/s) in their broad permitted emission lines with respect to the host galaxy systemic redshift. Various scenarios may explain these features, such as massive black hole binaries or broad-line-region gas kinematics. As previously demonstrated by the dedicated work of Eracleous and colleagues, long-term spectroscopic monitoring provides a promising test to discriminate between alternative scenarios. Here, we present a sample of 300 shifted-line quasars homogeneously selected from the SDSS DR7. For 60 of them, we have obtained second-epoch optical spectra using MMT/BCS, ARC 3.5m/DIS, and/or FLWO 1.5m/FAST. These new observations, combined with the existing SDSS spectra, enable us to constrain the velocity drifts of these shifted broad lines over time baselines of a few years up to a decade. Previous work has focused on objects with extreme velocity offsets (> 1000 km/s). Our work extends to the parameter space of smaller velocity offsets, where larger velocity drifts would be expected in the binary scenario. Our results may be used to identify strong candidates for, and to constrain the abundance of, massive black hole binaries, which are expected in the hierarchical universe but have so far been elusive.
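
    The drift measurement itself reduces to converting a centroid shift between epochs into a velocity via v = c·Δλ/λ. A minimal sketch; the centroid values and baseline are invented, not measurements from the sample.

```python
# Converting a measured broad-line centroid shift between two spectroscopic
# epochs into a velocity drift, v = c * (delta lambda / lambda), divided by
# the time baseline.  The numbers below are hypothetical.
C_KM_S = 299792.458                       # speed of light, km/s

def velocity_drift(lam1, lam2, baseline_yr):
    """Velocity drift in km/s per year from two line centroids (Angstroms)."""
    v = C_KM_S * (lam2 - lam1) / lam1     # non-relativistic Doppler shift
    return v / baseline_yr

# e.g. an H-beta centroid moving 2 A over an 8-year first-to-second-epoch baseline
print(round(velocity_drift(4861.0, 4863.0, 8.0), 1), "km/s/yr")
```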

  11. A new multi-objective reserve constrained combined heat and power dynamic economic emission dispatch

    International Nuclear Information System (INIS)

    Niknam, Taher; Azizipanah-Abarghooee, Rasoul; Roosta, Alireza; Amiri, Babak

    2012-01-01

    Combined heat and power units are playing an ever increasing role in conventional power stations due to advantages such as reduced emissions and operational cost savings. This paper investigates a more practical formulation of the complex non-convex, non-smooth and non-linear multi-objective dynamic economic emission dispatch that incorporates combined heat and power units. Integrating these types of units, and their power ramp constraints, require an efficient tool to cope with the joint characteristics of power and heat. Unlike previous approaches, the spinning reserve requirements of this system are clearly formulated in the problem. In this way, a new multi-objective optimisation based on an enhanced firefly algorithm is proposed to achieve a set of non-dominated (Pareto-optimal) solutions. A new tuning parameter based on a chaotic mechanism and novel self adaptive probabilistic mutation strategies are used to improve the overall performance of the algorithm. The numerical results demonstrate how the proposed framework was applied in real time studies. -- Highlights: ► Investigate a practical formulation of the DEED (Dynamic Economic Emission Dispatch). ► Consider combined heat and power units. ► Consider power ramp constraints. ► Consider the system spinning reserve requirements. ► Present a new multi-objective optimization firefly.
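
    The firefly move rule the algorithm enhances can be sketched on a toy single-objective sphere function; this is not the paper's multi-objective CHP dispatch, and all parameter values are illustrative choices.

```python
import math, random

# Sketch of the basic firefly update: each firefly moves toward brighter
# (lower-cost) fireflies with attractiveness beta0*exp(-gamma*r^2), plus a
# random step alpha that is cooled over iterations.
random.seed(3)
dim, n = 2, 15
beta0, gamma, alpha = 1.0, 0.1, 0.2
pop = [[random.uniform(-4.0, 4.0) for _ in range(dim)] for _ in range(n)]

def cost(x):
    return sum(v * v for v in x)                # brightness = -cost

for _ in range(200):
    for i in range(n):
        for j in range(n):
            if cost(pop[j]) < cost(pop[i]):     # j is brighter: move i toward j
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)
                pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                          for a, b in zip(pop[i], pop[j])]
    alpha *= 0.97                               # shrink the random step

print(min(cost(x) for x in pop))                # best cost found
```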

  12. Constraining new physics with collider measurements of Standard Model signatures

    Energy Technology Data Exchange (ETDEWEB)

    Butterworth, Jonathan M. [Department of Physics and Astronomy, University College London, Gower St., London, WC1E 6BT (United Kingdom); Grellscheid, David [IPPP, Department of Physics, Durham University, Durham, DH1 3LE (United Kingdom); Krämer, Michael; Sarrazin, Björn [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University, Sommerfeldstr. 16, 52056 Aachen (Germany); Yallup, David [Department of Physics and Astronomy, University College London, Gower St., London, WC1E 6BT (United Kingdom)

    2017-03-14

    A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented. The method, ‘Constraints On New Theories Using Rivet’, CONTUR, exploits the fact that particle-level differential measurements made in fiducial regions of phase-space have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. The CONTUR approach should be seen as complementary to the discovery potential of direct searches, being designed to eliminate inconsistent BSM proposals in a context where many (but perhaps not all) measurements are consistent with the Standard Model. We demonstrate, using a competitive simplified dark matter model, the power of this approach. The CONTUR method is highly scalable to other models and future measurements.

  13. CONSTRAINING A MODEL OF TURBULENT CORONAL HEATING FOR AU MICROSCOPII WITH X-RAY, RADIO, AND MILLIMETER OBSERVATIONS

    International Nuclear Information System (INIS)

    Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.

    2013-01-01

    Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms.

  14. Inference with constrained hidden Markov models in PRISM

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints and present constraint solving techniques for efficient inference. De......_different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.

  15. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
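
    The multiplicative perturbation at the heart of SPPT can be sketched in a few lines: the parametrised tendency T is multiplied by (1 + r) with r a smooth random pattern. Here r is a simple AR(1) process in time standing in for ECMWF's spectral space-time pattern; the numbers are illustrative only.

```python
import math, random

# Minimal SPPT-style sketch: multiply each tendency by (1 + r), where r is a
# temporally correlated AR(1) perturbation, clipped so the tendency keeps
# its sign.  Parameter values are invented for illustration.
random.seed(4)
phi, sigma = 0.9, 0.3              # AR(1) memory and perturbation amplitude

def sppt_series(tendencies):
    r, out = 0.0, []
    for T in tendencies:
        r = phi * r + sigma * math.sqrt(1 - phi ** 2) * random.gauss(0, 1)
        rc = max(-1.0, min(1.0, r))        # clip to [-1, 1]
        out.append(T * (1.0 + rc))
    return out

print(sppt_series([1.0] * 5))
```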

  16. Constraining the High-Energy Emission from Gamma-Ray Bursts with Fermi

    Science.gov (United States)

    Gehrels, Neil; Harding, A. K.; Hays, E.; Racusin, J. L.; Sonbas, E.; Stamatikos, M.; Guirec, S.

    2012-01-01

    We examine 288 GRBs detected by the Fermi Gamma-ray Space Telescope's Gamma-ray Burst Monitor (GBM) that fell within the field-of-view of Fermi's Large Area Telescope (LAT) during the first 2.5 years of observations and showed no evidence for emission above 100 MeV. We report the photon flux upper limits in the 0.1-10 GeV range during the prompt emission phase as well as for fixed 30 s and 100 s integrations starting from the trigger time for each burst. We compare these limits with the fluxes that would be expected from extrapolations of spectral fits presented in the first GBM spectral catalog and infer that roughly half of the GBM-detected bursts either require spectral breaks between the GBM and LAT energy bands or have intrinsically steeper spectra above the peak of the νF_ν spectrum (E_pk). In order to distinguish between these two scenarios, we perform joint GBM and LAT spectral fits to the 30 brightest GBM-detected bursts and find that a majority of these bursts are indeed softer above E_pk than would be inferred from fitting the GBM data alone. Approximately 20% of this spectroscopic subsample show statistically significant evidence for a cut-off in their high-energy spectra, which, if assumed to be due to γγ attenuation, places limits on the maximum Lorentz factor associated with the relativistic outflow producing this emission. All of these latter bursts have maximum Lorentz factor estimates that are well below the minimum Lorentz factors calculated for LAT-detected GRBs, revealing a wide distribution in the bulk Lorentz factor of GRB outflows and indicating that LAT-detected bursts may represent the high end of this distribution.

  17. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  18. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  19. Modeling Power-Constrained Optimal Backlight Dimming for Color Displays

    DEFF Research Database (Denmark)

    Burini, Nino; Nadernejad, Ehsan; Korhonen, Jari

    2013-01-01

    In this paper, we present a framework for modeling color liquid crystal displays (LCDs) having local light-emitting diode (LED) backlight with dimming capability. The proposed framework includes critical aspects like leakage, clipping, light diffusion and human perception of luminance and allows...

  20. A marked correlation function for constraining modified gravity models

    Science.gov (United States)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.

  1. A marked correlation function for constraining modified gravity models

    Energy Technology Data Exchange (ETDEWEB)

    White, Martin, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
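
    The claim that a marked correlation function is easy to compute can be made concrete with a brute-force sketch of the standard estimator M(r) = WW(r) / (m̄² DD(r)), the ratio of mark-weighted to unweighted pair counts in separation bins. The function name and binning scheme are illustrative; in the paper's application each mark would be a decreasing function of local density, but any per-object mark works with the same estimator.

```python
import numpy as np

def marked_correlation(points, marks, edges):
    """Brute-force marked correlation M(r) = WW(r) / (mbar^2 * DD(r)):
    mark-weighted pair counts over unweighted pair counts, binned by
    pair separation. Returns one value of M per bin in `edges`."""
    # all pairwise separations (upper triangle, i < j)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    sep = d[iu]
    w = (marks[:, None] * marks[None, :])[iu]   # product of the two marks
    dd, _ = np.histogram(sep, bins=edges)             # unweighted pair counts
    ww, _ = np.histogram(sep, bins=edges, weights=w)  # mark-weighted counts
    mbar2 = marks.mean() ** 2
    with np.errstate(invalid="ignore", divide="ignore"):
        return ww / (mbar2 * dd)  # NaN where a bin holds no pairs
```

    With uniform marks M(r) = 1 in every occupied bin by construction, which makes a convenient sanity check; any density-dependent mark then shows up as a scale-dependent departure from unity.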

  2. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  3. Risk reserve constrained economic dispatch model with wind power penetration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, W.; Sun, H.; Peng, Y. [Department of Electrical and Electronics Engineering, Dalian University of Technology, Dalian, 116024 (China)

    2010-12-15

    This paper develops a modified economic dispatch (ED) optimization model with wind power penetration. Due to the uncertain nature of wind speed, both overestimation and underestimation of the available wind power are compensated using the up and down spinning reserves. To determine these two reserve demands, risk-based up and down spinning reserve constraints are presented, considering not only the uncertainty of available wind power but also the load forecast error and generator outage rates. The predictor-corrector primal-dual interior point method is utilized to solve the proposed ED model. Simulation results of a system with ten conventional generators and one wind farm demonstrate the effectiveness of the proposed method. (authors)
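
    A deterministic toy version of the dispatch problem above can clarify what the reserve constraints do: with linear costs and purely aggregate up/down reserve requirements, a merit-order fill of the cheapest units is optimal. The function below is an illustrative stand-in under those simplifying assumptions, not the paper's risk-based model or its predictor-corrector interior point solver.

```python
def dispatch(cost, pmin, pmax, demand, wind, r_up, r_down):
    """Toy economic dispatch with aggregate spinning-reserve constraints.

    Conventional units must cover demand - wind while leaving r_up of
    upward headroom (sum(pmax - p) >= r_up, to cover wind overestimation)
    and r_down of downward headroom (sum(p - pmin) >= r_down, to cover
    wind underestimation). Returns the dispatch list, or None if the
    reserve requirements make the problem infeasible."""
    need = demand - wind
    # both reserve constraints only involve sum(p) = need, so feasibility
    # reduces to a single interval check
    if not (sum(pmin) + r_down <= need <= sum(pmax) - r_up):
        return None
    p = list(pmin)
    need -= sum(pmin)
    # merit order: load the cheapest units first, up to their capacity
    for i in sorted(range(len(cost)), key=lambda i: cost[i]):
        take = min(need, pmax[i] - pmin[i])
        p[i] += take
        need -= take
    return p
```

    Raising the up-reserve requirement shrinks the usable capacity until the net demand can no longer be met, which is exactly the trade-off the paper's risk-based reserve constraints formalize probabilistically.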

  4. Planting Jatropha curcas on Constrained Land: Emission and Effects from Land Use Change

    Science.gov (United States)

    Firdaus, M. S.; Husni, M. H. A.

    2012-01-01

    A study was carried out to assess carbon emission and carbon loss caused by land use change (LUC) of converting a wasteland into a Jatropha curcas plantation. The study was conducted for 12 months at a newly established Jatropha curcas plantation in Port Dickson, Malaysia. Assessments of soil carbon dioxide (CO2) flux, changes of soil total carbon, and plant biomass loss and growth were made on the wasteland and on the established plantation to determine the effects of land preparation (i.e., tilling) and removal of the wasteland's native vegetation. Overall soil CO2 flux showed no significant difference (P<0.05) between the two plots, while no significant changes (P<0.05) in soil total carbon were detected at either plot. It took 1.5 years for the growth of Jatropha curcas to recover the biomass carbon stock lost during land conversion. As far as the present study is concerned, converting wasteland to Jatropha curcas showed no adverse effects on the loss of carbon from soil and biomass and did not exacerbate soil respiration. PMID:22545018

  5. Constraining the neutrino emission of gravitationally lensed Flat-Spectrum Radio Quasars with ANTARES data

    Energy Technology Data Exchange (ETDEWEB)

    Adrián-Martínez, S.; Ardid, M.; Bou-Cabo, M. [Institut d' Investigació per a la Gestió Integrada de les Zones Costaneres (IGIC), Universitat Politècnica de València, C/ Paranimf 1, Gandia, 46730 Spain (Spain); Albert, A. [GRPHE - Institut universitaire de technologie de Colmar, 34 rue du Grillenbreit BP 50568, Colmar, 68008 France (France); André, M. [Technical University of Catalonia, Laboratory of Applied Bioacoustics, Rambla Exposició, Vilanova i la Geltrú, Barcelona, 08800 Spain (Spain); Anton, G. [Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, Erlangen, 91058 Germany (Germany); Aubert, J.-J.; Bertin, V.; Brunner, J.; Busto, J. [Aix Marseille Université, CNRS/IN2P3, CPPM UMR 7346, Marseille, 13288 France (France); Baret, B. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, Paris Cedex 13, F-75205 France (France); Barrios-Martí, J. [IFIC - Instituto de Física Corpuscular, Edificios Investigación de Paterna, CSIC - Universitat de València, Apdo de Correos 22085, Valencia, 46071 Spain (Spain); Basa, S. [LAM - Laboratoire d' Astrophysique de Marseille, Pôle de l' Étoile Site de Château-Gombert, rue Frédéric Joliot-Curie 38, Marseille Cedex 13, 13388 France (France); Biagi, S. [INFN - Sezione di Bologna, Viale Berti-Pichat 6/2, Bologna, 40127 Italy (Italy); Bogazzi, C.; Bormuth, R.; Bouwhuis, M.C.; Bruijn, R. [Nikhef, Science Park 105, Amsterdam, 1098XG The Netherlands (Netherlands); Capone, A. [INFN -Sezione di Roma, P.le Aldo Moro 2, Roma, 00185 Italy (Italy); Caramete, L., E-mail: antares.spokesperson@in2p3.fr [Institute for Space Sciences, Bucharest, Măgurele, R-77125 Romania (Romania); and others

    2014-11-01

    This paper proposes to exploit gravitational lensing effects to improve the sensitivity of neutrino telescopes to the intrinsic neutrino emission of distant blazar populations. This strategy is illustrated with a search for cosmic neutrinos in the direction of four distant and gravitationally lensed Flat-Spectrum Radio Quasars. The magnification factor is estimated for each system assuming a singular isothermal profile for the lens. Based on data collected from 2007 to 2012 by the ANTARES neutrino telescope, the strongest constraint is obtained from the lensed quasar B0218+357, providing a limit on the total neutrino luminosity of this source of 1.08×10^46 erg s^-1. This limit is about one order of magnitude lower than those previously obtained in the ANTARES standard point source searches with non-lensed Flat-Spectrum Radio Quasars.

  6. Constraining quantum collapse inflationary models with CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Benetti, Micol; Alcaniz, Jailson S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro, RJ (Brazil); Landau, Susana J., E-mail: micolbenetti@on.br, E-mail: slandau@df.uba.ar, E-mail: alcaniz@on.br [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires and IFIBA, CONICET, Ciudad Universitaria, PabI, Buenos Aires 1428 (Argentina)

    2016-12-01

    The hypothesis of the self-induced collapse of the inflaton wave function was proposed as responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which led to a perfect scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows one to distinguish departures from the standard approach due to the inclusion of the collapse hypothesis. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence as the minimal standard cosmological model ΛCDM, while the other scenario is weakly disfavoured with respect to the standard cosmology.

  7. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols. Some sites also implement additional protocols. There are currently 7 TEAM Sites with plans to grow the network to 15 by June 30, 2009 and 50 TEAM Sites by the end of 2010. At each TEAM Site, data is gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data is organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System - it consumes and executes spatio-temporal queries, and analytical functions that are performed on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types including types for observation objects (e.g. bird, butterfly and trees), sampling unit, person, role, protocol, site and the relationship of these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read operations, insert operations and update operations. Following are some typical operations: The operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block
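
    The read operation described above, get(site, protocol, start time, end time), can be sketched as a small in-memory stand-in. The `Observation` fields and the `TeamStore` class are hypothetical simplifications of the TEAM spatio-temporal data model (protocol versioning, sampling-unit hierarchy, and data collectors are omitted).

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    site: str
    protocol: str          # versioned in the real TEAM model
    sampling_unit: str
    timestamp: datetime
    attributes: dict       # attribute values of the observation object

class TeamStore:
    """Minimal in-memory sketch of the TEAM read/insert operations."""
    def __init__(self):
        self.records = []

    def insert(self, obs):
        self.records.append(obs)

    def get(self, site, protocol, start, end):
        """Return all records for a protocol at a site whose timestamp
        falls in [start, end] -- the get(...) operation in the abstract."""
        return [r for r in self.records
                if r.site == site and r.protocol == protocol
                and start <= r.timestamp <= end]
```

    In the real system such queries are executed by the database underlying the TEAM Information System; the sketch only makes the shape of the operation concrete.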

  8. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address the issues in the area of design customization, this paper describes the specification and application of constrained surface deformation, and reports an experimental performance comparison of three effective similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference level between the deformed new design and the initial sample model and indicates whether that difference is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape-histogram-based method, the skeleton-based method, and the U-system-moment-based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape-histogram-based method gained the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  9. A Constrained Standard Model: Effects of Fayet-Iliopoulos Terms

    International Nuclear Information System (INIS)

    Barbieri, Riccardo; Hall, Lawrence J.; Nomura, Yasunori

    2001-01-01

    In (1), the one-Higgs-doublet standard model was obtained by an orbifold projection of a 5D supersymmetric theory in an essentially unique way, resulting in a prediction for the Higgs mass m_H = 127 ± 8 GeV and for the compactification scale 1/R = 370 ± 70 GeV. The dominant one-loop contribution to the Higgs potential was found to be finite, while the above uncertainties arose from quadratically divergent brane Z factors and from other higher-loop contributions. In (3), a quadratically divergent Fayet-Iliopoulos term was found at one loop in this theory. We show that the resulting uncertainties in the predictions for the Higgs boson mass and the compactification scale are small, about 25% of the uncertainties quoted above, and hence do not affect the original predictions. However, a tree-level brane Fayet-Iliopoulos term could, if large enough, modify these predictions, especially for 1/R.

  10. Planting Jatropha curcas on Constrained Land: Emission and Effects from Land Use Change

    Directory of Open Access Journals (Sweden)

    M. S. Firdaus

    2012-01-01

    Full Text Available A study was carried out to assess carbon emission and carbon loss caused by land use change (LUC) of converting a wasteland into a Jatropha curcas plantation. The study was conducted for 12 months at a newly established Jatropha curcas plantation in Port Dickson, Malaysia. Assessments of soil carbon dioxide (CO2) flux, changes of soil total carbon, and plant biomass loss and growth were made on the wasteland and on the established plantation to determine the effects of land preparation (i.e., tilling) and removal of the wasteland's native vegetation. Overall soil CO2 flux showed no significant difference (P<0.05) between the two plots, while no significant changes (P<0.05) in soil total carbon at both plots were detected. It took 1.5 years for the growth of Jatropha curcas to recover the biomass carbon stock lost during land conversion. As far as the present study is concerned, converting wasteland to Jatropha curcas showed no adverse effects on the loss of carbon from soil and biomass and did not exacerbate soil respiration.

  11. Evaluation of green house gas emissions models.

    Science.gov (United States)

    2014-11-01

    The objective of the project is to evaluate the GHG emissions models used by transportation agencies and industry leaders. Factors in the vehicle operating environment that may affect modal emissions, such as external conditions, vehicle fleet c...

  12. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP), the subject of this study, is one such principle. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  13. Improved Modeling Approaches for Constrained Sintering of Bi-Layered Porous Structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Esposito, Vincenzo

    2012-01-01

    Shape instabilities during constrained sintering experiments of bi-layer porous and dense cerium gadolinium oxide (CGO) structures have been analyzed. An analytical and a numerical model based on the continuum theory of sintering have been implemented to describe the evolution of bow and densificat...

  14. Modeling Formaldehyde Emission in Comets

    Science.gov (United States)

    Disanti, M. A.; Reuter, D. C.; Bonev, B. P.; Mumma, M. J.; Villanueva, G. L.

    Modeling fluorescent emission from monomeric formaldehyde (H2CO) forms an integral part of our overall comprehensive program of measuring the volatile composition of comets through high-resolution (RP ~ 25,000) infrared spectroscopy using CSHELL at the IRTF and NIRSPEC at Keck II. The H2CO spectra contain lines from both the nu1 (symmetric CH2 stretch) and nu5 (asymmetric CH2 stretch) bands near 3.6 microns. We have acquired high-quality spectra of twelve Oort cloud comets, and at least six of these show clear emission from H2CO. We also detected H2CO with NIRSPEC in one Jupiter Family comet, 9P/Tempel 1, during Deep Impact observations. Our H2CO model, originally developed to interpret low-resolution spectra of comets Halley and Wilson (Reuter et al. 1989 Ap J 341:1045), predicts individual line intensities (g-factors) as a function of rotational temperature for approximately 1300 lines having energies up to approximately 400 cm^-1 above the ground state. Recently, it was validated through comparison with CSHELL spectra of C/2002 T7 (LINEAR), where newly developed analyses were applied to obtain robust determinations of both the rotational temperature and abundance of H2CO (DiSanti et al. 2006 Ap J 650:470). We are currently in the process of extending the model to higher rotational energy (i.e., higher rotational quantum number) in an attempt to improve the fit to high-J lines in our spectra of C/T7 and other comets. Results will be presented, and implications discussed.

  15. Modelling carbon dioxide emissions from agricultural soils in Canada.

    Science.gov (United States)

    Yadav, Dhananjay; Wang, Junye

    2017-11-01

    Agricultural soils are a leading source of atmospheric greenhouse gas (GHG) emissions and are major contributors to global climate change. Carbon dioxide (CO2) makes up 20% of the total GHG emitted from agricultural soil. Therefore, an evaluation of CO2 emissions from agricultural soil is necessary in order to make mitigation strategies for environmental efficiency and economic planning possible. However, quantification of CO2 emissions through experimental methods is constrained due to the large time and labour requirements for analysis. Therefore, a modelling approach is needed to achieve this objective. In this paper, the DeNitrification-DeComposition (DNDC) process-based model was modified to predict CO2 emissions for Canada from regional conditions. The modified DNDC model was applied at three experimental sites in the province of Saskatchewan. The results indicate that the simulations of the modified DNDC model are in good agreement with observations. The agricultural management of fertilization and irrigation were evaluated using scenario analysis. The simulated total annual CO2 flux changed on average by ±13% and ±1% following a ±50% variance of the total amount of N applied by fertilising and the total amount of water through irrigation applications, respectively. Therefore, careful management of irrigation and applications of fertiliser can help to reduce CO2 emissions from the agricultural sector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. On meeting capital requirements with a chance-constrained optimization model.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures in order to derive valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
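
    A normal-approximation sketch shows how such a chance constraint reduces to a deterministic counterpart: requiring P(capital / RWA ≥ k) ≥ 95% becomes a condition on a lower quantile of the capital distribution. The function name and the Gaussian assumption are illustrative only; the paper derives its deterministic convex counterpart from a modified CreditMetrics approach, not from this approximation.

```python
from statistics import NormalDist

def meets_capital_requirement(mu, sigma, rwa, k=0.08, conf=0.95):
    """Deterministic counterpart of the chance constraint
    P(capital / rwa >= k) >= conf, assuming future capital is
    normally distributed with mean mu and standard deviation sigma.

    The constraint holds iff the conf-level lower quantile of
    capital is at least k * rwa."""
    z = NormalDist().inv_cdf(1 - conf)   # lower-tail quantile, negative for conf > 0.5
    worst_case_capital = mu + z * sigma  # capital exceeded with probability conf
    return worst_case_capital >= k * rwa
```

    Note how the chance constraint tightens with volatility: at 95% confidence the bank must hold roughly 1.645 standard deviations of buffer above the Basel floor, so a riskier asset portfolio can fail the requirement even at the same expected capital.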

  17. Methanol emissions from maize: Ontogenetic dependence to varying light conditions and guttation as an additional factor constraining the flux

    Science.gov (United States)

    Mozaffar, A.; Schoon, N.; Digrado, A.; Bachy, A.; Delaplace, P.; du Jardin, P.; Fauconnier, M.-L.; Aubinet, M.; Heinesch, B.; Amelynck, C.

    2017-03-01

    Because of its high abundance and long lifetime compared to other volatile organic compounds in the atmosphere, methanol (CH3OH) plays an important role in atmospheric chemistry. Even though agricultural crops are believed to be a large source of methanol, emission inventories from those crop ecosystems are still scarce and little information is available concerning the driving mechanisms for methanol production and emission at different developmental stages of the plants/leaves. This study focuses on methanol emissions from Zea mays L. (maize), which is vastly cultivated throughout the world. Flux measurements have been performed on young plants, almost fully grown leaves and fully grown leaves, enclosed in dynamic flow-through enclosures in a temperature and light-controlled environmental chamber. Strong differences in the response of methanol emissions to variations in PPFD (Photosynthetic Photon Flux Density) were noticed between the young plants, almost fully grown and fully grown leaves. Moreover, young maize plants showed strong emission peaks following light/dark transitions, for which guttation can be put forward as a hypothetical pathway. Young plants' average daily methanol fluxes exceeded by a factor of 17 those of almost fully grown and fully grown leaves when expressed per leaf area. Absolute flux values were found to be smaller than those reported in the literature, but in fair agreement with recent ecosystem scale flux measurements above a maize field of the same variety as used in this study. The flux measurements in the current study were used to evaluate the dynamic biogenic volatile organic compound (BVOC) emission model of Niinemets and Reichstein. The modelled and measured fluxes from almost fully grown leaves were found to agree best when a temperature and light dependent methanol production function was applied. However, this production function turned out not to be suitable for modelling the observed emissions from the young plants

  18. Criticisms and defences of the balance-of-payments constrained growth model: some old, some new

    Directory of Open Access Journals (Sweden)

    John S.L. McCombie

    2011-12-01

    Full Text Available This paper assesses various critiques that have been levelled over the years against Thirlwall’s Law and the balance-of-payments constrained growth model. It starts by assessing the criticisms that the law is largely capturing an identity; that the law of one price renders the model incoherent; and that statistical testing using cross-country data rejects the hypothesis that the actual and the balance-of-payments equilibrium growth rates are the same. It goes on to consider the argument that calculations of the “constant-market-shares” income elasticities of demand for exports demonstrate that the UK (and, by implication, other advanced countries) could not have been balance-of-payments constrained in the early postwar period. Next, Krugman’s interpretation of the law (or what he terms the “45-degree rule”), which is at variance with the usual demand-oriented explanation, is examined. The paper then assesses attempts to reconcile the demand and supply sides of the model and examines whether or not the balance-of-payments constrained growth model is subject to the fallacy of composition. It concludes that none of these criticisms invalidate the model, which remains a powerful explanation of why growth rates differ.

  19. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  20. Genetic Algorithm Based Microscale Vehicle Emissions Modelling

    Directory of Open Access Journals (Sweden)

    Sicong Zhu

    2015-01-01

    Full Text Available There is a need to match the accuracy of emission estimates to the outputs of transport models. The overall error rate in long-term traffic forecasts produced by strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may inherit similar errors if they use the outputs of strategic models to obtain traffic demand predictions. This paper discusses the limitations of existing microlevel emission estimation approaches. Emission models for predicting pollutants other than CO2 are proposed. A genetic algorithm approach, capable of solving combinatorial optimization problems, is adopted to select the predictor variables for the black box model. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in both accuracy and robustness.
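A genetic algorithm for predictor selection of the kind the abstract describes can be sketched as follows; the synthetic data, fitness penalty, and GA settings are all hypothetical stand-ins for the paper's traffic variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for microscale traffic data: three informative
# predictors and one pure-noise column; "emissions" depend on the first three.
X = rng.uniform(0, 1, size=(200, 4))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 3.0 * X[:, 2] + rng.normal(0, 0.01, 200)

def fitness(mask):
    """Negative residual error of a least-squares fit on the selected columns,
    with a small penalty per variable to favour parsimonious models."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return -(np.mean(resid**2) + 0.001 * mask.sum())

def ga_select(n_vars=4, pop_size=20, generations=40, p_mut=0.1):
    pop = rng.integers(0, 2, size=(pop_size, n_vars)).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_vars)              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_vars) < p_mut          # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]

best = ga_select()
print(best)  # expected to keep the three informative predictors
```

The bitmask encoding and penalized fitness make the combinatorial subset-selection problem tractable for a GA, which is the role the abstract assigns to it.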

  1. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    Science.gov (United States)

    SUN, D.; TONG, L.

    2002-05-01

    A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfect bonding region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonding area, it is assumed that no peel or shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviors of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is pronounced in the regions near the ends of the ACLD patch, especially for the higher-order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently results in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results reveal that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.

  2. Carbon dioxide observations at Cape Rama, India for the period 1993–2002: implications for constraining Indian emissions

    Digital Repository Service at National Institute of Oceanography (India)

    Tiwari, Y.K.; Patra, P.K.; Chevallier, F.; Francey, R.J.; Krummel, P.B.; Allison, C.E.; Revadekar, J.V.; Chakraborty, S.; Langenfelds, R.L.; Bhattacharya, S.K.; Borole, D.V.; RaviKumar, K.; Steele


  3. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    International Nuclear Information System (INIS)

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5–4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model (namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction) were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.

  4. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.

  5. HEAVY-DUTY GREENHOUSE GAS EMISSIONS MODEL ...

    Science.gov (United States)

    Class 2b-8 vocational truck manufacturers and Class 7/8 tractor manufacturers would be subject to vehicle-based fuel economy and emission standards that would use a truck simulation model to evaluate the impact of the truck tires and/or tractor cab design on vehicle compliance with any new standards. The EPA has created a model called the “GHG Emissions Model (GEM)”, which is specifically tailored to predict truck GHG emissions. As the model is designed for the express purpose of vehicle compliance demonstration, it is less configurable than similar commercial products and its only outputs are GHG emissions and fuel consumption. This approach gives a simple and compact tool for vehicle compliance without the overhead and costs of a more sophisticated model. GEM thus enables evaluation of both fuel consumption and CO2 emissions from heavy-duty highway vehicles through whole-vehicle operation simulation.

  6. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular for rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining the model for this ungauged catchment.
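The constraint-based rejection of parameter sets described above can be illustrated with a GLUE-style Monte Carlo sketch; the toy runoff model, signatures, and acceptance bounds below are invented for illustration and are not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(params, rain):
    """Hypothetical two-parameter runoff model: a fraction c of rainfall is
    quick flow; the rest enters a store drained at rate k (baseflow), with a
    fixed 5% deep-percolation loss leaving the catchment."""
    c, k = params
    store, flows = 0.0, []
    for r in rain:
        store += (1.0 - c) * r
        base = k * store                 # baseflow to the stream
        store -= base + 0.05 * store     # deep loss leaves the catchment
        flows.append(c * r + base)
    return np.array(flows)

rain = rng.gamma(2.0, 5.0, size=365)     # one year of daily rainfall

# Monte Carlo sampling of parameter sets
samples = np.column_stack([rng.uniform(0.0, 1.0, 5000),     # c
                           rng.uniform(0.01, 0.5, 5000)])   # k

def signatures(flow):
    runoff_ratio = flow.sum() / rain.sum()
    low_flow = np.percentile(flow, 10) / flow.mean()  # Q10 / mean flow
    return runoff_ratio, low_flow

# Acceptance bounds stand in for climate-based signature constraints.
accepted = []
for p in samples:
    rr, lf = signatures(toy_model(p, rain))
    if 0.2 <= rr <= 0.8 and 0.05 <= lf <= 0.95:
        accepted.append(p)
accepted = np.array(accepted)
print(len(accepted), "of", len(samples), "parameter sets retained")
```

Parameter sets whose simulated signatures fall outside the acceptable ranges are discarded, which is the mechanism by which additional information narrows predictive uncertainty.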

  7. A 3D model of polarized dust emission in the Milky Way

    Science.gov (United States)

    Martínez-Solaeche, Ginés; Karakci, Ata; Delabrouille, Jacques

    2018-05-01

    We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match observed dust emission on large scales and, on smaller scales, to match extrapolations of observed intensity and polarization power spectra. This model can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.

  8. Modeling greenhouse gas emissions from dairy farms.

    Science.gov (United States)

    Rotz, C Alan

    2017-11-15

    Dairy farms have been identified as an important source of greenhouse gas emissions. Within the farm, important emissions include enteric CH4 from the animals, CH4 and N2O from manure in housing facilities, during long-term storage and during field application, and N2O from nitrification and denitrification processes in the soil used to produce feed crops and pasture. Models using a wide range in level of detail have been developed to represent or predict these emissions. They include constant emission factors, variable process-related emission factors, empirical or statistical models, mechanistic process simulations, and life cycle assessment. To fully represent farm emissions, models representing the various emission sources must be integrated to capture the combined effects and interactions of all important components. Farm models have been developed using relationships across the full scale of detail, from constant emission factors to detailed mechanistic simulations. Simpler models, based upon emission factors and empirical relationships, tend to provide better tools for decision support, whereas more complex farm simulations provide better tools for research and education. To look beyond the farm boundaries, life cycle assessment provides an environmental accounting tool for quantifying and evaluating emissions over the full cycle, from producing the resources used on the farm through processing, distribution, consumption, and waste handling of the milk and dairy products produced. Models are useful for improving our understanding of farm processes and their interacting effects on greenhouse gas emissions. Through better understanding, they assist in the development and evaluation of mitigation strategies for reducing emissions and improving overall sustainability of dairy farms.
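At the simplest end of the modeling spectrum the abstract describes, a constant-emission-factor account multiplies per-head factors by herd size and converts to CO2-equivalents. The factor values below are illustrative placeholders, not authoritative defaults:

```python
# Illustrative constant-emission-factor accounting for a dairy herd.
# Per-head factors are placeholders for the sort of defaults such
# models use; only the 100-yr GWP values follow IPCC AR4.
GWP = {"CH4": 25.0, "N2O": 298.0}

def farm_co2e(n_cows, ch4_enteric_kg_head=128.0, ch4_manure_kg_head=20.0,
              n2o_soil_kg_head=1.5):
    """Annual farm emissions in tonnes CO2-equivalent from per-head factors."""
    ch4 = n_cows * (ch4_enteric_kg_head + ch4_manure_kg_head)   # kg CH4/yr
    n2o = n_cows * n2o_soil_kg_head                             # kg N2O/yr
    return (ch4 * GWP["CH4"] + n2o * GWP["N2O"]) / 1000.0

print(farm_co2e(100))  # tonnes CO2e per year for a 100-cow herd
```

Process-based simulations replace these constants with state-dependent rates, but the aggregation into a whole-farm CO2e total has the same shape.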

  9. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    International Nuclear Information System (INIS)

    Neuhoff, Karsten; Barquin, Julian; Vazquez, Miguel; Boots, Maroeska; Rijkers, Fieke A.M.; Ehrenmann, Andreas; Hobbs, Benjamin F.

    2005-01-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  10. Network-constrained Cournot models of liberalized electricity markets. The devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Department of Applied Economics, Sidgwick Ave., University of Cambridge, CB3 9DE (United Kingdom); Barquin, Julian; Vazquez, Miguel [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, c/Santa Cruz de Marcenado 26-28015 Madrid (Spain); Boots, Maroeska G. [Energy Research Centre of the Netherlands ECN, Badhuisweg 3, 1031 CM Amsterdam (Netherlands); Ehrenmann, Andreas [Judge Institute of Management, University of Cambridge, Trumpington Street, CB2 1AG (United Kingdom); Hobbs, Benjamin F. [Department of Geography and Environmental Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States); Rijkers, Fieke A.M. [Contributed while at ECN, now at Nederlandse Mededingingsautoriteit (NMa), Dte, Postbus 16326, 2500 BH Den Haag (Netherlands)

    2005-05-15

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model.

  11. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Cambridge Univ., Dept. of Applied Economics, Cambridge (United Kingdom); Barquin, Julian; Vazquez, Miguel [Universidad Pontificia Comillas, Inst. de Investigacion Tecnologica, Madrid (Spain); Boots, Maroeska; Rijkers, Fieke A.M. [Energy Research Centre of the Netherlands ECN, Amsterdam (Netherlands); Ehrenmann, Andreas [Cambridge Univ., Judge Inst. of Management, Cambridge (United Kingdom); Hobbs, Benjamin F. [Johns Hopkins Univ., Dept. of Geography and Environmental Engineering, Baltimore, MD (United States)

    2005-05-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  12. 3Es System Optimization under Uncertainty Using Hybrid Intelligent Algorithm: A Fuzzy Chance-Constrained Programming Model

    Directory of Open Access Journals (Sweden)

    Jiekun Song

    2016-01-01

    Full Text Available Harmonious development of the 3Es (economy-energy-environment) system is the key to realizing regional sustainable development. The structure and components of the 3Es system are analyzed. Based on the analysis of a causality diagram, GDP and industrial structure are selected as the target parameters of the economy subsystem, energy consumption intensity is selected as the target parameter of the energy subsystem, and the emissions of COD, ammonia nitrogen, SO2, and NOx, together with CO2 emission intensity, are selected as the target parameters of the environment subsystem. Fixed asset investment in the three industries, total energy consumption, and investment in environmental pollution control are selected as the decision variables. By regarding the parameters of 3Es system optimization as fuzzy numbers, a fuzzy chance-constrained goal programming (FCCGP) model is constructed, and a hybrid intelligent algorithm combining fuzzy simulation and a genetic algorithm is proposed for solving it. The results of an empirical analysis of Shandong province, China, show that the FCCGP model can reflect the inherent relationships and evolution law of the 3Es system and provide effective decision-making support for 3Es system optimization.

  13. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    Science.gov (United States)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the locations of moving objects on a network were collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are now widely used to collect location information: they are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time at which it passes a node of the network, but not its exact movement along an edge, which leads to uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied in four steps: spatial filter, spatial refinement, temporal filter and probability calculation. Finally, experiments are performed on simulated data to study the performance of the index. The precision and recall of the result set are defined, and the effect of the query arguments on precision and recall is discussed.
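The probability-calculation step of the time-slice query can be sketched for a single edge. The bounded-speed assumption and uniform position distribution below are one plausible reading of such a model, not the paper's exact formulation:

```python
# Hypothetical sketch: an object is read at an edge's start node at t1 and at
# its end node at t2. Between readings its position is uncertain; assuming its
# speed stays within [v_min, v_max], the feasible positions at time t form an
# interval, over which a uniform distribution is assumed.

def position_interval(t, t1, t2, edge_len, v_min, v_max):
    """Feasible interval of distance travelled along the edge at time t."""
    lo = max(v_min * (t - t1), edge_len - v_max * (t2 - t))
    hi = min(v_max * (t - t1), edge_len - v_min * (t2 - t))
    return max(lo, 0.0), min(hi, edge_len)

def in_range_probability(t, t1, t2, edge_len, a, b, v_min=0.0, v_max=30.0):
    """Probability that the object lies within [a, b] along the edge at time t
    (the 'probability calculation' step of an imprecise time-slice query)."""
    lo, hi = position_interval(t, t1, t2, edge_len, v_min, v_max)
    if hi <= lo:
        return 0.0
    overlap = max(0.0, min(b, hi) - max(a, lo))
    return overlap / (hi - lo)
```

The spatial and temporal filter steps would first discard edges and reading pairs that cannot intersect the query window, so this calculation runs only on surviving candidates.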

  14. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  15. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun; Belabbes, Abderrezak; Manchon, Aurelien

    2017-01-01

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first principles methods. The first one uses the constrained moment method to build a spin spiral in real space, while the second method uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI as a function of the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, that determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is more adapted to model DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  16. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun

    2017-10-29

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first principles methods. The first one uses the constrained moment method to build a spin spiral in real space, while the second method uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI as a function of the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, that determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is more adapted to model DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  17. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature over the entire working range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes, such as those in a continuous stirred tank reactor (CSTR), is proposed. A predictive control algorithm based on the Kalman filter is designed for constrained Hammerstein-Wiener systems. An output feedback control law for the linear subsystem is derived by state observation. The magnitude of the reaction heat produced and its influence on the output are estimated by the Kalman filter. The observation and estimation results are propagated by a multistep predictive approach. Actual control moves are computed subject to the constraints of the optimal control problem over a finite receding horizon. The simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.
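The state estimation underlying such a scheme is an ordinary linear Kalman filter; a generic predict/update step (not the paper's specific CSTR model) looks like:

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of a linear Kalman filter for
    x_k = A x_{k-1} + w,  z_k = C x_k + v,  w ~ N(0, Q), v ~ N(0, R)."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update with measurement z
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Iterating `kalman_step` with a measured output drives the state estimate toward the true value, which is how an unmeasured, slowly varying quantity such as reaction heat can be tracked and fed into the multistep prediction.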

  18. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J

    2018-01-01

    PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES: … RESULTS: When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels …

  19. Kovacs effect and fluctuation-dissipation relations in 1D kinetically constrained models

    International Nuclear Information System (INIS)

    Buhot, Arnaud

    2003-01-01

    Strong and fragile glass relaxation behaviours are obtained simply by changing the constraints of the kinetically constrained Ising chain from symmetric to purely asymmetric. We study the out-of-equilibrium dynamics of these two models, focusing on the Kovacs effect and the fluctuation-dissipation (FD) relations. The Kovacs or memory effect, commonly observed in structural glasses, is present for both constraint types but is enhanced with the asymmetric ones. Most surprisingly, the related FD relations satisfy the FD theorem in both cases. This result differs strongly from the simple quenching procedure, where the asymmetric model presents strong deviations from the FD theorem.

  20. A cost-constrained model of strategic service quality emphasis in nursing homes.

    Science.gov (United States)

    Davis, M A; Provan, K G

    1996-02-01

    This study employed structural equation modeling to test the relationships between three aspects of the environmental context of nursing homes: Medicaid dependence, ownership status, and market demand, and two basic strategic orientations: low cost and differentiation based on service quality emphasis. Hypotheses were proposed and tested against data collected from a sample of nursing homes operating in a single state. Because of the overwhelming importance of cost control in the nursing home industry, a cost-constrained strategy perspective was supported. Specifically, while the three contextual variables had no direct effect on service quality emphasis, the entire model was supported when cost control orientation was introduced as a mediating variable.

  1. A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System

    Directory of Open Access Journals (Sweden)

    Yanzhe Hu

    2017-03-01

    Full Text Available As a type of renewable energy, wind energy is being integrated into the power system at ever higher penetration levels. It is challenging for power system operators (PSOs) to cope with the uncertainty and variation of the wind power and its forecasts. A chance-constrained economic dispatch (ED) model for the wind-thermal-energy storage system (WTESS) is developed in this paper. An optimization model incorporating the wind power and the energy storage system (ESS) is first established, considering both the economic benefits of the system and the reduction of wind curtailment. The original wind power generation is processed by the ESS to obtain the final wind power output generation (FWPG). A Gaussian mixture model (GMM) is adopted to characterize the probability and cumulative distribution functions with an analytical expression. Then, a chance-constrained ED model integrating the wind-energy storage system (W-ESS) is developed, considering both the overestimation and underestimation costs of the system, and is solved by the sequential linear programming method. Numerical simulations using wind power data from four wind farms are performed with the developed ED model on the IEEE 30-bus system. It is verified that the developed ED model effectively integrates uncertain and variable wind power. The GMM distribution accurately fits the actual distribution of the final wind power output, and the ESS helps effectively decrease the operation costs.
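Because the GMM gives the wind power distribution in closed form, a chance constraint on wind output reduces to a quantile of the mixture CDF. A sketch with hypothetical mixture parameters:

```python
import math

def gmm_cdf(x, comps):
    """CDF of a Gaussian mixture; comps = [(weight, mean, std), ...]."""
    return sum(w * 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))
               for w, mu, sd in comps)

def gmm_quantile(p, comps, lo=-1e4, hi=1e4, tol=1e-8):
    """Invert the mixture CDF by bisection (the CDF is monotone)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gmm_cdf(mid, comps) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical fitted mixture for final wind power output, in MW
comps = [(0.6, 30.0, 8.0), (0.4, 70.0, 15.0)]

# Dispatch wind at a level that is exceeded with probability 0.95,
# i.e. the 5% quantile of the mixture
w_dispatch = gmm_quantile(0.05, comps)
print(round(w_dispatch, 2))
```

In the full ED problem this quantile enters the dispatch constraints, trading expected over- and underestimation costs against the chosen confidence level.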

  2. Modeling Greenhouse Gas Emissions from Enteric Fermentation

    NARCIS (Netherlands)

    Kebreab, E.; Tedeschi, L.; Dijkstra, J.; Ellis, J.L.; Bannink, A.; France, J.

    2016-01-01

    Livestock directly contribute to greenhouse gas (GHG) emissions, mainly through methane (CH4) and nitrous oxide (N2O) emissions. For reasons of cost and practicality, quantification of GHG has relied on the development of various types of mathematical models. This chapter addresses the utility and limitations of these modelling approaches.

  3. Simplifying global biogeochemistry models to evaluate methane emissions

    Science.gov (United States)

    Gerber, S.; Alonso-Contes, C.

    2017-12-01

    Process-based models are important tools for quantifying wetland methane emissions, particularly under climate change scenarios, but evaluating these models is often cumbersome because they are embedded in larger land-surface models in which the fluctuating water table and the carbon cycle (including fresh, readily decomposable plant material) are predicted variables. Here, we build on these large-scale models, but instead of modeling the water table and plant productivity we prescribe them as boundary conditions. Aerobic and anaerobic decomposition, as well as soil-column transport of oxygen and methane, are predicted by the model. Because of these simplifications, the model is more readily adaptable to the analysis of field-scale data. We determine the sensitivity of the model to specific set-ups, parameter choices, and boundary conditions in order to establish set-up needs and identify which critical auxiliary variables must be measured to better predict field-scale methane emissions from wetland soils. To that end we performed a global sensitivity analysis that also considers non-linear interactions between processes. The analysis revealed, not surprisingly, that water table dynamics (both mean level and amplitude of fluctuations) and the rate of the carbon cycle (i.e., net primary productivity) are critical determinants of methane emissions. The depth at which most of the potential decomposition occurs also affects methane emissions. Different transport mechanisms compensate for each other to some degree: if plant conduits are constrained, emissions by diffusive flux and ebullition partly compensate; however, annual emissions are higher when plants help methane bypass methanotrophs in temporarily unsaturated upper layers. Finally, while oxygen consumption by plant roots helps create anoxic conditions, it has little effect on overall methane emission. This initial sensitivity analysis helps guide model set-up and the design of accompanying field measurements.
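A much simpler one-at-a-time screening conveys the flavor of such a sensitivity analysis; the toy flux model and parameter ranges below are invented for illustration and omit the interaction effects a global analysis captures:

```python
import numpy as np

def methane_flux(water_table, npp, root_depth):
    """Toy stand-in for a wetland CH4 model: production scales with NPP and
    with the fraction of the root zone below the water table. Purely
    illustrative; not the authors' model."""
    anoxic_frac = np.clip(water_table / root_depth, 0.0, 1.0)
    return npp * anoxic_frac * 0.03   # assume 3% of anoxic C leaves as CH4

nominal = dict(water_table=0.2, npp=600.0, root_depth=0.5)
ranges = dict(water_table=(0.0, 0.5), npp=(200.0, 1000.0), root_depth=(0.2, 1.0))

# One-at-a-time screening: swing each driver across its range, others at nominal.
for name, (lo, hi) in ranges.items():
    outs = []
    for v in np.linspace(lo, hi, 11):
        args = dict(nominal)
        args[name] = v
        outs.append(methane_flux(**args))
    print(f"{name}: output swing {max(outs) - min(outs):.2f}")
```

A global (e.g. variance-based) analysis additionally varies all drivers jointly, which is what allows the non-linear interactions mentioned in the abstract to be quantified.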

  4. Constraining Parameters in Pulsar Models of Repeating FRB 121102 with High-energy Follow-up Observations

    International Nuclear Information System (INIS)

    Xiao, Di; Dai, Zi-Gao

    2017-01-01

    Recently, a precise (sub-arcsecond) localization of the repeating fast radio burst (FRB) 121102 led to the discovery of persistent radio and optical counterparts, the identification of a host dwarf galaxy at a redshift of z = 0.193, and several campaigns of searches for higher-frequency counterparts, which gave only upper limits on the emission flux. Although the origin of FRBs remains unknown, most of the existing theoretical models are associated with pulsars, or more specifically, magnetars. In this paper, we explore persistent high-energy emission from a rapidly rotating highly magnetized pulsar associated with FRB 121102 if internal gradual magnetic dissipation occurs in the pulsar wind. We find that the efficiency of converting the spin-down luminosity to the high-energy (e.g., X-ray) luminosity is generally much smaller than unity, even for a millisecond magnetar. This provides an explanation for the non-detection of high-energy counterparts to FRB 121102. We further constrain the spin period and surface magnetic field strength of the pulsar with the current high-energy observations. In addition, we compare our results with the constraints given by the other methods in previous works and expect to apply our new method to some other open issues in the future.
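The spin-down luminosity that anchors these constraints follows from the standard magnetic-dipole formula; a quick evaluation for an assumed millisecond magnetar (the field strength and period below are illustrative values, not the paper's fitted parameters):

```python
import math

def spindown_luminosity(B_surface, period, radius=1e6):
    """Magnetic-dipole spin-down luminosity L = B^2 R^6 Omega^4 / (6 c^3),
    in erg/s (cgs units: B in gauss, period in s, radius in cm)."""
    c = 3e10                          # speed of light, cm/s
    omega = 2.0 * math.pi / period    # angular spin frequency, rad/s
    return B_surface**2 * radius**6 * omega**4 / (6.0 * c**3)

# Illustrative millisecond magnetar: B = 1e14 G, P = 2 ms
L = spindown_luminosity(B_surface=1e14, period=2e-3)
print(f"{L:.2e} erg/s")
```

Multiplying this by a conversion efficiency much smaller than unity gives the persistent high-energy luminosity to compare against the observational upper limits, which is how the period and field strength are constrained.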

  5. Constraining Parameters in Pulsar Models of Repeating FRB 121102 with High-energy Follow-up Observations

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Di; Dai, Zi-Gao, E-mail: dzg@nju.edu.cn [School of Astronomy and Space Science, Nanjing University, Nanjing 210093 (China)

    2017-09-10

    Recently, a precise (sub-arcsecond) localization of the repeating fast radio burst (FRB) 121102 led to the discovery of persistent radio and optical counterparts, the identification of a host dwarf galaxy at a redshift of z = 0.193, and several campaigns of searches for higher-frequency counterparts, which gave only upper limits on the emission flux. Although the origin of FRBs remains unknown, most of the existing theoretical models are associated with pulsars, or more specifically, magnetars. In this paper, we explore persistent high-energy emission from a rapidly rotating highly magnetized pulsar associated with FRB 121102 if internal gradual magnetic dissipation occurs in the pulsar wind. We find that the efficiency of converting the spin-down luminosity to the high-energy (e.g., X-ray) luminosity is generally much smaller than unity, even for a millisecond magnetar. This provides an explanation for the non-detection of high-energy counterparts to FRB 121102. We further constrain the spin period and surface magnetic field strength of the pulsar with the current high-energy observations. In addition, we compare our results with the constraints given by the other methods in previous works and expect to apply our new method to some other open issues in the future.

  6. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    Science.gov (United States)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including a process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.

  7. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    Science.gov (United States)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still identified as the largest source of uncertainty in the latest generation of climate models. To date there is no consensus among the models on whether tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models predict either deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SST. We use a simulator approach to compare observations and simulations and focus on the low-layered clouds and their cloud fraction. Vertically, the clouds deepen, namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary layer. This feature coincides with an increase of the high-level cloud fraction (z > 6.5 km). Although the models' spread is large, the multi-model mean captures the observed variations but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand, and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other. Finally, we use CALIPSO observations to constrain the model by determining (i) which set of parameters allows reproducing the observed relationships and (ii) what the consequences are for the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental

  8. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised for the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs and the observed number of driver neurons. The distance-constrained model suggests optimum long-distance synaptic connections as a key feature specifying control of the network.

  9. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10⁶ m³/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  10. Modelling carbon emissions in electric systems

    International Nuclear Information System (INIS)

    Lau, E.T.; Yang, Q.; Forbes, A.B.; Wright, P.; Livina, V.N.

    2014-01-01

    Highlights: • We model carbon emissions in electric systems. • We estimate emissions in generated and consumed energy with UK carbon factors. • We model demand profiles with a novel function based on hyperbolic tangents. • We study datasets of the UK Elexon database, the Brunel PV system and the Irish SmartGrid. • We apply the Ensemble Kalman Filter to forecast energy data in these case studies. - Abstract: We model energy consumption of network electricity and compute carbon emissions (CE) based on the obtained energy data. We review various models of electricity consumption and propose an adaptive seasonal model based on the hyperbolic tangent function (HTF). We incorporate the HTF to describe seasonal and daily trends of electricity demand. We then build a stochastic model that combines the trends with a white-noise component; the resulting simulations are estimated using the Ensemble Kalman Filter (EnKF), which provides ensemble simulations of groups of electricity consumers; similarly, we estimate carbon emissions from electricity generators. Three case studies of electricity generation and consumption are modelled: Brunel University photovoltaic generation data, Elexon national electricity generation data (various fuel types) and Irish smart grid data, with ensemble estimations by the EnKF and computation of carbon emissions. We show the flexibility of HTF-based functions for modelling realistic cycles of energy consumption and the efficiency of the EnKF in ensemble estimation of energy consumption and generation, and report the obtained estimates of the carbon emissions in the considered case studies.
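
    A tanh-based demand trend of the kind described can be sketched as follows. The exact HTF parameterisation of the paper is not reproduced; the shape parameters below are illustrative:

```python
import numpy as np

def tanh_pulse(t, t_on, t_off, width):
    """Smooth 'on' window built from two hyperbolic-tangent ramps."""
    return 0.5 * (np.tanh((t - t_on) / width) - np.tanh((t - t_off) / width))

def demand(hour, base=0.4, day_amp=0.5, evening_amp=0.3):
    """Illustrative daily electricity-demand trend (arbitrary units):
    a daytime plateau plus an evening peak, each a tanh pulse."""
    return (base
            + day_amp * tanh_pulse(hour, 7.0, 22.0, 1.5)
            + evening_amp * tanh_pulse(hour, 17.0, 21.0, 1.0))

hours = np.linspace(0.0, 24.0, 97)
profile = demand(hours)
# A stochastic model would add a white-noise component on top of this
# trend before ensemble estimation with a Kalman-type filter.
print(f"min {profile.min():.2f}, max {profile.max():.2f}")
```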

  11. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    Science.gov (United States)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  12. Modeling of greenhouse gas emission from livestock

    Directory of Open Access Journals (Sweden)

    Sanjo Jose

    2016-04-01

    Full Text Available The effect of climate change on humans and other living ecosystems is an area of on-going research. The ruminant livestock sector is considered to be one of the most significant contributors to the existing greenhouse gas (GHG) pool. However, there are opportunities to combat climate change by reducing the emission of GHGs from ruminants. Methane (CH4) and nitrous oxide (N2O) are emitted by ruminants via anaerobic digestion of organic matter in the rumen and manure, and by denitrification and nitrification processes which occur in manure. The quantification of these emissions by experimental methods is difficult and takes considerable time, both for analysis of the implications of the outputs from empirical studies and for adaptation and mitigation strategies to be developed. To overcome these problems, computer simulation models offer substantial scope for predicting GHG emissions. These models often include all farm activities while accurately predicting GHG emissions from both direct and indirect sources. The models are fast and efficient in predicting emissions and provide valuable information on implementing appropriate GHG mitigation strategies on farms. Further, these models help in testing the efficacy of various mitigation strategies that are employed to reduce GHG emissions. These models can be used to determine future adaptation and mitigation strategies to reduce GHG emissions, thereby combating livestock-induced climate change.

  13. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.

  14. Inexact Multistage Stochastic Chance Constrained Programming Model for Water Resources Management under Uncertainties

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2017-01-01

    Full Text Available In order to formulate water allocation schemes under uncertainties in water resources management systems, an inexact multistage stochastic chance constrained programming (IMSCCP) model is proposed. The model integrates stochastic chance constrained programming, multistage stochastic programming, and inexact stochastic programming within a general optimization framework to handle the uncertainties occurring in both the constraints and the objective. These uncertainties are expressed as probability distributions, intervals with multiply distributed stochastic boundaries, dynamic features of the long-term water allocation plans, and so on. Compared with existing inexact multistage stochastic programming, the IMSCCP can be used to assess more system risks and handle more complicated uncertainties in water resources management systems. The IMSCCP model is applied to a hypothetical case study of water resources management. In order to construct an approximate solution for the model, a hybrid algorithm, which incorporates stochastic simulation, a back-propagation neural network, and a genetic algorithm, is proposed. The results show that the optimal value represents the maximal net system benefit achieved with a given confidence level under chance constraints, and the solutions provide optimal water allocation schemes to multiple users over a multiperiod planning horizon.
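
    The stochastic-simulation step behind a chance constraint can be sketched in isolation. The full IMSCCP model and its hybrid GA/neural-network solver are not reproduced; the availability distribution and confidence level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Uncertain seasonal water availability (illustrative distribution)
availability = rng.normal(loc=100.0, scale=10.0, size=200_000)

def chance_satisfied(allocation, samples, alpha=0.9):
    """Monte Carlo check of the chance constraint
    P(allocation <= availability) >= alpha."""
    return np.mean(samples >= allocation) >= alpha

# The largest allocation meeting the 90% chance constraint is
# (approximately) the 10th percentile of the availability distribution.
max_alloc = np.quantile(availability, 0.1)
print(f"max allocation under 90% chance constraint: {max_alloc:.1f}")
```

    An optimizer (such as the genetic algorithm in the abstract) would call a check like `chance_satisfied` on each candidate allocation scheme to screen out infeasible ones.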

  15. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be de ned with both. Here we employ the following versions of local consequence: De nition 3.4. Given a model M = hW;R;Vi and formulas...

  16. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    Full Text Available This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a relevant triggering rule, which allows a considerable reduction of the computational load. Then, robust feasibility and closed-loop stability are proved, and it is shown that every subsystem state will be driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.

  17. Modeling Oil Exploration and Production: Resource-Constrained and Agent-Based Approaches

    International Nuclear Information System (INIS)

    Jakobsson, Kristofer

    2010-05-01

    Energy is essential to the functioning of society, and oil is the single largest commercial energy source. Some analysts have concluded that the peak in oil production will soon occur on the global scale, while others disagree. Such incompatible views can persist because the issue of 'peak oil' cuts across the established scientific disciplines. The question is: what characterizes the modeling approaches that are available today, and how can they be further developed to improve a trans-disciplinary understanding of oil depletion? The objective of this thesis is to present long-term scenarios of oil production (Paper I) using a resource-constrained model, and an agent-based model of the oil exploration process (Paper II). It is also an objective to assess the strengths, limitations, and future development potentials of resource-constrained modeling, analytical economic modeling, and agent-based modeling. Resource-constrained models are only suitable when the time frame is measured in decades, but they can give a rough indication of which production scenarios are reasonable given the size of the resource. However, the models are comprehensible, transparent and the only feasible long-term forecasting tools at present. It is certainly possible to distinguish between reasonable scenarios, based on historically observed parameter values, and unreasonable scenarios with parameter values obtained through flawed analogy. The economic subfield of optimal depletion theory is founded on the notion of rational economic agents, and there is a causal relation between decisions made at the micro-level and the macro-result. In terms of future improvements, however, the analytical form considerably restricts the versatility of the approach. Agent-based modeling makes it feasible to combine economically motivated agents with a physical environment. An example relating to oil exploration is given in Paper II, where it is shown that the exploratory activities of individual
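
    A minimal resource-constrained production model of the kind discussed is the logistic (Hubbert-style) curve, in which cumulative production is capped by the ultimately recoverable resource (URR). The parameter values below are purely illustrative, not a forecast:

```python
import numpy as np

URR = 2000.0    # ultimately recoverable resource, Gb (illustrative)
k = 0.06        # growth-rate parameter, 1/yr (illustrative)
t_peak = 2010   # peak year (illustrative)

years = np.arange(1900, 2200)
# Cumulative production follows a logistic curve bounded by URR ...
Q = URR / (1.0 + np.exp(-k * (years - t_peak)))
# ... so annual production is its derivative, which peaks when half
# of the resource has been produced.
production = k * Q * (1.0 - Q / URR)

print(f"peak year: {years[np.argmax(production)]}, "
      f"peak rate: {production.max():.1f} Gb/yr")
```

    Varying URR and k over historically observed ranges is exactly the kind of scenario screening such models support: unreasonable parameter combinations produce production paths that are visibly inconsistent with the resource base.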

  18. Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.

    Science.gov (United States)

    Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro

    2018-02-13

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general, and access control models in particular, are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, supported by an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.

  19. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    Full Text Available We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit to the strength of the symmetron coupling to gravity (log β² < −0.9) when this is the only free parameter, while the model cannot be constrained when the symmetry-breaking scale factor a_SSB is also free to vary.

  20. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    Science.gov (United States)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

    Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, and its representation in global-scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types (PFTs) in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH and modelled gross primary production (GPP) is compared to a compilation of independent observations on stand scale. Compared to the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. Photosynthetic capacity of tropical trees derived by this study is substantially lower than standard estimates currently used in terrestrial biosphere models. This causes a decrease of modelled GPP while it significantly increases modelled tropical vegetation surface temperatures, by up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.
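
    The sensitivity to photosynthetic capacity can be seen directly in the Rubisco-limited rate of the Farquhar et al. (1980) model, A_c = V_cmax (C_i − Γ*)/(C_i + K_c(1 + O/K_o)) − R_d. The kinetic constants below are common textbook values around 25 °C, not the parameterisation derived in this study:

```python
def rubisco_limited_rate(vcmax, ci=230.0, gamma_star=42.75,
                         kc=404.9, ko=278.4, o=210.0, rd=1.0):
    """Farquhar Rubisco-limited assimilation rate (umol m-2 s-1).
    ci, gamma_star, kc in umol/mol; ko, o in mmol/mol.
    Constants are illustrative ~25 C values, not this study's fit."""
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko)) - rd

# Lowering Vcmax (as found here for tropical trees on phosphorus-poor
# soils) directly lowers assimilation, hence GPP, and via stomatal
# coupling reduces transpirational cooling of the canopy.
for vcmax in (60.0, 40.0):
    print(f"Vcmax = {vcmax:5.1f} -> Ac = {rubisco_limited_rate(vcmax):.2f}")
```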

  1. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    Full Text Available An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  2. Modeling biomass burning over the South, South East and East Asian Monsoon regions using a new, satellite constrained approach

    Science.gov (United States)

    Lan, R.; Cohen, J. B.

    2017-12-01

    Biomass burning over the South, South East and East Asian Monsoon regions is a crucial contributor to the total local aerosol loading. Furthermore, the ITCZ and Monsoonal circulation patterns, coupled with complex topography, have a prominent impact on the aerosol loading throughout much of the Northern Hemisphere. However, at the present time, biomass burning emissions are highly underestimated over this region, in part due to under-reported emissions in space and time, and in part due to an incomplete understanding of the physics and chemistry of the aerosols emitted in fires and formed downwind of them. Hence, a better understanding of the four-dimensional source distribution, plume rise, and in-situ processing, in particular in regions with significant quantities of urban air pollutants, is essential to advance our knowledge of this problem. This work uses a new modeling methodology based on the simultaneous constraints of measured AOD and some trace gases over the region. The results of the 4-D constrained emissions are further expanded upon using different fire plume height rise and in-situ processing assumptions. Comparisons between the results and additional ground-based and remotely sensed measurements, including AERONET, CALIOP, and NOAA and other ground networks, are included. The end results reveal a trio of insights into the nonlinear processes most important to understanding the impacts of biomass burning in this part of the world. Model-measurement comparisons are found to be consistent during 2016, a typical burning year. First, the model performs better under the new emissions representations than it does using any of the standard hotspot-based approaches currently employed by the community. Second, long range transport and mixing between the boundary layer and free troposphere contribute to the spatial-temporal variations. Third, we indicate some source regions that are new, either because of increased urbanization, or of

  3. A supply function model for representing the strategic bidding of the producers in constrained electricity markets

    International Nuclear Information System (INIS)

    Bompard, Ettore; Napoli, Roberto; Lu, Wene; Jiang, Xiuchen

    2010-01-01

    The modeling of the bidding behaviour of the producers is a key point in the modeling and simulation of competitive electricity markets. In this paper, the linear supply function model is applied so as to find the Supply Function Equilibrium (SFE) analytically. We also propose a new and efficient approach to find SFEs for network-constrained electricity markets by finding the best slope of the supply function while adjusting the intercept; the method can be applied to large systems. The proposed approach is applied to the IEEE 118-bus test system, and a comparison between bidding slope and bidding intercept is presented with reference to the test system. (author)
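
    For intuition, the unconstrained linear SFE has a simple fixed-point characterisation: with affine marginal costs MC_i(q) = a_i + b_i q, demand slope γ, and supply functions q_i = β_i (p − a_i), each firm's best-response slope satisfies β_i = (γ + Σ_{j≠i} β_j) / (1 + b_i (γ + Σ_{j≠i} β_j)). This is a textbook sketch (Klemperer-Meyer style), not the paper's network-constrained IEEE-118 algorithm:

```python
def sfe_slopes(b, gamma, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for linear SFE best-response slopes.
    b[i]: slope of firm i's marginal cost; gamma: demand slope."""
    beta = [0.0] * len(b)
    for _ in range(max_iter):
        new = []
        for i, bi in enumerate(b):
            rivals = gamma + sum(beta) - beta[i]   # residual-demand slope
            new.append(rivals / (1.0 + bi * rivals))
        if max(abs(n - o) for n, o in zip(new, beta)) < tol:
            return new
        beta = new
    return beta

# Three producers with illustrative cost slopes, demand slope 0.5
beta = sfe_slopes(b=[1.0, 1.5, 2.0], gamma=0.5)
print([round(x, 4) for x in beta])  # steeper costs -> flatter supply bids
```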

  4. Chance-constrained programming models for capital budgeting with NPV as fuzzy parameters

    Science.gov (United States)

    Huang, Xiaoxia

    2007-01-01

    In an uncertain economic environment, experts' knowledge about outlays and cash inflows of available projects involves vagueness rather than randomness. Investment outlays and annual net cash flows of a project are usually predicted using experts' knowledge. Fuzzy variables can overcome the difficulties in predicting these parameters. In this paper, the capital budgeting problem with fuzzy investment outlays and fuzzy annual net cash flows is studied based on credibility measure. The net present value (NPV) method is employed, and two fuzzy chance-constrained programming models for the capital budgeting problem are provided. A fuzzy simulation-based genetic algorithm is provided for solving the proposed models. Two numerical examples are also presented to illustrate the modelling idea and the effectiveness of the proposed algorithm.

  5. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to exploit the advantages of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors’ original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  6. The Balance-of-Payments-Constrained Growth Model and the Limits to Export-Led Growth

    Directory of Open Access Journals (Sweden)

    Robert A. Blecker

    2000-12-01

    Full Text Available This paper discusses how A. P. Thirlwall's model of balance-of-payments-constrained growth can be adapted to analyze the idea of a "fallacy of composition" in the export-led growth strategy of many developing countries. The Deaton-Muellbauer model of the Almost Ideal Demand System (AIDS) is used to represent the adding-up constraints on individual countries' exports when they are all trying to export competing products to the same foreign markets (i.e. newly industrializing countries exporting similar types of manufactured goods to the OECD countries). The relevance of the model to the recent financial crises in developing countries and policy alternatives for redirecting development strategies are also discussed.
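
    Thirlwall's balance-of-payments-constrained growth rate itself is one line: y_B = ε z / π, with ε the income elasticity of export demand, z world income growth, and π the income elasticity of import demand. A sketch with illustrative numbers, including the fallacy-of-composition effect of a falling effective ε when many exporters crowd the same markets:

```python
def bop_constrained_growth(eps, z, pi):
    """Thirlwall's law: y_B = eps * z / pi (illustrative parameter use)."""
    return eps * z / pi

# Illustrative elasticities for a single exporter...
y = bop_constrained_growth(eps=1.5, z=2.5, pi=1.25)
print(f"BOP-constrained growth: {y:.2f}% per year")

# ...and the fallacy of composition: if many countries push similar
# manufactured exports into the same markets, each country's effective
# eps falls, and the sustainable growth rate falls with it.
y_crowded = bop_constrained_growth(eps=1.0, z=2.5, pi=1.25)
print(f"with crowded export markets: {y_crowded:.2f}% per year")
```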

  7. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues and imperfections of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
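
A minimal sketch of non-negativity in model-based inversion: projected gradient descent on the least-squares objective, clipping negative values after each step. This is a simple stand-in for the constrained conjugate-gradient scheme the paper introduces; the forward matrix and data below are synthetic.

```python
import numpy as np

def nn_projected_gradient(A, b, iters=2000, step=None):
    """Minimize ||Ax - b||^2 subject to x >= 0 by gradient descent with
    projection onto the non-negative orthant."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - step * grad, 0.0)  # projection: clip negatives
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))      # synthetic forward model
x_true = np.array([0.0, 1.0, 0.0, 2.0, 0.5])
b = A @ x_true                        # noiseless synthetic measurements
x = nn_projected_gradient(A, b)
print(np.round(x, 3))  # recovers the non-negative x_true
```

The projection is what encodes the physical prior that absorption cannot be negative; an unconstrained solver applied to noisy data would generally return negative entries.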

  8. Modeling emissions for three-dimensional atmospheric chemistry transport models.

    Science.gov (United States)

    Matthias, Volker; Arndt, Jan A; Aulinger, Armin; Bieser, Johannes; Denier Van Der Gon, Hugo; Kranenburg, Richard; Kuenen, Jeroen; Neumann, Daniel; Pouliot, George; Quante, Markus

    2018-01-24

    Poor air quality is still a threat for human health in many parts of the world. In order to assess measures for emission reductions and improved air quality, three-dimensional atmospheric chemistry transport modeling systems are used in numerous research institutions and public authorities. These models need accurate emission data in appropriate spatial and temporal resolution as input. This paper reviews the most widely used emission inventories on global and regional scale and looks into the methods used to make the inventory data model-ready. Shortcomings of using standard temporal profiles for each emission sector are discussed and new methods to improve the spatio-temporal distribution of the emissions are presented. These methods are often neither top-down nor bottom-up approaches but can be seen as hybrid methods that use detailed information about the emission process to derive spatially varying temporal emission profiles. These profiles are subsequently used to distribute bulk emissions like national totals on appropriate grids. The broad field of natural emissions is also summarized and the calculation methods are described. Almost all types of natural emissions depend on meteorological information, which is why they are highly variable in time and space and frequently calculated within the chemistry transport models themselves. The paper closes with an outlook for new ways to improve model-ready emission data, for example by using external databases about road traffic flow or satellite data to determine actual land use or leaf area. In a world where emission patterns change rapidly, it seems appropriate to use new types of statistical and observational data to create detailed emission data sets and keep emission inventories up-to-date. Emission data is probably the most important input for chemistry transport model (CTM) systems. It needs to be provided in high temporal and spatial resolution and on a grid that is in agreement with the CTM grid.
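
Distributing a bulk emission (e.g. a national annual total) onto model time steps with sector-specific temporal profiles can be sketched as below; the profile factors are placeholders, not values from any real inventory.

```python
def hourly_emission(annual_total, monthly, weekday, diurnal,
                    month, day_of_week, hour):
    """Scale an annual bulk emission to a single hour: uniform hourly rate
    times monthly, day-of-week and diurnal profile factors. Profiles should
    each average to 1 so the annual total is conserved."""
    return (annual_total / (365.0 * 24.0)
            * monthly[month] * weekday[day_of_week] * diurnal[hour])

# Flat profiles reproduce the uniform hourly rate
flat = hourly_emission(8760.0, [1.0] * 12, [1.0] * 7, [1.0] * 24, 0, 0, 0)
print(flat)  # 8760 units/yr -> 1.0 units/h

# A road-traffic-like diurnal cycle (normalized to average 1) scales the
# morning rush hour up and the night down
diurnal = [0.4] * 6 + [1.8] * 2 + [1.2] * 10 + [1.0] * 6
peak = hourly_emission(8760.0, [1.0] * 12, [1.0] * 7, diurnal, 6, 2, 7)
print(peak)  # 1.8 units/h at 07:00
```

The hybrid methods discussed in the review effectively replace such fixed sector profiles with profiles that vary in space, derived from process information such as traffic counts.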

  9. European initiatives for modeling emissions from transport

    DEFF Research Database (Denmark)

    Joumard, Robert; Hickman, A. John; Samaras, Zissis

    1998-01-01

    In Europe there have been many cooperative studies into transport emission inventories since the late 80s. These cover the scope of the CORINAIR program, involving experts from seven European Community laboratories and addressing only road transport emissions at national level. These also include the latest...... covered are the composition of the vehicle fleets, emission factors, driving statistics and the modeling approach. Many of the European initiatives also aim at promoting further cooperation between national laboratories and at defining future research needs. An assessment of these future needs...... is presented from a European point of view....

  10. Structural model of the Northern Latium volcanic area constrained by MT, gravity and aeromagnetic data

    Directory of Open Access Journals (Sweden)

    P. Gasparini

    1997-06-01

    Full Text Available The results of about 120 magnetotelluric soundings carried out in the Vulsini, Vico and Sabatini volcanic areas were modeled along with Bouguer and aeromagnetic anomalies to reconstruct a model of the structure of the shallow (less than 5 km depth) crust. The interpretations were constrained by the information gathered from the deep boreholes drilled for geothermal exploration. MT and aeromagnetic anomalies allow the depth to the top of the sedimentary basement and the thickness of the volcanic layer to be inferred. Gravity anomalies are strongly affected by the variations in morphology of the top of the sedimentary basement, consisting of a Tertiary flysch, and of the interface with the underlying Mesozoic carbonates. Gravity data have also been used to extrapolate the thickness of the Neogene unit indicated by some boreholes. There is no evidence for other important density and susceptibility heterogeneities or deeper sources of magnetic and/or gravity anomalies in the surveyed area.

  11. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    Science.gov (United States)

    Dossett, Jason; Hu, Bin; Parkinson, David

    2014-03-01

    In order to explain cosmic acceleration without invoking "dark" physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy Survey to probe the formation of structure on large scales, which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution for this cross-over scale. We find that the addition of the WiggleZ power spectrum tightens the constraints on B0 by an order of magnitude, yielding the strongest upper bound to date on log10(B0).

  12. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed to study its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
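
The chance-constraint idea can be made concrete with the classical safety-stock formula: for normally distributed demand, bounding the stockout probability translates into an analytic quantile term. This is a textbook sketch under an i.i.d. normal-demand assumption, not the paper's MPC formulation.

```python
from statistics import NormalDist

def order_up_to(mean_demand, sd_demand, lead_time, service_level):
    """Order-up-to level S such that P(demand over the lead time <= S) >=
    service_level, i.e. the chance constraint P(stockout) <= 1 - level.
    Demand is assumed i.i.d. normal per period (illustrative model, not
    the hospital case study itself)."""
    mu = mean_demand * lead_time
    sigma = sd_demand * lead_time ** 0.5
    z = NormalDist().inv_cdf(service_level)  # quantile for the constraint
    return mu + z * sigma                    # mean demand + safety stock

S = order_up_to(mean_demand=40, sd_demand=10, lead_time=4, service_level=0.95)
print(round(S, 1))  # 160 + 1.645 * 20 ≈ 192.9
```

Raising the service level from 0.95 toward 1 inflates the quantile term sharply, which is exactly the conservativeness/efficiency trade-off the chance constraints are meant to manage.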

  13. Adaptively Constrained Stochastic Model Predictive Control for the Optimal Dispatch of Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaogang Guo

    2018-01-01

    Full Text Available In this paper, an adaptively constrained stochastic model predictive control (MPC) is proposed to achieve less-conservative coordination between energy storage units and uncertain renewable energy sources (RESs) in a microgrid (MG). Besides the economic objective of MG operation, the limits of state-of-charge (SOC) and discharging/charging power of the energy storage unit are formulated as chance constraints when accommodating uncertainties of RESs, since mild violations of these constraints are allowed during long-term operation. A closed-loop online update strategy adaptively tightens or relaxes the constraints according to the deviation of the actual violation probability from the desired level, as well as the current rate of change of that deviation. Numerical studies show that the proposed adaptively constrained stochastic MPC for MG optimal operation is much less conservative than scenario-optimization based robust MPC, and also converges better to the desired constraint violation level than other online update strategies.
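
The closed-loop constraint update can be sketched as a proportional correction of a safety margin, driven by the gap between the observed and the allowed violation rate; the gain and the margin bound are illustrative assumptions, not the paper's update law.

```python
def update_tightening(margin, violations, steps, target=0.05,
                      gain=0.5, margin_min=0.0):
    """Closed-loop tightening update: if the empirical violation rate of a
    chance constraint exceeds the allowed level, widen the safety margin;
    if it is below, relax it. Gain and bounds are illustrative."""
    rate = violations / max(steps, 1)
    margin = margin + gain * (rate - target)  # proportional correction
    return max(margin, margin_min)

m = 0.10
m = update_tightening(m, violations=12, steps=100, target=0.05)
print(round(m, 3))  # rate 0.12 > target 0.05, margin grows to 0.135
```

Observing fewer violations than allowed would shrink the margin instead, recovering performance that a fixed, worst-case tightening would give away.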

  14. Constraining the dark energy models with H (z ) data: An approach independent of H0

    Science.gov (United States)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and the Type Ia supernova data, we find that the H(z)/SNIa overall statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis alone. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and Type Ia supernova (SNIa) probes correctly reveals the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when increasing the sample to 100 H(z) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.
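
A chi-square estimator independent of the Hubble constant can be built by profiling H0 out analytically, since H(z) = H0 E(z) enters the statistic quadratically in H0. The sketch below (with an assumed flat ΛCDM E(z) and mock data) illustrates the algebra only; it is not the paper's full statistical procedure.

```python
import math

def chi2_marginalized(z, H_obs, sigma, E):
    """Chi-square for H(z) data with the Hubble constant profiled out:
    chi2(H0) = A - 2*B*H0 + G*H0^2 is minimized at H0* = B/G, giving
    chi2_min = A - B^2/G, which no longer depends on H0."""
    A = sum(h * h / s ** 2 for h, s in zip(H_obs, sigma))
    B = sum(h * E(zi) / s ** 2 for h, zi, s in zip(H_obs, z, sigma))
    G = sum(E(zi) ** 2 / s ** 2 for zi, s in zip(z, sigma))
    return A - B * B / G

# Flat LCDM expansion rate E(z) = H(z)/H0, with an assumed Omega_m = 0.3
E = lambda z: math.sqrt(0.3 * (1 + z) ** 3 + 0.7)
z = [0.1, 0.5, 1.0]
H_obs = [70 * E(zi) for zi in z]  # mock data generated from the model
sigma = [5.0, 5.0, 5.0]
print(chi2_marginalized(z, H_obs, sigma, E))  # exact model: chi2_min ≈ 0
```

Because the minimum over H0 is taken analytically, the resulting statistic constrains only the shape parameters of E(z), which is the point of the H0-independent approach.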

  15. Influence of satellite-derived photolysis rates and NOx emissions on Texas ozone modeling

    Science.gov (United States)

    Tang, W.; Cohan, D. S.; Pour-Biazar, A.; Lamsal, L. N.; White, A. T.; Xiao, X.; Zhou, W.; Henderson, B. H.; Lash, B. F.

    2015-02-01

    Uncertain photolysis rates and emission inventory impair the accuracy of state-level ozone (O3) regulatory modeling. Past studies have separately used satellite-observed clouds to correct the model-predicted photolysis rates, or satellite-constrained top-down NOx emissions to identify and reduce uncertainties in bottom-up NOx emissions. However, the joint application of multiple satellite-derived model inputs to improve O3 state implementation plan (SIP) modeling has rarely been explored. In this study, Geostationary Operational Environmental Satellite (GOES) observations of clouds are applied to derive the photolysis rates, replacing those used in Texas SIP modeling. This changes modeled O3 concentrations by up to 80 ppb and improves O3 simulations by reducing modeled normalized mean bias (NMB) and normalized mean error (NME) by up to 0.1. A sector-based discrete Kalman filter (DKF) inversion approach is incorporated with the Comprehensive Air Quality Model with extensions (CAMx)-decoupled direct method (DDM) model to adjust Texas NOx emissions using a high-resolution Ozone Monitoring Instrument (OMI) NO2 product. The discrepancy between OMI and CAMx NO2 vertical column densities (VCDs) is further reduced by increasing modeled NOx lifetime and adding an artificial amount of NO2 in the upper troposphere. The region-based DKF inversion suggests increasing NOx emissions by 10-50% in most regions, deteriorating the model performance in predicting ground NO2 and O3, while the sector-based DKF inversion tends to scale down area and nonroad NOx emissions by 50%, leading to a 2-5 ppb decrease in ground 8 h O3 predictions. Model performance in simulating ground NO2 and O3 is improved using sector-based inversion-constrained NOx emissions, with 0.25 and 0.04 reductions in NMBs and 0.13 and 0.04 reductions in NMEs, respectively. Using both GOES-derived photolysis rates and OMI-constrained NOx emissions together reduces modeled NMB and NME by 0.05.

  16. Global terrestrial isoprene emission models: sensitivity to variability in climate and vegetation

    Directory of Open Access Journals (Sweden)

    A. Arneth

    2011-08-01

    Full Text Available Due to its effects on the atmospheric lifetime of methane, the burden of tropospheric ozone and the growth of secondary organic aerosol, isoprene is central among the biogenic compounds that need to be taken into account for assessment of anthropogenic air pollution-climate change interactions. Lack of process understanding regarding leaf isoprene production, as well as of suitable observations to constrain and evaluate regional or global simulation results, adds large uncertainties to past, present and future emission estimates. Focusing on contemporary climate conditions, we compare three global isoprene models that differ in their representation of vegetation and isoprene emission algorithm. We specifically aim to investigate the between- and within-model variation that is introduced by varying some of the models' main features, and to determine which spatial and/or temporal features are robust between models and different experimental set-ups. In their individual standard configurations, the models broadly agree with respect to the chief isoprene sources and emission seasonality, with maximum monthly emission rates around 20–25 Tg C when averaged over 30-degree latitudinal bands. They also indicate relatively small (approximately 5 to 10 % around the mean) interannual variability of total global emissions. The models are sensitive to changes in one or more of their main model components and drivers (e.g., underlying vegetation fields, climate input), which can yield increases or decreases in total annual emissions of more than 30 % cumulatively. Varying drivers also strongly alters the seasonal emission pattern. The variable response needs to be interpreted in view of the vegetation emission capacities, as well as diverging absolute and regional distributions of light, radiation and temperature, but the direction of the simulated emission changes was not as uniform as anticipated.
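
Most leaf-level isoprene algorithms used in such models descend from the Guenther et al. formulation, in which a basal emission rate is modulated by temperature and light response factors. A sketch with the commonly quoted coefficient values (treat them as indicative, not as any one model's exact configuration):

```python
import math

def isoprene_emission(E_s, T, PAR):
    """Guenther-type leaf emission E = E_s * gamma_T * gamma_L, with the
    standard temperature (gamma_T) and light (gamma_L) response factors.
    Coefficients follow the widely cited G93-style values: CT1, CT2 in
    J/mol, T in K, PAR in umol m-2 s-1."""
    T_s = 303.0                      # standard leaf temperature (K)
    R = 8.314                        # gas constant (J mol-1 K-1)
    CT1, CT2, T_m = 95000.0, 230000.0, 314.0
    gamma_T = (math.exp(CT1 * (T - T_s) / (R * T_s * T)) /
               (1 + math.exp(CT2 * (T - T_m) / (R * T_s * T))))
    alpha, CL1 = 0.0027, 1.066
    gamma_L = CL1 * alpha * PAR / math.sqrt(1 + alpha ** 2 * PAR ** 2)
    return E_s * gamma_T * gamma_L

# At standard conditions (303 K, strong light) the emission is close to
# the basal rate E_s
print(round(isoprene_emission(E_s=1.0, T=303.0, PAR=1000.0), 3))  # ~0.965
```

The strong nonlinearity of gamma_T is one reason the inter-model spread grows when different climate drivers are swapped in, as the abstract describes.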

  17. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    Full Text Available In this study, land surface related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150 000 km2) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte-Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation), which is easily moisture-stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought-resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies.
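
The Monte-Carlo conditioning step can be sketched with a GLUE-style accept/reject loop; the recession model, parameter names, bounds and threshold below are illustrative stand-ins for the paper's semi-distributed model and satellite evaporation series.

```python
import math
import random

def behavioural_sets(simulate, observed, n_samples, bounds, threshold):
    """GLUE-style Monte-Carlo conditioning: sample parameter sets uniformly
    within bounds, score each simulation against the (satellite)
    observations, and keep the 'behavioural' sets whose RMSE is below a
    threshold."""
    kept = []
    for _ in range(n_samples):
        params = {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        sim = simulate(params)
        rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, observed))
                         / len(observed))
        if rmse < threshold:
            kept.append((params, rmse))
    return kept

# Toy dry-season evaporation recession E(t) = E0 * exp(-t / tau)
def simulate(p):
    return [p["E0"] * math.exp(-t / p["tau"]) for t in range(10)]

random.seed(1)
observed = simulate({"E0": 4.0, "tau": 5.0})  # stands in for satellite data
kept = behavioural_sets(simulate, observed, 2000,
                        {"E0": (1.0, 8.0), "tau": (1.0, 10.0)}, threshold=0.3)
print(len(kept))  # behavioural sets cluster around E0 = 4, tau = 5
```

Plotting the kept parameter sets per land-cover unit is what reveals the clustering the abstract describes, and the spread of each cluster is what the fuzzy membership functions summarize.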

  18. CONSTRAINING THE LIFETIME AND OPENING ANGLE OF QUASARS USING FLUORESCENT Ly α EMISSION: THE CASE OF Q0420–388

    International Nuclear Information System (INIS)

    Borisova, Elena; Lilly, Simon J.; Cantalupo, Sebastiano; Prochaska, J. Xavier; Rakic, Olivera; Worseck, Gabor

    2016-01-01

    A toy model is developed to understand how the spatial distribution of fluorescent emitters in the vicinity of bright quasars could be affected by the geometry of the quasar bi-conical radiation field and by its lifetime. The model is then applied to the distribution of high-equivalent-width Ly α emitters (with rest-frame equivalent widths above 100 Å, threshold used in, e.g., Trainor and Steidel) identified in a deep narrow-band 36 × 36 arcmin² image centered on the luminous quasar Q0420–388. These emitters are found near the edge of the field and show some evidence of an azimuthal asymmetry on the sky of the type expected if the quasar is radiating in a bipolar cone. If these sources are being fluorescently illuminated by the quasar, the two most distant objects require a lifetime of at least 15 Myr for an opening angle of 60° or more, increasing to more than 40 Myr if the opening angle is reduced to a minimum of 30°. However, some other expected signatures of boosted fluorescence are not seen at the current survey limits, e.g., a fall off in Ly α brightness, or equivalent width, with distance. Furthermore, to have most of the Ly α emission of the two distant sources to be fluorescently boosted would require the quasar to have been significantly brighter in the past. This suggests that these particular sources may not be fluorescent, invalidating the above lifetime constraints. This would cast doubt on the use of this relatively low equivalent width threshold and thus also on the lifetime analysis in Trainor and Steidel.
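
The core light-travel-time argument behind such toy models is that fluorescent photons from a cloud arrive later than the direct quasar light, so observing fluorescence bounds the quasar lifetime from below. A heavily simplified sketch (plane-of-sky geometry assumed; the distance and formula are illustrative, not the paper's actual calculation):

```python
def min_lifetime_myr(distance_mpc, cos_theta=0.0):
    """Light-travel-time bound: a cloud at distance r from the quasar, at
    angle theta between the quasar-cloud direction and the line of sight,
    re-emits photons we see delayed by (r/c)(1 - cos(theta)) relative to
    direct quasar light. Seeing fluorescence therefore requires the quasar
    to have been shining at least that long. For a cloud in the plane of
    the sky (cos_theta = 0) the delay is simply r/c. Simplified geometry,
    illustrative only."""
    mpc_to_ly = 3.2616e6  # light-years per megaparsec
    return distance_mpc * mpc_to_ly * (1.0 - cos_theta) / 1e6  # Myr

print(round(min_lifetime_myr(5.0), 1))  # ~16.3 Myr for a 5 Mpc separation
```

Tilting the cloud toward the observer (cos_theta > 0) shortens the delay, which is why the inferred lifetime in the paper depends on the assumed cone geometry and opening angle.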

  19. CONSTRAINING THE LIFETIME AND OPENING ANGLE OF QUASARS USING FLUORESCENT Ly α EMISSION: THE CASE OF Q0420–388

    Energy Technology Data Exchange (ETDEWEB)

    Borisova, Elena; Lilly, Simon J.; Cantalupo, Sebastiano [Institute for Astronomy, ETH Zurich, Zurich, CH-8093 (Switzerland); Prochaska, J. Xavier [UCO/Lick Observatory, UC Santa Cruz, Santa Cruz, CA 95064 (United States); Rakic, Olivera; Worseck, Gabor, E-mail: borisova@phys.ethz.ch [Max-Planck-Institut für Astronomie, Heidelberg, D-69117 (Germany)

    2016-10-20

    A toy model is developed to understand how the spatial distribution of fluorescent emitters in the vicinity of bright quasars could be affected by the geometry of the quasar bi-conical radiation field and by its lifetime. The model is then applied to the distribution of high-equivalent-width Ly α emitters (with rest-frame equivalent widths above 100 Å, threshold used in, e.g., Trainor and Steidel) identified in a deep narrow-band 36 × 36 arcmin² image centered on the luminous quasar Q0420–388. These emitters are found near the edge of the field and show some evidence of an azimuthal asymmetry on the sky of the type expected if the quasar is radiating in a bipolar cone. If these sources are being fluorescently illuminated by the quasar, the two most distant objects require a lifetime of at least 15 Myr for an opening angle of 60° or more, increasing to more than 40 Myr if the opening angle is reduced to a minimum of 30°. However, some other expected signatures of boosted fluorescence are not seen at the current survey limits, e.g., a fall off in Ly α brightness, or equivalent width, with distance. Furthermore, to have most of the Ly α emission of the two distant sources to be fluorescently boosted would require the quasar to have been significantly brighter in the past. This suggests that these particular sources may not be fluorescent, invalidating the above lifetime constraints. This would cast doubt on the use of this relatively low equivalent width threshold and thus also on the lifetime analysis in Trainor and Steidel.

  20. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    Full Text Available We present a nonparametric shape-constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. Possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of the circularly sampled intensity profile is used to evaluate the likelihood of the current segmentation being a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets show that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  1. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schö nlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
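
The bilevel structure of the paper (an upper-level fit of noise-model weights over a lower-level denoising problem) can be illustrated with a drastically simplified stand-in: quadratic smoothing instead of TV, and grid search instead of the quasi-Newton scheme. All signals and parameter grids below are synthetic assumptions.

```python
import numpy as np

def denoise(f, lam):
    """Lower-level problem: min_u ||u - f||^2 + lam * ||D u||^2, a quadratic
    smoothing stand-in for the TV term that keeps the bilevel structure
    tractable (solved exactly via a linear system)."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)  # finite-difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, f)

def learn_weight(f_noisy, u_true, lams):
    """Upper-level problem: pick the weight whose denoised output is
    closest to the ground truth (grid search instead of quasi-Newton)."""
    errors = [np.sum((denoise(f_noisy, lam) - u_true) ** 2) for lam in lams]
    return lams[int(np.argmin(errors))]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
u_true = np.sin(2 * np.pi * t)                 # clean training signal
f = u_true + 0.3 * rng.standard_normal(100)    # noisy observation
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = learn_weight(f, u_true, lams)
print(best)
```

The paper's contribution is to do this weight learning rigorously for nonsmooth TV regularizers and mixtures of noise models, where the lower-level solve and the outer derivative are both substantially harder than in this sketch.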

  2. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so the results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby exploiting spatial correlation. Spectral information and spatial correlation are considered simultaneously when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of each pixel's posterior classification probability can be obtained using loopy belief propagation. The experiments show that the proposed algorithm effectively maintains the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.

  3. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  4. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
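
The alternating structure can be illustrated on a generic box-constrained QP: ADMM alternates an unconstrained quadratic solve, a projection onto the input limits, and a dual update. The sketch below is a dense, minimal stand-in for the Riccati-based, structure-exploiting implementation described in the paper.

```python
import numpy as np

def admm_box_qp(H, g, lo, hi, rho=1.0, iters=300):
    """ADMM for min 0.5 u'Hu + g'u subject to lo <= u <= hi: alternate an
    unconstrained quadratic solve (u-update), a clip onto the box
    (z-update), and a dual ascent step (y-update)."""
    n = len(g)
    z = np.zeros(n)
    y = np.zeros(n)                 # scaled dual variable
    K = H + rho * np.eye(n)         # factorizable once, reused every iter
    for _ in range(iters):
        u = np.linalg.solve(K, -g + rho * (z - y))  # quadratic subproblem
        z = np.clip(u + y, lo, hi)                  # projection subproblem
        y = y + u - z                               # dual update
    return z

H = np.array([[2.0, 0.0], [0.0, 2.0]])
g = np.array([-2.0, -10.0])   # unconstrained minimum at (1, 5)
u = admm_box_qp(H, g, lo=-1.0, hi=2.0)
print(np.round(u, 3))  # box clips the second coordinate: [1. 2.]
```

In the MPC setting, the u-update is exactly the extended LQCP solved by a Riccati recursion, which is where the linear-in-horizon cost quoted in the abstract comes from.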

  5. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    International Nuclear Information System (INIS)

    Harlim, John; Mahdi, Adam; Majda, Andrew J.

    2014-01-01

    A central issue in contemporary science is the development of nonlinear data-driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics-constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet, with blocked and unblocked regimes and a skewed non-Gaussian distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
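
Joint state-and-parameter estimation with an ensemble Kalman filter is usually done by state augmentation: the unknown coefficients are stacked with the state and updated through their sample covariance with the observed variables. The toy scalar model below is a drastic simplification of the 57-mode problem, intended only to show the augmentation mechanics.

```python
import numpy as np

def enkf_parameter_estimation(obs, n_ens=200, r_obs=0.0025, seed=0):
    """Augmented-state EnKF: the state x and an unknown model coefficient a
    are stacked and updated jointly from noisy observations of x alone.
    Toy scalar model x_{k+1} = a * x_k + noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_ens)
    a = 0.5 + 0.3 * rng.standard_normal(n_ens)  # prior parameter ensemble
    for y in obs:
        x = a * x + 0.1 * rng.standard_normal(n_ens)   # forecast step
        cov = np.cov(np.vstack([x, a]))         # joint ensemble covariance
        gain = cov[:, 0] / (cov[0, 0] + r_obs)  # Kalman gain (scalar obs)
        innov = y + np.sqrt(r_obs) * rng.standard_normal(n_ens) - x
        x = x + gain[0] * innov                 # analysis: state
        a = a + gain[1] * innov                 # analysis: parameter
    return float(a.mean())

# Synthetic truth with a = 0.9, observed with noise sd 0.05
rng = np.random.default_rng(1)
xt, a_true, obs = 1.0, 0.9, []
for _ in range(100):
    xt = a_true * xt + 0.1 * rng.standard_normal()
    obs.append(xt + 0.05 * rng.standard_normal())
a_hat = enkf_parameter_estimation(obs)
print(round(a_hat, 2))  # compare with a_true = 0.9
```

The paper's algorithm additionally estimates the noise covariances themselves and must respect the physics constraints of the regression model, which plain augmentation does not guarantee.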

  6. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, M. [Lockheed Martin Solar and Astrophysics Lab, Palo Alto, CA 94304 (United States); Manchester, W. B.; Van der Holst, B.; Sokolov, I.; Tóth, G.; Gombosi, T. I. [Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Mullinix, R. E.; Taktakishvili, A.; Chulaki, A., E-mail: jinmeng@lmsal.com, E-mail: chipm@umich.edu, E-mail: richard.e.mullinix@nasa.gov, E-mail: Aleksandre.Taktakishvili-1@nasa.gov [Community Coordinated Modeling Center, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2017-01-10

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  7. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. 
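The size-density boundary described above can be made concrete with a Reineke-style self-thinning rule, in which maximum sustainable stand density falls off as a power of mean tree diameter. A minimal sketch (the constant and exponent below are illustrative, not values fitted in the study):

```python
def max_density(mean_diameter_cm, k=4.0e5, exponent=-1.6):
    """Reineke-style self-thinning boundary: maximum stems/ha sustainable
    at a given quadratic mean diameter (cm). k and exponent are
    illustrative placeholders, not values fitted to FIA data."""
    return k * mean_diameter_cm ** exponent

def is_self_thinning(stems_per_ha, mean_diameter_cm):
    """A stand at or above the boundary is expected to thin."""
    return stems_per_ha >= max_density(mean_diameter_cm)

# A very dense young stand sits above the boundary and should thin;
# a mature, sparse stand lies well below it.
dense_young = is_self_thinning(20000, 8.0)
sparse_mature = is_self_thinning(300, 35.0)
```

On a log-log plot of density against diameter this boundary is a straight line of slope -1.6, which is the signature the ESM recovers empirically.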

  8. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    Science.gov (United States)

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of routes, and the unobserved variance, respectively. Because the route choice probabilities involve multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Metropolis-Hastings (M-H) sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Reliable estimation results are obtained from Guangzhou Metro data. Furthermore, the proposed CMNP model shows good forecasting performance for route choice probability calculation and good application performance for transfer flow volume prediction. PMID:28591188
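The Metropolis-Hastings step inside such an MCMC estimation can be illustrated on a toy one-parameter posterior (a stand-in for the CMNP utility parameters, not the paper's actual likelihood):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples=20000, step=0.5, seed=1):
    """Generic random-walk Metropolis-Hastings sampler over one parameter."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: N(2, 0.5^2) log-density, normalising constant dropped.
log_post = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
samples = metropolis_hastings(log_post, x0=0.0)
mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
```

In the paper's setting the same accept/reject kernel is applied within a hierarchical Bayes structure over the full CMNP parameter vector.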

  9. Constraining climate sensitivity and continental versus seafloor weathering using an inverse geological carbon cycle model.

    Science.gov (United States)

    Krissansen-Totton, Joshua; Catling, David C

    2017-05-22

    The relative influences of tectonics, continental weathering and seafloor weathering in controlling the geological carbon cycle are unknown. Here we develop a new carbon cycle model that explicitly captures the kinetics of seafloor weathering to investigate carbon fluxes and the evolution of atmospheric CO2 and ocean pH since 100 Myr ago. We compare model outputs to proxy data, and rigorously constrain model parameters using Bayesian inverse methods. Assuming our forward model is an accurate representation of the carbon cycle, to fit proxies the temperature dependence of continental weathering must be weaker than commonly assumed. We find that 15-31 °C (1σ) surface warming is required to double the continental weathering flux, versus 3-10 °C in previous work. In addition, continental weatherability has increased 1.7-3.3 times since 100 Myr ago, demanding explanation by uplift and sea-level changes. The average Earth system climate sensitivity is  K (1σ) per CO2 doubling, which is notably higher than fast-feedback estimates. These conclusions are robust to assumptions about outgassing, modern fluxes and seafloor weathering kinetics.
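The quoted doubling temperatures map directly onto the e-folding temperature Te of an exponential weathering law F(T) = F0·exp((T − T0)/Te), since a flux doubling requires warming of Te·ln 2. A quick check of the numbers (a sketch of the scaling, not the paper's full model):

```python
import math

def efolding_from_doubling(dT_double):
    """e-folding temperature Te (degC) implied by the warming dT_double
    needed to double the weathering flux: dT_double = Te * ln 2."""
    return dT_double / math.log(2)

# This study: 15-31 degC doubles the continental weathering flux.
te_low, te_high = efolding_from_doubling(15.0), efolding_from_doubling(31.0)
# Previous work: 3-10 degC.
te_prev_low, te_prev_high = efolding_from_doubling(3.0), efolding_from_doubling(10.0)
```

The implied Te of roughly 22-45 °C, versus about 4-14 °C previously, is what "weaker temperature dependence" means quantitatively here.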

  10. Modeling and Simulation of the Gonghe geothermal field (Qinghai, China) Constrained by Geophysical

    Science.gov (United States)

    Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.

    2017-12-01

    The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types, and it has become a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth to and the nature of the heat source, and to image the channel of heat flow. This work investigates the origin of the geothermal field using a numerical simulation method constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) measurement profile across this area, we obtain the deep resistivity distribution. Using gravity anomaly inversion constrained by the resistivity profile, the densities of the basins and the underlying rocks can be calculated. Combined with measured rock thermal conductivities, a 2D conceptual geothermal model of the Gonghe area is constructed. Then, an unstructured finite element method is used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated against temperature data from the observation well, and a good match was achieved between the measured values and the model's predicted values. Finally, the geothermal gradient and heat flow distribution of this model are calculated (fig. 1). According to the results of the geophysical exploration, there is a low-resistivity, low-density region (d5) below the geothermal field. We interpret this anomaly as generated by tectonic motion, with the tectonic movement creating an upstream channel for mantle-derived heat, so that basement heat flow values there are higher than in other regions. Model predictions using this boundary condition match the measured values well. The simulated heat flow values show that the mantle-derived heat flow migrates through the boundary of the low-resistivity, low-density anomaly area to the Gonghe geothermal field, with only a small fraction
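For a purely conductive, steady-state crust, the temperature field such a finite element model solves reduces in 1D to T(z) = T0 + q·z/k. A minimal sketch with illustrative values (not the calibrated Gonghe parameters):

```python
def conductive_geotherm(z_m, t_surface=10.0, q_basal=0.075, k=2.5):
    """Steady-state 1D conductive temperature (degC) at depth z (m) for
    basal heat flow q (W/m^2) and thermal conductivity k (W/m/K).
    Parameter values are illustrative, not those calibrated for Gonghe."""
    return t_surface + q_basal * z_m / k

# Geothermal gradient implied by these values: q/k = 30 degC per km.
gradient_per_km = conductive_geotherm(1000.0) - conductive_geotherm(0.0)
```

An anomalously high basal heat flow q, as inferred beneath the d5 anomaly, raises the gradient in direct proportion, which is how the boundary condition controls the match to the observation well.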

  11. Characterising and modelling extended conducted electromagnetic emission

    CSIR Research Space (South Africa)

    Grobler, Inus

    2013-06-01

    Full Text Available , such as common mode and differential mode separation, calibrated with an EMC ETS-Lindgren current probe. Good and workable model accuracies were achieved with the basic Step-Up and Step-Down circuits over the conducted emission frequency band and beyond...

  12. CONSTRAINING THE GRB-MAGNETAR MODEL BY MEANS OF THE GALACTIC PULSAR POPULATION

    Energy Technology Data Exchange (ETDEWEB)

    Rea, N. [Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam (Netherlands); Gullón, M.; Pons, J. A.; Miralles, J. A. [Departament de Fisica Aplicada, Universitat d’Alacant, Ap. Correus 99, E-03080 Alacant (Spain); Perna, R. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Dainotti, M. G. [Physics Department, Stanford University, Via Pueblo Mall 382, Stanford, CA (United States); Torres, D. F. [Instituto de Ciencias de l’Espacio (ICE, CSIC-IEEC), Campus UAB, Carrer Can Magrans s/n, E-08193 Barcelona (Spain)

    2015-11-10

    A large fraction of gamma-ray bursts (GRBs) displays an X-ray plateau phase within <10^5 s of the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning, newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GRB-magnetar scenario. We re-analyze the X-ray plateaus of all Swift GRBs with known redshift between 2005 January and 2014 August. From the derived initial magnetic field distribution for the possible magnetars left behind by the GRBs, we study the evolution and properties of a simulated GRB-magnetar population using numerical simulations of magnetic field evolution, coupled with Monte Carlo simulations of pulsar population synthesis in our Galaxy. We find that if the GRB X-ray plateaus are powered by the rotational energy of a newly formed magnetar, the current observational properties of the Galactic magnetar population are not compatible with formation within the GRB scenario (regardless of the GRB type or rate at z = 0). Direct consequences would be that we should allow for the existence of magnetars and “super-magnetars” having different progenitors, and that Type Ib/c SNe related to long GRBs systematically form neutron stars with higher initial magnetic fields. We put an upper limit of ≤16 “super-magnetars” formed by a GRB in our Galaxy in the past Myr (at 99% c.l.). This limit is somewhat smaller than what is roughly expected from long GRB rates, although the very large uncertainties do not allow us to draw strong conclusions in this respect.
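The energetics behind the magnetar-plateau hypothesis can be sketched with the standard magnetic dipole spin-down formulas (order-of-magnitude CGS estimate with fiducial values, not the paper's detailed field-evolution simulations):

```python
import math

C = 3.0e10   # speed of light, cm/s
I = 1.0e45   # fiducial neutron star moment of inertia, g cm^2
R = 1.0e6    # fiducial neutron star radius, cm

def spindown_luminosity(B, P):
    """Dipole spin-down luminosity (erg/s) for surface field B (G)
    and spin period P (s): L = B^2 R^6 Omega^4 / (6 c^3)."""
    omega = 2.0 * math.pi / P
    return B**2 * R**6 * omega**4 / (6.0 * C**3)

def spindown_timescale(B, P):
    """Time (s) to radiate the initial rotational energy E = I Omega^2 / 2
    at the initial luminosity."""
    omega = 2.0 * math.pi / P
    return 0.5 * I * omega**2 / spindown_luminosity(B, P)

# A millisecond magnetar with B = 1e15 G:
L = spindown_luminosity(1.0e15, 1.0e-3)
tau = spindown_timescale(1.0e15, 1.0e-3)
```

For these fiducial numbers the luminosity is of order 1e49 erg/s and the spin-down time a few thousand seconds, comfortably inside the observed <10^5 s plateau window, which is why the scenario is energetically attractive.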

  13. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with their large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize the independence of each tier from the whole model. For example, the snow sub-model is optimized first using the MODIS snow cover product, followed by the soil water sub-model optimized against satellite-based ET (estimated by an empirical upscaling method, Support Vector Regression (SVR); Yang et al. 2007), the photosynthesis model optimized against satellite-based GPP (also based on the SVR method), and the respiration and residual carbon cycle models optimized against biomass data. In an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests, but the parameter optimization scheme successfully reduced these biases.
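The tiered calibration strategy (optimize each sub-model in turn, freezing its parameters before moving to the next tier) can be sketched generically; the toy "sub-models" and pseudo-observations below are stand-ins, not Biome-BGC itself:

```python
def grid_search(loss, lo, hi, n=201):
    """1-D grid search over [lo, hi]; a stand-in for a real optimizer."""
    return min((lo + (hi - lo) * i / (n - 1) for i in range(n)), key=loss)

# Tier 1: toy snow sub-model (melt = a * temperature), calibrated against
# a pseudo-observed melt sensitivity of 0.35 (playing the role of MODIS).
tier1_loss = lambda a: (a - 0.35) ** 2
a_opt = grid_search(tier1_loss, 0.0, 1.0)

# Tier 2: toy soil-water sub-model that uses the *frozen* tier-1 parameter;
# the pseudo-observed ET product implies a_opt * b = 0.175, i.e. b = 0.5.
tier2_loss = lambda b: (a_opt * b - 0.175) ** 2
b_opt = grid_search(tier2_loss, 0.0, 1.0)
```

Fixing `a_opt` before tier 2 is what keeps each tier's parameter search small and nearly independent of the rest of the model, at the cost of ignoring cross-tier parameter interactions.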

  14. Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial

    Directory of Open Access Journals (Sweden)

    A. Robinson

    2011-04-01

    Full Text Available Using a new approach to force an ice sheet model, we performed an ensemble of simulations of the Greenland Ice Sheet evolution during the last two glacial cycles, with emphasis on the Eemian Interglacial. This ensemble was generated by perturbing four key parameters in the coupled regional climate-ice sheet model and by introducing additional uncertainty in the prescribed "background" climate change. The sensitivity of the surface melt model to climate change was determined to be the dominant driver of ice sheet instability, as reflected by simulated ice sheet loss during the Eemian Interglacial period. To eliminate unrealistic parameter combinations, constraints from present-day and paleo information were applied. The constraints include (i) the diagnosed present-day surface mass balance partition between surface melting and ice discharge at the margin; (ii) the modeled present-day elevation at GRIP; and (iii) the modeled elevation reduction at GRIP during the Eemian. Using these three constraints, a total of 360 simulations with 90 different model realizations were filtered down to 46 simulations and 20 model realizations considered valid. The paleo constraint eliminated more sensitive melt parameter values, in agreement with the surface mass balance partition assumption. The constrained simulations resulted in a range of Eemian ice loss of 0.4–4.4 m sea level equivalent, with a more likely range of about 3.7–4.4 m sea level equivalent if the GRIP δ18O isotope record can be considered an accurate proxy for the precipitation-weighted annual mean temperatures.
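The constraint-filtering step (keep only ensemble members whose diagnostics fall in the allowed ranges) can be sketched as follows; the parameter-to-diagnostic mappings and ranges below are toy stand-ins for the three constraints, not the actual model:

```python
import random

def passes(diags, constraints):
    """Keep a simulation only if every diagnostic is in its allowed range."""
    return all(lo <= diags[k] <= hi for k, (lo, hi) in constraints.items())

rng = random.Random(42)
ensemble = [{"melt_sens": rng.uniform(0.0, 2.0)} for _ in range(360)]

def diagnostics(p):
    # Toy stand-ins for (i) the SMB partition, (ii) present-day GRIP
    # elevation, and (iii) Eemian GRIP elevation reduction.
    return {"smb_partition": 0.3 + 0.2 * p["melt_sens"],
            "grip_elev_m": 3200.0 - 150.0 * p["melt_sens"],
            "eemian_drop_m": 400.0 * p["melt_sens"]}

constraints = {"smb_partition": (0.4, 0.7),
               "grip_elev_m": (3000.0, 3200.0),
               "eemian_drop_m": (100.0, 500.0)}

valid = [p for p in ensemble if passes(diagnostics(p), constraints)]
```

Because each constraint carves out an interval of the melt-sensitivity axis, their intersection is what survives, mirroring how 360 simulations were reduced to 46.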

  15. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions, performing 100 000 simulations to constrain the mean response of the buried body as well as the uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, resolving it above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
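The Monte Carlo idea can be sketched with a point-mass approximation for the buried body (a rough stand-in for the prism forward modelling done with Fatiando a Terra); the vertical gradient Gzz follows from the second derivative of the potential Gm/r:

```python
import math
import random

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gzz_point(mass_kg, dx, dy, dz):
    """Vertical gravity gradient (1/s^2) of a point mass at offset
    (dx, dy, dz) m from the observation point: Gzz = G m (3 dz^2 - r^2) / r^5."""
    r2 = dx * dx + dy * dy + dz * dz
    return G * mass_kg * (3.0 * dz * dz - r2) / (math.sqrt(r2) ** 5)

# Monte Carlo over an uncertain density contrast of a 200 m-radius
# spherical body 800 m below the sensor (illustrative numbers).
rng = random.Random(0)
volume = 4.0 / 3.0 * math.pi * 200.0**3
samples = []
for _ in range(5000):
    drho = rng.gauss(300.0, 50.0)  # density contrast, kg/m^3
    samples.append(gzz_point(drho * volume, 0.0, 0.0, 800.0))

mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

The mean here is a few Eötvös (1 Eötvös = 1e-9 s^-2), with the spread quantifying how density uncertainty propagates into the predicted signal, the same logic as the 100 000-member ensemble in the study.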

  16. Internet gaming disorder: Inadequate diagnostic criteria wrapped in a constraining conceptual model.

    Science.gov (United States)

    Starcevic, Vladan

    2017-06-01

    Background and aims: The paper "Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field" by Kuss, Griffiths, and Pontes (in press) critically examines the DSM-5 diagnostic criteria for Internet gaming disorder (IGD) and addresses the issue of whether IGD should be reconceptualized as gaming disorder, regardless of whether video games are played online or offline. This commentary provides additional critical perspectives on the concept of IGD. Methods: The focus of this commentary is on the addiction model on which the concept of IGD is based, the nature of the DSM-5 criteria for IGD, and the inclusion of withdrawal symptoms and tolerance as the diagnostic criteria for IGD. Results: The addiction framework on which the DSM-5 concept of IGD is based is not without problems and represents only one of multiple theoretical approaches to problematic gaming. The polythetic, non-hierarchical DSM-5 diagnostic criteria for IGD make the concept of IGD unacceptably heterogeneous. There is no support for maintaining withdrawal symptoms and tolerance as the diagnostic criteria for IGD without their substantial revision. Conclusions: The addiction model of IGD is constraining and does not contribute to a better understanding of the various patterns of problematic gaming. The corresponding diagnostic criteria need a thorough overhaul, which should be based on a model of problematic gaming that can accommodate its disparate aspects.

  17. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    Science.gov (United States)

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.
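The core computation (accumulate ground velocity into a home vector, using the direction of travel rather than body heading, so holonomic motion is handled naturally) can be sketched in a few lines; this is the abstract algorithm, not a model of the neural circuit:

```python
import math

def integrate_path(steps):
    """Accumulate a home vector from (speed, travel_direction) samples.
    travel_direction (radians) is the direction of ground velocity, which
    for a holonomic flyer need not equal the body heading."""
    x = y = 0.0
    for speed, direction in steps:
        x += speed * math.cos(direction)
        y += speed * math.sin(direction)
    home_distance = math.hypot(x, y)
    home_direction = math.atan2(-y, -x)  # direction pointing back to the nest
    return home_distance, home_direction

# Outbound: 100 m east, then 100 m north (directions in radians).
dist, direction = integrate_path([(100.0, 0.0), (100.0, math.pi / 2)])
```

After this outbound leg the home vector points southwest with length 100·√2 m; a steering signal is then just the difference between this home direction and the current heading.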

  18. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    and valley slopes within the catchment are used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.

  19. Inverse modelling of national and European CH4 emissions using the atmospheric zoom model TM5

    Directory of Open Access Journals (Sweden)

    P. Bergamaschi

    2005-01-01

    Full Text Available A synthesis inversion based on the atmospheric zoom model TM5 is used to derive top-down estimates of CH4 emissions from individual European countries for the year 2001. We employ a model zoom over Europe with 1° × 1° resolution that is two-way nested into the global model domain (with a resolution of 6° × 4°). This approach ensures consistent boundary conditions for the zoom domain and thus European top-down estimates consistent with global CH4 observations. The TM5 model, driven by ECMWF analyses, simulates synoptic scale events at most European and global sites fairly well, and the use of high-frequency observations allows exploiting the information content of individual synoptic events. A detailed source attribution is presented for a comprehensive set of 56 monitoring sites, assigning the atmospheric signal to the emissions of individual European countries and larger global regions. The available observational data put significant constraints on emissions from different regions. Within Europe, in particular several Western European countries are well constrained. The inversion results suggest up to 50-90% higher anthropogenic CH4 emissions in 2001 for Germany, France and the UK compared to reported UNFCCC values (EEA, 2003). A recent revision of the German inventory, however, resulted in an increase of reported CH4 emissions by 68.5% (EEA, 2004), now in very good agreement with our top-down estimate. The top-down estimate for Finland is distinctly smaller than the a priori estimate, suggesting much smaller CH4 emissions from Finnish wetlands than derived from the bottom-up inventory. The EU-15 totals are relatively close to UNFCCC values (within 4-30%) and appear very robust for different inversion scenarios.
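A linear Bayesian synthesis inversion of this kind combines a prior (bottom-up) emission estimate with observations through a Kalman-type gain. A toy two-region, three-site version (illustrative numbers, not the TM5 transport operator):

```python
import numpy as np

def synthesis_inversion(x_a, S_a, H, y, S_e):
    """Posterior emission estimate x_hat = x_a + K (y - H x_a), where
    K = S_a H^T (H S_a H^T + S_e)^{-1} is the Kalman-type gain."""
    K = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_e)
    return x_a + K @ (y - H @ x_a)

# Toy setup: 2 source regions observed at 3 sites. H maps emissions
# (Tg/yr) to concentration enhancements (ppb); values are illustrative.
H = np.array([[2.0, 0.2],
              [0.5, 1.5],
              [1.0, 1.0]])
x_true = np.array([6.0, 3.0])
x_a = np.array([4.0, 4.0])           # a priori (bottom-up) inventory
S_a = np.diag([4.0, 4.0])            # prior uncertainty covariance
S_e = np.diag([0.01, 0.01, 0.01])    # small model-data mismatch covariance
y = H @ x_true                        # noise-free pseudo-observations

x_hat = synthesis_inversion(x_a, S_a, H, y, S_e)
```

With well-constrained transport and small observation error, the posterior is pulled almost entirely to the true emissions, which is the sense in which "several Western European countries are well constrained" by the site network.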

  20. Commitment Versus Persuasion in the Three-Party Constrained Voter Model

    Science.gov (United States)

    Mobilia, Mauro

    2013-04-01

    In the framework of the three-party constrained voter model, where voters of two radical parties (A and B) interact with "centrists" (C and Cζ), we study the competition between a persuasive majority and a committed minority. In this model, A's and B's are incompatible voters that can convince centrists or be swayed by them. Here, radical voters are more persuasive than centrists, whose sub-population comprises susceptible agents C and a fraction ζ of centrist zealots Cζ. Whereas C's may adopt the opinions A and B with respective rates 1+δA and 1+δB (with δA ≥ δB > 0), Cζ's are committed individuals that always remain centrists. Furthermore, A and B voters can become (susceptible) centrists C with a rate 1. The resulting competition between commitment and persuasion is studied in the mean field limit and for a finite population on a complete graph. At mean field level, there is a continuous transition from a coexistence phase when ζ < Δc to a phase in which centrism prevails when ζ ≥ Δc. When commitment is outrun by persuasion (ζ < Δc), consensus is reached much more slowly: persuasive voters and centrists coexist when δA > δB, whereas all species coexist when δA = δB. When ζ ≥ Δc and the initial density of centrists is low, one finds τ ∼ ln N (when N ≫ 1). Our analytical findings are corroborated by stochastic simulations.
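The stochastic simulations mentioned can be sketched as a toy Monte Carlo on a complete graph; the update rules follow the abstract (centrists swayed at rate 1+δ, radicals becoming centrists at rate 1, zealots immutable), with rates normalised into probabilities. Population sizes and parameter values are illustrative:

```python
import random

def simulate(n=50, zeta=0.5, dA=0.1, dB=0.05, seed=3, max_steps=5_000_000):
    """Toy constrained voter model on a complete graph. States: 'A'/'B'
    radicals, 'C' susceptible centrists, 'Z' centrist zealots. Runs until
    the all-centrist consensus (the absorbing state) is reached."""
    rng = random.Random(seed)
    n_z = int(zeta * n)
    pop = ['Z'] * n_z + ['C'] * (n - n_z - 20) + ['A'] * 10 + ['B'] * 10
    for step in range(max_steps):
        if 'A' not in pop and 'B' not in pop:
            return step, pop                      # consensus reached
        i, j = rng.randrange(n), rng.randrange(n)  # voter i meets voter j
        if i == j:
            continue
        vi, vj = pop[i], pop[j]
        if vi == 'C' and vj in ('A', 'B'):
            # susceptible centrist swayed at rate 1 + delta (normalised by 2)
            rate = 1.0 + (dA if vj == 'A' else dB)
            if rng.random() < rate / 2.0:
                pop[i] = vj
        elif vi in ('A', 'B') and vj in ('C', 'Z'):
            if rng.random() < 0.5:                # rate 1, same normalisation
                pop[i] = 'C'
    return max_steps, pop

steps, final = simulate()
```

Because zealots never convert and the all-centrist state cannot be left, centrism is the unique absorbing state; with a sizeable zealot fraction the radicals die out quickly, as in the ζ ≥ Δc regime.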

  1. Constrained structural dynamic model verification using free vehicle suspension testing methods

    Science.gov (United States)

    Blair, Mark A.; Vadlamudi, Nagarjuna

    1988-01-01

    Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.
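The central identity (driving-point transfer functions of the free-free payload, excited at the interface, have minima at the natural frequencies of the interface-constrained payload) can be checked on a two-mass toy model. Masses and stiffness below are illustrative (the payload mass loosely echoes the 12-ton HST), not values from the paper:

```python
import math

m1, m2, k = 3000.0, 12000.0, 4.8e7  # adapter mass, payload mass (kg), spring (N/m)

def driving_point_frf(omega):
    """Free-free receptance X1/F1 at the interface mass m1 (rad/s input).
    Dynamic stiffness matrix: [[k - m1 w^2, -k], [-k, k - m2 w^2]]."""
    det = (k - m1 * omega**2) * (k - m2 * omega**2) - k * k
    return (k - m2 * omega**2) / det

# Scan for the antiresonance (minimum of |FRF|) ...
omegas = [1.0 + 0.01 * i for i in range(20000)]
w_min = min(omegas, key=lambda w: abs(driving_point_frf(w)))

# ... and compare with the payload's natural frequency when its base
# (the interface) is held fixed: omega = sqrt(k / m2).
w_constrained = math.sqrt(k / m2)
```

The FRF numerator vanishes exactly at ω² = k/m2, so the antiresonance of the suspended (free-free) test reproduces the launch-constrained mode, which is the premise of the suspension testing method.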

  2. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    International Nuclear Information System (INIS)

    Dossett, Jason; Parkinson, David; Hu, Bin

    2014-01-01

    In order to explain cosmic acceleration without invoking "dark" physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy survey to probe the formation of structure on large scales which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution of this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0 by an order of magnitude, giving log10(B0) < −4.07 at the 95% confidence limit. Finally, we test whether the effect of adding the lensing amplitude ALens and the sum of the neutrino mass ∑mν is able to reconcile current tensions present in these parameters, but find f(R) gravity an inadequate explanation.
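A one-sided 95% upper limit like the bound on log10(B0) is read off the posterior samples as a percentile. A sketch on toy draws (illustrative Gaussian posterior, not the actual ISiTGR chain):

```python
import numpy as np

def upper_limit(samples, cl=0.95):
    """One-sided upper credible limit from posterior samples."""
    return float(np.percentile(samples, 100.0 * cl))

# Toy posterior draws of log10(B0); mean and width are illustrative only.
rng = np.random.default_rng(7)
log10_B0 = rng.normal(-5.5, 0.7, size=100_000)

lim = upper_limit(log10_B0)  # ~ mean + 1.645 sigma for a Gaussian
```

For a non-informative log prior, working directly in log10(B0) space as above is what makes the limit prior-consistent.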

  3. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    Science.gov (United States)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea Region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With increased precision and longer time series of monthly gravity observations of the GRACE satellite mission, it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data is corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regional calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be more concentrated to the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.

  4. Modelling emissions from natural gas flaring

    Directory of Open Access Journals (Sweden)

    G. Ezaina Umukoro

    2017-04-01

    Full Text Available The world today recognizes the significance of environmental sustainability to the development of nations. Hence, the role the oil and gas industry plays in environmentally degrading activities such as gas flaring is of global concern. This study presents material balance equations and predicts results for non-hydrocarbon emissions such as CO2, CO, NO, NO2 and SO2 from flaring (combustion) of 12 natural gas samples representing compositions of natural gas of global origin. Gaseous emission estimates and patterns were modelled by coding material balance equations for six reaction types and combustion conditions in a computer program. On average, the anticipated gaseous emissions from flaring natural gas at an average annual global flaring rate of 126 bcm per year (between 2000 and 2011), in million metric tonnes (mmt), are 560 mmt, 48 mmt, 91 mmt, 93 mmt and 50 mmt for CO2, CO, NO, NO2 and SO2 respectively. The model predicts gaseous emissions for each of the individual combustion types and conditions anticipated in gas flaring operations. It will assist the effort by environmental agencies and all concerned to track and measure the extent of environmental pollution caused by gas flaring operations in the oil and gas industry.
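The simplest material balance underlying such models is complete combustion of methane, CH4 + 2 O2 → CO2 + 2 H2O, giving a fixed CO2/CH4 mass ratio of 44/16. A sketch for the pure-methane case only (real flare gas also contains heavier hydrocarbons, CO2, N2 and sulphur compounds, which the paper's equations account for):

```python
# Molar masses, g/mol
M_CH4, M_CO2 = 16.04, 44.01

def co2_from_ch4(mass_ch4_t):
    """Tonnes of CO2 from complete combustion of a given mass of CH4,
    via CH4 + 2 O2 -> CO2 + 2 H2O (pure methane only)."""
    return mass_ch4_t * M_CO2 / M_CH4

ratio = co2_from_ch4(1.0)  # about 2.74 t CO2 per t CH4 flared
```

Incomplete combustion shifts part of this carbon to CO instead of CO2, which is why the model distinguishes six reaction types and combustion conditions.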

  5. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    Science.gov (United States)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (e.g. the CPOM stand-alone model, the RASM high-resolution regional ice-ocean model, the Met Office fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics, with a substantial change of the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM, see Wilchinsky, 2010) have shown that individual floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing, thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use for the first time in the context of sea ice research a mathematical metric, the tensorial Minkowski functionals (Schröder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery, and to the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy as well as its temporal evolution against the EAP model we are able to constrain the
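The idea of quantifying alignment with a structure tensor can be sketched with a simplified scalar measure: build the 2-D second-order orientation tensor ⟨n ⊗ n⟩ from lead/crack orientations and take the difference of its eigenvalues (0 for isotropy, 1 for perfect alignment). This is a toy diagnostic, not the full Minkowski tensor analysis:

```python
import math

def anisotropy(angles):
    """Eigenvalue difference of the 2-D orientation tensor <n x n> built
    from orientation angles (radians): 0 = isotropic, 1 = fully aligned."""
    n = len(angles)
    a11 = sum(math.cos(t) ** 2 for t in angles) / n
    a22 = sum(math.sin(t) ** 2 for t in angles) / n
    a12 = sum(math.cos(t) * math.sin(t) for t in angles) / n
    # Eigenvalues of [[a11, a12], [a12, a22]]; return lambda1 - lambda2.
    half = math.sqrt((0.5 * (a11 - a22)) ** 2 + a12 ** 2)
    return 2.0 * half

aligned = anisotropy([0.1] * 50)                                   # fully aligned
isotropic = anisotropy([0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4])  # isotropic
```

The EAP model's prognostic structure tensor plays exactly this role, and comparing such measures between SAR products, the DEM and the model is what constrains the tensor's evolution time scales.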

  6. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

    Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies).
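One of the sensitivities behind such ranges is the temperature dependence of microbial methane production, commonly modelled with a Q10 factor. A sketch (illustrative baseline values, not CLM4Me's full parameterization, which also includes the pH and redox functions tested above):

```python
def ch4_production(T, R0=1.0, Q10=2.0, T0=25.0):
    """Methane production rate scaled by a Q10 temperature factor:
    R = R0 * Q10 ** ((T - T0) / 10). R0 is the rate at reference
    temperature T0 (degC); all values here are illustrative."""
    return R0 * Q10 ** ((T - T0) / 10.0)

warm = ch4_production(35.0)  # one Q10 interval above T0: rate doubles
cool = ch4_production(15.0)  # one interval below: rate halves
```

Because emissions respond multiplicatively to Q10 and substrate supply, modest parameter shifts propagate into the wide (150-346 Tg CH4 yr-1) global range reported.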

  7. Application of the emission inventory model TEAM: Uncertainties in dioxin emission estimates for central Europe

    NARCIS (Netherlands)

    Pulles, M.P.J.; Kok, H.; Quass, U.

    2006-01-01

    This study uses an improved emission inventory model to assess the uncertainties in emissions of dioxins and furans associated with both knowledge on the exact technologies and processes used, and with the uncertainties of both activity data and emission factors. The annual total emissions for the

  8. Constraining soil C cycling with strategic, adaptive action for data and model reporting

    Science.gov (United States)

    Harden, J. W.; Swanston, C.; Hugelius, G.

    2015-12-01

    Regional to global carbon assessments include a variety of models, data sets, and conceptual structures. These include strategies for representing the role and capacity of soils to sequester, release, and store carbon. Traditionally, many soil carbon data sets emerged from agricultural missions focused on mapping and classifying soils to enhance and protect production of food and fiber. More recently, soil carbon assessments have allowed for more strategic measurement to address the functional and spatially explicit role that soils play in land-atmosphere carbon exchange. While soil data sets are increasingly inter-comparable and increasingly sampled to accommodate global assessments, soils remain poorly constrained or understood with regard to their role in spatio-temporal variations in carbon exchange. A more deliberate approach to rapid improvement in our understanding involves a community-based activity that embraces both a nimble data repository and a dynamic structure for prioritization. Data input and output can be transparent and retrievable as data-derived products, while also being subjected to rigorous queries for merging and harmonization into a searchable, comprehensive, transparent database. Meanwhile, adaptive action groups can prioritize data and modeling needs that emerge through workshops, meta-data analyses, or model testing. Our continual renewal of priorities should address soil processes, mechanisms, and feedbacks that significantly influence global C budgets and/or significantly impact the needs and services of regional soil resources affected by C management. In order to refine the International Soil Carbon Network, we welcome suggestions for such groups to be led on topics such as, but not limited to, manipulation experiments, extreme climate events, post-disaster C management, past climate-soil interactions, or water-soil-carbon linkages. We also welcome ideas for a business model that can foster and promote idea and data sharing.

  9. Modeling Polarized Emission from Black Hole Jets: Application to M87 Core Jet

    Directory of Open Access Journals (Sweden)

    Monika Mościbrodzka

    2017-09-01

    Full Text Available We combine three-dimensional general-relativistic numerical models of hot, magnetized Advection Dominated Accretion Flows around a supermassive black hole, and the corresponding outflows from them, with a general relativistic polarized radiative transfer model to produce synthetic radio images and spectra of jet outflows. We apply the model to the underluminous core of the M87 galaxy. The assumptions and results of the calculations are discussed in the context of millimeter observations of the M87 jet launching zone. Our ab initio polarized emission and rotation measure models allow us to address the constraints on the mass accretion rate onto the M87 supermassive black hole.

  10. A Kinematic Model of Slow Slip Constrained by Tremor-Derived Slip Histories in Cascadia

    Science.gov (United States)

    Schmidt, D. A.; Houston, H.

    2016-12-01

    We explore new ways to constrain the kinematic slip distributions of large slow slip events using constraints from tremor. Our goal is to prescribe one or more slip pulses that propagate across the fault and scale appropriately to satisfy the observations. Recent work (Houston, 2015) inferred a crude representative stress time history at an average point using the tidal stress history, the static stress drop, and the timing of the evolution of tidal sensitivity of tremor over several days of slip. To convert a stress time history into a slip time history, we use simulations to explore the stressing history of a small locked patch due to an approaching rupture front. We assume that the locked patch releases strain through a series of tremor bursts whose activity rate is related to the stressing history. To test whether the functional form of a slip pulse is reasonable, we assume a hypothetical slip time history (an Ohnaka pulse) timed with the occurrence of tremor to create a rupture front that propagates along the fault. The duration of the rupture front for a fault patch is constrained by the observed tremor catalog for the 2010 ETS event. The slip amplitude is scaled appropriately to match the observed surface displacements from GPS. Through a forward simulation, we evaluate the ability of the tremor-derived slip history to accurately predict the pattern of surface displacements observed by GPS. We find that the temporal progression of surface displacements is well modeled by a 2-4 day slip pulse, suggesting that some of the longer slip durations typically found in time-dependent GPS inversions are biased by temporal smoothing. However, at some locations on the fault, the tremor lingers beyond the passage of the slip pulse. A small percentage (5-10%) of the tremor appears to be activated ahead of the approaching slip pulse, and tremor asperities experience a driving stress on the order of 10 kPa/day.
Tremor amplitude, rather than just tremor counts, is needed
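
    The pulse-timing idea above lends itself to a compact forward-model sketch. The following Python toy (all numbers are invented: the pulse shape, rupture speed, patch spacing, and Green's coefficients are illustrative assumptions, not the study's values) sweeps a fixed-duration slip pulse along a 1-D fault and predicts a displacement time series at a single hypothetical GPS site:

    ```python
    import numpy as np

    # Toy forward model: a slip pulse of fixed duration sweeps along a 1-D
    # fault at constant speed; surface displacement at one "GPS" site is the
    # Green's-function-weighted sum of patch slip. All numbers illustrative.
    n_patch = 50
    dx = 2.0                      # km between patch centers
    v_rupt = 8.0                  # km/day rupture-front speed
    t_pulse = 3.0                 # days of slip at each patch
    slip_final = 0.02             # m, final slip per patch

    t = np.arange(0.0, 20.0, 0.1)               # days
    onset = np.arange(n_patch) * dx / v_rupt    # pulse arrival time per patch

    # Linear ramp pulse: slip grows over t_pulse days, then holds slip_final
    slip = slip_final * np.clip((t[:, None] - onset[None, :]) / t_pulse, 0.0, 1.0)

    G = np.full(n_patch, 1.0 / n_patch)         # toy Green's coefficients
    u_gps = slip @ G                            # displacement time series (m)
    ```

    Replacing the linear ramp with an Ohnaka-style pulse and the uniform Green's coefficients with elastic half-space values would turn this skeleton into the kind of forward simulation described in the abstract.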

  11. Supporting the search for the CEP location with nonlocal PNJL models constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Contrera, Gustavo A. [IFLP, UNLP, CONICET, Facultad de Ciencias Exactas, La Plata (Argentina); Gravitation, Astrophysics and Cosmology Group, FCAyG, UNLP, La Plata (Argentina); CONICET, Buenos Aires (Argentina); Grunfeld, A.G. [CONICET, Buenos Aires (Argentina); Comision Nacional de Energia Atomica, Departamento de Fisica, Buenos Aires (Argentina); Blaschke, David [University of Wroclaw, Institute of Theoretical Physics, Wroclaw (Poland); Joint Institute for Nuclear Research, Moscow Region (Russian Federation); National Research Nuclear University (MEPhI), Moscow (Russian Federation)

    2016-08-15

    We investigate the possible location of the critical endpoint in the QCD phase diagram based on nonlocal covariant PNJL models including a vector interaction channel. The form factors of the covariant interaction are constrained by lattice QCD data for the quark propagator. The comparison of our results for the pressure including the pion contribution and the scaled pressure shift Δ P/T {sup 4} vs. T/T{sub c} with lattice QCD results shows a better agreement when Lorentzian form factors for the nonlocal interactions and the wave function renormalization are considered. The strength of the vector coupling is used as a free parameter which influences results at finite baryochemical potential. It is used to adjust the slope of the pseudocritical temperature of the chiral phase transition at low baryochemical potential and the scaled pressure shift accessible in lattice QCD simulations. Our study, albeit presently performed at the mean-field level, supports the very existence of a critical point and favors its location within a region that is accessible in experiments at the NICA accelerator complex. (orig.)

  12. CA-Markov Analysis of Constrained Coastal Urban Growth Modeling: Hua Hin Seaside City, Thailand

    Directory of Open Access Journals (Sweden)

    Rajendra Shrestha

    2013-04-01

    Full Text Available Thailand, a developing country in Southeast Asia, is experiencing rapid development, particularly urban growth as a response to the expansion of the tourism industry. Hua Hin city provides an excellent example of an area where urbanization has flourished due to tourism. This study focuses on how the dynamic urban horizontal expansion of the seaside city of Hua Hin is constrained by the coast, thus making sustainability, in terms of managing and planning for its local inhabitants, visitors, and sites, an issue for this popular tourist destination. The study examines the association of land use type and land use change by integrating Geo-Information technology, a statistical model, and CA-Markov analysis for sustainable land use planning. The study identifies that land use types and land use changes from 1999 to 2008 shifted as a result of increased mobility, a trend closely tied to urban horizontal expansion. The sequence of land use change has run from forest to agriculture, from agriculture to grassland, and then to bare land and built-up areas. Coastal urban growth has, for a decade, been expanding horizontally from a downtown center along the beach to the western area around the golf course, the southern area along the beach, the southwest grassland area, and then the northern area near the airport.
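
    The Markov half of a CA-Markov analysis can be illustrated with a minimal Python sketch. The land-use maps and class codes below are invented toy data; the point is only the mechanics: cross-tabulate two dates to obtain a transition matrix, then project the class distribution forward:

    ```python
    import numpy as np

    # Hypothetical land-use codes: 0=forest, 1=agriculture, 2=grassland, 3=built-up
    t1999 = np.array([0, 0, 0, 1, 1, 2, 2, 3, 0, 1])   # toy 1999 map (flattened)
    t2008 = np.array([0, 1, 1, 1, 2, 2, 3, 3, 1, 2])   # toy 2008 map

    n = 4
    counts = np.zeros((n, n))
    for a, b in zip(t1999, t2008):
        counts[a, b] += 1

    # Row-normalize to get the Markov transition matrix P[i, j] = Pr(i -> j)
    P = counts / counts.sum(axis=1, keepdims=True)

    # Project the 2008 class distribution one step forward (toward ~2017)
    dist2008 = np.bincount(t2008, minlength=n) / len(t2008)
    dist_next = dist2008 @ P
    ```

    A cellular-automata component would additionally apply neighbourhood and suitability rules when allocating the projected quantities in space; this sketch covers only the Markov projection.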

  13. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model.

    Science.gov (United States)

    Daout, S; Barbot, S; Peltzer, G; Doin, M-P; Liu, Z; Jolivet, R

    2016-11-16

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and loading rate of the ramp-décollement faults beneath metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence, with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, and 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and Puente Hills thrusts. Incorporating conservation of motion into geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology for estimating the tectonic loading and seismic potential of buried fault networks.
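
    The conservation-of-motion bookkeeping reads directly off the quoted mean rates. A small Python check (using only the central values from the abstract; composing the budgets into a single block velocity is a simplifying illustration that ignores fault geometry and the stated uncertainties):

    ```python
    import numpy as np

    # Mean slip rates from the study (mm/yr), partitioned across faults
    strike_slip = {"San Andreas": 25.5, "Whittier": 3.1}
    dip_slip = {"Sierra Madre": 2.7, "Puente Hills": 2.5}

    total_ss = sum(strike_slip.values())   # fault-parallel budget
    total_ds = sum(dip_slip.values())      # convergence (fault-normal) budget

    # Conservation of motion: block velocity implied by the partition
    v_block = np.hypot(total_ss, total_ds)
    ```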

  14. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    Science.gov (United States)

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for finite element modeling of RFA with constant power. Because the electric conductivity of tissue changes with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but rather the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to equal the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential, augmented with the constant-power constraint, were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution derived in this study or with results obtained by ANSYS or experiments. This work places the finite element modeling of constant-power RFA on a firm mathematical basis and opens a pathway toward achieving optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
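
    The constant-power difficulty is visible even in a zero-dimensional caricature: because conductivity rises with temperature, the voltage satisfying P = VI must be re-solved at every time step. The Python toy below (the lumped geometry factor, conductivity law, and heat capacity are all invented) applies Newton-Raphson to the power constraint, in the spirit of the paper's approach but without the Laplace/Pennes field equations:

    ```python
    # Toy 0-D analogue of constant-power RFA: conductivity rises with
    # temperature, so the applied voltage is re-solved each time step to
    # keep P = V*I fixed. All numbers are illustrative assumptions.
    P_target = 10.0        # W, prescribed power
    G = 0.05               # S*m, geometry factor of the toy electrode
    T = 37.0               # deg C, initial tissue temperature
    C = 2.0                # J/K, lumped heat capacity
    dt = 0.1               # s

    def sigma(T):
        # electric conductivity rising ~1.5%/K, a common linearization
        return 0.2 * (1.0 + 0.015 * (T - 37.0))

    history = []
    V = 1.0
    for _ in range(100):
        # Newton-Raphson on f(V) = sigma(T)*G*V**2 - P_target = 0
        for _ in range(20):
            f = sigma(T) * G * V**2 - P_target
            df = 2.0 * sigma(T) * G * V
            V -= f / df
        power = sigma(T) * G * V**2
        T += dt * power / C          # Joule heating (no losses in this toy)
        history.append(power)
    ```

    The delivered power stays pinned at the target while the required voltage drops as the tissue heats, which is exactly the coupling that makes neither a pure Dirichlet nor a pure Neumann condition adequate.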

  15. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
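
    The key mechanism, formally updating a priori equal age-model probabilities using information from other records, can be sketched in a few lines of Python. Here the age-model ensemble is caricatured as integer time shifts of a synthetic proxy, scored with a Gaussian likelihood against a well-dated reference series; all data are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Reference climate signal (e.g., from well-dated nearby proxies), synthetic
    t = np.arange(100)
    reference = np.sin(2 * np.pi * t / 30.0)

    # Ensemble of "age models" = time shifts applied to a time-uncertain proxy.
    # Shift 0 is correct; the others misalign the record.
    shifts = [-4, -2, 0, 2, 4]
    proxy = np.sin(2 * np.pi * t / 30.0) + 0.1 * rng.standard_normal(t.size)

    log_w = []
    for s in shifts:
        aligned = np.roll(proxy, s)
        resid = reference - aligned
        log_w.append(-0.5 * np.sum(resid**2) / 0.1**2)  # Gaussian log-likelihood

    # A priori equal weights; posterior weights from the likelihood
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    best = shifts[int(np.argmax(w))]
    ```

    The posterior mass concentrates on the correctly aligned age model, which is the re-weighting that the hierarchical framework performs jointly with the climate reconstruction.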

  16. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    Full Text Available The map of complete Bouguer anomaly of Costa Rica shows an elongated NW-SE trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central, which is built of geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly and characterization of the fluid pathways and reservoirs of arc magmatism, a constrained 3D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology, previously published seismic tomography, and P-wave velocity models derived from wide-angle seismic refraction, as well as by results from methods of direct interpretation of the gravity field obtained for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.

  17. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable for important applications in the field of aerospace (e.g., space docking tests). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor that accounts for the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. A prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved by the measurement model that distinguishes branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)
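
    The core modeling point, that branch stiffness differs in tension and compression so a single-stiffness measurement model misreads loads, can be illustrated with a one-branch Python toy (the stiffness values and displacement are invented, not the sensor's calibrated parameters):

    ```python
    # Toy illustration: a measuring branch with different tensile and
    # compressive stiffness. The force-displacement map is piecewise linear,
    # so a model calibrated with a single stiffness misreads compressive loads.
    k_tension = 2.0e6      # N/m when the branch is stretched (invented)
    k_compression = 2.6e6  # N/m when the branch is compressed (invented)

    def branch_force(x):
        # piecewise-linear measurement model
        return k_tension * x if x >= 0 else k_compression * x

    x = -1.0e-5                      # compressive displacement (m)
    true_f = branch_force(x)         # force under the piecewise model
    naive_f = k_tension * x          # single-stiffness model's estimate
    error_pct = abs(naive_f - true_f) / abs(true_f) * 100
    ```

    Even this toy asymmetry produces a force error of roughly 23%, which is why distinguishing the two stiffness regimes tightens the class I and class II errors reported above.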

  18. A hydrodynamical model of Kepler's supernova remnant constrained by x-ray spectra

    International Nuclear Information System (INIS)

    Ballet, J.; Arnaud, M.; Rothinfluo, R.; Chieze, J.P.; Magne, B.

    1988-01-01

    The remnant of the historical supernova observed by Kepler in 1604 was recently observed in x-rays by the EXOSAT satellite up to 10 keV. A strong Fe K emission line around 6.5 keV is readily apparent in the spectrum. From an analysis of the light curve of the SN, reconstructed from historical descriptions, a previous study proposed to classify it as type I. Standard models of SN I based on carbon deflagration of a white dwarf predict the synthesis of about 0.5 M☉ of iron in the ejecta. Observing the iron line is a crucial check for such models. It has been argued that the light curve of SN II-L is very similar to that of SN I and that the original observations are compatible with either type. In view of this uncertainty the authors have run a hydrodynamics-ionization code for both SN II and SN I remnants

  19. Globally COnstrained Local Function Approximation via Hierarchical Modelling, a Framework for System Modelling under Partial Information

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman

    2000-01-01

    Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation ... be obtained. This paper presents a new approach for system modelling under partial (global) information (the so-called gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations ... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under a prior linear time invariant structure, where local regression fails as a result of high dimensionality.

  20. Constrained model predictive control for load-following operation of APR reactors

    International Nuclear Information System (INIS)

    Kim, Jae Hwan; Lee, Sim Won; Kim, Ju Hyun; Na, Man Gyun; Yu, Keuk Jong; Kim, Han Gon

    2012-01-01

    Load-following operation of the APR+ reactor is needed to control the power effectively using the control rods and to limit reliance on boric acid for reactivity control, allowing flexible plant operation. The axial flux imbalance that arises during load-following operation is usually caused by xenon-induced oscillation: xenon has a very high absorption cross-section, and its effect on the reactor is delayed through the iodine precursor. Automatic load-following operation has advantages in terms of safety and economic operation of the reactor, so the controller has to be designed efficiently. Therefore, an advanced control method that provides automatic control, flexibility, safety, and convenience is necessary for load-following operation of the APR+ reactor. In this paper, the constrained model predictive control (MPC) method is applied to design an automatic load-following controller for the APR+ reactor that regulates both the thermal power level and the axial shape index (ASI). Some controllers use only the current tracking command, but MPC considers future commands in addition to the current one, so it can achieve better tracking performance. Furthermore, MPC is used in many industrial process control systems. The basic concept of MPC is to solve an optimization problem over a finite future time interval at the present time and to implement the first optimal control input as the current control input. The KISPAC-1D code, which models the APR+ nuclear power plants, is interfaced to the proposed controller to verify the tracking performance of the reactor power level and ASI. The proposed controller exhibits very fast tracking responses
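
    The receding-horizon principle described above, optimize over a finite future window and apply only the first input, can be sketched for a toy scalar plant in Python. This is not the KISPAC-1D/ASI controller: the plant, horizon, weights, and the crude clip-after-solve constraint handling are all illustrative simplifications:

    ```python
    import numpy as np

    # Minimal receding-horizon sketch: scalar plant x+ = a*x + b*u, horizon N,
    # quadratic tracking cost, box constraint on u. Solve the unconstrained
    # finite-horizon least-squares problem, clip to the box, apply only the
    # first input (the defining MPC step). Plant numbers are illustrative.
    a, b = 0.95, 0.1
    N = 10                 # prediction horizon
    u_max = 1.0            # actuator limit
    lam = 0.01             # input penalty
    r = 1.0                # setpoint (e.g., normalized power level)

    def mpc_input(x0):
        # Prediction: x_k = a^k x0 + sum_j a^(k-1-j) b u_j  ->  x = f + G u
        f = np.array([a**k * x0 for k in range(1, N + 1)])
        G = np.zeros((N, N))
        for k in range(N):
            for j in range(k + 1):
                G[k, j] = a**(k - j) * b
        # min ||f + G u - r||^2 + lam ||u||^2  (ridge least squares)
        u = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (r - f))
        return float(np.clip(u[0], -u_max, u_max))

    x = 0.0
    traj = [x]
    for _ in range(60):
        x = a * x + b * mpc_input(x)
        traj.append(x)
    ```

    A production constrained MPC would solve a proper quadratic program with the constraints inside the optimization rather than clipping the unconstrained solution, but the apply-first-input structure is the same.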

  1. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance but has poor real-time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real-time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve performance superior to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.
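
    The step of combining N simple per-landmark displacements into one global warp via a weighted least-squares constraint can be sketched in Python. For clarity the global warp is reduced to a pure translation and the patch-expert responses are simulated; all numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # N landmarks on a toy "face"; the true warp is a translation (tx, ty).
    # Each patch-expert yields a local displacement d_i with a confidence w_i;
    # the global warp is the weighted least-squares fit to those N answers.
    landmarks = rng.uniform(0, 100, size=(20, 2))
    true_shift = np.array([3.0, -2.0])

    noise = rng.standard_normal((20, 2))
    d = true_shift + 0.5 * noise            # noisy local displacement estimates
    w = rng.uniform(0.5, 1.0, size=20)      # patch-expert confidences

    # Weighted LS for a pure-translation warp: argmin_t sum_i w_i ||t - d_i||^2
    t_hat = (w[:, None] * d).sum(axis=0) / w.sum()
    ```

    For richer warps (similarity, piecewise affine), each local displacement contributes a small linear system in the warp parameters instead of a direct vote, but the weighted least-squares aggregation is the same, and the N local searches are independent, which is what makes the ELS step parallelizable.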

  2. Constraining groundwater flow model with geochemistry in the FUA and Cabril sites. Use in the ENRESA 2000 PA exercise

    International Nuclear Information System (INIS)

    Samper, J.; Carrera, J.; Bajos, C.; Astudillo, J.; Santiago, J.L.

    1999-01-01

    Hydrogeochemical activities have been a key factor in verifying and constraining the groundwater flow models developed for the safety assessment of the FUA uranium mill tailings restoration and the Cabril L/ILW disposal facility. The lessons learned at both sites will be applied to groundwater transport modelling in the current PA exercises (ENRESA 2000). The groundwater flow model for the Cabril site, which represents a low-permeability fractured medium, was developed using the TRANSIN code series developed by UPC-ENRESA. The hydrogeochemical data obtained from systematic yearly sampling and analysis campaigns were successfully applied to distinguish between local and regional flow and between young and old groundwater. The salinity content, mainly the chloride anion content, was the most critical hydrogeochemical datum for constraining the groundwater flow model. (author)

  3. Constraining supersymmetric models using Higgs physics, precision observables and direct searches

    International Nuclear Information System (INIS)

    Zeune, Lisa

    2014-08-01

    We present various complementary possibilities to exploit experimental measurements in order to test and constrain supersymmetric (SUSY) models. Direct searches for SUSY particles have not resulted in any signal so far, and limits on the SUSY parameter space have been set. Measurements of the properties of the observed Higgs boson at ∼126 GeV as well as of the W boson mass (M_W) can provide valuable indirect constraints, supplementing the ones from direct searches. This thesis is divided into three major parts: In the first part we present the currently most precise prediction for M_W in the Minimal Supersymmetric Standard Model (MSSM) with complex parameters and in the Next-to-Minimal Supersymmetric Standard Model (NMSSM). The evaluation includes the full one-loop result and all relevant available higher-order corrections of Standard Model (SM) and SUSY type. We perform a detailed scan over the MSSM parameter space, taking into account the latest experimental results, including the observation of a Higgs signal. We find that the current measurements of M_W and the top quark mass (m_t) slightly favour a non-zero SUSY contribution. The impact of different SUSY sectors on the prediction of M_W as well as the size of the higher-order SUSY corrections are analysed both in the MSSM and the NMSSM. We investigate the genuine NMSSM contribution from the extended Higgs and neutralino sectors and highlight differences between the M_W predictions in the two SUSY models. In the second part of the thesis we discuss possible interpretations of the observed Higgs signal in SUSY models. The properties of the observed Higgs boson are compatible with the SM so far, but many other interpretations are also possible. Performing scans over the relevant parts of the MSSM and the NMSSM parameter spaces and applying relevant constraints from Higgs searches, flavour physics, and electroweak measurements, we find that a Higgs boson at ∼126 GeV, which decays into two photons, can in

  4. Time-constrained mother and expanding market: emerging model of under-nutrition in India

    Directory of Open Access Journals (Sweden)

    S. Chaturvedi

    2016-07-01

    Full Text Available Abstract Background Persistent high levels of under-nutrition in India despite economic growth continue to challenge political leadership and policy makers at the highest level. The present inductive enquiry was conducted to map the perceptions of mothers and other key stakeholders, to identify emerging drivers of childhood under-nutrition. Methods We conducted a multi-centric qualitative investigation in six empowered action group states of India. The study sample included 509 in-depth interviews with mothers of undernourished and normally nourished children, policy makers, district level managers, implementers and facilitators. Sixty-six focus group discussions and 72 non-formal interactions were conducted in two rounds with primary caretakers of undernourished children, Anganwadi Workers and Auxiliary Nurse Midwives. Results Based on the perceptions of the mothers and other key stakeholders, a model evolved inductively showing core themes as drivers of under-nutrition. The most forceful emerging themes were: a multitasking, time-constrained mother with dwindling family support; fragile food security or seasonal food paucity; a child-targeted market with wide availability and consumption of ready-to-eat market food items; rising non-food expenditure in the context of rising food prices; inadequate and inappropriate feeding; delayed recognition of under-nutrition and delayed care seeking; and inadequate responsiveness of the health care system and Integrated Child Development Services (ICDS). The study emphasized that the persistence of child malnutrition in India is also tied closely to the high workload and consequent time constraint of mothers who are increasingly pursuing income-generating activities and enrolled in the paid labour force, without robust institutional support for childcare. Conclusion The emerging framework needs to be further tested through mixed and multiple method research approaches to quantify the contribution of time limitation of

  5. Evaluation of methane emissions from West Siberian wetlands based on inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H-S; Inoue, G [Research Institute for Humanity and Nature, 457-4 Motoyama, Kamigamo, Kita-ku, Kyoto 603-8047 (Japan); Maksyutov, S; Machida, T [National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506 (Japan); Glagolev, M V [Lomonosov Moscow State University, GSP-1, Leninskie Gory, Moscow 119991 (Russian Federation); Patra, P K [Research Institute for Global Change/JAMSTEC, 3173-25 Showa-cho, Kanazawa-ku, Yokohama, Kanagawa 236-0001 (Japan); Sudo, K, E-mail: heonsook.kim@gmail.com [Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan)

    2011-07-15

    West Siberia contains the largest extent of wetlands in the world, including large peat deposits; the wetland area is equivalent to 27% of the total area of West Siberia. This study used inverse modeling to refine emissions estimates for West Siberia using atmospheric CH{sub 4} observations and two wetland CH{sub 4} emissions inventories: (1) the global wetland emissions dataset of the NASA Goddard Institute for Space Studies (the GISS inventory), which includes emission seasons and emission rates based on climatology of monthly surface air temperature and precipitation, and (2) the West Siberian wetland emissions data (the Bc7 inventory), based on in situ flux measurements and a detailed wetland classification. The two inversions using the GISS and Bc7 inventories estimated annual mean flux from West Siberian wetlands to be 2.9 {+-} 1.7 and 3.0 {+-} 1.4 Tg yr{sup -1}, respectively, which are lower than the 6.3 Tg yr{sup -1} predicted in the GISS inventory, but similar to those of the Bc7 inventory (3.2 Tg yr{sup -1}). The well-constrained monthly fluxes and a comparison between the predicted CH{sub 4} concentrations in the two inversions suggest that the Bc7 inventory predicts the seasonal cycle of West Siberian wetland CH{sub 4} emissions more reasonably, indicating that the GISS inventory predicts more emissions from wetlands in northern and middle taiga.
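
    The inverse step used in such studies is, at its core, a Gaussian synthesis inversion. A minimal Python sketch with synthetic numbers (a toy footprint matrix, priors, and covariances; not the study's transport model or data):

    ```python
    import numpy as np

    # Textbook Gaussian synthesis inversion: adjust two prior flux categories
    # (wetland, other) so modeled concentrations match observations.
    # All numbers are synthetic.
    x_prior = np.array([6.3, 10.0])        # Tg/yr: wetland, all other sources
    H = np.array([[2.0, 1.0],              # ppb per (Tg/yr), toy footprints
                  [0.5, 1.5],
                  [1.0, 1.0]])
    x_true = np.array([3.0, 10.0])
    y = H @ x_true                          # perfect observations for the toy

    B = np.diag([4.0, 1.0])                # prior error covariance
    R = 0.01 * np.eye(3)                   # observation error covariance

    # Kalman-gain form of the posterior mean
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + K @ (y - H @ x_prior)
    ```

    With observation errors much smaller than the prior spread, the posterior recovers the true wetland flux, mirroring how the inversions above revise the GISS prior of 6.3 Tg yr-1 downward to about 3 Tg yr-1.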

  6. Glacial/interglacial wetland, biomass burning, and geologic methane emissions constrained by dual stable isotopic CH4 ice core records

    Science.gov (United States)

    Bock, Michael; Schmitt, Jochen; Beck, Jonas; Seth, Barbara; Chappellaz, Jérôme; Fischer, Hubertus

    2017-07-01

    Atmospheric methane (CH4) records reconstructed from polar ice cores represent an integrated view on processes predominantly taking place in the terrestrial biogeosphere. Here, we present dual stable isotopic methane records [δ13CH4 and δD(CH4)] from four Antarctic ice cores, which provide improved constraints on past changes in natural methane sources. Our isotope data show that tropical wetlands and seasonally inundated floodplains are most likely the controlling sources of atmospheric methane variations for the current and two older interglacials and their preceding glacial maxima. The changes in these sources are steered by variations in temperature, precipitation, and the water table as modulated by insolation, (local) sea level, and monsoon intensity. Based on our δD(CH4) constraint, it seems that geologic emissions of methane may play a steady but only minor role in atmospheric CH4 changes and that the glacial budget is not dominated by these sources. Superimposed on the glacial/interglacial variations is a marked difference in both isotope records, with systematically higher values during the last 25,000 y compared with older time periods. This shift cannot be explained by climatic changes. Rather, our isotopic methane budget points to a marked increase in fire activity, possibly caused by biome changes and accumulation of fuel related to the late Pleistocene megafauna extinction, which took place in the course of the last glacial.
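
    The logic of an isotopic source budget can be shown with a two-source mass balance in Python. The source signatures and totals below are invented, literature-style numbers used only to illustrate the linear system such budgets solve (the study itself partitions more sources using both δ13C and δD):

    ```python
    import numpy as np

    # Two-source isotope mass balance (toy numbers): solve wetland vs. biomass
    # burning emissions from the total source and its mixed d13C signature.
    d13c_wetland = -60.0    # permil, assumed wetland end-member
    d13c_fire = -25.0       # permil, assumed biomass-burning end-member
    total = 200.0           # Tg/yr total source (toy)
    d13c_mix = -53.0        # permil of the total source (toy)

    # [ 1    1  ] [E_w]   [ total            ]
    # [ dw   df ] [E_f] = [ d13c_mix * total ]
    A = np.array([[1.0, 1.0], [d13c_wetland, d13c_fire]])
    b = np.array([total, d13c_mix * total])
    E_wetland, E_fire = np.linalg.solve(A, b)
    ```

    A heavier (less negative) mixed signature immediately implies a larger fire fraction, which is the arithmetic behind reading the post-25 ka isotopic shift as increased fire activity.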

  7. Robust and Efficient Constrained DFT Molecular Dynamics Approach for Biochemical Modeling

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Levy, B.; Demachy, I.; de la Lande, A.

    2012-01-01

    Roč. 8, č. 2 (2012), s. 418-427 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords: constrained density functional theory * electron transfer * density fitting Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 5.389, year: 2012

  8. Constraining Gamma-Ray Pulsar Gap Models with a Simulated Pulsar Population

    Science.gov (United States)

    Pierbattista, Marco; Grenier, I. A.; Harding, A. K.; Gonthier, P. L.

    2012-01-01

    With the large sample of young gamma-ray pulsars discovered by the Fermi Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four gamma-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying gamma-ray and radio visibility criteria, we normalise the simulation to the number of detected radio pulsars by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of Fermi detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of 10 of the predicted luminosity to produce a reasonable number of gamma-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power (E), we skew the initial magnetic field and period distributions in an attempt to account for the high-E Fermi pulsars. While we compromise the agreement between simulated and detected distributions of radio pulsars, the simulations fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high E, and they cannot explain the high probability of detecting both the radio and gamma-ray beams at high E. The beaming factor remains close to 1.0 over 4 decades in E evolution for the slot gap, whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with E is compatible with the large dispersion of gamma-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to the polar cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of gamma-ray pulsars

  9. GRACE gravity data help constraining seismic models of the 2004 Sumatran earthquake

    Science.gov (United States)

    Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.

    2011-10-01

    The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Using compressible self-gravitating Earth models that include sea level feedback in a new self-consistent way and are designed to compute gravitational perturbations due to volume changes separately, we are able to prove that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, the gravitational effect of which is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, consisting of the peak-to-peak gravity anomaly and an asymmetry coefficient, that is, the ratio of the negative gravity anomaly over the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid-moment-tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle.
Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided

  10. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. 
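    The infinite-medium check described above amounts to comparing the time scale of interest against a characteristic diffusion time t_c ~ L²/De for a finite matrix block. The sketch below uses hypothetical parameter values, not those of the actual tracer test or repository site.

```python
# Characteristic time for matrix diffusion to penetrate a finite block:
#   t_c ~ L**2 / D_e
# All parameter values are hypothetical, for illustration only.
D_e = 1e-13          # m^2/s, effective diffusion coefficient
L_block = 0.05       # m, finite matrix block half-width

t_c = L_block**2 / D_e                      # seconds
t_c_years = t_c / (3600 * 24 * 365.25)

t_tracer = 0.1       # years, tracer-test time scale
t_pa = 1e5           # years, performance-assessment time scale

# The infinite-medium assumption is reasonable only while t << t_c.
print(f"t_c ~ {t_c_years:.0f} years")
print("infinite-medium at tracer scale:", t_tracer < t_c_years)
print("infinite-medium at PA scale:", t_pa < t_c_years)
```

For these illustrative values the matrix behaves as effectively infinite at the tracer-test scale but not at the PA scale, which is the mechanism behind the order-of-magnitude divergence between the two conceptualizations.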
Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in

  11. Constraining the heat flux between Enceladus’ tiger stripes: numerical modeling of funiscular plains formation

    Science.gov (United States)

    Bland, Michael T.; McKinnon, William B; Schenk, Paul M.

    2015-01-01

    The Cassini spacecraft’s Composite Infrared Spectrometer (CIRS) has observed at least 5 GW of thermal emission at Enceladus’ south pole. The vast majority of this emission is localized on the four long, parallel, evenly-spaced fractures dubbed tiger stripes. However, the thermal emission from regions between the tiger stripes has not been determined. These spatially localized regions have a unique morphology consisting of short-wavelength (∼1 km) ridges and troughs with topographic amplitudes of ∼100 m, and a generally ropy appearance that has led to them being referred to as “funiscular terrain.” Previous analysis pursued the hypothesis that the funiscular terrain formed via thin-skinned folding, analogous to that occurring on a pahoehoe flow top (Barr, A.C., Preuss, L.J. [2010]. Icarus 208, 499–503). Here we use finite element modeling of lithospheric shortening to further explore this hypothesis. Our best-case simulations reproduce funiscular-like morphologies, although our simulated fold wavelengths after 10% shortening are 30% longer than those observed. Reproducing short-wavelength folds requires high effective surface temperatures (∼185 K), an ice lithosphere (or high-viscosity layer) with a low thermal conductivity (one-half to one-third that of intact ice or lower), and very high heat fluxes (perhaps as great as 400 mW m−2). These conditions are driven by the requirement that the high-viscosity layer remain extremely thin (≲200 m). Although the required conditions are extreme, they can be met if a layer of fine grained plume material 1–10 m thick, or a highly fractured ice layer >50 m thick insulates the surface, and the lithosphere is fractured throughout as well. The source of the necessary heat flux (a factor of two greater than previous estimates) is less obvious. We also present evidence for an unusual color/spectral character of the ropy terrain, possibly related to its unique surface texture. Our simulations demonstrate

  12. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique for estimating parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data across conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open
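    The Pareto-optimality machinery at the core of such ensemble techniques rests on a simple dominance test between objective vectors. A minimal sketch of non-dominated filtering (written in Python rather than Julia, and independent of JuPOETs' actual API):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Two conflicting training objectives: error on data set 1 vs error on data set 2.
candidates = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0), (3.0, 3.0)]
front = pareto_front(candidates)
print(front)  # -> [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)]
```

Parameter sets on this front trade off one training objective against the other without being strictly worse on both, which is what "ensembles on or near the optimal tradeoff surface" means.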

  13. The air emissions risk assessment model (AERAM)

    International Nuclear Information System (INIS)

    Gratt, L.B.

    1991-01-01

    AERAM is an environmental analysis and power generation station investment decision support tool. AERAM calculates the public health risk (in terms of lifetime cancers) in the nearby population from pollutants released into the air. AERAM consists of four main subroutines: Emissions, Air, Exposure and Risk. The Emissions subroutine uses power plant parameters to calculate the expected release of the pollutants. Coal-fired and oil-fired power plant models are currently available; a gas-fired plant model is under preparation. The release of the pollutants into the air is followed by their dispersal in the environment. The dispersion in the Air subroutine uses the Environmental Protection Agency's model, Industrial Source Complex - Long Term. Additional dispersion models (Industrial Source Complex - Short Term and Cooling Tower Drift) are being implemented for future AERAM versions. The Exposure subroutine uses the ambient concentrations to compute population exposures for the pollutants of concern. The exposures are used with the corresponding dose-response model in the Risk subroutine to estimate both the total population risk and individual risk. The risk at the dispersion receptor-population centroid with the maximum concentration is also calculated for regulatory purposes. In addition, automated interfaces with AirTox (an air risk decision model) have been implemented to extend AERAM's steady-state single solution to the decision-under-uncertainty domain. AERAM was used for public health risk assessments, for investment decisions on additional pollution control systems based on health risk reductions, and for the economics of fuel vs. health risk tradeoffs. AERAM provides the state-of-the-art capability for evaluating the public health impact of airborne toxic substances in response to regulations and public concern.

  14. Air Quality Modelling and the National Emission Database

    DEFF Research Database (Denmark)

    Jensen, S. S.

    The project focuses on institutional strengthening for carrying out national air emission inventories based on the CORINAIR methodology. The present report describes the link between emission inventories and air quality modelling, to ensure that the new national air emission inventory is able to take into account the data requirements of air quality models.

  15. Modeling regional-scale wildland fire emissions with the wildland fire emissions information system

    Science.gov (United States)

    Nancy H.F. French; Donald McKenzie; Tyler Erickson; Benjamin Koziol; Michael Billmire; K. Endsley; Naomi K.Y. Scheinerman; Liza Jenkins; Mary E. Miller; Roger Ottmar; Susan Prichard

    2014-01-01

    As carbon modeling tools become more comprehensive, spatial data are needed to improve quantitative maps of carbon emissions from fire. The Wildland Fire Emissions Information System (WFEIS) provides mapped estimates of carbon emissions from historical forest fires in the United States through a web browser. WFEIS improves access to data and provides a consistent...

  16. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    Science.gov (United States)

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, and the results further substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, comparisons with the Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
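    The general idea, that a Taylor-expansion-based difference formula raises the order of the discretization error relative to an Euler difference, can be demonstrated with generic backward-difference formulas. These are illustrative only, not the paper's exact ZNN discretization:

```python
import math

def euler_diff(f, t, h):
    # Euler (first-order) backward difference: O(h) truncation error.
    return (f(t) - f(t - h)) / h

def taylor_diff(f, t, h):
    # Three-point backward formula from a Taylor expansion: O(h^2) error.
    return (3*f(t) - 4*f(t - h) + f(t - 2*h)) / (2*h)

f, t = math.sin, 1.0
exact = math.cos(t)
for h in (0.1, 0.05):
    e1 = abs(euler_diff(f, t, h) - exact)
    e2 = abs(taylor_diff(f, t, h) - exact)
    print(f"h={h}: Euler err={e1:.2e}, Taylor err={e2:.2e}")
```

Halving h roughly halves the Euler error but quarters the Taylor-type error, the same order-raising effect that turns an O(h²) ZNN residual into O(h³) in the paper's models.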

  17. Integrating satellite retrieved leaf chlorophyll into land surface models for constraining simulations of water and carbon fluxes

    KAUST Repository

    Houborg, Rasmus

    2013-07-01

    In terrestrial biosphere models, key biochemical controls on carbon uptake by vegetation canopies are typically assigned fixed literature-based values for broad categories of vegetation types although in reality significant spatial and temporal variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite based retrievals of leaf chlorophyll for estimating leaf photosynthetic capacity and for constraining model simulations of water and carbon fluxes. © 2013 IEEE.

  18. Constraining local 3-D models of the saturated-zone, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Barr, G.E.; Shannon, S.A.

    1994-01-01

    A qualitative three-dimensional analysis of the saturated zone flow system was performed for an 8 km x 8 km region including the potential Yucca Mountain repository site. Certain recognized geologic features of unknown hydraulic properties were introduced to assess the general response of the flow field to these features. Two of these features, the Solitario Canyon fault and the proposed fault in Drill Hole Wash, appear to constrain flow and allow calibration.

  19. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) Model-I: building an emissions data base

    Science.gov (United States)

    Smith, S. N.; Mueller, S. F.

    2010-05-01

    A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates non-methane volatile organic compound (NMVOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, windblown dust particulate, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (NMVOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the atmosphere

  20. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) model - Part 1: Building an emissions data base

    Science.gov (United States)

    Smith, S. N.; Mueller, S. F.

    2010-01-01

    A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates volatile organic compound (VOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as windblown dust and sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (VOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the atmosphere. The seasonality and

  1. Modeling Spatial and Temporal Variability in Ammonia Emissions from Agricultural Fertilization

    Science.gov (United States)

    Balasubramanian, S.; Koloutsou-Vakakis, S.; Rood, M. J.

    2013-12-01

    Ammonia (NH3) is an important component of the reactive nitrogen cycle and a precursor to formation of atmospheric particulate matter (PM). Predicting regional PM concentrations and deposition of nitrogen species to ecosystems requires representative emission inventories. Emission inventories have traditionally been developed using top-down approaches and more recently from data assimilation based on satellite and ground-based ambient concentrations and wet deposition data. The National Emission Inventory (NEI) indicates agricultural fertilization as the predominant contributor (56%) to NH3 emissions in the Midwest USA in 2002. However, due to limited understanding of the complex interactions between fertilizer usage, farm practices, soil and meteorological conditions, and the absence of detailed statistical data, such emission estimates are currently based on generic emission factors, time-averaged temporal factors and coarse spatial resolution. Given the significance of this source, our study focuses on developing an improved NH3 emission inventory for agricultural fertilization at finer spatial and temporal scales for air quality modeling studies. Firstly, a high-spatial-resolution 4 km x 4 km NH3 emission inventory for agricultural fertilization has been developed for Illinois by modifying the spatial allocation of emissions, combining crop-specific fertilization rates with cropland distribution in the Sparse Matrix Operator Kernel Emissions model. Net emission estimates of our method are within 2% of the NEI, since both methods are constrained by fertilizer sales data. However, we identified localized crop-specific NH3 emission hotspots at sub-county resolutions absent in the NEI. Secondly, we have adopted the DeNitrification-DeComposition (DNDC) biogeochemistry model to simulate the physical and chemical processes that control volatilization of nitrogen as NH3 to the atmosphere after fertilizer application and resolve the variability at the hourly scale
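    The spatial reallocation step can be sketched as distributing a county NH3 total to grid cells in proportion to crop-specific application rate times cropland area, so that the county total (constrained by fertilizer sales) is preserved. All rates and areas below are hypothetical:

```python
# Allocate a county NH3 total to grid cells in proportion to
# crop-specific fertilizer rate x cropland area (hypothetical numbers).
county_total = 100.0  # t NH3/yr, fixed by fertilizer sales data

# Per grid cell: {crop: hectares of that crop}
cells = [
    {"corn": 400.0, "soy": 100.0},
    {"corn": 50.0,  "soy": 300.0},
    {"corn": 0.0,   "soy": 0.0},    # urban cell: no cropland
]
rate = {"corn": 1.5, "soy": 0.2}    # relative NH3 potential per hectare

weights = [sum(rate[c] * ha for c, ha in cell.items()) for cell in cells]
total_w = sum(weights)
alloc = [county_total * w / total_w for w in weights]
print([round(a, 1) for a in alloc])  # -> [82.1, 17.9, 0.0]
```

The county total is conserved (so net estimates stay near the NEI), while cells dominated by high-rate crops emerge as hotspots that uniform county-level allocation would smear out.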

  2. Theoretical models of neutron emission in fission

    International Nuclear Information System (INIS)

    Madland, D.G.

    1992-01-01

    A brief survey of theoretical representations of two of the observables in neutron emission in fission is given, namely, the prompt fission neutron spectrum N(E) and the average prompt neutron multiplicity ν̄p. Early representations of the two observables are presented and their deficiencies are discussed. This is followed by summaries and examples of recent theoretical models for the calculation of these quantities. Emphasis is placed upon the predictability and accuracy of the new models. In particular, the dependencies of N(E) and ν̄p upon the fissioning nucleus and its excitation energy are treated. Recent work in the calculation of the prompt fission neutron spectrum matrix N(E,En), where En is the energy of the neutron inducing fission, is then discussed. Concluding remarks address the current status of our ability to calculate these observables with confidence, the direction of future theoretical efforts, and limitations of current and future calculations. Finally, recommendations are presented as to which model should be used currently and which model should be pursued in future efforts.

  3. On spontaneous photon emission in collapse models

    International Nuclear Information System (INIS)

    Adler, Stephen L; Bassi, Angelo; Donadi, Sandro

    2013-01-01

    We reanalyze the problem of spontaneous photon emission in collapse models. We show that the extra term found by Bassi and Dürr is present for non-white (colored) noise, but its coefficient is proportional to the zero frequency Fourier component of the noise. This leads one to suspect that the extra term is an artifact. When the calculation is repeated with the final electron in a wave packet and with the noise confined to a bounded region, the extra term vanishes in the limit of continuum state normalization. The result obtained by Fu and by Adler and Ramazanoğlu from application of the Golden Rule is then recovered. (paper)

  4. Nebular Continuum and Line Emission in Stellar Population Synthesis Models

    Energy Technology Data Exchange (ETDEWEB)

    Byler, Nell; Dalcanton, Julianne J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Conroy, Charlie; Johnson, Benjamin D., E-mail: ebyler@astro.washington.edu [Department of Astronomy, Harvard University, Cambridge, MA 02138 (United States)

    2017-05-01

    Accounting for nebular emission when modeling galaxy spectral energy distributions (SEDs) is important, as both line and continuum emissions can contribute significantly to the total observed flux. In this work, we present a new nebular emission model integrated within the Flexible Stellar Population Synthesis code that computes the line and continuum emission for complex stellar populations using the photoionization code Cloudy. The self-consistent coupling of the nebular emission to the matched ionizing spectrum produces emission line intensities that correctly scale with the stellar population as a function of age and metallicity. This more complete model of galaxy SEDs will improve estimates of global gas properties derived with diagnostic diagrams, star formation rates based on Hα, and physical properties derived from broadband photometry. Our models agree well with results from other photoionization models and are able to reproduce observed emission from H II regions and star-forming galaxies. Our models show improved agreement with the observed H II regions in the Ne III/O II plane and show satisfactory agreement with He II emission from z = 2 galaxies when including rotating stellar models. Models including post-asymptotic giant branch stars are able to reproduce line ratios consistent with low-ionization emission regions. The models are integrated into current versions of FSPS and include self-consistent nebular emission predictions for MIST and Padova+Geneva evolutionary tracks.

  5. Analysis of the Spatial Variation of Network-Constrained Phenomena Represented by a Link Attribute Using a Hierarchical Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-02-01

    Full Text Available The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs for leisure activities. Then, the methods named local indicators of network-constrained clusters (LINCS are applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of random variability resulting from small-sample estimation. The effects of spatial dependence and the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.

  6. Modeling of Passive Constrained Layer Damping as Applied to a Gun Tube

    Directory of Open Access Journals (Sweden)

    Margaret Z. Kiehl

    2001-01-01

    Full Text Available We study the damping effects on terrain-induced vibrations of a cantilever beam system consisting of a gun tube wrapped with a constrained viscoelastic polymer. A time domain solution to the forced motion of this system is developed using the GHM (Golla-Hughes-McTavish) method to incorporate the viscoelastic properties of the polymer. An impulse load is applied at the free end and the tip deflection of the cantilevered beam system is determined. The resulting GHM equations are then solved in MATLAB by transformation to the state-space domain.
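    The GHM method augments the structural equations with internal dissipation coordinates before the state-space solve. The stripped-down sketch below integrates only a single damped mode in state-space form after a unit impulse; the mode frequency, damping ratio, and mass are illustrative values, not the gun-tube model's, and the GHM augmentation itself is omitted.

```python
import math

# State-space impulse response of a single damped mode:
#   x' = A x,  x = [tip deflection, velocity].
# Illustrative stand-in for the full GHM-augmented system.
wn, zeta, m = 2 * math.pi * 5.0, 0.08, 1.0   # 5 Hz mode, 8% damping (hypothetical)
A = [[0.0, 1.0],
     [-wn**2, -2 * zeta * wn]]

x = [0.0, 1.0 / m]       # unit impulse at the free end -> initial velocity 1/m
dt, steps = 1e-4, 20000  # 2 s of response, forward-Euler time stepping
peak = 0.0
for _ in range(steps):
    dx0 = A[0][0] * x[0] + A[0][1] * x[1]
    dx1 = A[1][0] * x[0] + A[1][1] * x[1]
    x = [x[0] + dt * dx0, x[1] + dt * dx1]
    peak = max(peak, abs(x[0]))
print(f"peak tip deflection ~ {peak:.4f}")
```

In the actual GHM formulation the state vector also carries the polymer's internal dissipation coordinates, which is what lets a frequency-dependent viscoelastic modulus be handled by an ordinary state-space solver.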

  7. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of type Ia supernovae (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar
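    Chempy is a parametrized open one-zone model; the simplest illustration of one-zone enrichment is the classical closed-box relation Z = p ln(1/μ), with μ the remaining gas fraction and p the net yield. This is used here purely as a conceptual stand-in (Chempy's actual model includes inflow/outflow, an SFH, and full yield tables), and the yield value is hypothetical:

```python
import math

# Closed-box one-zone chemical evolution: metallicity grows as gas is
# consumed, Z(mu) = p * ln(1/mu). A far simpler stand-in for Chempy's
# parametrized open one-zone model; p is a hypothetical net yield.
p = 0.01                      # net metal yield (mass fraction)
for mu in (0.9, 0.5, 0.1):
    Z = p * math.log(1.0 / mu)
    print(f"gas fraction {mu:.1f}: Z = {Z:.4f}")
```

Even this toy version shows why abundances constrain the model: observed (age, Z) pairs for the Sun, Arcturus, and the ISM pin down the effective yield, which in turn reflects the IMF and supernova incidence parameters Chempy infers.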

  8. Mathematical Model of the Emissions of a selected vehicle

    Directory of Open Access Journals (Sweden)

    Matušů Radim

    2014-10-01

    Full Text Available The article addresses the quantification of exhaust emissions from gasoline engines during transient operation. The main targeted emissions are carbon monoxide and carbon dioxide. The result is a mathematical model describing the production of the individual emission components in all modes (static and dynamic). It also describes the procedure for determining emissions from the engine’s operating parameters. The result is compared with other possible methods of measuring emissions. The methodology is validated using data from on-road measurements: the mathematical model was created from data on the first route and validated on the second route.
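
    An illustrative sketch of the static part of such a model: a least-squares map from engine operating parameters (speed, load) to CO2 mass flow. The linear form, the coefficients, and the synthetic data are assumptions for illustration, not values from the article.

```python
import numpy as np

# Hypothetical static emission map: CO2 mass flow [g/s] as a linear function
# of engine speed [rpm] and load [%]. Coefficients are illustrative only.
def fit_emission_map(speed, load, co2):
    X = np.column_stack([np.ones_like(speed), speed, load])
    coef, *_ = np.linalg.lstsq(X, co2, rcond=None)
    return coef

def predict(coef, speed, load):
    return coef[0] + coef[1] * speed + coef[2] * load

rng = np.random.default_rng(0)
speed = rng.uniform(800.0, 4000.0, 200)   # rpm
load = rng.uniform(0.0, 100.0, 200)       # percent
co2 = 0.5 + 0.001 * speed + 0.02 * load   # synthetic "measured" g/s
coef = fit_emission_map(speed, load, co2)
```

    A transient (dynamic) model of the kind described in the abstract would add correction terms on top of such a static map.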

  9. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution rather than a general normal distribution, avoiding deviations in the solutions caused by unrealistic distributional assumptions. Agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model could help decision makers design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
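
    The step that distinguishes this formulation can be sketched in a few lines: a chance constraint P(a·x ≤ B) ≥ α with a log-normal right-hand side B reduces to the deterministic constraint a·x ≤ q, where q is the (1 − α) quantile of B. The parameter values below are illustrative, not the Erhai Lake data.

```python
from math import exp
from statistics import NormalDist

# Quantile of LogNormal(mu, sigma): exp(mu + sigma * z_p), with z_p the
# standard-normal quantile at probability p.
def lognormal_quantile(mu, sigma, p):
    return exp(mu + sigma * NormalDist().inv_cdf(p))

# Deterministic right-hand side for reliability level alpha.
def rhs_bound(mu, sigma, alpha):
    return lognormal_quantile(mu, sigma, 1.0 - alpha)

q90 = rhs_bound(2.0, 0.5, 0.90)  # tighter bound for higher reliability
q50 = rhs_bound(2.0, 0.5, 0.50)  # the median, exp(mu)
```

    Higher constraint-satisfaction levels α shrink the allowed left-hand side, which is exactly the economy-versus-reliability trade-off the abstract describes.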

  10. A Geometrically-Constrained Mathematical Model of Mammary Gland Ductal Elongation Reveals Novel Cellular Dynamics within the Terminal End Bud.

    Directory of Open Access Journals (Sweden)

    Ingrid Paine

    2016-04-01

    Full Text Available Mathematics is often used to model biological systems. In mammary gland development, mathematical modeling has been limited to acinar and branching morphogenesis and breast cancer, without reference to normal duct formation. We present a model of ductal elongation that exploits the geometrically-constrained shape of the terminal end bud (TEB), the growing tip of the duct, and incorporates morphometric data and region-specific proliferation and apoptosis rates. Iterative model refinement and behavior analysis, compared with biological data, indicated that the traditional metrics used to evaluate ductal elongation rate, the nipple-to-ductal-front distance or the percent of fat pad filled, can be misleading, as they disregard branching events that can reduce their magnitude. Further, model-driven investigations of the fates of specific TEB cell types confirmed migration of cap cells into the body cell layer, but showed their subsequent preferential elimination by apoptosis, thus minimizing their contribution to the luminal lineage and the mature duct.

  11. Ozone response to emission changes: a modeling study during the MCMA-2006/MILAGRO Campaign

    Directory of Open Access Journals (Sweden)

    J. Song

    2010-04-01

    Full Text Available The sensitivity of ozone production to precursor emissions was investigated under five different meteorological conditions in the Mexico City Metropolitan Area (MCMA) during the MCMA-2006/MILAGRO field campaign using the gridded photochemical model CAMx driven by observation-nudged WRF meteorology. Precursor emissions were constrained by comprehensive data from the field campaign and the routine ambient air quality monitoring network. Simulated plume mixing and transport were examined by comparison with measurements from the G-1 aircraft during the campaign. The observed concentrations of ozone precursors and of ozone itself were reasonably well reproduced by the model. The effects of reducing precursor emissions on urban ozone production were then examined for three representative emission control scenarios. A 50% reduction in VOC emissions led to a 7 to 22 ppb decrease in daily maximum ozone concentrations, while a 50% reduction in NOx emissions led to a 4 to 21 ppb increase, and 50% reductions in both NOx and VOC emissions decreased the daily maximum ozone concentrations by up to 10 ppb. These results, along with a chemical indicator analysis using the production ratio of H2O2 to HNO3, demonstrate that the MCMA urban core region is VOC-limited for all meteorological episodes, consistent with the results from the MCMA-2003 field campaign; however, the degree of VOC sensitivity is higher during MCMA-2006 due to lower VOC emissions, lower VOC reactivity and moderately higher NOx emissions. Ozone formation in the surrounding mountain/rural area is mostly NOx-limited, but can be VOC-limited, and the extent of the NOx-limited or VOC-limited areas depends on meteorology.
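
    The chemical-indicator test used in such analyses can be sketched as a simple classifier on the H2O2-to-HNO3 production ratio. The transition threshold of 0.5 below is an illustrative round number near commonly cited indicator values, not the exact value used in the study.

```python
# Classify the local ozone-production regime from the ratio of H2O2 to HNO3
# production rates: low ratios indicate VOC-limited chemistry, high ratios
# NOx-limited chemistry.
def ozone_regime(p_h2o2, p_hno3, threshold=0.5):
    if p_hno3 <= 0:
        raise ValueError("HNO3 production rate must be positive")
    return "VOC-limited" if p_h2o2 / p_hno3 < threshold else "NOx-limited"
```

    Applied per grid cell, such a test maps out the urban VOC-limited core and the NOx-limited surroundings described in the abstract.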

  12. Evolution in totally constrained models: Schrödinger vs. Heisenberg pictures

    Science.gov (United States)

    Olmedo, Javier

    2016-06-01

    We study the relation between two evolution pictures that are currently considered for totally constrained theories. Both descriptions are based on Rovelli’s evolving constants approach, where one identifies a (possibly local) degree of freedom of the system as an internal time. This method is well understood classically in several situations. The purpose of this paper is to further analyze this approach at the quantum level. Concretely, we will compare the (Schrödinger-like) picture where the physical states evolve in time with the (Heisenberg-like) picture in which one defines parametrized observables (or evolving constants of the motion). We will show that in the particular situations considered in this paper (the parametrized relativistic particle and a spatially flat homogeneous and isotropic spacetime coupled to a massless scalar field) both descriptions are equivalent. We will finally comment on possible issues and on the genericness of the equivalence between both pictures.

  13. Exploring the biological consequences of conformational changes in aspartame models containing constrained analogues of phenylalanine.

    Science.gov (United States)

    Mollica, Adriano; Mirzaie, Sako; Costante, Roberto; Carradori, Simone; Macedonio, Giorgia; Stefanucci, Azzurra; Dvoracsko, Szabolcs; Novellino, Ettore

    2016-12-01

    The dipeptide aspartame (Asp-Phe-OMe) is a sweetener widely used by the food industry as a replacement for sucrose. 2',6'-Dimethyltyrosine (DMT) and 2',6'-dimethylphenylalanine (DMP) are two synthetic constrained analogues of phenylalanine, with limited freedom in χ-space due to the presence of methyl groups at positions 2' and 6' of the aromatic ring. These residues have been shown to increase the activity of opioid peptides such as endomorphins by improving their binding to the opioid receptors. In this work, DMT and DMP were synthesized following a diketopiperazine-mediated route, and the corresponding aspartame derivatives (Asp-DMT-OMe and Asp-DMP-OMe) were evaluated in vivo and in silico for their activity as synthetic sweeteners.

  14. Spatial distribution of emissions to air – the SPREAD model

    DEFF Research Database (Denmark)

    Plejdrup, Marlene Schmidt; Gyldenkærne, Steen

    The National Environmental Research Institute (NERI), Aarhus University, completes the annual national emission inventories for greenhouse gases and air pollutants according to Denmark’s obligations under international conventions, e.g. the climate convention, UNFCCC and the convention on long...... quality modelling in exposure studies. SPREAD includes emission distributions for each sector in the Danish inventory system; stationary combustion, mobile sources, fugitive emissions from fuels, industrial processes, solvents and other product use, agriculture and waste. This model enables generation...

  15. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  16. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated for the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)
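
    A minimal sketch of the shape constraint described above: the linear phase portrait v(x) = A x + b is restricted to a symmetric matrix A, and the condition number of A is used as an additional feature when rejecting false positives. The example matrix and any threshold are illustrative, not the authors' implementation.

```python
import numpy as np

# Nearest symmetric matrix in the Frobenius norm: the standard projection
# used to constrain a fitted phase-portrait matrix.
def symmetrize(A):
    return 0.5 * (A + A.T)

def condition_number(A):
    return float(np.linalg.cond(A))

A = np.array([[2.0, 1.0], [0.0, 1.0]])  # a generic (non-symmetric) fit
As = symmetrize(A)                       # constrained, symmetric version
```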

  17. Analysis and Modeling of Jovian Radio Emissions Observed by Galileo

    Science.gov (United States)

    Menietti, J. D.

    2003-01-01

    Our studies of Jovian radio emission have resulted in the publication of five papers in refereed journals, with three additional papers in progress. The topics of these papers include the study of narrow-band kilometric radio emission; the apparent control of radio emission by Callisto; quasi-periodic radio emission; hectometric attenuation lanes and their relationship to Io volcanic activity; and modeling of HOM attenuation lanes using ray tracing. A further study of the control of radio emission by Jovian satellites is currently in progress. Abstracts of each of these papers are contained in the Appendix. A list of the publication titles is also included.

  18. Modeling 13.3 nm Fe XXIII Flare Emissions Using the GOES-R EXIS Instrument

    Science.gov (United States)

    Rook, H.; Thiemann, E.

    2017-12-01

    The solar EUV spectrum is dominated by atomic transitions in ionized atoms in the solar atmosphere. As solar flares evolve, plasma temperatures and densities change, influencing abundances of various ions, changing intensities of different EUV wavelengths observed from the sun. Quantifying solar flare spectral irradiance is important for constraining models of Earth's atmosphere, improving communications quality, and controlling satellite navigation. However, high time cadence measurements of flare irradiance across the entire EUV spectrum were not available prior to the launch of SDO. The EVE MEGS-A instrument aboard SDO collected 0.1 nm EUV spectrum data from 2010 until 2014, when the instrument failed. No current or future instrument is capable of similar high resolution and time cadence EUV observation. This necessitates a full EUV spectrum model to study EUV phenomena at Earth. It has been recently demonstrated that one hot flare EUV line, such as the 13.3 nm Fe XXIII line, can be used to model cooler flare EUV line emissions, filling the role of MEGS-A. Since unblended measurements of Fe XXIII are typically unavailable, a proxy for the Fe XXIII line must be found. In this study, we construct two models of this line, first using the GOES 0.1-0.8 nm soft x-ray (SXR) channel as the Fe XXIII proxy, and second using a physics-based model dependent on GOES emission measure and temperature data. We determine that the more sophisticated physics-based model shows better agreement with Fe XXIII measurements, although the simple proxy model also performs well. We also conclude that the high correlation between Fe XXIII emissions and the GOES 0.1-0.8 nm band is because both emissions tend to peak near the GOES emission measure peak despite large differences in their contribution functions.
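
    The simpler of the two models, scaling the GOES SXR band to estimate the Fe XXIII line, amounts to a one-parameter least-squares fit through the origin. The data and the recovered scale factor below are synthetic stand-ins, not GOES or MEGS-A measurements.

```python
import numpy as np

# One-parameter proxy fit: minimize ||line - k * sxr||^2 over k, which gives
# k = <sxr, line> / <sxr, sxr>.
def fit_proxy(sxr, line):
    return float(np.dot(sxr, line) / np.dot(sxr, sxr))

sxr = np.array([1e-6, 5e-6, 1e-5, 5e-5])  # W/m^2 (synthetic SXR band)
fe_xxiii = 0.3 * sxr                      # synthetic line irradiance
k = fit_proxy(sxr, fe_xxiii)
```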

  19. Modeling of pollutant emissions from road transport; Modelisation des emissions de polluants par le transport routier

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    COPERT III (computer programme to calculate emissions from road transport) is the third version of an MS Windows software programme aiming at the calculation of air pollutant emissions from road transport. COPERT estimates emissions of all regulated air pollutants (CO, NO{sub x}, VOC, PM) produced by different vehicle categories as well as CO{sub 2} emissions on the basis of fuel consumption. This research seminar was organized by the French agency for environment and energy management (Ademe) around the following topics: the uncertainty and sensitivity analysis of the COPERT III model, the presentation of case studies that use COPERT III for the estimation of road transport emissions, and the future of road transport emission modeling: from COPERT III to ARTEMIS (assessment and reliability of transport emission models and inventory systems). This document is a compilation of 8 contributions to this seminar, dealing with: the uncertainty and sensitivity analysis of the COPERT III model; the road mode emissions of the ESCOMPTE program: sensitivity study; the sensitivity analysis of the spatialized traffic at the time-aggregation level: application in the framework of the INTERREG project (Alsace); the road transport aspect of the regional air quality plan of the Bourgogne region: exhaustive consideration of the road network; intercomparison of tools and methods for the inventory of emissions of road transport origin; evolution of the French vehicle fleet by 2025: new projections; application of COPERT III to the French context: a new version of IMPACT-ADEME; the European ARTEMIS project: new structural considerations for the modeling of road transport emissions. (J.S.)

  20. Estimation of microbial respiration rates in groundwater by geochemical modeling constrained with stable isotopes

    International Nuclear Information System (INIS)

    Murphy, E.M.

    1998-01-01

    Changes in geochemistry and stable isotopes along a well-established groundwater flow path were used to estimate in situ microbial respiration rates in the Middendorf aquifer in the southeastern United States. Respiration rates were determined for individual terminal electron acceptors including O₂, MnO₂, Fe³⁺, and SO₄²⁻. The extent of biotic reactions was constrained by the fractionation of stable isotopes of carbon and sulfur. Sulfur isotopes and the presence of sulfur-oxidizing microorganisms indicated that sulfate is produced through the oxidation of reduced sulfur species in the aquifer and not by the dissolution of gypsum, as previously reported. The respiration rates varied along the flow path as the groundwater transitioned from primarily oxic to anoxic conditions. Iron-reducing microorganisms were the largest contributors to the oxidation of organic matter along the portion of the groundwater flow path investigated in this study. The transition zone between oxic and anoxic groundwater contained a wide range of terminal electron acceptors and showed the greatest diversity and numbers of culturable microorganisms and the highest respiration rates. A comparison of respiration rates measured from core samples and pumped groundwater suggests that variability in respiration rates may often reflect the measurement scales, both in the sample volume and in the time-frame over which the respiration measurement is averaged. Chemical heterogeneity may create a wide range of respiration rates when the scale of the observation is below the scale of the heterogeneity.
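
    Isotope constraints of this kind typically rely on Rayleigh-type fractionation: as a reaction consumes substrate, the isotope ratio of the residual pool shifts with the remaining fraction f roughly as δ ≈ δ₀ + ε ln f. The values δ₀ = −25‰ and ε = −20‰ below are assumed toy numbers, not the Middendorf data.

```python
from math import log

# Rayleigh approximation for the residual-substrate isotope composition as a
# function of the remaining fraction f of the substrate.
def rayleigh_delta(delta0, eps, f):
    return delta0 + eps * log(f)

d_half = rayleigh_delta(delta0=-25.0, eps=-20.0, f=0.5)  # 50% consumed
```

    Inverting such a relation for f is what lets the measured isotope shift bound the extent, and hence the rate, of the microbial reaction.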

  1. Comparative Evaluation of Five Fire Emissions Datasets Using the GEOS-5 Model

    Science.gov (United States)

    Ichoku, C. M.; Pan, X.; Chin, M.; Bian, H.; Darmenov, A.; Ellison, L.; Kucsera, T. L.; da Silva, A. M., Jr.; Petrenko, M. M.; Wang, J.; Ge, C.; Wiedinmyer, C.

    2017-12-01

    Wildfires and other types of biomass burning affect most vegetated parts of the globe, contributing 40% of the annual global atmospheric loading of carbonaceous aerosols, as well as significant amounts of numerous trace gases, such as carbon dioxide, carbon monoxide, and methane. Many of these smoke constituents affect the air quality and/or the climate system directly or through their interactions with solar radiation and cloud properties. However, fire emissions are poorly constrained in global and regional models, resulting in high levels of uncertainty in understanding their real impacts. With the advent of satellite remote sensing of fires and burned areas in the last couple of decades, a number of fire emissions products have become available for use in relevant research and applications. In this study, we evaluated five global biomass burning emissions datasets, namely: (1) GFEDv3.1 (Global Fire Emissions Database version 3.1); (2) GFEDv4s (Global Fire Emissions Database version 4 with small fires); (3) FEERv1 (Fire Energetics and Emissions Research version 1.0); (4) QFEDv2.4 (Quick Fire Emissions Dataset version 2.4); and (5) Fire INventory from NCAR (FINN) version 1.5. Overall, the spatial patterns of biomass burning emissions from these inventories are similar, although the magnitudes of the emissions can be noticeably different. The inventories derived using top-down approaches (QFEDv2.4 and FEERv1) are larger than those based on bottom-up approaches. For example, global organic carbon (OC) emissions in 2008 are: QFEDv2.4 (51.93 Tg), FEERv1 (28.48 Tg), FINN v1.5 (19.48 Tg), GFEDv3.1 (15.65 Tg) and GFEDv4s (13.76 Tg), a factor of 3.7 difference between the largest and the smallest. We also used all five biomass-burning emissions datasets to conduct aerosol simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5), and compared the resulting aerosol optical depth (AOD) output to the corresponding retrievals from MODIS

  2. Sensitivity of modeled ozone concentrations to uncertainties in biogenic emissions

    International Nuclear Information System (INIS)

    Roselle, S.J.

    1992-06-01

    The study examines the sensitivity of regional ozone (O3) modeling to uncertainties in biogenic emissions estimates. The United States Environmental Protection Agency's (EPA) Regional Oxidant Model (ROM) was used to simulate the photochemistry of the northeastern United States for the period July 2-17, 1988. An operational model evaluation showed that ROM had a tendency to underpredict O3 when observed concentrations were above 70-80 ppb and to overpredict O3 when observed values were below this level. On average, the model underpredicted daily maximum O3 by 14 ppb. Spatial patterns of O3, however, were reproduced favorably by the model. Several simulations were performed to analyze the effects of uncertainties in biogenic emissions on predicted O3 and to study the effectiveness of two strategies of controlling anthropogenic emissions for reducing high O3 concentrations. Biogenic hydrocarbon emissions were adjusted by a factor of 3 to account for the existing range of uncertainty in these emissions. The impact of biogenic emission uncertainties on O3 predictions depended upon the availability of NOx. In some extremely NOx-limited areas, increasing the amount of biogenic emissions decreased O3 concentrations. Two control strategies were compared in the simulations: (1) reduced anthropogenic hydrocarbon emissions, and (2) reduced anthropogenic hydrocarbon and NOx emissions. The simulations showed that hydrocarbon emission controls were more beneficial to the New York City area, but that combined NOx and hydrocarbon controls were more beneficial to other areas of the Northeast. Hydrocarbon controls were more effective as biogenic hydrocarbon emissions were reduced, whereas combined NOx and hydrocarbon controls were more effective as biogenic hydrocarbon emissions were increased.

  3. Road salt emissions: A comparison of measurements and modelling using the NORTRIP road dust emission model

    Science.gov (United States)

    Denby, B. R.; Ketzel, M.; Ellermann, T.; Stojiljkovic, A.; Kupiainen, K.; Niemi, J. V.; Norman, M.; Johansson, C.; Gustafsson, M.; Blomqvist, G.; Janhäll, S.; Sundvor, I.

    2016-09-01

    De-icing of road surfaces is necessary in many countries during winter to improve vehicle traction. Large amounts of salt, most often sodium chloride, are applied every year. Most of this salt is removed through drainage or traffic spray processes but a certain amount may be suspended, after drying of the road surface, into the air and will contribute to the concentration of particulate matter. Though some measurements of salt concentrations are available near roads, the link between road maintenance salting activities and observed concentrations of salt in ambient air is yet to be quantified. In this study the NORTRIP road dust emission model, which estimates the emissions of both dust and salt from the road surface, is applied at five sites in four Nordic countries for ten separate winter periods where daily mean ambient air measurements of salt concentrations are available. The model is capable of reproducing many of the salt emission episodes, both in time and intensity, but also fails on other occasions. The observed mean concentration of salt in PM10, over all ten datasets, is 4.2 μg/m3 and the modelled mean is 2.8 μg/m3, giving a fractional bias of -0.38. The RMSE of the mean concentrations, over all 10 datasets, is 2.9 μg/m3 with an average R2 of 0.28. The mean concentration of salt is similar to the mean exhaust contribution during the winter periods of 2.6 μg/m3. The contribution of salt to the kerbside winter mean PM10 concentration is estimated to increase by 4.1 ± 3.4 μg/m3 for every kg/m2 of salt applied on the road surface during the winter season. Additional sensitivity studies showed that the accurate logging of salt applications is a prerequisite for predicting salt emissions, as well as good quality data on precipitation. It also highlights the need for more simultaneous measurements of salt loading together with ambient air concentrations to help improve model parameterisations of salt and moisture removal processes.
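
    The two evaluation metrics quoted above are straightforward to compute. Using the rounded means from the abstract (observed 4.2 μg/m3, modelled 2.8 μg/m3) the fractional bias comes out at −0.40, consistent with the −0.38 reported from the unrounded data.

```python
import numpy as np

# Fractional bias of the means: FB = (M - O) / ((M + O) / 2).
def fractional_bias(obs_mean, mod_mean):
    return (mod_mean - obs_mean) / ((mod_mean + obs_mean) / 2.0)

# Root-mean-square error of paired (per-dataset) mean concentrations.
def rmse(obs, mod):
    obs, mod = np.asarray(obs, dtype=float), np.asarray(mod, dtype=float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

fb = fractional_bias(4.2, 2.8)  # rounded means from the abstract
```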

  4. Modeling carbon emissions from urban traffic system using mobile monitoring.

    Science.gov (United States)

    Sun, Daniel Jian; Zhang, Ying; Xue, Rui; Zhang, Yi

    2017-12-01

    Comprehensive analyses of urban traffic carbon emissions are critical in achieving low-carbon transportation. This paper started from the architecture design of a carbon emission mobile monitoring system using multiple sets of equipment, and collected the corresponding data on traffic flow, meteorological conditions, vehicular carbon emissions and driving characteristics on typical roads in Shanghai and Wuxi, Jiangsu province. Based on these data, the emission model MOVES was calibrated and used with various sensitivity and correlation evaluation indices to analyze traffic carbon emissions at the microscopic, mesoscopic and macroscopic levels, respectively. The major factors that influence urban traffic carbon emissions were investigated, and emission factors of CO, CO2 and HC were calculated taking representative passenger cars as a case study. As a result, urban traffic carbon emissions were assessed quantitatively, and the total amounts of CO, CO2 and HC emissions from passenger cars in Shanghai were estimated as 76.95 kt, 8271.91 kt, and 2.13 kt, respectively. Arterial roads were found to be the primary line source, accounting for 50.49% of carbon emissions. In addition to the overall major factors identified, the mobile monitoring system and carbon emission quantification method proposed in this study provide useful guidance for further urban low-carbon transportation development. Copyright © 2017 Elsevier B.V. All rights reserved.
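
    At the mesoscopic level, link totals of the form emissions = flow × emission factor × length × time underlie such line-source estimates. The flows, CO2 factors (g/veh-km), lengths, and duration below are invented for illustration, not the Shanghai measurements.

```python
# Aggregate link-level emissions into a network total.
def link_emission(flow_veh_per_h, ef_g_per_veh_km, length_km, hours):
    return flow_veh_per_h * ef_g_per_veh_km * length_km * hours  # grams

links = [
    (1200, 180.0, 2.5, 24),  # hypothetical arterial link
    (400, 160.0, 1.0, 24),   # hypothetical local street
]
total_g = sum(link_emission(*link) for link in links)
```

    Summing such link totals by road class is how a statement like "arterial roads account for 50.49% of emissions" is obtained.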

  5. Simulating the Range Expansion of Spartina alterniflora in Ecological Engineering through Constrained Cellular Automata Model and GIS

    Directory of Open Access Journals (Sweden)

    Zongsheng Zheng

    2015-01-01

    Full Text Available Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes. CA models focusing on the neighbor effect often fail to account for the influence of environmental factors. This paper proposes a CCA model that enhances the CA model by integrating constraint factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in the Chongming Dongtan wetland, China. Meanwhile, a positive feedback loop between vegetation and sedimentation is also considered in the CCA model by altering the tidal accretion rate in different vegetation communities. After calibration and validation, the CCA model is more accurate than a CA model that accounts only for the neighbor effect. By overlaying the remote sensing classification on the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulations, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical approach for research on salt marsh species expansion and control strategies.
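
    One update step of a constrained cellular automaton can be sketched as a bare neighbor rule gated by a local suitability score, standing in for the combined constraint factors (tidal elevation, vegetation density, distance to channels). The thresholds and the toy grid are illustrative, not the Chongming Dongtan parameterization.

```python
import numpy as np

# A cell is colonized when enough Moore-neighborhood cells are occupied AND
# the local environmental suitability permits it.
def cca_step(grid, suitability, neighbor_threshold=3, suit_threshold=0.5):
    new = grid.copy()
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            if grid[i, j]:
                continue  # already colonized
            # occupied cells in the Moore neighborhood (edges clipped)
            n = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].sum()
            if n >= neighbor_threshold and suitability[i, j] >= suit_threshold:
                new[i, j] = 1
    return new

grid = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]])
nxt = cca_step(grid, np.ones((3, 3)))  # uniform suitability: pure CA behavior
```

    Lowering the suitability in part of the grid suppresses expansion there, which is the mechanism by which the environmental constraint factors reshape the modelled range.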

  6. New high-fidelity terrain modeling method constrained by terrain semanteme.

    Directory of Open Access Journals (Sweden)

    Bo Zhou

    Full Text Available Production of higher-fidelity digital elevation models is important, as such models are indispensable components of spatial data infrastructure. However, loss of terrain features is a persistent problem for grid digital elevation models, even though these models have been defined in such a way that their direct usage as data sources in terrain modeling is prohibited. Therefore, in this study, the novel concept of the terrain semanteme is proposed to define local-space terrain features, and a new process for generating grid digital elevation models based on this concept is designed. A prototype system was programmed to test the proposed approach; the results indicate that the terrain semanteme can be applied in the process of grid digital elevation model generation, and that usage of this new concept improves digital elevation model fidelity. Moreover, the terrain semanteme technique can be applied to recover distorted digital elevation model regions containing terrain semantemes, with good recovery efficiency demonstrated in experiments.

  7. An analysis of energy strategies for CO2 emission reduction in China. Case studies by MARKAL model

    International Nuclear Information System (INIS)

    Li Guangya

    1994-12-01

    China's energy system is analyzed using the MARKAL model over the period from 1990 to 2050. The MARKAL model is applied here to evaluate cost-effective energy strategies for CO2 emission reduction in China. First, the Reference Energy System (RES) of China and its database were established, and the useful energy demand was projected on the basis of China's economic targets and demographic forecasts. Four scenarios, BASE1–BASE4, were defined with different assumptions on crude oil and natural uranium availability. Analytical results show that without CO2 emission constraints, coal consumption will continue to hold a dominant position in primary energy supply, and CO2 emissions in 2050 will be 9.55 BtCO2 and 10.28 BtCO2 under the different natural uranium availability assumptions. Under CO2 emission constraints, nuclear and renewable energy will play important roles in CO2 emission reduction, and the feasible maximum CO2 emission reduction estimated by this study is 3.16 BtCO2 in 2050. The cumulative CO2 emissions from 1990 to 2050 will be 418.25 BtCO2 and 429.16 BtCO2 under the different natural uranium availability assumptions. The total feasible maximum CO2 emission reduction from 1990 to 2050 is 95.97 BtCO2. (author)

  8. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  9. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  10. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  11. Technical discussions on Emissions and Atmospheric Modeling (TEAM)

    Science.gov (United States)

    Frost, G. J.; Henderson, B.; Lefer, B. L.

    2017-12-01

    A new informal activity, Technical discussions on Emissions and Atmospheric Modeling (TEAM), aims to improve the scientific understanding of emissions and atmospheric processes by leveraging resources through coordination, communication and collaboration between scientists in the Nation's environmental agencies. TEAM seeks to close information gaps that may be limiting emission inventory development and atmospheric modeling and to help identify related research areas that could benefit from additional coordinated efforts. TEAM is designed around webinars and in-person meetings on particular topics that are intended to facilitate active and sustained informal communications between technical staff at different agencies. The first series of TEAM webinars focuses on emissions of nitrogen oxides, a criteria pollutant impacting human and ecosystem health and a key precursor of ozone and particulate matter. Technical staff at Federal agencies with specific interests in emissions and atmospheric modeling are welcome to participate in TEAM.

  12. Probabilistic model for the simulation of secondary electron emission

    Directory of Open Access Journals (Sweden)

    M. A. Furman

    2002-12-01

    Full Text Available We provide a detailed description of a model and its computational algorithm for the secondary electron emission process. The model is based on a broad phenomenological fit to data for the secondary-emission yield and the emitted-energy spectrum. We provide two sets of values for the parameters by fitting our model to two particular data sets, one for copper and the other one for stainless steel.
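
The yield-versus-energy dependence at the heart of such phenomenological fits is commonly summarized by a compact universal curve. Below is a minimal sketch of that form; the parameter values are illustrative placeholders, not the paper's fitted values for copper or stainless steel.

```python
def true_secondary_yield(energy_eV, delta_max=2.0, e_max=300.0, s=1.35):
    """Phenomenological secondary-emission yield curve
    delta(E) = delta_max * s*x / (s - 1 + x**s), with x = E / e_max.
    The yield peaks at delta_max when E = e_max; all parameter
    values here are illustrative, not fitted data."""
    x = energy_eV / e_max
    return delta_max * s * x / (s - 1.0 + x ** s)
```

The curve rises from zero at low primary energy, peaks at `e_max`, and falls off slowly beyond it, which is the qualitative shape a broad fit to yield data must reproduce.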

  13. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    Science.gov (United States)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning linear time invariant (LTI) models in time-varying simulations is proposed that utilizes both quadratic inequality constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to simulating the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets propagated through time. The time-invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents the problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions, as well as maintaining elastic motion from the prior state.
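
The shape-mapping idea, carrying the physical displacement across a model transition by re-expressing it in the new mode set, can be sketched as a least-squares projection onto the new mode shapes. This is an illustrative sketch with random stand-in matrices; the LSQI step of the paper, which additionally enforces a quadratic energy inequality, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof = 12                                   # physical degrees of freedom
Phi_old = rng.standard_normal((n_dof, 4))    # outgoing FEM mode shapes
Phi_new = rng.standard_normal((n_dof, 3))    # incoming FEM mode shapes
q_old = rng.standard_normal(4)               # modal coordinates at transition

x_old = Phi_old @ q_old                      # physical displacement to preserve
# Shape mapping: new modal coordinates that best reproduce x_old.
q_new, *_ = np.linalg.lstsq(Phi_new, x_old, rcond=None)
x_new = Phi_new @ q_new                      # displacement carried into new model
```

The residual `x_old - x_new` is orthogonal to the new mode shapes, i.e. the mapping preserves as much of the prior elastic motion as the truncated incoming mode set can represent.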

  14. Model studies of limitation of carbon dioxide emissions reduction

    International Nuclear Information System (INIS)

    1992-01-01

    The report consists of two papers concerning the mitigation of CO2 emissions in Sweden: ''Limitation of carbon dioxide emissions. Socio-economic effects and the importance of international coordination'' and ''Model calculations for Sweden's energy system with carbon dioxide limitations''. Separate abstracts were prepared for both papers.

  15. Methane emissions from rice paddies : experiments and modelling

    NARCIS (Netherlands)

    Bodegom, van P.M.

    2000-01-01

    This thesis describes model development and experimentation on the comprehension and prediction of methane (CH4) emissions from rice paddies. The large spatial and temporal variability in CH4 emissions and the dynamic non-linear relationships

  16. A new modelling approach for road traffic emissions: VERSIT+

    NARCIS (Netherlands)

    Smit, R.; Smokers, R.T.M.; Rabé, E.L.M.

    2007-01-01

    The objective of VERSIT+ LD is to predict traffic stream emissions for light-duty vehicles in any particular traffic situation. With respect to hot running emissions, VERSIT+ LD consists of a set of statistical models for detailed vehicle categories that have been constructed using multiple linear

  17. Modeling Optical and Radiative Properties of Clouds Constrained with CARDEX Observations

    Science.gov (United States)

    Mishra, S. K.; Praveen, P. S.; Ramanathan, V.

    2013-12-01

    Carbonaceous aerosols (CA) have important effects on climate by directly absorbing solar radiation and indirectly changing cloud properties. These particles tend to be a complex mixture of graphitic carbon and organic compounds. The graphitic component, known as elemental carbon (EC), is characterized by significant absorption of solar radiation. Recent studies showed that organic carbon (OC) aerosols absorb strongly in the near-UV region; this fraction is known as brown carbon (BrC). The indirect effect of CA can occur in two ways: first, by changing the thermal structure of the atmosphere, which in turn affects the dynamical processes governing the cloud life cycle; second, by acting as cloud condensation nuclei (CCN), which can change cloud radiative properties. In this work, cloud optical properties have been numerically estimated by accounting for CARDEX (Cloud Aerosol Radiative Forcing Dynamics Experiment) observed cloud parameters and the physico-chemical and optical properties of aerosols. The aerosol inclusions in the cloud drop have been treated as core-shell structures, with an EC core and a shell comprising ammonium sulfate, ammonium nitrate, sea salt and organic carbon (organic acids, OA, and brown carbon, BrC). The EC/OC ratio of the inclusion particles has been constrained based on observations. Moderate and heavy pollution events were classified based on aerosol number and BC concentration. The cloud drop's co-albedo at 550 nm was found to be nearly identical for pure EC sphere inclusions and for core-shell inclusions with all non-absorbing organics in the shell. However, the co-albedo was found to increase for drops having all BrC in the shell. The co-albedo of a cloud drop was found to be maximal when all aerosol was present as interstitial, compared with 50% and 0% of inclusions existing as interstitial aerosols. The co-albedo was found to be ~9.87e-4 for the drop with 100% of inclusions existing as interstitial aerosols externally mixed with micron-size mineral dust with 2

  18. A Stochastic Multi-Objective Chance-Constrained Programming Model for Water Supply Management in Xiaoqing River Watershed

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2017-05-01

    Full Text Available In this paper, a stochastic multi-objective chance-constrained programming model (SMOCCP) was developed for tackling the water supply management problem. Two objectives were included in the model: minimization of leakage losses and minimization of total system cost. The traditional SCCP model required the random variables to be expressed as normal distributions, even when their statistical characteristics were better reflected by other forms. The SMOCCP model allows the random variables to be expressed as log-normal distributions rather than the general normal form. Possible solution deviations caused by unrealistic parameter assumptions are thereby avoided, and the feasibility and accuracy of the generated solutions are ensured. The water supply system in the Xiaoqing River watershed was used as a study case for demonstration. Under various weight combinations and probabilistic levels, many types of solutions are obtained, expressed as a series of transferred amounts from water sources to treatment plants, from treatment plants to reservoirs, and from reservoirs to tributaries. It is concluded that the SMOCCP model can capture the essential features of the studied region and generate desired water supply schemes under complex uncertainties. The successful application of the proposed model is expected to serve as a good example for water resource management in other watersheds.
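
For an individual chance constraint with a log-normally distributed capacity, the deterministic equivalent follows directly from the log-normal quantile. The sketch below shows that generic conversion; it is an assumed illustrative form, not the paper's actual constraint set.

```python
import math
from statistics import NormalDist

def lognormal_capacity_bound(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
        P(capacity >= x) >= alpha,  capacity ~ LogNormal(mu, sigma):
    a feasible allocation must satisfy
        x <= exp(mu + sigma * Phi_inv(1 - alpha)),
    where Phi_inv is the standard normal quantile function."""
    z = NormalDist().inv_cdf(1.0 - alpha)
    return math.exp(mu + sigma * z)
```

Raising the probabilistic level `alpha` tightens the bound, which is how combinations of weights and probabilistic levels trade reliability against cost in such models.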

  19. A novel robust chance constrained possibilistic programming model for disaster relief logistics under uncertainty

    Directory of Open Access Journals (Sweden)

    Maryam Rahafrooz

    2016-09-01

    Full Text Available In this paper, a novel multi-objective robust possibilistic programming model is proposed, which simultaneously considers maximizing distributive justice in relief distribution, minimizing the risk of relief distribution, and minimizing total logistics costs. To effectively cope with the uncertainties of the after-disaster environment, the uncertain parameters of the proposed model are considered in the form of fuzzy trapezoidal numbers. The proposed model considers not only relief commodity priority and demand point priority in relief distribution, but also the difference between the pre-disaster and post-disaster supply abilities of the suppliers. To solve the proposed model, the LP-metric and the improved augmented ε-constraint methods are used. A set of test problems is then designed to evaluate the effectiveness of the proposed robust model against its equivalent deterministic form, revealing the capabilities of the robust model. Finally, to illustrate the performance of the proposed robust model, a seismic region of northwestern Iran (East Azerbaijan) is selected as a case study for modeling its relief logistics in the face of future earthquakes. This investigation indicates the usefulness of the proposed model in the field of crisis management.

  20. Spatial distribution of emissions to air - the SPREAD model

    Energy Technology Data Exchange (ETDEWEB)

    Plejdrup, M S; Gyldenkaerne, S

    2011-04-15

    The National Environmental Research Institute (NERI), Aarhus University, completes the annual national emission inventories for greenhouse gases and air pollutants according to Denmark's obligations under international conventions, e.g. the climate convention (UNFCCC) and the convention on long-range transboundary air pollution (CLRTAP). NERI has developed a model to distribute emissions from the national emission inventories on a 1x1 km grid covering the Danish land and sea territory. The new spatial high-resolution distribution model for emissions to air (SPREAD) has been developed according to the requirements for reporting gridded emissions to CLRTAP. Spatial emission data are used, e.g., as input for air quality modelling, which in turn serves as input for the assessment and evaluation of health effects. For these purposes, distributions with higher spatial resolution have been requested. Previously, a distribution on the 17x17 km EMEP grid was set up and used in research projects, combined with detailed distributions for a few sectors or sub-sectors, e.g. a 1x1 km distribution of emissions from road traffic. SPREAD was developed to generate improved spatial emission data for, e.g., air quality modelling in exposure studies. SPREAD includes emission distributions for each sector in the Danish inventory system: stationary combustion, mobile sources, fugitive emissions from fuels, industrial processes, solvents and other product use, agriculture, and waste. The model enables generation of distributions for single sectors as well as for a number of sub-sectors and single sources. This report documents the methodologies in this first version of SPREAD and presents selected results. Further, a number of potential improvements for later versions of SPREAD are addressed and discussed. (Author)
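
The core operation of a gridded distribution model of this kind is to spread a national (sub)sector total over cells in proportion to a spatial proxy such as traffic intensity or livestock density. A minimal generic sketch (SPREAD's actual proxies and cell geometry are more elaborate):

```python
def distribute_emission(national_total, proxy_weights):
    """Distribute a sector's national emission total over grid cells
    proportionally to non-negative proxy weights. The gridded values
    sum back to the national total by construction, so the gridding
    step never alters the reported inventory total."""
    total_weight = sum(proxy_weights)
    return [national_total * w / total_weight for w in proxy_weights]
```

Because each sector or sub-sector can use its own proxy, distributions can be generated per sector, per sub-sector, or per single source and then summed cell-wise.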

  2. The costs of mitigating carbon emissions in China: findings from China MARKAL-MACRO modeling

    International Nuclear Information System (INIS)

    Chen Wenying

    2005-01-01

    In this paper MARKAL-MACRO, an integrated energy-environment-economy model, is used to generate China's reference scenario for future energy development and carbon emissions through the year 2050. The results show that, with great efforts in structural adjustment, energy efficiency improvement and energy substitution, China's primary energy consumption is expected to be 4818 Mtce and carbon emissions 2394 MtC by 2050, with an annual decrease rate of 3% in carbon intensity per unit of GDP over the period 2000-2050. On the basis of this reference scenario, China's marginal abatement cost curves of carbon for the years 2010, 2020 and 2030 are derived from the model, and the impacts of carbon emission abatement on GDP are also simulated. The results are compared with those from other sources. The research shows that the marginal abatement costs vary from 12 US$/tC to 216 US$/tC and that the rates of GDP losses relative to the reference range from 0.1% to 2.54% for reduction rates between 5% and 45%. Both the marginal abatement costs and the rates of GDP losses increase further if the maximum capacity of nuclear power is constrained to 240 GW or 160 GW by 2050. The paper concludes that China's cost of carbon abatement is rather high if carbon emissions are cut beyond the reference scenario, and that China's room for carbon abatement is limited due to its coal-dominant energy resource base. As economic development remains the priority, and per capita income as well as per capita carbon emissions are far below the world average, it will be more realistic for China to make continuous contributions to combating global climate change by implementing a sustainable development strategy domestically and playing an active role in international carbon mitigation cooperation mechanisms, rather than by accepting a carbon emission ceiling

  3. Consistency checks in beam emission modeling for neutral beam injectors

    International Nuclear Information System (INIS)

    Punyapu, Bharathi; Vattipalle, Prahlad; Sharma, Sanjeev Kumar; Baruah, Ujjwal Kumar; Crowley, Brendan

    2015-01-01

    In positive-ion neutral beam systems, beam parameters such as ion species fractions, power fractions and beam divergence are routinely measured using the Doppler-shifted beam emission spectrum. The accuracy with which these parameters are estimated depends on the accuracy of the atomic modeling involved in the estimation. In this work, an effective procedure to check the consistency of the beam emission modeling in neutral beam injectors is proposed. As a first consistency check, at constant beam voltage and current, the intensity of the beam emission spectrum is measured while varying the pressure in the neutralizer, and the scaling with pressure of the measured intensities of the un-shifted (target) and Doppler-shifted (projectile) components of the spectrum is studied. If the un-shifted component scales with pressure, the intensity of this component is used as a second consistency check on the beam emission modeling. As a further check, the modeled beam fractions and the emission cross sections of projectile and target are used to predict the intensity of the un-shifted component, which is then compared with the measured target intensity. The agreement between the predicted and measured target intensities provides the degree of discrepancy in the beam emission modeling. To test this methodology, a systematic analysis of Doppler shift spectroscopy data obtained on the JET neutral beam test stand was carried out

  4. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    Science.gov (United States)

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  5. How to constrain multi-objective calibrations of the SWAT model using water balance components

    Science.gov (United States)

    Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...

  6. Global emissions and models of photochemically active compounds

    International Nuclear Information System (INIS)

    Penner, J.E.; Atherton, C.S.; Graedel, T.E.

    1993-01-01

    Anthropogenic emissions from industrial activity, fossil fuel combustion, and biomass burning are now known to be large enough (relative to natural sources) to perturb the chemistry of vast regions of the troposphere. A goal of the IGAC Global Emissions Inventory Activity (GEIA) is to provide authoritative and reliable emissions inventories on a 1 degree x 1 degree grid. When combined with atmospheric photochemical models, these high quality emissions inventories may be used to predict the concentrations of major photochemical products. Comparison of model results with measurements of pertinent species allows us to understand whether there are major shortcomings in our understanding of tropospheric photochemistry, the budgets and transport of trace species, and their effects in the atmosphere. Through this activity, we are building the capability to make confident predictions of the future consequences of anthropogenic emissions. This paper compares IGAC recommended emissions inventories for reactive nitrogen and sulfur dioxide to those that have been in use previously. We also present results from the three-dimensional LLNL atmospheric chemistry model that show how emissions of anthropogenic nitrogen oxides might potentially affect tropospheric ozone and OH concentrations and how emissions of anthropogenic sulfur increase sulfate aerosol loadings

  7. [Measurement model of carbon emission from forest fire: a review].

    Science.gov (United States)

    Hu, Hai-Qing; Wei, Shu-Jing; Jin, Sen; Sun, Long

    2012-05-01

    Forest fire is a main disturbance factor for forest ecosystems and an important pathway for the decrease of vegetation and soil carbon storage. Large amounts of carbonaceous gases are released into the atmosphere by forest fires, with remarkable impacts on the atmospheric carbon balance and global climate change. Scientifically and effectively measuring the carbonaceous gas emissions from forest fires is therefore important for understanding the role of forest fire in the carbon balance and in climate change. This paper reviewed research progress on measurement models of carbon emission from forest fires, covering three critical issues: measurement methods for the total carbon emission and carbonaceous gas emissions induced by forest fires, the affecting factors and measurement parameters of the models, and the causes of uncertainty in the measurement of the emissions. Three approaches to improve the quantitative measurement of the carbon emissions were proposed: (1) using high-resolution remote sensing data and improved algorithms for estimating burned area, combined with an effective fuel measurement model, to improve the accuracy of the estimated fuel load; (2) using high-resolution remote sensing images combined with controlled-environment laboratory experiments, field measurements and ground surveys to determine the combustion efficiency; and (3) combining controlled-environment laboratory experiments with field air sampling to determine the emission factors and emission ratios.
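
The bookkeeping form underlying such measurement models multiplies exactly the quantities the review discusses: burned area, fuel load, combustion efficiency, and an emission factor (a Seiler-Crutzen-style estimate; the names and units below are illustrative).

```python
def fire_carbon_emission(burned_area_ha, fuel_load_t_per_ha,
                         combustion_efficiency, emission_factor):
    """Emission = A * B * CE * EF: burned area (ha) times available
    fuel load (t/ha) times the fraction of fuel actually combusted
    times the mass of the gas of interest emitted per unit mass burned."""
    return (burned_area_ha * fuel_load_t_per_ha
            * combustion_efficiency * emission_factor)
```

Each factor carries its own uncertainty, which is why the review traces the overall measurement uncertainty to burned area, fuel load, combustion efficiency, and emission factors separately.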

  8. Constrained quadratic stabilization of discrete-time uncertain nonlinear multi-model systems using piecewise affine state-feedback

    Directory of Open Access Journals (Sweden)

    Olav Slupphaug

    1999-07-01

    Full Text Available In this paper a method for nonlinear robust stabilization based on solving a bilinear matrix inequality (BMI) feasibility problem is developed. Robustness against model uncertainty is handled. In different non-overlapping regions of the state-space, called clusters, the plant is assumed to be an element of a polytope whose vertices (local models) are affine systems. In the clusters containing the origin in their closure, the local models are restricted to be linear systems. The clusters cover the region of interest in the state-space. An affine state-feedback is associated with each cluster. By utilizing the affinity of the local models and the state-feedback, a set of linear matrix inequalities (LMIs) combined with a single nonconvex BMI are obtained which, if feasible, guarantee quadratic stability of the origin of the closed loop. The feasibility problem is attacked by a branch-and-bound based global approach. If the feasibility check is successful, the Lyapunov matrix and the piecewise affine state-feedback are given directly by the feasible solution. Control constraints are shown to be representable by LMIs or BMIs, and an application of the control design method to robustify constrained nonlinear model predictive control is presented. Also, the control design method is applied to a simple example.
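
The quadratic-stability condition at the heart of such methods can be illustrated with a simple vertex check: a single matrix P certifies stability of every system in the polytope if the discrete-time Lyapunov inequality holds at each vertex. The sketch below only verifies a candidate P against hypothetical vertex models; actually finding P (and the feedback gains) requires an LMI/BMI solver, as in the paper.

```python
import numpy as np

def certifies_quadratic_stability(P, vertex_models, tol=1e-9):
    """True if P = P^T > 0 and A^T P A - P < 0 at every vertex A,
    which suffices for quadratic stability of x+ = A(t) x for any
    A(t) in the polytope spanned by the vertex models."""
    if np.any(np.linalg.eigvalsh(P) <= tol):
        return False          # P is not positive definite
    return all(np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < -tol)
               for A in vertex_models)

# Hypothetical stable vertex models of one cluster.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.4, 0.0], [0.2, 0.5]])
P_candidate = np.eye(2)       # in practice produced by the solver
```

Verifying candidates this way is cheap, which is useful when a branch-and-bound search over the nonconvex BMI proposes many candidate solutions.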

  9. A Metabolite-Sensitive, Thermodynamically Constrained Model of Cardiac Cross-Bridge Cycling: Implications for Force Development during Ischemia

    KAUST Repository

    Tran, Kenneth; Smith, Nicolas P.; Loiselle, Denis S.; Crampin, Edmund J.

    2010-01-01

    We present a metabolically regulated model of cardiac active force generation with which we investigate the effects of ischemia on maximum force production. Our model, based on a model of cross-bridge kinetics that was developed by others, reproduces many of the observed effects of MgATP, MgADP, Pi, and H(+) on force development while retaining the force/length/Ca(2+) properties of the original model. We introduce three new parameters to account for the competitive binding of H(+) to the Ca(2+) binding site on troponin C and the binding of MgADP within the cross-bridge cycle. These parameters, along with the Pi and H(+) regulatory steps within the cross-bridge cycle, were constrained using data from the literature and validated using a range of metabolic and sinusoidal length perturbation protocols. The placement of the MgADP binding step between two strongly-bound and force-generating states leads to the emergence of an unexpected effect on the force-MgADP curve, where the trend of the relationship (positive or negative) depends on the concentrations of the other metabolites and [H(+)]. The model is used to investigate the sensitivity of maximum force production to changes in metabolite concentrations during the development of ischemia.
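
The competitive binding of H+ at the Ca2+ site on troponin C takes the standard competitive form, in which protons raise the apparent Ca2+ dissociation constant. A sketch with illustrative constants (not the paper's fitted parameter values):

```python
def ca_bound_fraction(ca, h, kd_ca=1.0e-6, kd_h=2.5e-7):
    """Fraction of troponin C sites occupied by Ca2+ when H+ competes
    for the same site:
        occupancy = [Ca] / ([Ca] + Kd_Ca * (1 + [H]/Kd_H)).
    Dissociation constants are illustrative placeholders (molar)."""
    kd_apparent = kd_ca * (1.0 + h / kd_h)
    return ca / (ca + kd_apparent)
```

Acidosis (higher [H+]) lowers occupancy at fixed [Ca2+], which is one route by which such a model reduces maximum force during ischemia.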

  10. Constraining the Influence of Natural Variability to Improve Estimates of Global Aerosol Indirect Effects in a Nudged Version of the Community Atmosphere Model 5

    Energy Technology Data Exchange (ETDEWEB)

    Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn

    2012-12-11

    Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2 for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world’s area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m2 and -0.56 W/m2, respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m2), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
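
Newtonian relaxation simply adds a restoring tendency proportional to the departure from the reference state. A minimal sketch of one nudged time step (generic scalar form; the GCM applies this per prognostic meteorological field, each with its own relaxation timescale):

```python
def nudged_step(x, x_ref, model_tendency, dt, tau):
    """One explicit Euler step of dx/dt = f(x) - (x - x_ref) / tau.
    The relaxation term pulls the state toward the reference analysis
    with timescale tau, damping divergence in natural variability
    between paired (e.g. pre-industrial vs present-day) simulations."""
    return x + dt * (model_tendency(x) - (x - x_ref) / tau)
```

Because both simulations in a pair are relaxed toward the same meteorology, their difference isolates the aerosol forcing rather than weather noise.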

  11. Dissecting galaxy formation models with sensitivity analysis—a new approach to constrain the Milky Way formation history

    International Nuclear Information System (INIS)

    Gómez, Facundo A.; O'Shea, Brian W.; Coleman-Smith, Christopher E.; Tumlinson, Jason; Wolpert, Robert L.

    2014-01-01

    We present an application of a statistical tool known as sensitivity analysis to characterize the relationship between input parameters and observational predictions of semi-analytic models of galaxy formation coupled to cosmological N-body simulations. We show how a sensitivity analysis can be performed on our chemo-dynamical model, ChemTreeN, to characterize and quantify its relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. It can also be used to simplify models by identifying input parameters that have no effect on the outputs (i.e., observational predictions) of interest. Conversely, sensitivity analysis allows us to identify what model parameters can be most efficiently constrained by the given observational data set. We have applied this technique to real observational data sets associated with the Milky Way, such as the luminosity function of the dwarf satellites. The results from the sensitivity analysis are used to train specific model emulators of ChemTreeN, only involving the most relevant input parameters. This allowed us to efficiently explore the input parameter space. A statistical comparison of model outputs and real observables is used to obtain a 'best-fitting' parameter set. We consider different Milky-Way-like dark matter halos to account for the dependence of the best-fitting parameter selection process on the underlying merger history of the models. For all formation histories considered, running ChemTreeN with best-fitting parameters produced luminosity functions that tightly fit their observed counterpart. However, only one of the resulting stellar halo models was able to reproduce the observed stellar halo mass within 40 kpc of the Galactic center. On the basis of this analysis, it is possible to disregard certain models, and their

  12. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  13. Dislocation unpinning model of acoustic emission from alkali halide ...

    Indian Academy of Sciences (India)

    The present paper reports the dislocation unpinning model of acoustic emission (AE) from … Keywords: acoustic emission; dislocation; alkali halide crystals; plastic deformation.

  14. Mapping pan-Arctic CH4 emissions using an adjoint method by integrating process-based wetland and lake biogeochemical models and atmospheric CH4 concentrations

    Science.gov (United States)

    Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E. J.; Sweeney, C.; Turner, A. J.

    2015-12-01

    Understanding CH4 emissions from wetlands and lakes is critical for estimating the Arctic carbon balance under rapidly warming climatic conditions. To date, our knowledge of these two CH4 sources is built almost solely on upscaling discontinuous measurements from limited areas to the whole region. Many studies have indicated that the controls on CH4 emissions from wetlands and lakes, including soil moisture, lake morphology, and substrate content and quality, are notoriously heterogeneous, so the accuracy of such simple estimates is questionable. Here we apply a high-spatial-resolution atmospheric inverse model (nested-grid GEOS-Chem adjoint) over the Arctic, integrating SCIAMACHY and NOAA/ESRL CH4 measurements to constrain the CH4 emissions estimated with process-based wetland and lake biogeochemical models. Our modeling experiments, using different wetland CH4 emission schemes and satellite and surface measurements, show that the total amount of CH4 emitted from Arctic wetlands is well constrained, but that the spatial distribution of CH4 emissions is sensitive to the priors. For CH4 emissions from lakes, our high-resolution inversion shows that the models overestimate CH4 emissions in Alaskan coastal lowlands and East Siberian lowlands. Our study also indicates that the precision and coverage of measurements need to be improved to achieve more accurate high-resolution estimates.
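
In the linear-Gaussian limit, the Bayesian (4D-Var-style) cost function that such adjoint inversions minimize has a closed-form minimizer. The sketch below uses toy dense matrices; a real nested-grid GEOS-Chem adjoint inversion evaluates the same gradient without ever forming the observation operator explicitly.

```python
import numpy as np

def linear_inversion(H, y, x_prior, B, R):
    """Minimizer of the cost
        J(x) = (y - Hx)^T R^-1 (y - Hx) + (x - x_prior)^T B^-1 (x - x_prior),
    i.e. x_hat = x_prior + (H^T R^-1 H + B^-1)^-1 H^T R^-1 (y - H x_prior).
    H: observation operator, B: prior error covariance, R: obs error covariance."""
    Rinv, Binv = np.linalg.inv(R), np.linalg.inv(B)
    lhs = H.T @ Rinv @ H + Binv
    return x_prior + np.linalg.solve(lhs, H.T @ Rinv @ (y - H @ x_prior))
```

With accurate, dense observations (small R) the posterior is pulled toward the data; with sparse or noisy observations it stays near the prior, which is consistent with a total flux being well constrained while its spatial pattern remains sensitive to the priors.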

  15. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low-frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as the implementation of ...

  16. Thermo-magnetic effects in quark matter: Nambu-Jona-Lasinio model constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Ricardo L.S. [Universidade Federal de Santa Maria, Departamento de Fisica, Santa Maria, RS (Brazil); Kent State University, Physics Department, Kent, OH (United States); Timoteo, Varese S. [Universidade Estadual de Campinas (UNICAMP), Grupo de Optica e Modelagem Numerica (GOMNI), Faculdade de Tecnologia, Limeira, SP (Brazil); Avancini, Sidney S.; Pinto, Marcus B. [Universidade Federal de Santa Catarina, Departamento de Fisica, Florianopolis, Santa Catarina (Brazil); Krein, Gastao [Universidade Estadual Paulista, Instituto de Fisica Teorica, Sao Paulo, SP (Brazil)

    2017-05-15

    The phenomenon of inverse magnetic catalysis of chiral symmetry in QCD predicted by lattice simulations can be reproduced within the Nambu-Jona-Lasinio model if the coupling G of the model decreases with the strength B of the magnetic field and the temperature T. The thermo-magnetic dependence of G(B, T) is obtained by fitting recent lattice QCD predictions for the chiral transition order parameter. Different thermodynamic quantities of magnetized quark matter evaluated with G(B, T) are compared with those obtained at constant coupling G. The model with G(B, T) predicts a more dramatic chiral transition as the field intensity increases. In addition, the pressure and magnetization always increase with B for a given temperature. Being parametrized by four magnetic-field-dependent coefficients and having a rather simple exponential thermal dependence, our accurate ansatz for the coupling constant can easily be implemented to improve typical model applications to magnetized quark matter. (orig.)
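
A running coupling of the kind described above can be sketched numerically. This is a hypothetical illustration only: it assumes a sigmoid (exponential-in-temperature) form with four field-dependent coefficients (here named c, beta, T_a, s), and the coefficient values are placeholders, not the values fitted to lattice data in the record.

```python
import math

def coupling_g(t, c, beta, t_a, s):
    """Sigmoid-type thermo-magnetic coupling ansatz (assumed form):
        G(B, T) = c * [1 - 1/(1 + exp(beta*(T_a - T)))] + s
    where c, beta, T_a and s would each depend on the magnetic field B."""
    return c * (1.0 - 1.0 / (1.0 + math.exp(beta * (t_a - t)))) + s

# With placeholder coefficients, G decreases with temperature, which is the
# qualitative behaviour needed to realise inverse magnetic catalysis:
g_cold = coupling_g(100.0, c=2.0, beta=0.1, t_a=170.0, s=1.0)
g_hot = coupling_g(250.0, c=2.0, beta=0.1, t_a=170.0, s=1.0)
print(round(g_cold, 3), round(g_hot, 3))
```

In an actual application the four coefficients would be obtained by fitting the lattice chiral-transition order parameter at each field strength.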

  17. Joint modeling of constrained path enumeration and path choice behavior: a semi-compensatory approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2010-01-01

    A behavioural and a modelling framework are proposed for representing route choice from a path set that satisfies travellers’ spatiotemporal constraints. Within the proposed framework, travellers’ master sets are constructed by path generation, consideration sets are delimited according to spatio...

  18. 4-D imaging of seepage in earthen embankments with time-lapse inversion of self-potential data constrained by acoustic emissions localization

    Science.gov (United States)

    Rittgers, J. B.; Revil, A.; Planes, T.; Mooney, M. A.; Koelewijn, A. R.

    2015-02-01

    New methods are required to combine the information contained in passive electrical and seismic signals to detect, localize and monitor hydromechanical disturbances in porous media. We present a field experiment showing how passive seismic and electrical data can be combined to detect a preferential flow path associated with internal erosion in an earthen dam. Continuous passive seismic and electrical (self-potential) monitoring data were recorded during a 7-d full-scale levee (earthen embankment) failure test conducted in Booneschans, the Netherlands, in 2012. Spatially coherent acoustic emission events and the development of a self-potential anomaly, associated with induced concentrated seepage and internal erosion phenomena, were identified and imaged near the downstream toe of the embankment, in an area that subsequently developed a series of concentrated water flows and sand boils, and where liquefaction of the embankment toe eventually developed. We present a new 4-D grid-search algorithm for acoustic emission localization in both time and space, and apply the localization results to add spatially varying constraints to time-lapse 3-D modelling of self-potential data in terms of source current localization. Seismic signal localization results are used to build a set of time-invariant yet spatially varying model weights for the inversion of the self-potential data. The combination of these two passive techniques yields results that are more consistent with visual observations of focused groundwater flow on the embankment. This approach to geophysical monitoring of earthen embankments provides an improved means of early detection and imaging of the development of embankment defects associated with concentrated seepage and internal erosion phenomena. The same approach can be used to detect various types of hydromechanical disturbances at larger scales.
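
A minimal sketch of a 4-D (space plus origin time) grid search of the general kind mentioned in the record: pick the candidate source location and origin time that minimise the misfit between observed and predicted arrival times. A homogeneous acoustic velocity and noise-free synthetic arrivals are simplifying assumptions here; a field implementation would use a calibrated velocity model.

```python
import itertools
import math

def locate_event(sensors, arrivals, grid, t0_grid, v=1500.0):
    """Brute-force 4-D grid search over (x, y, z, t0) minimising the
    sum of squared arrival-time residuals (constant velocity v assumed)."""
    best, best_cost = None, float("inf")
    for (x, y, z), t0 in itertools.product(grid, t0_grid):
        cost = 0.0
        for sensor, t_obs in zip(sensors, arrivals):
            dist = math.dist((x, y, z), sensor)
            cost += (t_obs - (t0 + dist / v)) ** 2
        if cost < best_cost:
            best, best_cost = ((x, y, z), t0), cost
    return best

# Synthetic test: four sensors, a known source, exact arrival times.
sensors = [(0, 0, 0), (100, 0, 0), (0, 100, 0), (0, 0, 50)]
true_src, true_t0 = (40, 30, 10), 0.2
arrivals = [true_t0 + math.dist(true_src, s) / 1500.0 for s in sensors]
grid = [(x, y, z) for x in range(0, 101, 10)
        for y in range(0, 101, 10) for z in range(0, 51, 10)]
loc, t0 = locate_event(sensors, arrivals, grid, [0.0, 0.1, 0.2, 0.3])
print(loc, t0)  # recovers the true grid node and origin time
```

The recovered event locations can then serve as spatial weights in a subsequent self-potential inversion, as the record describes.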

  19. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given
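
The key property used above, that the Jeffreys prior is uniform in a transformed parameterisation where the unknown is approximately a location parameter, can be checked numerically. A minimal sketch for the binomial proportion p, where the variance-stabilising transform is phi = 2*arcsin(sqrt(p)):

```python
import math

def jeffreys_density(p):
    """Unnormalised Jeffreys density for a binomial proportion:
    pi(p) proportional to p^(-1/2) * (1-p)^(-1/2), i.e. Beta(1/2, 1/2)."""
    return p ** -0.5 * (1.0 - p) ** -0.5

def transformed_density(p):
    """Density induced on phi = 2*arcsin(sqrt(p)) by the Jeffreys prior.
    Change of variables: pi(phi) = pi(p) / |dphi/dp|, and since
    dphi/dp = 1/sqrt(p*(1-p)), the two factors cancel exactly."""
    dphi_dp = 1.0 / math.sqrt(p * (1.0 - p))
    return jeffreys_density(p) / dphi_dp

# The induced density is constant (uniform up to normalisation) for any p:
for p in (0.1, 0.3, 0.5, 0.9):
    print(round(transformed_density(p), 9))
```

The constrained generalisation described in the record would then impose the mean constraint via maximum entropy in this transformed scale rather than using the flat prior directly.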

  20. A Three-Dimensional Model of the Marine Nitrogen Cycle during the Last Glacial Maximum Constrained by Sedimentary Isotopes

    Directory of Open Access Journals (Sweden)

    Christopher J. Somes

    2017-05-01

    Nitrogen is a key limiting nutrient that influences marine productivity and carbon sequestration in the ocean via the biological pump. In this study, we present the first estimates of nitrogen cycling in a coupled 3D ocean-biogeochemistry-isotope model forced with realistic boundary conditions from the Last Glacial Maximum (LGM; ~21,000 years before present) and constrained by nitrogen isotopes. The model predicts a large decrease in nitrogen loss rates due to higher oxygen concentrations in the thermocline and the sea level drop, and, in response, reduced nitrogen fixation. Model experiments are performed to evaluate the effects of hypothesized increases of atmospheric iron fluxes and of the oceanic phosphorus inventory relative to present-day conditions. Enhanced atmospheric iron deposition, which is required to reproduce observations, fuels export production in the Southern Ocean, causing increased deep-ocean nutrient storage. This reduces the transport of preformed nutrients to the tropics via mode waters, thereby decreasing productivity, oxygen-deficient zones, and water-column N-loss there. A global phosphorus inventory larger by up to 15% cannot be excluded from the currently available nitrogen isotope data; it stimulates additional nitrogen fixation that increases the global oceanic nitrogen inventory, productivity, and water-column N-loss. Among our sensitivity simulations, the best agreement with nitrogen isotope data from LGM sediments indicates that water-column and sedimentary N-loss were reduced by 17–62% and 35–69%, respectively, relative to preindustrial values. Our model demonstrates that multiple processes alter the nitrogen isotopic signal in most locations, which creates large uncertainties when quantitatively constraining individual nitrogen cycling processes. One key uncertainty is nitrogen fixation, which decreases by 25–65% in the model during the LGM, mainly in response to reduced N-loss, due to the lack of observations in the open ocean most

  1. Development of a forecast model for global air traffic emissions

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Martin

    2012-07-01

    The thesis describes the methodology and results of a simulation model that quantifies fuel consumption and emissions of civil air traffic. Besides covering historical emissions, the model aims at forecasting emissions in the medium-term future. For this purpose, simulation models of aircraft and engine types are used in combination with a database of global flight movements and assumptions about traffic growth, fleet rollover and operational aspects. Results from an application of the model include emissions of scheduled air traffic for the years 2000 to 2010 as well as forecasted emissions until the year 2030. In a baseline scenario of the forecast, input assumptions (e.g. traffic growth rates) are in line with predictions by the aircraft industry. Considering the effects of advanced technologies in the short-term and medium-term future, the forecast focuses on fuel consumption and emissions of nitrogen oxides. Calculations for historical air traffic additionally cover emissions of carbon monoxide, unburned hydrocarbons and soot. Results are validated against reference data including studies by the International Civil Aviation Organization (ICAO) and simulation results from international research projects. (orig.)

  2. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.; Evans, Jason P.; McCabe, Matthew

    2014-01-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from those of the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve, reducing biases from -0.027 g cm-3 to -0.004 g cm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE-to-depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the
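
One way simulated snow density might enter a temperature-index scheme is by modulating the degree-day factor of a classic melt rule. The sketch below is a hypothetical illustration: the linear density scaling, the function names and all parameter values are assumptions, not the study's calibrated parameterisation.

```python
def degree_day_melt(t_mean_c, ddf_mm_per_c_day, t_base_c=0.0):
    """Classic temperature-index melt: M = DDF * max(T - T_base, 0),
    returning melt in mm water equivalent per day."""
    return ddf_mm_per_c_day * max(t_mean_c - t_base_c, 0.0)

def density_adjusted_ddf(rho_snow, ddf_min=2.0, ddf_max=6.0,
                         rho_min=0.10, rho_max=0.50):
    """Hypothetical linear scaling of the degree-day factor with
    simulated snow density (g cm^-3), clipped to [ddf_min, ddf_max]."""
    frac = min(max((rho_snow - rho_min) / (rho_max - rho_min), 0.0), 1.0)
    return ddf_min + frac * (ddf_max - ddf_min)

# In this sketch, denser (older, wetter) snow melts more per degree-day:
for rho in (0.15, 0.30, 0.45):
    ddf = density_adjusted_ddf(rho)
    print(rho, round(degree_day_melt(3.0, ddf), 2))  # for a +3 C day
```

The point of such a scheme is that the density input can come from a model driven by climate data alone, avoiding the frequent density observations that conventional density-aware TIMs require.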

  4. Attitudinal travel demand model for non-work trips of homogeneously constrained segments of a population

    Energy Technology Data Exchange (ETDEWEB)

    Recker, W.W.; Stevens, R.F.

    1977-06-01

    Market-segmentation techniques are used to capture the effects of opportunity and availability constraints on urban residents' choice of mode for major grocery shopping trips and for visiting friends and acquaintances. Attitudinal multinomial logit choice models are estimated for each market segment. The explanatory variables are individuals' beliefs about the attributes of four modal alternatives: bus, car, taxi and walking. Factor analysis is employed to identify latent dimensions of perception of the modal alternatives and to eliminate problems of multicollinearity in model estimation.
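
A multinomial logit model of the kind estimated here assigns each mode a choice probability via the softmax of its systematic utility. A minimal sketch, with made-up utility values standing in for the segment-specific linear functions of perceived attributes:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities:
    P_i = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities.values())  # subtract max to stabilise the exponentials
    expu = {k: math.exp(v - m) for k, v in utilities.items()}
    z = sum(expu.values())
    return {k: e / z for k, e in expu.items()}

# Illustrative systematic utilities for the four modes in the study; in an
# estimated model V would be a linear function of an individual's beliefs
# about each mode's attributes, with coefficients fitted per market segment.
v = {"bus": -0.5, "car": 1.2, "taxi": -1.0, "walk": 0.1}
p = mnl_probabilities(v)
print({k: round(x, 3) for k, x in p.items()})
```

Probabilities sum to one by construction, and the mode with the highest utility receives the largest share.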

  5. Constraining dark photon model with dark matter from CMB spectral distortions

    Directory of Open Access Journals (Sweden)

    Ki-Young Choi

    2017-08-01

    Many extensions of the Standard Model (SM) include a dark sector which can interact with the SM sector via a light mediator. We explore the possibility of probing such a dark sector by studying the distortion of the CMB spectrum from the blackbody shape due to elastic scatterings between dark matter and baryons through a hidden light mediator. We focus in particular on the model where the dark-sector gauge boson kinetically mixes with the SM, and present the future experimental prospects for a PIXIE-like experiment along with a comparison to existing bounds from complementary terrestrial experiments.

  6. Assessing water resources in Azerbaijan using a local distributed model forced and constrained with global data

    Science.gov (United States)

    Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine

    2017-04-01

    In many countries, data are scarce, incomplete and often not easily shared. In these cases, global satellite and reanalysis data provide an alternative for assessing water resources. To assess water resources in Azerbaijan, a completely distributed and physically based hydrological wflow-sbm model was set up for the entire Kura basin. We used SRTM elevation data, a locally available river map and one from OpenStreetMap to derive the drainage direction network at the model resolution of approximately 1x1 km. OpenStreetMap data were also used to derive the fraction of paved area per cell to account for the reduced infiltration capacity (cf. Schellekens et al., 2014). We used the results of a global study to derive root zone capacity based on climate data (Wang-Erlandsson et al., 2016). To account for the variation in vegetation cover over the year, monthly averages of Leaf Area Index, based on MODIS data, were used. For the soil-related parameters, we used global estimates as provided by Dai et al. (2013). This enabled the rapid derivation of a first estimate of parameter values for our hydrological model. Digitized local meteorological observations were scarce and available only for a limited time period. Therefore, several sources of global meteorological data were evaluated: (1) EU-WATCH global precipitation, temperature and derived potential evaporation for the period 1958-2001 (Harding et al., 2011), (2) WFDEI precipitation, temperature and derived potential evaporation for the period 1979-2014 (Weedon et al., 2014), (3) MSWEP precipitation (Beck et al., 2016), and (4) local precipitation data from more than 200 stations in the Kura basin, available from the NOAA website for a period up to 1991. The latter, together with data archives from Azerbaijan, were used as a benchmark to evaluate the global precipitation datasets for the overlapping period 1958-1991. By comparing the datasets, we found that monthly mean precipitation of EU-WATCH and WFDEI coincided well

  7. A frictionally and hydraulically constrained model of the convectively driven mean flow in partially enclosed seas

    Science.gov (United States)

    Maxworthy, T.

    1997-08-01

    A simple three-layer model of the dynamics of partially enclosed seas, driven by a surface buoyancy flux, is presented. It contains two major elements, a hydraulic constraint at the exit contraction and friction in the interior of the main body of the sea; both together determine the vertical structure and magnitudes of the interior flow variables, i.e. velocity and density. Application of the model to the large-scale dynamics of the Red Sea gives results that are not in disagreement with observation once the model is applied, also, to predict the dense outflow from the Gulf of Suez. The latter appears to be the agent responsible for the formation of dense bottom water in this system. Also, the model is reasonably successful in predicting the density of the outflow from the Persian Gulf, and can be applied to any number of other examples of convectively driven flow in long, narrow channels, with or without sills and constrictions at their exits.

  8. Electron-capture Isotopes Could Constrain Cosmic-Ray Propagation Models

    Science.gov (United States)

    Benyamin, David; Shaviv, Nir J.; Piran, Tsvi

    2017-12-01

    Electron-capture (EC) isotopes are known to provide constraints on the low-energy behavior of cosmic rays (CRs), such as reacceleration. Here, we study the EC isotopes within the framework of the dynamic spiral-arms CR propagation model, in which most CR sources reside in the galactic spiral arms. The model was previously used to explain the B/C and sub-Fe/Fe ratios. We show that the known inconsistency between the 49Ti/49V and 51V/51Cr ratios persists in the spiral-arms model. On the other hand, contrary to the conventional wisdom that the isotope ratios depend primarily on reacceleration, we find that the ratios also depend on the halo size (Zh) and, in spiral-arms models, on the time since the last spiral-arm passage (τarm). Namely, EC isotopes can, in principle, provide interesting constraints on the diffusion geometry. However, with the present uncertainties in the laboratory measurements of both the electron attachment rate and the fragmentation cross sections, no meaningful constraint can be placed.

  9. Using expert knowledge of the hydrological system to constrain multi-objective calibration of SWAT models

    Science.gov (United States)

    The SWAT model is a helpful tool to predict hydrological processes in a study catchment and their impact on the river discharge at the catchment outlet. For reliable discharge predictions, a precise simulation of hydrological processes is required. Therefore, SWAT has to be calibrated accurately to ...

  10. Constraining biogenic silica dissolution in marine sediments: a comparison between diagenetic models and experimental dissolution rates

    NARCIS (Netherlands)

    Khalil, K.; Rabouille, C.; Gallinari, M.; Soetaert, K.E.R.; DeMaster, D.J.; Ragueneau, O.

    2007-01-01

    The processes controlling preservation and recycling of particulate biogenic silica in sediments must be understood in order to calculate oceanic silica mass balances. The new contribution of this work is the coupled use of advanced models including reprecipitation and different phases of biogenic

  11. Effects of time-varying β in SNLS3 on constraining interacting dark energy models

    International Nuclear Information System (INIS)

    Wang, Shuang; Wang, Yong-Zhen; Geng, Jia-Jia; Zhang, Xin

    2014-01-01

    It has been found that, for the Supernova Legacy Survey three-year (SNLS3) data, there is strong evidence for redshift evolution of the color-luminosity parameter β. In this paper, adopting the w-cold-dark-matter (wCDM) model and considering its interacting extensions (with three kinds of interaction between the dark sectors), we explore the evolution of β and its effects on parameter estimation. In addition to the SNLS3 data, we also use the latest Planck distance priors data, the galaxy clustering data extracted from Sloan Digital Sky Survey Data Release 7 and the Baryon Oscillation Spectroscopic Survey, as well as the direct measurement of the Hubble constant H0 from Hubble Space Telescope observations. We find that, for all the interacting dark energy (IDE) models, adding a parameter for the evolution of β reduces χ2 by about 34, indicating that a constant β is ruled out at the 5.8σ confidence level. Furthermore, varying β can significantly change the fitting results of various cosmological parameters: for all the dark energy models considered in this paper, varying β yields a larger fractional CDM density Ωc0 and a larger equation of state w; on the other hand, varying β yields a smaller reduced Hubble constant h for the wCDM model, but has no impact on h for the three IDE models. This implies a degeneracy between h and the coupling parameter γ. Our work shows that the evolution of β is insensitive to the interaction between the dark sectors, and highlights the importance of considering the evolution of β in cosmological fits. (orig.)
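
What a redshift-dependent color-luminosity parameter means for supernova standardisation can be sketched with the common one-parameter linear extension β(z) = β0 + β1·z entering a SALT2-style distance modulus. All numerical values below are placeholders for illustration, not SNLS3 fit results.

```python
def beta_of_z(z, beta0=3.0, beta1=5.0):
    """Linear redshift evolution of the color-luminosity parameter
    (illustrative coefficients; a constant beta corresponds to beta1 = 0)."""
    return beta0 + beta1 * z

def distance_modulus(m_b, x1, color, z, M=-19.3, alpha=0.14):
    """SALT2-style standardisation:
        mu = m_B - M + alpha * x1 - beta(z) * c
    with placeholder absolute magnitude M and stretch coefficient alpha."""
    return m_b - M + alpha * x1 - beta_of_z(z) * color

# The same observed color correction grows with redshift when beta1 > 0,
# which is how a varying beta shifts the inferred distances and hence the
# fitted cosmological parameters:
print(distance_modulus(24.0, 0.0, 0.05, 0.1),
      distance_modulus(24.0, 0.0, 0.05, 0.8))
```

With β1 fixed to zero this reduces to the standard constant-β fit, so the χ2 improvement quoted in the record measures how strongly the data prefer the extra degree of freedom.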

  12. Modeling Studies to Constrain Fluid and Gas Migration Associated with Hydraulic Fracturing Operations

    Science.gov (United States)

    Rajaram, H.; Birdsell, D.; Lackey, G.; Karra, S.; Viswanathan, H. S.; Dempsey, D.

    2015-12-01

    The dramatic increase in the extraction of unconventional oil and gas resources using horizontal wells and hydraulic fracturing (fracking) technologies has raised concerns about potential environmental impacts. Large volumes of hydraulic fracturing fluids are injected during fracking. Incidents of stray gas occurrence in shallow aquifers overlying shale gas reservoirs have been reported; whether these are in any way related to fracking continues to be debated. Computational models serve as useful tools for evaluating potential environmental impacts. We present modeling studies of hydraulic fracturing fluid and gas migration during the various stages of well operation, production, and subsequent plugging. The fluid migration models account for overpressure in the gas reservoir, density contrast between injected fluids and brine, imbibition into partially saturated shale, and well operations. Our results highlight the importance of representing the different stages of well operation consistently. Most importantly, well suction and imbibition both play a significant role in limiting upward migration of injected fluids, even in the presence of permeable connecting pathways. In an overall assessment, our fluid migration simulations suggest very low risk to groundwater aquifers when the vertical separation from a shale gas reservoir is of the order of 1000' or more. Multi-phase models of gas migration were developed to couple flow and transport in compromised wellbores and subsurface formations. These models are useful for evaluating both short-term and long-term scenarios of stray methane release. We present simulation results to evaluate mechanisms controlling stray gas migration, and explore relationships between bradenhead pressures and the likelihood of methane release and transport.

  13. GRAVITATIONAL-WAVE OBSERVATIONS MAY CONSTRAIN GAMMA-RAY BURST MODELS: THE CASE OF GW150914–GBM

    Energy Technology Data Exchange (ETDEWEB)

    Veres, P. [CSPAR, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Preece, R. D. [Dept. of Space Science, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Goldstein, A.; Connaughton, V. [Universities Space Research Association, 320 Sparkman Dr. Huntsville, AL 35806 (United States); Mészáros, P. [Dept. of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States); Burns, E., E-mail: peter.veres@uah.edu [Physics Dept., University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States)

    2016-08-20

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine, such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitively ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (~10^-3 cm^-3) and a high Lorentz factor (~2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford-Znajek model. If future joint observations confirm the GW-short-GRB association, we can provide similar but more detailed tests for prompt emission models.

  14. Inverse modeling and mapping US air quality influences of inorganic PM2.5 precursor emissions using the adjoint of GEOS-Chem

    Science.gov (United States)

    Henze, D. K.; Seinfeld, J. H.; Shindell, D. T.

    2009-08-01

    Influences of specific sources of inorganic PM2.5 on peak and ambient aerosol concentrations in the US are evaluated using a combination of inverse modeling and sensitivity analysis. First, sulfate and nitrate aerosol measurements from the IMPROVE network are assimilated using the four-dimensional variational (4D-Var) method into the GEOS-Chem chemical transport model in order to constrain emissions estimates in four separate month-long inversions (one per season). Of the precursor emissions, these observations primarily constrain ammonia (NH3). While the net result is a decrease in estimated US NH3 emissions relative to the original inventory, there is considerable variability in adjustments made to NH3 emissions in different locations, seasons and source sectors, such as focused decreases in the midwest during July, broad decreases throughout the US in January, increases in eastern coastal areas in April, and an effective redistribution of emissions from natural to anthropogenic sources. Implementing these constrained emissions, the adjoint model is applied to quantify the influences of emissions on representative PM2.5 air quality metrics within the US. The resulting sensitivity maps display a wide range of spatial, sectoral and seasonal variability in the susceptibility of the air quality metrics to absolute emissions changes and the effectiveness of incremental emissions controls of specific source sectors. NH3 emissions near sources of sulfur oxides (SOx) are estimated to most influence peak inorganic PM2.5 levels in the East; thus, the most effective controls of NH3 emissions are often disjoint from locations of peak NH3 emissions. Controls of emissions from industrial sectors of SOx and NOx are estimated to be more effective than surface emissions, and changes to NH3 emissions in regions dominated by natural sources are disproportionately more effective than regions dominated by anthropogenic sources. NOx controls are most effective in northern states in
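
The inversion logic (adjust emissions to minimise a model-observation misfit plus a prior penalty) can be illustrated with a scalar toy problem. This is a stand-in for the full GEOS-Chem 4D-Var system: the "transport" here is a fixed linear footprint and only one emission scaling factor is optimised, both simplifying assumptions.

```python
def cost_and_grad(s, h, e_prior, y_obs, sig_o=1.0, sig_b=0.5):
    """4D-Var-style cost for one emission scaling factor s:
        J(s) = 0.5 * sum_i (h_i * s * E - y_i)^2 / sig_o^2
             + 0.5 * (s - 1)^2 / sig_b^2
    with its analytic gradient (the role the adjoint model plays)."""
    resid = [hi * s * e_prior - yi for hi, yi in zip(h, y_obs)]
    j = 0.5 * sum(r * r for r in resid) / sig_o**2 \
        + 0.5 * (s - 1.0) ** 2 / sig_b**2
    dj = sum(r * hi * e_prior for r, hi in zip(resid, h)) / sig_o**2 \
        + (s - 1.0) / sig_b**2
    return j, dj

h = [0.2, 0.5, 0.3]      # observation footprints (linear "transport")
e_prior = 10.0           # prior emission rate
y_obs = [1.6, 4.0, 2.4]  # synthetic obs consistent with 0.8 * prior
s, lr = 1.0, 0.02
for _ in range(500):     # plain gradient descent on J(s)
    _, g = cost_and_grad(s, h, e_prior, y_obs)
    s -= lr * g
print(round(s, 2))       # pulled below 1, toward the obs-implied scaling
```

The prior term keeps the solution from chasing the observations exactly, which is why the recovered scaling sits slightly above the value 0.8 implied by the synthetic data.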

  15. Particle Reduction Strategies - PAREST. Traffic emission modelling. Model comparision and alternative scenarios. Sub-report

    International Nuclear Information System (INIS)

    Kugler, Ulrike; Theloke, Jochen; Joerss, Wolfram

    2013-01-01

    The modeling of the reference scenario and the various reduction scenarios in PAREST was based on the Central System of Emissions (CSE) (CSE, 2007). Emissions from road traffic were calculated using the traffic emission model TREMOD (Knoerr et al., 2005), version 4.17, and fed into the CSE. The resulting emission levels in the PAREST reference scenario were supplemented by the emission-reducing effect of implementing the future Euro 5 and 6 emission standards for cars and light commercial vehicles and Euro VI for heavy commercial vehicles, in combination with the truck toll extension.

  16. Modeling air pollutant emissions from Indian auto-rickshaws: Model development and implications for fleet emission rate estimates

    Science.gov (United States)

    Grieshop, Andrew P.; Boland, Daniel; Reynolds, Conor C. O.; Gouge, Brian; Apte, Joshua S.; Rogak, Steven N.; Kandlikar, Milind

    2012-04-01

    Chassis dynamometer tests were conducted on 40 Indian auto-rickshaws with 3 different fuel-engine combinations operating on the Indian Drive Cycle (IDC). Second-by-second (1 Hz) data were collected and used to develop velocity-acceleration look-up table models for fuel consumption and emissions of CO2, CO, total hydrocarbons (THC), oxides of nitrogen (NOx) and fine particulate matter (PM2.5) for each fuel-engine combination. Models were constructed based on group-average vehicle activity and emissions data in order to represent the performance of a 'typical' vehicle. The models accurately estimated full-cycle emissions for most species, though pollutants with more variable emission rates (e.g., PM2.5) were associated with larger errors. Vehicle emissions data showed large variability for single vehicles ('intra-vehicle variability') and within the test group ('inter-vehicle variability'), complicating the development of a single model to represent a vehicle population. To evaluate the impact of this variability, sensitivity analyses were conducted using vehicle activity data other than the IDC as model input. Inter-vehicle variability dominated the uncertainty in vehicle emission modeling. 'Leave-one-out' analyses indicated that the model outputs were relatively insensitive to the specific sample of vehicles and that the vehicle samples were likely a reasonable representation of the Delhi fleet. Intra-vehicle variability in emissions was also substantial, though it had a relatively minor impact on model performance. The models were used to assess whether the IDC, used for emission factor development in India, accurately represents emissions from on-road driving. Modeling based on Global Positioning System (GPS) activity data from real-world auto-rickshaws suggests that, relative to on-road vehicles in Delhi, the IDC systematically under-estimates fuel use and emissions; real-world auto-rickshaws consume 15% more fuel and emit 49% more THC and 16% more PM2.5.
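
A velocity-acceleration look-up table model of the kind described can be sketched in a few lines: bin the 1 Hz samples by speed and acceleration, average the emission rate in each bin, then sum the table values along a drive-cycle trace. The bin widths and all numbers below are illustrative assumptions, not the study's data.

```python
from collections import defaultdict

def build_lookup(records, v_bin=5.0, a_bin=0.5):
    """Average emission rate per (velocity, acceleration) bin from
    1 Hz samples of (speed km/h, acceleration m/s^2, emission g/s)."""
    sums = defaultdict(lambda: [0.0, 0])
    for v, a, e in records:
        key = (int(v // v_bin), int(a // a_bin))
        sums[key][0] += e
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

def predict(table, trace, v_bin=5.0, a_bin=0.5):
    """Sum modelled 1 Hz emissions over a drive-cycle (v, a) trace;
    unseen bins contribute zero in this simple sketch."""
    return sum(table.get((int(v // v_bin), int(a // a_bin)), 0.0)
               for v, a in trace)

# Toy training samples and a two-second trace (synthetic numbers):
train = [(12, 0.2, 0.05), (13, 0.3, 0.06), (42, 0.1, 0.09), (41, 0.2, 0.08)]
table = build_lookup(train)
print(round(predict(table, [(12, 0.25), (41, 0.15)]), 3))
```

Feeding such a table with GPS-derived on-road activity instead of the certification cycle is exactly the comparison the record uses to show that the IDC under-estimates real-world emissions.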
The models

  17. Stroke type differentiation using spectrally constrained multifrequency EIT: evaluation of feasibility in a realistic head model

    International Nuclear Information System (INIS)

    Malone, Emma; Jehl, Markus; Arridge, Simon; Betcke, Timo; Holder, David

    2014-01-01

    We investigate the application of multifrequency electrical impedance tomography (MFEIT) to imaging the brain in stroke patients. The use of MFEIT could enable early diagnosis and thrombolysis of ischaemic stroke, and therefore improve the outcome of treatment. Recent advances in the imaging methodology suggest that the use of spectral constraints could allow for the reconstruction of a one-shot image. We performed a simulation study to investigate the feasibility of imaging stroke in a head model with realistic conductivities. We introduced increasing levels of modelling errors to test the robustness of the method to the most common sources of artefact. We considered the case of errors in the electrode placement, spectral constraints, and contact impedance. The results indicate that errors in the position and shape of the electrodes can affect image quality, although our imaging method was successful in identifying tissues with sufficiently distinct spectra. (paper)

  18. Space-Charge-Limited Emission Models for Particle Simulation

    Science.gov (United States)

    Verboncoeur, J. P.; Cartwright, K. L.; Murphy, T.

    2004-11-01

    Space-charge-limited (SCL) emission of electrons from various materials is a common method of generating the high current beams required to drive high power microwave (HPM) sources. In the SCL emission process, sufficient space charge is extracted from a surface, often of complicated geometry, to drive the electric field normal to the surface close to zero. The emitted current is dominated by space-charge effects as well as by ambient fields near the surface. In this work, we consider computational models for the macroscopic SCL emission process, including application of Gauss's law and the Child-Langmuir law for space-charge-limited emission. Models are described for ideal conductors, lossy conductors, and dielectrics. Also considered is the discretization of these models, and its implications for the emission physics. Previous work on primary and dual-cell emission models [Watrous et al., Phys. Plasmas 8, 289-296 (2001)] is reexamined, and aspects of the performance, including fidelity and noise properties, are improved. Models for one-dimensional diodes are considered, as well as multidimensional emitting surfaces, which include corners and transverse fields.
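
The Child-Langmuir law invoked above has a simple closed form for an ideal planar vacuum diode; the sketch below is standard textbook physics, not the particle-simulation emission algorithm described in the paper.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) of a planar vacuum
    diode: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2
```

For a 100 kV gap of 1 cm this gives roughly 7.4 × 10^5 A/m^2; the prefactor (4·eps0/9)·sqrt(2e/m) ≈ 2.33 × 10^-6 is the familiar Child-Langmuir constant, and the V^(3/2) scaling is what an SCL emission model must reproduce in the one-dimensional limit.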

  19. Constraining the thermal conditions of impact environments through integrated low-temperature thermochronometry and numerical modeling

    Science.gov (United States)

    Kelly, N. M.; Marchi, S.; Mojzsis, S. J.; Flowers, R. M.; Metcalf, J. R.; Bottke, W. F., Jr.

    2017-12-01

    Impacts have a significant physical and chemical influence on the surface conditions of a planet. The cratering record is used to understand a wide array of impact processes, such as the evolution of the impact flux through time. However, the relationship between impactor size and a resulting impact crater remains controversial (e.g., Bottke et al., 2016). Likewise, small variations in the impact velocity are known to significantly affect the thermal-mechanical disturbances in the aftermath of a collision. Development of more robust numerical models for impact cratering has implications for how we evaluate the disruptive capabilities of impact events, including the extent and duration of thermal anomalies, the volume of ejected material, and the resulting landscape of impacted environments. To address uncertainties in crater scaling relationships, we present an approach and methodology that integrates numerical modeling of the thermal evolution of terrestrial impact craters with low-temperature, (U-Th)/He thermochronometry. The approach uses time-temperature (t-T) paths of crust within an impact crater, generated from numerical simulations of an impact. These t-T paths are then used in forward models to predict the resetting behavior of (U-Th)/He ages in the mineral chronometers apatite and zircon. Differences between the predicted and measured (U-Th)/He ages from a modeled terrestrial impact crater can then be used to evaluate parameters in the original numerical simulations, and refine the crater scaling relationships. We expect our methodology to additionally inform our interpretation of impact products, such as lunar impact breccias and meteorites, providing robust constraints on their thermal histories. 
In addition, the method is ideal for sample return mission planning - robust "prediction" of ages we expect from a given impact environment enhances our ability to target sampling sites on the Moon, Mars or other solar system bodies where impacts have strongly

  20. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    Directory of Open Access Journals (Sweden)

    J. H. Nienhuis

    2017-09-01

    Full Text Available The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal dynamics) or allogenic (external forcing) controls remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  1. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response as the cost-effective solution is used for energy service provider. • The market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, its various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since market price affects DR and vice versa, a new two-step sequential framework is proposed, in which unit commitment problem (UC) is solved to forecast the expected locational marginal prices (LMPs), and successively DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of purchased power from the market and distributed generation (DG) units, incentive cost paid to the customers, and compensation cost of power interruptions. To obtain compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMP prices are modeled as the uncertainty parameters using the robust optimization technique, which is more practical compared to the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.

  2. Constraining Transient Climate Sensitivity Using Coupled Climate Model Simulations of Volcanic Eruptions

    KAUST Repository

    Merlis, Timothy M.; Held, Isaac M.; Stenchikov, Georgiy L.; Zeng, Fanrong; Horowitz, Larry W.

    2014-01-01

    Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1% yr^-1 CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.

  4. Maximizing time from the constraining European Working Time Directive (EWTD): The Heidelberg New Working Time Model.

    Science.gov (United States)

    Schimmack, Simon; Hinz, Ulf; Wagner, Andreas; Schmidt, Thomas; Strothmann, Hendrik; Büchler, Markus W; Schmitz-Winnenthal, Hubertus

    2014-01-01

    The introduction of the European Working Time Directive (EWTD) has greatly reduced training hours of surgical residents, which translates into 30% less surgical and clinical experience. Such a dramatic drop in attendance has serious implications, such as compromised quality of medical care. At the surgical department of the University of Heidelberg, our goal was to establish a model that was compliant with the EWTD while avoiding reduction in quality of patient care and surgical training. We first performed workload analyses and performance statistics for all working areas of our department (operation theater, emergency room, specialized consultations, surgical wards and on-call duties) using personal interviews, time cards, medical documentation software as well as data of the financial- and personnel-controlling sector of our administration. Using that information, we specifically designed an EWTD-compatible work model and implemented it. Surgical wards and operating rooms (ORs) were not compliant with the EWTD. Between 5 pm and 8 pm, three ORs were still operating two-thirds of the time. By creating an extended work shift (7:30 am-7:30 pm), we effectively reduced the workload to less than 49% between 4 pm and 8 am, allowing the combination of an eight-hour working day with a 16-hour on-call duty; thus maximizing surgical resident training and ensuring continuity of patient care while maintaining EWTD guidelines. A precise workload analysis is the key to success. The Heidelberg New Working Time Model provides a legal model, which, by avoiding rotating work shifts, assures quality of patient care and surgical training.

  5. Modeling emission rates and exposures from outdoor cooking

    Science.gov (United States)

    Edwards, Rufus; Princevac, Marko; Weltman, Robert; Ghasemian, Masoud; Arora, Narendra K.; Bond, Tami

    2017-09-01

    Approximately 3 billion individuals rely on solid fuels for cooking globally. For a large portion of these - an estimated 533 million - cooking is outdoors, where emissions from cookstoves pose a health risk to both cooks and other household and village members. Models that estimate stove emission rates that would meet WHO air quality guidelines (AQG) in indoor environments do not explicitly account for outdoor cooking. The objectives of this paper are to link health-based exposure guidelines with emissions from outdoor cookstoves, using a Monte Carlo simulation of cooking times from Haryana, India, coupled with inverse Gaussian dispersion models. Mean emission rates for outdoor cooking that would result in incremental increases in personal exposure equivalent to the WHO AQG during a 24-h period were 126 ± 13 mg/min for cooking while squatting and 99 ± 10 mg/min while standing. Emission rates modeled for outdoor cooking are substantially higher than emission rates for indoor cooking to meet AQG, because the models estimate the impact of emissions on personal exposure concentrations rather than microenvironment concentrations, and because the smoke disperses more readily outdoors than in indoor environments. As a result, many more stoves, including the best performing solid-fuel biomass stoves, would meet AQG when cooking outdoors, but may also result in substantial localized neighborhood pollution depending on housing density. Inclusion of the neighborhood impact of pollution should be addressed more formally both in guidelines on emission rates from stoves that would be protective of health, and also in wider health impact evaluation efforts and burden of disease estimates. Emissions guidelines should better represent the different contexts in which stoves are being used, especially because in these contexts the best performing solid fuel stoves have the potential to provide significant benefits.
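
The inverse-dispersion step can be illustrated with a ground-level Gaussian plume: the centerline concentration is linear in the source strength, so the emission rate that just meets an exposure guideline follows by simple ratio. The linear dispersion coefficients and distances below are placeholders, not the meteorology used in the study.

```python
import math

def plume_concentration(q, u, x, sigma_y0=0.22, sigma_z0=0.20):
    """Ground-level centerline concentration (mg/m^3) at distance x (m)
    downwind of a ground-level point source emitting q mg/min into wind
    speed u (m/s). sigma_y = sigma_y0*x and sigma_z = sigma_z0*x are
    illustrative near-field dispersion coefficients (assumed)."""
    q_mg_s = q / 60.0
    sy, sz = sigma_y0 * x, sigma_z0 * x
    return q_mg_s / (math.pi * sy * sz * u)

def emission_rate_for_guideline(c_guideline, u, x, **kw):
    """Invert the plume model: emission rate (mg/min) producing
    c_guideline at distance x. Valid because the model is linear in q."""
    return c_guideline / plume_concentration(1.0, u, x, **kw)
```

A Monte Carlo over cooking times and cook positions, as in the abstract, amounts to averaging such inversions over sampled values of u and x.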

  6. Modeling and analysis of strategic forward contracting in transmission constrained power markets

    International Nuclear Information System (INIS)

    Yu, C.W.; Chung, T.S.; Zhang, S.H.; Wang, X.

    2010-01-01

    Taking the effects of the transmission network into account, strategic forward contracting induced by the interaction of generation firms' strategies in the spot and forward markets is investigated. A two-stage game model is proposed to describe generation firms' strategic forward contracting and spot market competition. In the spot market, generation firms behave strategically by submitting bids at their nodes in the form of a linear supply function (LSF), and there are arbitrageurs who buy and resell power at different nodes where price differences exceed the costs of transmission. The owner of the grid is assumed to ration limited transmission line capacity to maximize the value of the transmission services in the spot market. Cournot-type competition is assumed for the strategic forward contract market. This two-stage model is formulated as an equilibrium problem with equilibrium constraints (EPEC), in which each firm's optimization problem in the forward market is a mathematical program with equilibrium constraints (MPEC), with the parameter-dependent spot market equilibrium as the inner problem. A nonlinear complementarity method is employed to solve this EPEC model. (author)

  7. Constraining volcanic inflation at Three Sisters Volcanic Field in Oregon, USA, through microgravity and deformation modeling

    Science.gov (United States)

    Zurek, Jeffrey; William-Jones, Glyn; Johnson, Dan; Eggers, Al

    2012-10-01

    Microgravity data were collected between 2002 and 2009 at the Three Sisters Volcanic Complex, Oregon, to investigate the causes of an ongoing deformation event west of South Sister volcano. Three different conceptual models have been proposed as the causal mechanism for the deformation event: (1) hydraulic uplift due to continual injection of magma at depth, (2) pressurization of hydrothermal systems and (3) viscoelastic response to an initial pressurization at depth. The gravitational effect of continual magma injection was modeled to be 20 to 33 μGal at the center of the deformation field, with volumes based on previous deformation studies. The gravity time series, however, did not detect a mass increase, suggesting that a viscoelastic response of the crust is the most likely cause of the deformation from 2002 to 2009. The crust deeper than 3 km in the Three Sisters region was modeled as a Maxwell viscoelastic material, and the results suggest a dynamic viscosity between 10^18 and 5 × 10^19 Pa s. This low crustal viscosity suggests that magma emplacement or stall depth is controlled by density and not by the brittle-ductile transition zone. Furthermore, these crustal properties and the observed geochemical composition gaps at Three Sisters can be best explained by different melt sources and limited magma mixing rather than by fractional crystallization. More generally, low intrusion rates, low crustal viscosity, and multiple melt sources could also explain the whole-rock compositional gaps observed at other arc volcanoes.
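
The modeled gravity signal of continual magma injection can be bounded with the simplest conceivable model, a point mass at depth (Δg = G·Δm/r²). The volume, density, and depth below are illustrative guesses, not the study's inversion parameters.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_anomaly_ugal(mass_kg, depth_m):
    """Vertical gravity change (microGal) at the surface point directly
    above a buried point mass; 1 uGal = 1e-8 m/s^2."""
    return G * mass_kg / depth_m ** 2 / 1e-8

# e.g. 0.02 km^3 of magma (density ~2700 kg/m^3) emplaced at 5 km depth
dg = point_mass_anomaly_ugal(0.02e9 * 2700.0, 5000.0)
```

Intrusion volumes of this order at a few kilometres depth predict signals of tens of microGal, comfortably above typical microgravity repeatability, which is why a flat gravity time series argues against continued mass addition.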

  8. Modeling and Economic Analysis of Power Grid Operations in a Water Constrained System

    Science.gov (United States)

    Zhou, Z.; Xia, Y.; Veselka, T.; Yan, E.; Betrie, G.; Qiu, F.

    2016-12-01

    The power sector is the largest water user in the United States. Depending on the cooling technology employed at a facility, steam-electric power stations withdraw and consume large amounts of water for each megawatt hour of electricity generated. The amounts are dependent on many factors, including ambient air and water temperatures, cooling technology, etc. Water demands from most economic sectors are typically highest during summertime. For most systems, this coincides with peak electricity demand and consequently a high demand for thermal power plant cooling water. Supplies however are sometimes limited due to seasonal precipitation fluctuations, including sporadic droughts that lead to water scarcity. When this occurs there is an impact on both unit commitment and the real-time dispatch. In this work, we model the cooling efficiency of several different types of thermal power generation technologies as a function of power output level and daily temperature profiles. Unit-specific relationships are then integrated in a power grid operational model that minimizes total grid production cost while reliably meeting hourly loads. Grid operation is subject to power plant physical constraints, transmission limitations, water availability and environmental constraints such as power plant water exit temperature limits. The model is applied to a standard IEEE-118 bus system under various water availability scenarios. Results show that water availability has a significant impact on power grid economics.
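
The dispatch problem described can be written as a small linear program: minimize production cost subject to load balance, unit capacities, and a cap on total cooling-water withdrawal. All numbers are invented for this sketch, and the real model's unit commitment, temperature dependence, and transmission constraints are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 3-unit economic dispatch with a shared water budget.
cost = np.array([20.0, 35.0, 50.0])    # $/MWh
water = np.array([2.0, 1.0, 0.1])      # m^3 withdrawn per MWh
p_max = np.array([100.0, 80.0, 60.0])  # MW capacity
demand = 150.0                          # MW load to serve
water_cap = 220.0                       # m^3/h available

res = linprog(
    cost,
    A_ub=[water], b_ub=[water_cap],     # water-availability constraint
    A_eq=[np.ones(3)], b_eq=[demand],   # load balance
    bounds=[(0.0, pm) for pm in p_max],
)
dispatch = res.x
```

With ample water the cheapest unit would run at full capacity; the water cap forces it back and brings less water-intensive (but costlier) capacity online, which is exactly the economic impact the scenarios in the abstract quantify.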

  9. Coupling geophysical investigation with hydrothermal modeling to constrain the enthalpy classification of a potential geothermal resource.

    Science.gov (United States)

    White, Jeremy T.; Karakhanian, Arkadi; Connor, Chuck; Connor, Laura; Hughes, Joseph D.; Malservisi, Rocco; Wetmore, Paul

    2015-01-01

    An appreciable challenge in volcanology and geothermal resource development is to understand the relationships between volcanic systems and low-enthalpy geothermal resources. The enthalpy of an undeveloped geothermal resource in the Karckar region of Armenia is investigated by coupling geophysical and hydrothermal modeling. The results of 3-dimensional inversion of gravity data provide key inputs into a hydrothermal circulation model of the system and associated hot springs, which is used to evaluate possible geothermal system configurations. Hydraulic and thermal properties are specified using maximum a priori estimates. Limited constraints provided by temperature data collected from an existing down-gradient borehole indicate that the geothermal system can most likely be classified as low-enthalpy and liquid dominated. We find the heat source for the system is likely cooling quartz monzonite intrusions in the shallow subsurface and that meteoric recharge in the pull-apart basin circulates to depth, rises along basin-bounding faults and discharges at the hot springs. While other combinations of subsurface properties and geothermal system configurations may fit the temperature distribution equally well, we demonstrate that the low-enthalpy system is reasonably explained based largely on interpretation of surface geophysical data and relatively simple models.

  10. Constraining the Timescales of Rehydration in Nominally Anhydrous Minerals Using 3D Numerical Diffusion Models

    Science.gov (United States)

    Lynn, K. J.; Warren, J. M.

    2017-12-01

    Nominally anhydrous minerals (NAMs) are important for characterizing deep-Earth water reservoirs, but the water contents of olivine (ol), orthopyroxene (opx), and clinopyroxene (cpx) in peridotites generally do not reflect mantle equilibrium conditions. Ol is typically "dry" and decoupled from H in cpx and opx, which is inconsistent with models of partial melting and/or diffusive loss of H during upwelling beneath mid-ocean ridges. The rehydration of mantle pyroxenes via late-stage re-fertilization has been invoked to explain their relatively high water contents. Here, we use sophisticated 3D diffusion models (after Shea et al., 2015, Am Min) of H in ol, opx, and cpx to investigate the timescales of rehydration across a range of conditions relevant for melt-rock interaction and serpentinization of peridotites. Numerical crystals with 1 mm c-axis lengths and realistic crystal morphologies are modeled using recent H diffusivities that account for compositional variation and diffusion anisotropy. Models were run over timescales of minutes to millions of years and temperatures from 300 to 1200°C. Our 3D models show that, at the high-T end of the range, H concentrations in the cores of NAMs are partially re-equilibrated in as little as a few minutes, and completely re-equilibrated within hours to weeks. At low-T (300°C), serpentinization can induce considerable diffusion in cpx and opx. H contents are 30% re-equilibrated after continuous exposure to hydrothermal fluids for 10^2 and 10^5 years, respectively, which is inconsistent with previous interpretations that there is no effect on H in opx under similar conditions. Ol is unaffected after 1 Myr due to the slower diffusivity of the proton-vacancy mechanism at 300°C (2-4 log units lower than for opx). In the middle of the T range (700-1000°C), rehydration of opx and cpx occurs over hours to days, while ol is somewhat slower to respond (days to weeks), potentially allowing the decoupling observed in natural samples to
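
The timescale contrasts reported above follow from the diffusive scaling t ~ L²/D. A schematic 1-D stand-in for the authors' 3-D models (explicit finite differences, rim held at the equilibrium water content, arbitrary diffusivity) shows how core re-equilibration depends on D and exposure time:

```python
import numpy as np

def core_reequilibration(D, half_width, t_total, c_rim=1.0, nx=101):
    """Fraction of re-equilibration at the crystal core after t_total (s)
    for H diffusing into a 1-D slab of given half-width (m) with fixed
    rim concentration; explicit FTCS scheme. A schematic 1-D stand-in
    for the paper's 3-D anisotropic models."""
    dx = half_width / (nx - 1)
    dt = 0.4 * dx * dx / D              # stability: dt <= dx^2 / (2D)
    c = np.zeros(nx)                     # initially "dry" interior
    for _ in range(max(1, int(t_total / dt))):
        c[-1] = c_rim                    # rim pinned at equilibrium value
        c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0] = c[1]                      # symmetry (no-flux) at the core
    return c[0] / c_rim
```

Because re-equilibration time scales as L²/D, the 2-4 log-unit spread in diffusivity between opx and ol at 300°C maps directly onto the 10^2 vs >10^6 yr contrast described in the abstract.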

  11. Present mantle flow in North China Craton constrained by seismic anisotropy and numerical modelling

    Science.gov (United States)

    Qu, W.; Guo, Z.; Zhang, H.; Chen, Y. J.

    2017-12-01

    North China Craton (NCC) has undergone complicated geodynamic processes during the Cenozoic, including the westward subduction of the Pacific plate to its east and the collision of the India-Eurasia plates to its southwest. Shear wave splitting measurements in NCC reveal distinct seismic anisotropy patterns at different tectonic blocks, that is, the predominantly NW-SE trending alignment of fast directions in the western NCC and eastern NCC, weak anisotropy within the Ordos block, and N-S fast polarization beneath the Trans-North China Orogen (TNCO). To better understand the origin of seismic anisotropy from SKS splitting in NCC, we obtain a high-resolution dynamic model that absorbs multi-geophysical observations and state-of-the-art numerical methods. We calculate the mantle flow using an updated version of the software ASPECT (Kronbichler et al., 2012) with high-resolution temperature and density structures from a recent 3-D thermal-chemical model by Guo et al. (2016). The thermal-chemical model is obtained by multi-observable probabilistic inversion using high-quality surface wave measurements, potential fields, topography, and surface heat flow (Guo et al., 2016). The viscosity is then estimated by combining dislocation creep, diffusion creep, and plasticity, and depends on temperature, pressure, and chemical composition. We then calculate the seismic anisotropy from the shear deformation of mantle flow using DREX, and predict the fast direction and delay time of SKS splitting. We find that when complex boundary conditions are applied, including the far-field effects of the deep subduction of the Pacific plate and the eastward escape of the Tibetan Plateau, our model can successfully predict the observed shear wave splitting patterns. Our model indicates that seismic anisotropy revealed by SKS results primarily from the lattice-preferred orientation (LPO) of olivine due to the shear deformation from asthenospheric flow. We suggest that two branches of mantle flow may contribute to the

  12. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ) reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Re), which consists of partial derivatives of Re with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015.  https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017.  https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of [Formula: see text] sub-populations are obtained, and the bounds for optimal solutions are derived for [Formula: see text] sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both R0 and Re are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
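
The gap between heterogeneous- and homogeneous-mixing values of R0 can be reproduced with a next-generation-matrix calculation under proportionate mixing. The activity rates a_i and group fractions n_i below are generic notation for this class of models, not the authors' formulation.

```python
import numpy as np

def r0_proportionate(beta, gamma, activity, n):
    """Spectral radius of the next-generation matrix
    K[i, j] = beta * a[i] * a[j] * n[i] / (gamma * sum(a * n))
    for an SIR meta-population with proportionate mixing."""
    a, n = np.asarray(activity, float), np.asarray(n, float)
    K = beta / gamma * np.outer(a * n, a) / (a * n).sum()
    return max(abs(np.linalg.eigvals(K)))

def re_proportionate(beta, gamma, activity, n, p):
    """Effective reproduction number when a proportion p[i] of each
    sub-group is immune: susceptible fractions scale the rows of K."""
    a, n = np.asarray(activity, float), np.asarray(n, float)
    s = 1.0 - np.asarray(p, float)
    K = beta / gamma * np.outer(a * n * s, a) / (a * n).sum()
    return max(abs(np.linalg.eigvals(K)))
```

With equal activities this collapses to the homogeneous value beta·a/gamma; spreading the same mean activity unevenly across groups raises R0, the kind of effect quantified in the abstract, and the gradient of re_proportionate with respect to p is the object the cited optimization approach works with.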

  13. Modeling of thin-walled structures interacting with acoustic media as constrained two-dimensional continua

    Science.gov (United States)

    Rabinskiy, L. N.; Zhavoronok, S. I.

    2018-04-01

    The transient interaction of acoustic media and elastic shells is considered on the basis of the transition function approach. The three-dimensional hyperbolic initial boundary-value problem is reduced to a two-dimensional problem of shell theory with integral operators approximating the acoustic medium effect on the shell dynamics. The kernels of these integral operators are determined by the elementary solution of the problem of acoustic wave diffraction at a rigid obstacle with the same boundary shape as the wetted shell surface. The closed-form elementary solution for arbitrary convex obstacles can be obtained at the initial interaction stages on the basis of the so-called “thin layer hypothesis”. Thus, the shell–wave interaction model defined by integro-differential dynamic equations with analytically determined kernels of integral operators becomes two-dimensional but nonlocal in time. On the other hand, the initial interaction stage results in localized dynamic loadings and consequently in complex strain and stress states that require higher-order shell theories. Here the modified theory of I. N. Vekua–A. A. Amosov type is formulated in terms of analytical continuum dynamics. The shell model is constructed on a two-dimensional manifold within a set of field variables, Lagrangian density, and constraint equations following from the boundary conditions “shifted” from the shell faces to its base surface. Such an approach allows one to construct consistent low-order shell models within a unified formal hierarchy. The equations of the Nth-order shell theory are singularly perturbed and contain second-order partial derivatives with respect to time and surface coordinates, whereas the numerical integration of systems of first-order equations is more efficient. Such systems can be obtained as Hamilton–de Donder–Weyl-type equations for the Lagrangian dynamical system. The Hamiltonian formulation of the elementary Nth-order shell theory is

  14. Constraining Carbonaceous Aerosol Climate Forcing by Bridging Laboratory, Field and Modeling Studies

    Science.gov (United States)

    Dubey, M. K.; Aiken, A. C.; Liu, S.; Saleh, R.; Cappa, C. D.; Williams, L. R.; Donahue, N. M.; Gorkowski, K.; Ng, N. L.; Mazzoleni, C.; China, S.; Sharma, N.; Yokelson, R. J.; Allan, J. D.; Liu, D.

    2014-12-01

    Biomass and fossil fuel combustion emits black (BC) and brown carbon (BrC) aerosols that absorb sunlight to warm climate and organic carbon (OC) aerosols that scatter sunlight to cool climate. The net forcing depends strongly on the composition, mixing state and transformations of these carbonaceous aerosols. Complexities from large variability of fuel types, combustion conditions and aging processes have confounded their treatment in models. We analyse recent laboratory and field measurements to uncover the fundamental mechanisms that control the chemical, optical and microphysical properties of carbonaceous aerosols, as elaborated below. The wavelength dependence of absorption and the single scattering albedo (ω) of fresh biomass burning aerosols produced from many fuels during FLAME-4 was analysed to determine the factors that control the variability in ω. Results show that ω varies strongly with fire-integrated modified combustion efficiency (MCEFI): higher MCEFI results in lower ω values and greater spectral dependence of ω (Liu et al GRL 2014). A parameterization of ω as a function of MCEFI for fresh BB aerosols is derived from the laboratory data and is evaluated by field data, including BBOP. Our laboratory studies also demonstrate that BrC production correlates with BC, indicating that they are produced by a common mechanism driven by MCEFI (Saleh et al NGeo 2014). We show that BrC absorption is concentrated in the extremely low volatility component that favours long-range transport. We observe substantial absorption enhancement for internally mixed BC from diesel and wood combustion near London during ClearFlo. While the absorption enhancement is due to BC particles coated by co-emitted OC in urban regions, it increases with photochemical age in rural areas and is simulated by core-shell models. We measure BrC absorption that is concentrated in the extremely low volatility components and attribute it to wood burning. Our results support

  15. The Next Generation of Numerical Modeling in Mergers - Constraining the Star Formation Law

    Science.gov (United States)

    Chien, Li-Hsin

    2010-09-01

    Spectacular images of colliding galaxies like the "Antennae", taken with the Hubble Space Telescope, have revealed that a burst of star/cluster formation occurs whenever gas-rich galaxies interact. The ages and locations of these clusters reveal the interaction history and provide crucial clues to the process of star formation in galaxies. We propose to carry out state-of-the-art numerical simulations to model six nearby galaxy mergers (Arp 256, NGC 7469, NGC 4038/39, NGC 520, NGC 2623, NGC 3256), hence increasing the number with this level of sophistication by a factor of 3. These simulations provide specific predictions for the age and spatial distributions of young star clusters. The comparison between these simulation results and the observations will allow us to answer a number of fundamental questions, including: (1) is shock-induced or density-dependent star formation the dominant mechanism; (2) are the demographics (i.e., mass and age distributions) of the clusters in different mergers similar, i.e. "universal", or very different; and (3) will it be necessary to include other mechanisms, e.g., locally triggered star formation, in the models to better match the observations?

  16. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
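The shape mismatch discussed above is easiest to see from the NFW circular-velocity formula itself, which rises steeply at small radii. A minimal sketch follows; the halo parameters (v200, c, r200) are illustrative, not fits to the sample galaxies.

```python
import math

def v_nfw(r_kpc, v200=100.0, c=10.0, r200=100.0):
    """Circular velocity (km/s) of an NFW halo at radius r_kpc,
    using the standard mass profile mu(y) = ln(1+y) - y/(1+y).
    v200, c and r200 are illustrative values, not fitted ones."""
    mu = lambda y: math.log(1.0 + y) - y / (1.0 + y)
    x = r_kpc / r200
    return v200 * math.sqrt(mu(c * x) / (x * mu(c)))

# The NFW curve rises steeply at small radii, unlike the slow,
# nearly linear rise observed in LSB galaxy rotation curves.
print([round(v_nfw(r), 1) for r in (2.0, 5.0, 10.0, 50.0, 100.0)])
```

By construction the curve passes through v200 at r200, so the normalization is fixed and only the inner slope is at issue.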

  17. Constraining SUSY models with Fittino using measurements before, with and beyond the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Desch, Klaus; Uhlenbrock, Mathias; Wienemann, Peter [Bonn Univ. (Germany). Physikalisches Inst.

    2009-07-15

    We investigate the constraints on Supersymmetry (SUSY) arising from available precision measurements using a global fit approach. When interpreted within minimal supergravity (mSUGRA), the data provide significant constraints on the masses of supersymmetric particles (sparticles), which are predicted to be light enough for an early discovery at the Large Hadron Collider (LHC). We provide predicted mass spectra including, for the first time, full uncertainty bands. The most stringent constraint is from the measurement of the anomalous magnetic moment of the muon. Using the results of these fits, we investigate to which precision mSUGRA and more general MSSM parameters can be measured by the LHC experiments with three different integrated luminosities for a parameter point which approximately lies in the region preferred by current data. The impact of the already available measurements on these precisions, when combined with LHC data, is also studied. We develop a method to treat ambiguities arising from different interpretations of the data within one model and provide a way to differentiate between values of different discrete parameters of a model (e.g. sign(μ) within mSUGRA). Finally, we show how measurements at a linear collider with up to 1 TeV centre-of-mass energy will help to improve precision by an order of magnitude. (orig.)

  18. DNA and dispersal models highlight constrained connectivity in a migratory marine megavertebrate

    Science.gov (United States)

    Naro-Maciel, Eugenia; Hart, Kristen M.; Cruciata, Rossana; Putman, Nathan F.

    2016-01-01

    Population structure and spatial distribution are fundamentally important fields within ecology, evolution, and conservation biology. To investigate pan-Atlantic connectivity of globally endangered green turtles (Chelonia mydas) from two National Parks in Florida, USA, we applied a multidisciplinary approach comparing genetic analysis and ocean circulation modeling. The Everglades (EP) is a juvenile feeding ground, whereas the Dry Tortugas (DT) is used for courtship, breeding, and feeding by adults and juveniles. We sequenced two mitochondrial segments from 138 turtles sampled there from 2006-2015, and simulated oceanic transport to estimate their origins. Genetic and ocean connectivity data revealed northwestern Atlantic rookeries as the major natal sources, while southern and eastern Atlantic contributions were negligible. However, specific rookery estimates differed between genetic and ocean transport models. The combined analyses suggest that post-hatchling drift via ocean currents poorly explains the distribution of neritic juveniles and adults, but juvenile natal homing and population history likely play important roles. DT and EP were genetically similar to feeding grounds along the southern US coast, but highly differentiated from most other Atlantic groups. Despite expanded mitogenomic analysis and correspondingly increased ability to detect genetic variation, no significant differentiation between DT and EP, or among years, sexes or stages was observed. This first genetic analysis of a North Atlantic green turtle courtship area provides rare data supporting local movements and male philopatry. The study highlights the applications of multidisciplinary approaches for ecological research and conservation.

  19. Modelling and Evaluation of Aircraft Emissions. Final report

    International Nuclear Information System (INIS)

    Savola, M.

    1996-01-01

    An application was developed to calculate the emissions and fuel consumption of a jet and turboprop powered aircraft in Finnair's scheduled and charter traffic both globally and in the Finnish flight information regions. The emissions calculated are nitrogen oxides, unburnt hydrocarbons and carbon monoxide. The study is based on traffic statistics of one week taken from three scheduled periods in 1993. Each flight was studied by dividing the flight profile into sections. The flight profile data are based on aircraft manufacturers' manuals, and they serve as initial data for engine manufacturers' emission calculation programs. In addition, the study includes separate calculations on air traffic emissions at airports during the so-called LTO cycle. The fuel consumption calculated for individual flights is 419,395 tonnes globally, and 146,142 tonnes in the Finnish flight information regions. According to Finnair's statistics the global fuel consumption is 0.97-fold compared with the result given by the model. The results indicate that in 1993 the global nitrogen oxide emissions amounted to 5,934 tonnes, the unburnt hydrocarbon emissions totalled 496 tonnes and carbon monoxide emissions 1,664 tonnes. The corresponding emissions in the Finnish flight information regions were as follows: nitrogen oxides 2,105 tonnes, unburnt hydrocarbons 177 tonnes and carbon monoxide 693 tonnes. (orig.)
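The segment-wise bookkeeping described above (fuel burn times an engine emission index per flight-profile section) can be sketched as follows; the profile and emission indices are invented for illustration, not values from the Finnair study.

```python
# Hypothetical flight profile: (phase, fuel burned [kg],
# NOx emission index [g NOx per kg fuel]); all values invented.
segments = [
    ("taxi",     150.0,  4.5),
    ("takeoff",   80.0, 35.0),
    ("climb",    900.0, 22.0),
    ("cruise",  3200.0, 11.5),
    ("descent",  300.0,  8.0),
]

fuel_total = sum(f for _, f, _ in segments)
nox_kg = sum(f * ei for _, f, ei in segments) / 1000.0  # g -> kg
print(f"fuel {fuel_total:.0f} kg, NOx {nox_kg:.1f} kg")
```

Summing such per-flight totals over a week of traffic statistics, then scaling to a year, is the essence of the inventory approach described in the abstract.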

  20. Modelling Emission of Pollutants from transportation using mobile sensing data

    DEFF Research Database (Denmark)

    Lehmann, Anders

    The advent and the proliferation of the smartphone have promised new possibilities for researchers to gain knowledge about the habits and behaviour of people, as the ubiquitous smartphone with an array of sensors is capable of delivering a wealth of information. This dissertation addresses methods... to use data acquired from smartphones to improve transportation related air quality models and models for climate gas emission from transportation. These models can be used for planning of transportation networks, monitoring of air quality, and automating transport related green accounting. More... database implementations are a subfield of computer science. I have worked to bring these diverse research fields together to solve the challenge of improving modelling of transportation related air quality emissions as well as modelling of transportation related climate gas emissions. The main...

  1. Modeling of the Inner Coma of Comet 67P/Churyumov-Gerasimenko Constrained by VIRTIS and ROSINA Observations

    Science.gov (United States)

    Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.

    2015-12-01

    As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable a good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question regarding the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of the squared residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundred kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling juxtaposition of the measurements from these two instruments. Acknowledgements: Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, NASA for supporting this research.
VIRTIS was built
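The least-squares step described above (minimizing residuals between an analytical coma model and DFMS data) can be illustrated with a closed-form two-parameter fit. The real analysis uses a spherical-harmonic basis and actual DFMS samples; the model form q(θ) = a + b·cos(θ) and the numbers below are invented for illustration.

```python
import math

# Fit q(theta) = a + b*cos(theta) to synthetic "DFMS-like" samples of
# outgassing versus solar zenith angle theta, by ordinary least squares
# (closed-form normal equations for a two-parameter linear model).
data = [(math.radians(d), q) for d, q in
        [(0, 2.9), (30, 2.6), (60, 1.6), (90, 1.1), (120, 0.4)]]

n = len(data)
sx = sum(math.cos(t) for t, _ in data)
sy = sum(q for _, q in data)
sxx = sum(math.cos(t) ** 2 for t, _ in data)
sxy = sum(math.cos(t) * q for t, q in data)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
print(f"a = {a:.2f}, b = {b:.2f}")  # b > 0: stronger outgassing on the day side
```

A spherical-harmonic fit generalizes this to many basis functions on the sphere, but the residual-minimization machinery is the same.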

  2. Dynamical phase diagrams of a love capacity constrained prey-predator model

    Science.gov (United States)

    Simin, P. Toranj; Jafari, Gholam Reza; Ausloos, Marcel; Caiafa, Cesar Federico; Caram, Facundo; Sonubi, Adeyemi; Arcagni, Alberto; Stefani, Silvana

    2018-02-01

    One interesting question in love relationships is: what, and when, is the end of the relationship? Using a prey-predator Verhulst-Lotka-Volterra (VLV) model, we incorporate cooperation and competition tendencies between people in order to describe a "love dilemma game". We select the simplest, yet already complex, case for studying the set of nonlinear differential equations, i.e. that involving three persons, each being at the same time prey and predator. We describe four different scenarios in such a love game, containing either a one-way love or a love triangle. Our results show that it is hard to love more than one person simultaneously; moreover, loving several people simultaneously is an unstable state. We find conditions in which persons tend to have a friendly relationship and love someone in spite of their antagonistic interaction. We demonstrate the dynamics by displaying flow diagrams.
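A minimal numerical sketch of such a three-person VLV system is below; the coupling matrix and rates are invented for illustration, not the paper's coefficients.

```python
# Explicit-Euler integration of a three-agent Verhulst-Lotka-Volterra
# system: each "feeling" x_i relaxes logistically (Verhulst term) and
# is pushed by prey-predator couplings A[i][j] from the other two.
def step(x, A, r, dt=0.01):
    return [xi + dt * xi * (r[i] * (1.0 - xi) +
                            sum(A[i][j] * x[j] for j in range(3) if j != i))
            for i, xi in enumerate(x)]

r = [1.0, 0.8, 0.9]                    # intrinsic Verhulst rates (invented)
A = [[0.0, 0.5, -0.6],                 # invented coupling matrix: each agent
     [-0.4, 0.0, 0.3],                 # is prey to one neighbor, predator
     [0.5, -0.2, 0.0]]                 # to the other
x = [0.1, 0.2, 0.15]                   # initial feelings
for _ in range(5000):                  # integrate to t = 50
    x = step(x, A, r)
print([round(v, 3) for v in x])
```

Varying the signs in A switches between the one-way-love and love-triangle scenarios; plotting trajectories for different initial conditions gives the flow diagrams mentioned above.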

  3. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    Science.gov (United States)

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. According to the coupled states in the overall cost function of centralized MPC, the neighbors are confirmed and fixed for each subsystem, and the overall objective function is disassembled into each local optimization. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint for the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Nonfragile Robust Model Predictive Control for Uncertain Constrained Systems with Time-Delay Compensation

    Directory of Open Access Journals (Sweden)

    Wei Jiang

    2016-01-01

    This study investigates the problem of asymptotic stabilization for a class of discrete-time linear uncertain time-delayed systems with input constraints. Parametric uncertainty is assumed to be structured, and the delay is assumed to be known. In the Lyapunov stability theory framework, two synthesis schemes for designing nonfragile robust model predictive control (RMPC) with time-delay compensation are put forward, where the additive and the multiplicative gain perturbations are, respectively, considered. First, by designing appropriate Lyapunov-Krasovskii (L-K) functions, the robust performance index is defined as optimization problems that minimize upper bounds of the infinite horizon cost function. Then, to guarantee closed-loop stability, the sufficient conditions for the existence of the desired nonfragile RMPC are obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approaches.

  5. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) Model - I: building an emissions data base

    Directory of Open Access Journals (Sweden)

    S. F. Mueller

    2010-05-01

    A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates non-methane volatile organic compound (NMVOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, windblown dust particulate, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (NMVOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the

  6. Short-Term Power Plant GHG Emissions Forecasting Model

    International Nuclear Information System (INIS)

    Vidovic, D.

    2016-01-01

    In 2010, the share of greenhouse gas (GHG) emissions from power generation in the total emissions at the global level was about 25 percent. From 1 January 2013, Croatian facilities have been included in the European Union Emissions Trading System (EU ETS). The share of the ETS sector in total GHG emissions in Croatia in 2012 was about 30 percent, of which power plants and heat generation facilities contributed almost 50 percent. Since 2013 power plants have been obliged to purchase all emission allowances. The paper describes a short-term forecasting model of greenhouse gas emissions from power plants while they cover the daily load diagram of the system. Forecasting is done on an hourly basis, typically for one day ahead, though forecasts several days ahead are also possible. Forecasting GHG emissions in this way would enable power plant operators to purchase additional, or sell surplus, allowances on the market in good time. An example that describes the operation of the above-mentioned forecasting model is given at the end of the paper. (author).

  7. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  8. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    Science.gov (United States)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for the development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. 
The results of the calibration and the cross validation suggest that
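The accept/reject core of such an MCMC calibration can be sketched on a toy one-parameter flux model; the real sqHIMMELI calibration is adaptive and hierarchical over many parameters, and the data below are synthetic.

```python
import math, random

random.seed(0)

# Toy Metropolis calibration of a one-parameter flux model
# flux = k * substrate against synthetic observations.
substrate = [1.0, 2.0, 3.0, 4.0]
obs = [2.1, 3.9, 6.2, 7.8]            # synthetic data, true k near 2
sigma = 0.3                           # assumed observation error

def log_post(k):                      # Gaussian likelihood, flat prior on k > 0
    if k <= 0.0:
        return -math.inf
    return -sum((o - k * s) ** 2
                for s, o in zip(substrate, obs)) / (2.0 * sigma ** 2)

k, lp = 1.0, log_post(1.0)
chain = []
for _ in range(20000):
    kp = k + random.gauss(0.0, 0.1)   # fixed-width proposal (adaptive MCMC
    lpp = log_post(kp)                # would tune this from the chain)
    if math.log(random.random()) < lpp - lp:
        k, lp = kp, lpp               # accept; otherwise keep current k
    chain.append(k)

post = chain[5000:]                   # discard burn-in
print(round(sum(post) / len(post), 2))
```

The posterior samples in `post` play the role of the parameter posterior densities discussed above: their spread quantifies how well the data constrain the process parameter.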

  9. Water-Constrained Electric Sector Capacity Expansion Modeling Under Climate Change Scenarios

    Science.gov (United States)

    Cohen, S. M.; Macknick, J.; Miara, A.; Vorosmarty, C. J.; Averyt, K.; Meldrum, J.; Corsi, F.; Prousevitch, A.; Rangwala, I.

    2015-12-01

    Over 80% of U.S. electricity generation uses a thermoelectric process, which requires significant quantities of water for power plant cooling. This water requirement exposes the electric sector to vulnerabilities related to shifts in water availability driven by climate change as well as reductions in power plant efficiencies. Electricity demand is also sensitive to climate change, which in most of the United States leads to warming temperatures that increase total cooling-degree days. The resulting demand increase is typically greater for peak demand periods. This work examines the sensitivity of the development and operations of the U.S. electric sector to the impacts of climate change using an electric sector capacity expansion model that endogenously represents seasonal and local water resource availability as well as climate impacts on water availability, electricity demand, and electricity system performance. Capacity expansion portfolios and water resource implications from 2010 to 2050 are shown at high spatial resolution under a series of climate scenarios. Results demonstrate the importance of water availability for future electric sector capacity planning and operations, especially under more extreme hotter and drier climate scenarios. In addition, region-specific changes in electricity demand and water resources require region-specific responses that depend on local renewable resource availability and electricity market conditions. Climate change and the associated impacts on water availability and temperature can affect the types of power plants that are built, their location, and their impact on regional water resources.

  10. A transition-constrained discrete hidden Markov model for automatic sleep staging

    Directory of Open Access Journals (Sweden)

    Pan Shing-Tai

    2012-08-01

    Background: Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings, including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable. Method: The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized, including temporal and spectral analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, 2-fold cross-validation was performed during this experiment. Results: Overall agreement between the expert and the results presented is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurately classified stage was SWS (94.9%), and the least-accurately classified stage was S1. Conclusion: The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies.
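The transition-constrained decoding idea can be sketched with a small Viterbi decoder in which disallowed stage transitions receive a vanishing probability; the stages, probabilities, and "hi"/"lo" feature below are invented, not the trained model.

```python
import math

# Tiny Viterbi decoder where forbidden stage transitions (absent
# from `trans`) get near-zero probability. All numbers invented.
states = ["Wake", "S1", "S2"]
trans = {("Wake", "Wake"): .7, ("Wake", "S1"): .3,
         ("S1", "S1"): .5, ("S1", "S2"): .4, ("S1", "Wake"): .1,
         ("S2", "S2"): .8, ("S2", "S1"): .2}   # no direct Wake -> S2
emit = {"Wake": {"hi": .8, "lo": .2},
        "S1":   {"hi": .5, "lo": .5},
        "S2":   {"hi": .1, "lo": .9}}          # "hi"/"lo" EMG activity

def viterbi(obs):
    v = {s: math.log(1.0 / 3) + math.log(emit[s][obs[0]]) for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        v2, p2 = {}, {}
        for s in states:
            # best predecessor; forbidden transitions get log(1e-12)
            best = max(states, key=lambda p: v[p] +
                       math.log(trans.get((p, s), 1e-12)))
            v2[s] = (v[best] + math.log(trans.get((best, s), 1e-12))
                     + math.log(emit[s][o]))
            p2[s] = path[best] + [s]
        v, path = v2, p2
    return path[max(states, key=v.get)]

seq = viterbi(["hi", "hi", "lo", "lo", "lo"])
print(seq)
```

Because forbidden transitions carry an enormous log-penalty, the decoded stage sequence can only use physiologically allowed transitions, which is the constraint the paper builds into the DHMM.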

  11. Constraining the Physics of AM Canum Venaticorum Systems with the Accretion Disk Instability Model

    Science.gov (United States)

    Cannizzo, John K.; Nelemans, Gijs

    2015-01-01

    Recent work by Levitan et al. has expanded the long-term photometric database for AM CVn stars. In particular, their outburst properties are well correlated with orbital period and allow constraints to be placed on the secular mass transfer rate between secondary and primary if one adopts the disk instability model for the outbursts. We use the observed range of outbursting behavior for AM CVn systems as a function of orbital period to place a constraint on mass transfer rate versus orbital period. We infer a rate of approximately 5 × 10^-9 M_sun yr^-1 (P_orb/1000 s)^-5.2. We show that the functional form so obtained is consistent with the recurrence time-orbital period relation found by Levitan et al. using a simple theory for the recurrence time. Also, we predict that their steep dependence of outburst duration on orbital period will flatten considerably once the longer orbital period systems have more complete observations.
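The quoted mass-transfer relation is simple enough to evaluate directly:

```python
def mdot(porb_s):
    """Secular mass-transfer rate (solar masses per year) from the
    relation quoted above: 5e-9 * (P_orb / 1000 s) ** -5.2."""
    return 5e-9 * (porb_s / 1000.0) ** -5.2

# Shorter-period (more compact) systems transfer mass much faster.
for p in (600.0, 1000.0, 1500.0):
    print(p, f"{mdot(p):.2e}")
```

The steep -5.2 exponent is what drives the strong dependence of outburst behavior on orbital period discussed above.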

  12. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    Science.gov (United States)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations that are available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" to extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions, combined with extremely heterogeneous land-use, that results in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. Also, on a regional scale, hydrological modelling benefitted greatly from constraining parameters using such distributed citizen-science generated streamflow data. This study demonstrates regional hydrological modelling and prediction that integrate citizen science with a land surface model. In data-scarce contexts, this framework can indeed help overcome the limitations of sparse observations. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.

  13. Numerical modeling of nitrogen oxide emission and experimental verification

    Directory of Open Access Journals (Sweden)

    Szecowka Lech

    2003-12-01

    The results of nitrogen oxide reduction in the combustion process with application of primary methods are presented in the paper. The reduction of NOx emission by the recirculation of combustion gases, staging of fuel and staging of air was investigated, followed by the reduction of NOx emission by simultaneous use of the above-mentioned primary methods with pulsatory disturbances. The investigations comprise numerical modeling of NOx reduction and experimental verification of the obtained numerical results.

  14. Effect of GPS errors on Emission model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    In this paper we will show how Global Positioning Services (GPS) data obtained from smartphones can be used to model air quality in urban settings. The paper examines the uncertainty of smartphone location utilising GPS, and ties this location uncertainty to air quality models. The results presented... in this paper indicate that the location error from using smartphones is within the accuracy needed to use the location data in air quality modelling. The nature of smartphone location data enables more accurate and near real time air quality modelling and monitoring. The location data is harvested from user...
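A quick plausibility check of that claim, comparing a GPS offset against model grid resolution, can be sketched with a haversine distance; the coordinates and the 100 m cell size are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2 +
         math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2.0 * r * math.asin(math.sqrt(a))

true_pos = (56.1629, 10.2039)   # illustrative urban location
gps_pos = (56.1630, 10.2041)    # offset of roughly one GPS error
err = haversine_m(*true_pos, *gps_pos)
print(f"GPS error {err:.0f} m; within a 100 m grid cell: {err < 100.0}")
```

An offset of this magnitude stays inside a typical urban model grid cell, which is the sense in which smartphone location error can be "within the accuracy needed".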

  15. New Approach in Modelling Indonesian Peat Fire Emission

    Science.gov (United States)

    Putra, E. I.; Cochrane, M. A.; Saharjo, B.; Yokelson, R. J.; Stockwell, C.; Vetrita, Y.; Zhang, X.; Hagen, S. C.; Nurhayati, A. D.; Graham, L.

    2017-12-01

    Peat fires are a serious problem for Indonesia, producing devastating environmental effects and making the country the 3rd largest emitter of CO2. Extensive fires ravaged vast areas of peatlands in Sumatra, Kalimantan and Papua during the pronounced El Niño of 2015, causing international concern when the resultant haze blanketed Indonesia and neighboring countries, severely impacting the health of millions of people. Our recent unprecedented in-situ studies of aerosol and gas emissions from 35 peat fires of varying depths near Palangka Raya, Central Kalimantan have documented the range and variability of emissions from these major fires. We strongly suggest revisions to the previously recommended IPCC emission factors (EFs) for peat fires, notably: CO2 (-8%), CH4 (-55%), NH3 (-86%), and CO (+39%). Our findings clearly showed that Indonesian carbon-equivalent emissions (100-year basis) might have been 19% less than what current IPCC emission factors indicate. The results also demonstrate the toxic air quality in the area, with HCN, which is almost exclusively emitted by biomass burning, accounting for 0.28% of emissions and the carcinogenic compound formaldehyde for 0.04%. However, considerable variation in emissions may exist between peat fires of different Indonesian peat formations, illustrating the need for additional regional field emissions measurements for parameterizing peatland emissions models for all of Indonesia's major peatland areas. Through the continuing mutual research collaboration between Indonesian and USA scientists, we will implement our standardized field-based analyses of fuels, hydrology, peat burning characteristics and fire emissions to characterize the three major Indonesian peatland formations across four study provinces (Central Kalimantan, Riau, Jambi and West Papua). We will provide spatial and temporal drivers of the modeled emissions and validate them at a national level using biomass burning emissions estimations derived from Visible
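Applying the quoted percentage revisions to a set of baseline emission factors is straightforward bookkeeping; the baseline values below are placeholders, and only the percentage changes come from the abstract.

```python
# Baseline peat-fire emission factors in g per kg dry fuel burned
# (placeholder values); the fractional revisions are those quoted
# in the abstract: CO2 -8%, CH4 -55%, NH3 -86%, CO +39%.
baseline = {"CO2": 1703.0, "CH4": 20.8, "NH3": 19.9, "CO": 210.0}
revision = {"CO2": -0.08, "CH4": -0.55, "NH3": -0.86, "CO": 0.39}

revised = {sp: round(ef * (1.0 + revision[sp]), 1)
           for sp, ef in baseline.items()}
print(revised)
```

Because CH4 has a much higher 100-year warming potential than CO2, the large downward CH4 revision dominates the ~19% drop in carbon-equivalent emissions mentioned above.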

  16. Distributional aspects of emissions in climate change integrated assessment models

    International Nuclear Information System (INIS)

    Cantore, Nicola

    2011-01-01

    The recent failure of the Copenhagen negotiations shows that concrete actions are needed to create the conditions for a consensus over global emission reduction policies. A wide coalition of countries in international climate change agreements could be facilitated by rich and poor countries' perception that the sharing of abatement at the international level is fair. In this paper I use two popular climate change integrated assessment models to investigate the path of future inequality in the emissions distribution and to decompose its components and sources. Results prove to be consistent with previous empirical studies, are robust to model comparison, and show that gaps in GDP across world regions will still play a crucial role in explaining different countries' contributions to global warming. - Research highlights: → I implement a scenario analysis with two global climate change models. → I analyse inequality in the distribution of emissions. → I decompose emissions inequality components. → I find that GDP per capita is the main Kaya identity source of emissions inequality. → Current rich countries will mostly remain responsible for emissions inequality.

  17. Use of stratigraphic models as soft information to constrain stochastic modeling of rock properties: Development of the GSLIB-Lynx integration module

    International Nuclear Information System (INIS)

    Cromer, M.V.; Rautman, C.A.

    1995-10-01

    Rock properties in volcanic units at Yucca Mountain are controlled largely by relatively deterministic geologic processes related to the emplacement, cooling, and alteration history of the tuffaceous lithologic sequence. Differences in the lithologic character of the rocks have been used to subdivide the rock sequence into stratigraphic units, and the deterministic nature of the processes responsible for the character of the different units can be used to infer the rock material properties likely to exist in unsampled regions. This report proposes a quantitative, theoretically justified method of integrating interpretive geometric models, showing the three-dimensional distribution of different stratigraphic units, with numerical stochastic simulation techniques drawn from geostatistics. This integration of soft, constraining geologic information with hard, quantitative measurements of various material properties can produce geologically reasonable, spatially correlated models of rock properties that are free from stochastic artifacts for use in subsequent physical-process modeling, such as the numerical representation of ground-water flow and radionuclide transport. Prototype modeling conducted using the GSLIB-Lynx Integration Module computer program, known as GLINTMOD, has successfully demonstrated the proposed integration technique. The method involves the selection of stratigraphic-unit-specific material-property expected values that are then used to constrain the probability function from which a material property of interest at an unsampled location is simulated

  18. Nonlinear model dynamics for closed-system, constrained, maximal-entropy-generation relaxation by energy redistribution

    International Nuclear Information System (INIS)

    Beretta, Gian Paolo

    2006-01-01

    We discuss a nonlinear model for relaxation by energy redistribution within an isolated, closed system composed of noninteracting identical particles with energy levels e_i, i = 1, 2, ..., N. The time-dependent occupation probabilities p_i(t) are assumed to obey the nonlinear rate equations τ dp_i/dt = -p_i ln p_i - α(t) p_i - β(t) e_i p_i, where α(t) and β(t) are functionals of the p_i(t) that maintain invariant the mean energy E = Σ_{i=1}^N e_i p_i(t) and the normalization condition 1 = Σ_{i=1}^N p_i(t). The entropy S(t) = -k_B Σ_{i=1}^N p_i(t) ln p_i(t) is a nondecreasing function of time until the initially nonzero occupation probabilities reach a Boltzmann-like canonical distribution over the occupied energy eigenstates. Initially zero occupation probabilities, instead, remain zero at all times. The solutions p_i(t) of the rate equations are unique and well defined for arbitrary initial conditions p_i(0) and for all times. The existence and uniqueness both forward and backward in time allows the reconstruction of the ancestral or primordial lowest entropy state. By casting the rate equations in terms not of the p_i but of their positive square roots √(p_i), they unfold from the assumption that time evolution is at all times along the local direction of steepest entropy ascent or, equivalently, of maximal entropy generation. These rate equations have the same mathematical structure and basic features as the nonlinear dynamical equation proposed in a series of papers ending with G. P. Beretta, Found. Phys. 17, 365 (1987) and recently rediscovered by S. Gheorghiu-Svirschevski [Phys. Rev. A 63, 022105 (2001); 63, 054102 (2001)]. Numerical results illustrate the features of the dynamics and the differences from the rate equations recently considered for the same problem by M. Lemanska and Z. Jaeger [Physica D 170, 72 (2002)]. We also interpret the functionals k_B α(t) and k_B β(t) as nonequilibrium generalizations of the thermodynamic-equilibrium Massieu
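
    The constrained relaxation above can be integrated directly: at each step, α(t) and β(t) follow from requiring that the normalization and the mean energy stay invariant, which gives a 2×2 linear system. A minimal numerical sketch (illustrative energy levels and initial distribution, not taken from the paper):

```python
import math

def relax(p, e, tau=1.0, dt=1e-3, steps=4000):
    """Integrate tau*dp_i/dt = -p_i ln p_i - alpha*p_i - beta*e_i*p_i,
    choosing alpha, beta each step so sum(p) and sum(e*p) stay invariant."""
    p = list(p)
    for _ in range(steps):
        slnp = sum(pi * math.log(pi) for pi in p)                 # sum p ln p
        elnp = sum(ei * pi * math.log(pi) for ei, pi in zip(e, p))
        P = sum(p)
        E = sum(ei * pi for ei, pi in zip(e, p))
        M2 = alt = sum(ei * ei * pi for ei, pi in zip(e, p))
        # Constraints give:  alpha*P + beta*E = -slnp
        #                    alpha*E + beta*M2 = -elnp
        det = P * M2 - E * E
        alpha = (-slnp * M2 + elnp * E) / det
        beta = (-elnp * P + slnp * E) / det
        p = [pi + dt / tau * (-pi * math.log(pi) - alpha * pi - beta * ei * pi)
             for ei, pi in zip(e, p)]
    return p

def entropy(p):
    """Entropy in units of k_B."""
    return -sum(pi * math.log(pi) for pi in p)
```

    Because the right-hand side satisfies the two linear invariants exactly, even a simple Euler step preserves normalization and mean energy to floating-point accuracy, while the entropy grows monotonically toward the canonical distribution.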

  19. Modeling the effects of atmospheric emissions on groundwater composition

    International Nuclear Information System (INIS)

    Brown, T.J.

    1994-01-01

    A composite model of atmospheric, unsaturated and groundwater transport is developed to evaluate the processes determining the distribution of atmospherically derived contaminants in groundwater systems and to test the sensitivity of simulated contaminant concentrations to input parameters and model linkages. One application is to screen specific atmospheric emissions for their potential in determining groundwater age. Temporal changes in atmospheric emissions could provide a recognizable pattern in the groundwater system. The model also provides a way for quantifying the significance of uncertainties in the tracer source term and transport parameters on the contaminant distribution in the groundwater system, an essential step in using the distribution of contaminants from local, point source atmospheric emissions to examine conceptual models of groundwater flow and transport

  20. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement at high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model that is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data.
Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone

  1. A comparative analysis of several vehicle emission models for road freight transportation

    NARCIS (Netherlands)

    Demir, E.; Bektas, T.; Laporte, G.

    2011-01-01

    Reducing greenhouse gas emissions in freight transportation requires using appropriate emission models in the planning process. This paper reviews and numerically compares several available freight transportation vehicle emission models and also considers their outputs in relation to field studies.

  2. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    Energy Technology Data Exchange (ETDEWEB)

    Storm, Emma; Weniger, Christoph [GRAPPA, Institute of Physics, University of Amsterdam, Science Park 904, 1090 GL Amsterdam (Netherlands); Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr [LAPTh, CNRS, 9 Chemin de Bellevue, BP-110, Annecy-le-Vieux, 74941, Annecy Cedex (France)

    2017-08-01

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data and on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
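
    The core of such template fitting is Poisson likelihood regression: counts in each pixel are modeled as a linear combination of fixed spatial templates, and the coefficients are adjusted to maximize the Poisson likelihood. The sketch below is a toy two-template stand-in for SkyFACT's large-scale L-BFGS-B optimization, using multiplicative (EM-style) updates and made-up template values:

```python
def fit_templates(counts, t1, t2, iters=3000):
    """Maximize the Poisson likelihood of mu_i = a*t1_i + b*t2_i
    using multiplicative (Richardson-Lucy style) updates, which are
    monotone for the Poisson negative log-likelihood."""
    a, b = 1.0, 1.0
    for _ in range(iters):
        mu = [a * x + b * y for x, y in zip(t1, t2)]
        a *= sum(x * n / m for x, n, m in zip(t1, counts, mu)) / sum(t1)
        b *= sum(y * n / m for y, n, m in zip(t2, counts, mu)) / sum(t2)
    return a, b
```

    With noise-free counts generated from known coefficients, the updates recover them; SkyFACT replaces the two global coefficients with per-pixel, per-energy modulation parameters plus regularization.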

  3. Symbiotic star UV emission and theoretical models

    International Nuclear Information System (INIS)

    Kafatos, M.

    1982-01-01

    Observations of symbiotic stars in the far UV have provided important information on the nature of these objects. The canonical spectrum of a symbiotic star, e.g. RW Hya, Z And, AG Peg, is dominated by strong allowed and semiforbidden lines of a variety of at least twice ionized elements. Weaker emission from neutral and singly ionized species is also present. A continuum may or may not be present in the 1200 - 2000 A range but is generally present in the 2000 - 3200 A range. The suspected hot subdwarf continuum is seen in some cases in the range 1200 - 2000 A (RW Hya, AG Peg, SY Mus). The presence of an accretion disk is difficult to demonstrate, and to date the best candidate for accretion onto a main sequence star remains CI Cyg. A number of equations have been derived by the author that can yield the accretion parameters from the observable quantities. Boundary layer temperatures of ~10^5 K and accretion rates ≳10^-5 solar masses/yr are required for accreting main sequence companions. To date, though, most of the symbiotics may only require the presence of a ~10^5 K hot subdwarf. (Auth.)

  4. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    Science.gov (United States)

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in surface status monitoring. This study aims to evaluate four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside the canopy, which improved the performance of the gap-frequency-based models.

  5. Geologic modeling constrained by seismic and dynamical data; Modelisation geologique contrainte par les donnees sismiques et dynamiques

    Energy Technology Data Exchange (ETDEWEB)

    Pianelo, L.

    2001-09-01

    Matching procedures are often used in reservoir production to improve geological models. In reservoir engineering, history matching updates petrophysical parameters in fluid flow simulators to fit the results of the calculations to observed data. In the same line, seismic parameters are inverted to allow the numerical recovery of seismic acquisitions. However, it is well known that these inverse problems are poorly constrained. The idea of this work is to simultaneously match both the permeability and the acoustic impedance of the reservoir, for an enhancement of the resulting geological model. To do so, both parameters are linked using observed relations and/or the classic Wyllie (porosity-impedance) and Carman-Kozeny (porosity-permeability) relationships. Hence production data are added to the seismic match, and seismic observations are used for the permeability recovery. The work consists of developing numerical prototypes of a 3-D fluid flow simulator and a 3-D seismic acquisition simulator, and then implementing the coupled inversion loop of the permeability and the acoustic impedance of the two models. We then test our approach on a realistic 3-D case. Comparison of the coupled matching with the two classical ones demonstrates the efficiency of our method: we significantly reduce the number of possible solutions, and thus the number of scenarios. Moreover, the additional information naturally improves the resulting models, especially in the spatial localization of the permeability contrasts. The improvement is significant, both in the distribution of the two inverted parameters and in the speed of the procedure. This work is an important step toward data integration and leads to a better reservoir characterization. This original algorithm could also be useful in reservoir monitoring, history matching and in optimization of production. This new and original method is patented and

  6. Modelling of mid-infrared interferometric signature of hot exozodiacal dust emission

    Science.gov (United States)

    Kirchschlager, Florian; Wolf, Sebastian; Brunngräber, Robert; Matter, Alexis; Krivov, Alexander V.; Labdon, Aaron

    2018-01-01

    Hot exozodiacal dust emission was detected in recent surveys around two dozen main-sequence stars at distances of less than 1 au using H- and K-band interferometry. Due to the high contrast as well as the small angular distance between the circumstellar dust and the star, direct observation of this dust component is challenging. An alternative way to explore the hot exozodiacal dust is provided by mid-infrared interferometry. We analyse the L-, M- and N-band interferometric signature of this emission in order to find stronger constraints for the properties and the origin of the hot exozodiacal dust. Considering the parameters of nine debris disc systems derived previously, we model the discs in each of these bands. We find that the M band offers the best conditions to detect hot dust emission, closely followed by the L and N bands. The hot dust in three systems - HD 22484 (10 Tau), HD 102647 (β Leo) and HD 177724 (ζ Aql) - shows a strong signal in the visibility functions, which may even allow one to constrain the dust location. In particular, observations in the mid-infrared could help to determine whether the dust piles up at the sublimation radius or is located at radii up to 1 au. In addition, we explore observations of the hot exozodiacal dust with the upcoming mid-infrared interferometer Multi AperTure mid-Infrared SpectroScopic Experiment (MATISSE) at the Very Large Telescope Interferometer.

  7. Multiplatform inversion of the 2013 Rim Fire smoke emissions using regional-scale modeling: important nocturnal fire activity, air quality, and climate impacts

    Science.gov (United States)

    Saide, P. E.; Peterson, D. A.; da Silva, A. M., Jr.; Ziemba, L. D.; Anderson, B.; Diskin, G. S.; Sachse, G. W.; Hair, J. W.; Butler, C. F.; Fenn, M. A.; Jimenez, J. L.; Campuzano Jost, P.; Dibb, J. E.; Yokelson, R. J.; Toon, B.; Carmichael, G. R.

    2014-12-01

    Large wildfire events are increasingly recognized for their adverse effects on air quality and visibility, thus providing motivation for improving smoke emission estimates. The Rim Fire, one of the largest events in California's history, produced a large smoke plume that was sampled by the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) DC-8 aircraft with a full suite of in-situ and remote sensing measurements on 26-27 August 2013. We developed an inversion methodology that uses the WRF-Chem modeling system to constrain hourly fire emissions, using as initial estimates the NASA Quick Fire Emissions Dataset (QFED). This method differs from the commonly performed top-down estimates that constrain daily (or longer time scale) emissions. The inversion method is able to simultaneously improve the model fit to various SEAC4RS airborne measurements (e.g., organic aerosol, carbon monoxide (CO), aerosol extinction), ground-based measurements (e.g., AERONET aerosol optical depth (AOD), CO), and satellite data (MODIS AOD) by modifying fire emissions and utilizing the information content of all these measurements. Preliminary results show that constrained emissions for a 6-day period following the largest fire growth are a factor of 2-4 higher than the initial top-down estimates. Moreover, there is a tendency to increase nocturnal emissions by factors sometimes larger than 20, indicating that vigorous fire activity continued during the night. This deviation from a typical diurnal cycle is confirmed using geostationary satellite data. The constrained emissions also have a larger day-to-day variability than the initial emissions and correlate better with daily area burned estimates as observed by airborne infrared measurements (NIROPS). Experiments with the assimilation system show that performing the inversion using only satellite AOD data produces much smaller correction factors than when using all available data
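
    At its simplest, a top-down inversion rescales a prior (bottom-up) emission estimate so that the modeled enhancements best match the observations. A one-parameter least-squares version of that idea (illustrative numbers only, not the SEAC4RS data; the full method solves for many hourly factors with regularization):

```python
def optimal_scale(modeled, observed):
    """Least-squares scaling factor s minimizing sum((obs - s*model)^2),
    i.e. s = sum(model*obs) / sum(model^2)."""
    return (sum(m * o for m, o in zip(modeled, observed))
            / sum(m * m for m in modeled))
```

    If the modeled enhancements are systematically one third of the observed ones, the recovered factor is 3, analogous to the factor 2-4 corrections reported above.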

  8. Application of Recent Advances in Forward Modeling of Emissions from Boreal and Temperate Wildfires to Real-time Forecasting of Aerosol and Trace Gas Concentrations

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.; Kasischke, E. S.; Allen, D. J.

    2005-12-01

    The magnitude of trace gas and aerosol emissions from wildfires is a scientific problem with important implications for atmospheric composition, and is also integral to understanding carbon cycling in terrestrial ecosystems. Recent ecological research on modeling wildfire emissions has integrated theoretical advances derived from ecological fieldwork with improved spatial and temporal databases to produce "post facto" estimates of emissions with high spatial and temporal resolution. These advances have been shown to improve agreement with atmospheric observations at coarse scales, and can in principle also be applied at finer scales in applications such as forecasting. However, several of the approaches employed in these forward models are incompatible with the requirements of real-time forecasting, requiring modification of data inputs and calculation methods. Because of the differences in data inputs used for real-time and "post facto" emissions modeling, the key uncertainties in the forward problem are not necessarily the same for these two applications. However, adaptation of these advances in forward modeling to forecasting applications has the potential to improve air quality forecasts, and also to provide a large body of experimental data that can be used to constrain crucial uncertainties in current conceptual models of wildfire emissions. This talk describes a forward modeling method developed at the University of Maryland and its application to the Fire Locating and Modeling of Burning Emissions (FLAMBE) system at the Naval Research Laboratory. Methods for applying the outputs of the NRL aerosol forecasting system to the inverse problem of constraining emissions will also be discussed. The system described can use the feedback supplied by atmospheric observations to improve the emissions source description in the forecasting model, and can also be used for hypothesis testing regarding fire behavior and data inputs.

  9. Establishing a regulatory value chain model: An innovative approach to strengthening medicines regulatory systems in resource-constrained settings.

    Science.gov (United States)

    Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin

    2016-05-01

    Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.

  10. An Interval Fuzzy-Stochastic Chance-Constrained Programming Based Energy-Water Nexus Model for Planning Electric Power Systems

    Directory of Open Access Journals (Sweden)

    Jing Liu

    2017-11-01

    In this study, an interval fuzzy-stochastic chance-constrained programming based energy-water nexus (IFSCP-WEN) model is developed for planning electric power systems (EPS). The IFSCP-WEN model can tackle uncertainties expressed as possibility and probability distributions, as well as interval values. Different credibility (i.e., γ) levels and probability (i.e., q_i) levels are set to reflect relationships among water supply, electricity generation, system cost, and constraint-violation risk. Results reveal that different γ and q_i levels can lead to changed system cost, imported electricity, electricity generation, and water supply. Results also disclose that the studied EPS would tend to transition from coal-dominated to clean-energy-dominated generation. Gas-fired generation would be the main electric utility supplying electricity at the end of the planning horizon, occupying [28.47, 30.34]% (where 28.47% and 30.34% represent the lower and upper bounds of the interval value, respectively) of the total electricity generation. Correspondingly, water allocated to gas-fired generation would reach the highest share, occupying [33.92, 34.72]% of the total water supply. Surface water would be the main water source, accounting for more than [40.96, 43.44]% of the total water supply. The ratio of recycled water to total water supply would increase by about [11.37, 14.85]%. Results of the IFSCP-WEN model demonstrate its potential for sustainable EPS planning by co-optimizing energy and water resources.
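
    A chance constraint of the form "supply must meet demand with probability q" is typically handled through its deterministic equivalent: for normally distributed demand D ~ N(μ, σ²), P(x ≥ D) ≥ q becomes x ≥ μ + z_q·σ. A minimal stdlib sketch of that generic reformulation (hypothetical demand parameters, not the paper's EPS data):

```python
from statistics import NormalDist

def required_capacity(mu, sigma, q):
    """Deterministic equivalent of the chance constraint P(x >= D) >= q
    for normally distributed demand D ~ N(mu, sigma^2): the minimum
    capacity x is mu plus the q-quantile of the standard normal times sigma."""
    return mu + NormalDist().inv_cdf(q) * sigma
```

    Tighter reliability levels demand more capacity, which is why the q_i levels above trade off system cost against constraint-violation risk.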

  11. Constraining drivers of basin exhumation in the Molasse Basin by combining low-temperature thermochronology, thermal history and kinematic modeling

    Science.gov (United States)

    Luijendijk, Elco; von Hagke, Christoph; Hindle, David

    2017-04-01

    Due to a wealth of geological and thermochronology data, the northern foreland basin of the European Alps is an ideal natural laboratory for understanding the dynamics of foreland basins and their interaction with surface and geodynamic processes. The northern foreland basin of the Alps has been exhumed since the Miocene. The timing, rate and cause of this phase of exhumation are still enigmatic. We compile all available thermochronology and organic maturity data and use a new thermal history model, PyBasin, to quantify the rate and timing of exhumation that can explain these data. In addition, we quantify the amount of tectonic exhumation using a new kinematic model for the part of the basin that is passively moved above the detachment of the Jura Mountains. Our results show that the vitrinite reflectance, apatite fission track data and cooling rates show no clear difference between the thrusted and folded part of the foreland basin and the undeformed part of the foreland basin. The undeformed Plateau Molasse shows substantial Neogene cooling of 40 to 100 °C, equivalent to >1.0 km of exhumation. Calculated rates of exhumation suggest that drainage reorganization can only explain a small part of the observed exhumation and cooling. Similarly, tectonic transport over a detachment ramp cannot explain the magnitude, timing and wavelength of the observed cooling signal. We conclude that the observed cooling rates suggest large-wavelength exhumation that is probably caused by lithospheric-scale processes. In contrast to previous studies, we find that the timing of exhumation is poorly constrained. Uncertainty analysis shows that models with timing starting as early as 12 Ma or as late as 2 Ma can all explain the observed data.

  12. Estimation of p,p'-DDT degradation in soil by modeling and constraining hydrological and biogeochemical controls.

    Science.gov (United States)

    Sanka, Ondrej; Kalina, Jiri; Lin, Yan; Deutscher, Jan; Futter, Martyn; Butterfield, Dan; Melymuk, Lisa; Brabec, Karel; Nizzetto, Luca

    2018-08-01

    Despite not being used for decades in most countries, DDT remains ubiquitous in soils due to its persistence and intense past usage, and because of this it is still a pollutant of high global concern. Assessing the long-term dissipation of DDT from this reservoir is fundamental to understanding future environmental and human exposure. Despite a large research effort, key properties controlling fate in soil (in particular, the degradation half-life (τ_soil)) are far from being fully quantified. This paper describes a case study in a large central European catchment where hundreds of measurements of p,p'-DDT concentrations in air, soil, river water and sediment are available for the last two decades. The goal was to deliver an integrated estimation of τ_soil by constraining a state-of-the-art hydrobiogeochemical-multimedia fate model of the catchment against the full body of empirical data available for this area. The INCA-Contaminants model was used for this scope. Good predictive performance against an (external) dataset of water and sediment concentrations was achieved with partitioning properties taken from the literature and τ_soil estimates obtained by forcing the model against empirical historical data of p,p'-DDT in the catchment multicompartments. This approach allowed estimation of p,p'-DDT degradation in soil after taking adequate consideration of losses due to runoff and volatilization. Estimated τ_soil ranged from 3000 to 3800 days. Degradation was the most important loss process, accounting on a yearly basis for more than 90% of the total dissipation. The total dissipation flux from the catchment soils was one order of magnitude higher than the total current atmospheric input estimated from atmospheric concentrations, suggesting that the bulk of the p,p'-DDT currently being remobilized or lost is essentially that accumulated over two decades ago. Copyright © 2018 Elsevier Ltd. All rights reserved.
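
    With a degradation half-life τ_soil, first-order loss implies a residual fraction exp(-ln 2 · t/τ_soil). A quick sketch of what the estimated 3000-3800 day range implies (first-order decay assumed purely for illustration):

```python
import math

def fraction_remaining(t_days, half_life_days):
    """First-order decay: fraction of the initial soil burden left after t,
    given the degradation half-life."""
    return math.exp(-math.log(2.0) * t_days / half_life_days)
```

    For example, after 20 years (~7300 days) a half-life at the lower end of the estimated range (3000 days) leaves roughly a fifth of the original burden in the soil.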

  13. p53 constrains progression to anaplastic thyroid carcinoma in a Braf-mutant mouse model of papillary thyroid cancer

    Science.gov (United States)

    McFadden, David G.; Vernon, Amanda; Santiago, Philip M.; Martinez-McFaline, Raul; Bhutkar, Arjun; Crowley, Denise M.; McMahon, Martin; Sadow, Peter M.; Jacks, Tyler

    2014-01-01

    Anaplastic thyroid carcinoma (ATC) has among the worst prognoses of any solid malignancy. The low incidence of the disease has in part precluded systematic clinical trials and tissue collection, and there has been little progress in developing effective therapies. v-raf murine sarcoma viral oncogene homolog B (BRAF) and tumor protein p53 (TP53) mutations co-occur in a high proportion of ATCs, particularly those associated with a precursor papillary thyroid carcinoma (PTC). To develop an adult-onset model of BRAF-mutant ATC, we generated a thyroid-specific CreER transgenic mouse. We used a Cre-regulated BrafV600E mouse and a conditional Trp53 allelic series to demonstrate that p53 constrains progression from PTC to ATC. Gene expression and immunohistochemical analyses of murine tumors identified the cardinal features of human ATC including loss of differentiation, local invasion, distant metastasis, and rapid lethality. We used small-animal ultrasound imaging to monitor autochthonous tumors and showed that treatment with the selective BRAF inhibitor PLX4720 improved survival but did not lead to tumor regression or suppress signaling through the MAPK pathway. The combination of PLX4720 and the MAPK/ERK kinase (MEK) inhibitor PD0325901 more completely suppressed MAPK pathway activation in mouse and human ATC cell lines and improved the structural response and survival of ATC-bearing animals. This model expands the limited repertoire of autochthonous models of clinically aggressive thyroid cancer, and these data suggest that small-molecule MAPK pathway inhibitors hold clinical promise in the treatment of advanced thyroid carcinoma. PMID:24711431

  14. Systematic Constraint Selection Strategy for Rate-Controlled Constrained-Equilibrium Modeling of Complex Nonequilibrium Chemical Kinetics

    Science.gov (United States)

    Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad

    2018-04-01

    Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracy with far fewer differential equations than the fully detailed kinetic model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification procedure has recently been proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call the overall degree of disequilibrium (ODoD), because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields that reaction's degree of disequilibrium (DoD). The ODoD vector evolves in the same (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next, we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the C largest singular values of the SVD and setting all the others to zero, we obtain the best rank-C approximation of the matrix of ODoD traces, whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
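
    The truncation step above can be sketched with a small numerical example. The matrix sizes, the synthetic ODoD traces, and the choice C = 3 are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical matrix of ODoD traces: rows = coordinates in the
# (S - E)-dimensional stoichiometric subspace, columns = time samples
# from the probe DKM run. All sizes and data are illustrative.
rng = np.random.default_rng(0)
T = 200                      # number of probe-simulation time samples
dim = 10                     # S - E, dimension of the stoichiometric subspace
# Build traces lying close to a 3-dimensional subspace, mimicking an
# ODoD vector governed by a few constraint potentials, plus small noise.
basis = rng.standard_normal((dim, 3))
odod = basis @ rng.standard_normal((3, T)) + 1e-3 * rng.standard_normal((dim, T))

U, s, Vt = np.linalg.svd(odod, full_matrices=False)

C = 3                        # number of retained constraints
# Best rank-C approximation (Eckart-Young): keep only the C largest
# singular values, zeroing the rest.
odod_C = U[:, :C] @ np.diag(s[:C]) @ Vt[:C, :]

# The retained left singular vectors span the constraint subspace; the
# coordinates of the ODoD vector in that basis play the role of the
# C constraint potentials.
potentials = U[:, :C].T @ odod

rel_err = np.linalg.norm(odod - odod_C) / np.linalg.norm(odod)
print(f"relative rank-{C} approximation error: {rel_err:.2e}")
```

Because the synthetic traces were built on a 3-dimensional subspace, the rank-3 truncation recovers them almost exactly; in the actual procedure the decay of the singular values guides the choice of C.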

  15. Parametrization consequences of constraining soil organic matter models by total carbon and radiocarbon using long-term field data

    Science.gov (United States)

    Menichetti, Lorenzo; Kätterer, Thomas; Leifeld, Jens

    2016-05-01

    Soil organic carbon (SOC) dynamics result from different interacting processes and controls on spatial scales from sub-aggregate to pedon to the whole ecosystem. These complex dynamics are translated into models as abundant degrees of freedom. This high number of not directly measurable variables, combined with the very limited data available, results in equifinality and parameter uncertainty. Carbon radioisotope measurements are a proxy for SOC age on both annual-to-decadal (bomb-peak based) and centennial-to-millennial timescales (radioactive-decay based), and thus can be used in addition to total organic C for constraining SOC models. By considering this additional information, uncertainties in model structure and parameters may be reduced. To test this hypothesis we studied SOC dynamics and their defining kinetic parameters in the Zürich Organic Fertilization Experiment (ZOFE), a > 60-year-old controlled cropland experiment in Switzerland, by utilizing SOC and SO14C time series. To represent different processes we applied five model structures, all stemming from a simple mother model (Introductory Carbon Balance Model, ICBM): (I) two decomposing pools, (II) an inert pool added, (III) three decomposing pools, (IV) two decomposing pools with a substrate control feedback on decomposition, (V) as (IV) but with an inert pool added. These structures were extended to explicitly represent total SOC and 14C pools. The use of different model structures allowed us to explore model structural uncertainty and the impact of 14C on kinetic parameters. We considered parameter uncertainty by calibrating in a formal Bayesian framework. By varying the relative importance of total SOC and SO14C data in the calibration, we could quantify the effect of the information from these two data streams on estimated model parameters.
The weighting of the two data streams was crucial for determining model outcomes, and we suggest considering it in future modeling efforts whenever SO14C data are used.
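
    A minimal sketch of how the relative importance of the two data streams can enter a Bayesian calibration, assuming independent Gaussian errors; the function name, the weighting scheme, and all numbers are illustrative assumptions rather than the authors' actual likelihood:

```python
import numpy as np

def weighted_loglik(res_soc, res_c14, sigma_soc, sigma_c14, w):
    """Gaussian log-likelihood combining two data streams.

    w in [0, 1] sets the relative weight of total-SOC vs. radiocarbon
    residuals; w = 0.5 weights both streams equally. All names and
    values are illustrative, not from the paper.
    """
    ll_soc = -0.5 * np.sum((res_soc / sigma_soc) ** 2)
    ll_c14 = -0.5 * np.sum((res_c14 / sigma_c14) ** 2)
    return w * ll_soc + (1.0 - w) * ll_c14

res_soc = np.array([0.5, -0.3, 0.1])   # model-minus-data misfits, total SOC
res_c14 = np.array([2.0, -1.5])        # misfits, SO14C
print(weighted_loglik(res_soc, res_c14, 1.0, 2.0, 0.5))
```

Sweeping w from 0 to 1 and re-running the calibration at each value is one simple way to quantify how strongly each data stream pulls the posterior parameter estimates.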

  16. Dynamics of the oil transition: Modeling capacity, depletion, and emissions

    International Nuclear Information System (INIS)

    Brandt, Adam R.; Plevin, Richard J.; Farrell, Alexander E.

    2010-01-01

    The global petroleum system is undergoing a shift to substitutes for conventional petroleum (SCPs). The Regional Optimization Model for Emissions from Oil Substitutes, or ROMEO, models this oil transition and its greenhouse gas impacts. ROMEO models the global liquid fuel market in an economic optimization framework, but in contrast to other models it solves each model year sequentially, with investment and production optimized under uncertainty about future prevailing prices or resource quantities. ROMEO includes more hydrocarbon resource types than integrated assessment models of climate change. ROMEO also includes the carbon intensities and costs of production of these resources. We use ROMEO to explore the uncertainty of future costs, emissions, and total fuel production under a number of scenarios. We perform sensitivity analysis on the endowment of conventional petroleum and future carbon taxes. Results show incremental emissions from production of oil substitutes of ∼ 0-30 gigatonnes (Gt) of carbon over the next 50 years (depending on the carbon tax). Also, demand reductions due to the higher cost of SCPs could reduce or eliminate these increases. Calculated emissions are highly sensitive to the endowment of conventional oil and less sensitive to a carbon tax.

  17. FORECASTING MODEL OF GHG EMISSION IN MANUFACTURING SECTORS OF THAILAND

    Directory of Open Access Journals (Sweden)

    Pruethsan Sutthichaimethee

    2017-01-01

    This study aims to model and forecast the GHG emissions of energy consumption in manufacturing sectors. The scope of the study is to analyze energy consumption and forecast the GHG emissions of energy consumption for the next 10 years (2016-2025) and 25 years (2016-2040) using ARIMAX models based on the input-output table of Thailand. The results show that iron and steel has the highest energy consumption, followed by cement, fluorite, air transport, road freight transport, hotels and places of lodging, coal and lignite, petrochemical products, other manufacturing, and road passenger transport, respectively. The prediction results show that these models forecast effectively, as measured by RMSE, MAE, and MAPE. The forecast of each model is as follows: 1) Model 1 (2,1,1) shows that GHG emissions will increase steadily, rising 25.17% by the year 2025 in comparison to 2016. 2) Model 2 (2,1,2) shows that GHG emissions will rise steadily, increasing 41.51% by the year 2040 in comparison to 2016.
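
    For illustration, an ARIMAX-style forecast of the kind described above can be sketched with plain least squares on a once-differenced series with one exogenous regressor. This is a hand-rolled stand-in, not the authors' model; the orders, data, and function name are invented:

```python
import numpy as np

# Minimal ARIMA(p, 1, 0)-with-exogenous-input forecaster, fitted by
# ordinary least squares on the differenced series. A sketch of the
# ARIMAX idea only; all data below are synthetic.
def fit_forecast(y, x, p, horizon, x_future):
    dy = np.diff(y)                     # d = 1: first differences
    dx = np.diff(x)
    # Design matrix: p lags of dy plus the differenced exogenous input.
    rows = [np.r_[dy[t - p:t][::-1], dx[t]] for t in range(p, len(dy))]
    A = np.array(rows)
    coef, *_ = np.linalg.lstsq(A, dy[p:], rcond=None)
    hist = list(dy)
    dx_f = np.diff(np.r_[x[-1], x_future])
    level = y[-1]
    out = []
    for h in range(horizon):
        step = np.r_[hist[-p:][::-1], dx_f[h]] @ coef
        hist.append(step)
        level += step                   # integrate the differences back
        out.append(level)
    return np.array(out)

t = np.arange(40.0)
x = t                                   # exogenous driver (e.g. energy use)
y = 5.0 + 2.0 * t                       # emissions growing linearly with x
fc = fit_forecast(y, x, p=2, horizon=3, x_future=np.array([40.0, 41.0, 42.0]))
print(fc)
```

On this synthetic linear series the constant difference is recovered exactly; real emission series would of course need the full Box-Jenkins identification and diagnostics the paper performs.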

  18. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    Science.gov (United States)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate-volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuels, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel/jet fuel derived from natural gas, gasoline and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuels (this work: gasoline, Fischer-Tropsch fuels, jet fuels, diesels) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. Further, a volatility

  19. Kinetic modeling in pre-clinical positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kuntner, Claudia [AIT Austrian Institute of Technology GmbH, Seibersdorf (Austria). Biomedical Systems, Health and Environment Dept.

    2014-07-01

    Pre-clinical positron emission tomography (PET) has evolved in the last few years from pure visualization of radiotracer uptake and distribution towards quantification of physiological parameters. For reliable and reproducible quantification, the kinetic modeling methods used to obtain relevant parameters of radiotracer-tissue interaction are important. Here we present different kinetic modeling techniques with a focus on compartmental models, including plasma input models and reference tissue input models. The experimental challenges of deriving the plasma input function in rodents and the effect of anesthesia are discussed. Finally, in vivo application of kinetic modeling in various areas of pre-clinical research is presented and compared to human data.
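
    As a concrete example of the compartmental models mentioned, a one-tissue compartment model with a plasma input function can be integrated with a simple Euler scheme. The tracer kinetics, parameter values, and input function below are illustrative, not from the text:

```python
import numpy as np

# One-tissue compartment model, the simplest plasma-input kinetic model
# used in PET: dCt/dt = K1*Cp(t) - k2*Ct(t). All values are illustrative.
def one_tissue(t, cp, K1, k2):
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        # Forward-Euler step of the compartment ODE.
        ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
    return ct

t = np.linspace(0.0, 60.0, 6001)          # minutes, dt = 0.01
cp = np.exp(-0.1 * t)                     # decaying plasma input (a.u.)
ct = one_tissue(t, cp, K1=0.3, k2=0.1)    # K1 in mL/cm^3/min, k2 in 1/min

# With k2 > 0 the tissue curve peaks and then washes out; the total
# distribution volume VT = K1/k2 summarizes the equilibrium uptake.
print(round(float(ct.max()), 3))
```

For this particular choice (plasma decay rate equal to k2) the exact solution is Ct = K1·t·e^(-k2·t), so the Euler result can be checked against the analytic peak 3/e at t = 10 min.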

  20. Optimization of Fuel Consumption and Emissions for Auxiliary Power Unit Based on Multi-Objective Optimization Model

    Directory of Open Access Journals (Sweden)

    Yongpeng Shen

    2016-02-01

    Auxiliary power units (APUs) are widely used for electric power generation in various types of electric vehicles; improvements in the fuel economy and emissions of these vehicles depend directly on the operating point of the APU. In order to balance the conflicting goals of fuel consumption and emissions reduction when choosing the operating point, the APU operating-point optimization problem is first formulated as a constrained multi-objective optimization problem (CMOP). The four competing objectives of this CMOP are fuel-electricity conversion cost, hydrocarbon (HC) emissions, carbon monoxide (CO) emissions and nitrogen oxide (NOx) emissions. Then, the multi-objective particle swarm optimization (MOPSO) algorithm and a weighted-metric decision-making method are employed to solve the APU operating-point multi-objective optimization model. Finally, bench experiments under the New European Driving Cycle (NEDC), Federal Test Procedure (FTP) and Highway Fuel Economy Test (HWFET) driving cycles show that, compared with the results of the traditional fuel-consumption single-objective optimization approach, the proposed multi-objective optimization approach shows significant improvements in emissions performance, at the expense of a slight drop in fuel efficiency.
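
    The weighted-metric decision step that follows the MOPSO search can be sketched as follows; the Pareto-front values and weights are invented for illustration:

```python
import numpy as np

# Weighted-metric (compromise-programming) selection of one operating
# point from a Pareto front. Columns: fuel-electricity conversion cost,
# HC, CO, NOx (all to be minimized). Values are illustrative only.
front = np.array([
    [1.00, 0.80, 0.60, 0.90],
    [1.20, 0.50, 0.50, 0.70],
    [1.50, 0.30, 0.40, 0.60],
])
weights = np.array([0.4, 0.2, 0.2, 0.2])   # assumed preference weights

# Normalize each objective to [0, 1] over the front, then pick the
# solution with the smallest weighted distance to the ideal point
# (all-zero normalized objectives).
lo, hi = front.min(axis=0), front.max(axis=0)
norm = (front - lo) / (hi - lo)
dist = np.sqrt((weights * norm ** 2).sum(axis=1))
best = int(np.argmin(dist))
print("selected operating point:", best)
```

With these weights the middle solution wins: it trades a little cost for large emission reductions, which is exactly the compromise behaviour the abstract reports.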

  1. A model for radio emission from solar coronal shocks

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, G. Q.; Chen, L.; Wu, D. J., E-mail: djwu@pmo.ac.cn [Purple Mountain Observatory, CAS, Nanjing 210008 (China)

    2014-05-01

    Solar coronal shocks are very common phenomena in the solar atmosphere and are believed to be the drivers of solar type II radio bursts. However, the microphysical nature of these emissions is still an open question. This paper proposes that electron cyclotron maser (ECM) emission is responsible for the generation of radiation from the coronal shocks. In the present model, an energetic ion beam accelerated by the shock first excites the Alfvén wave (AW), then the excited AW leads to the formation of a density-depleted duct along the foreshock boundary of the shock. In this density-depleted duct, the energetic electron beam produced via the shock acceleration can effectively excite radio emission by ECM instability. Our results show that this model may potentially be applied to solar type II radio bursts.

  3. Towards an Integrated Assessment Model for Tropospheric Ozone-Emission Inventories, Scenarios and Emission-control Options

    OpenAIRE

    Olsthoorn, X.

    1994-01-01

    IIASA intends to extend its RAINS model for addressing the issue of transboundary ozone air pollution. This requires the development of a VOC-emissions module, VOCs being precursors in ozone formation. The module should contain a Europe-wide emission inventory, a submodule for developing emission scenarios and a database of measures for VOC-emission control, including data about control effectiveness and control costs. It is recommended to use the forthcoming CORINAIR90 inventory for construc...

  4. Modeling methane emission from rice paddies with various agricultural practices

    Science.gov (United States)

    Huang, Yao; Zhang, Wen; Zheng, Xunhua; Li, Jin; Yu, Yongqiang

    2004-04-01

    Several models have been developed over the past decade to estimate CH4 emission from rice paddies. However, few models have been validated against field measurements spanning various soil, climate and agricultural-practice parameters. Thus, the reliability of model performance remains questionable, particularly when extrapolating from the site microscale to the regional scale. In this paper, modification to the original model focuses on the effect of water regime on CH4 production/emission and on CH4 transport via bubbles. The modified model, named CH4MOD, was then validated against a total of 94 field observations. These observations covered the main rice cultivation regions from northern (Beijing, 40°30'N, 116°25'E) to southern China (Guangzhou, 23°08'N, 113°20'E), and from eastern (Hangzhou, 30°19'N, 120°12'E) to southwestern (Tuzu, 29°40'N, 103°50'E) China. Both single-rice and double-rice cultivations are distributed in these regions with different irrigation patterns and various types of organic matter incorporation. The observed seasonal amount of CH4 emission ranged from 3.1 to 761.7 kg C ha-1 with an average of 199.4 ± 187.3 kg C ha-1. In consonance with the observations, model simulations resulted in an average value of 224.6 ± 187.0 kg C ha-1, ranging from 13.9 to 824.3 kg C ha-1. Comparison between the computed and the observed seasonal CH4 emission yielded a correlation coefficient r2 of 0.84 with a slope of 0.92 and an intercept of 41.1 (n = 94, p < 0.001). It was concluded that CH4MOD can reasonably simulate CH4 emissions from irrigated rice fields with a minimal number of inputs and parameters.
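
    The validation statistics quoted above (slope, intercept, r²) are simply an observed-versus-computed regression, which can be reproduced on synthetic data shaped like the reported comparison; the numbers below are simulated, not the paper's data:

```python
import numpy as np

# Synthetic stand-in for the 94 paired values: observed seasonal CH4
# emissions (kg C/ha) and model-computed values with scatter. The
# relation 0.92*obs + 41.1 mimics the regression the paper reports.
rng = np.random.default_rng(1)
obs = rng.uniform(3.0, 760.0, size=94)
sim = 0.92 * obs + 41.1 + rng.normal(0.0, 30.0, size=94)

# Least-squares regression of computed on observed, plus r^2.
slope, intercept = np.polyfit(obs, sim, 1)
r2 = np.corrcoef(obs, sim)[0, 1] ** 2
print(round(float(slope), 2), round(float(intercept), 1), round(float(r2), 2))
```

On real paired data the same three lines give the slope, intercept, and r² used to judge agreement; a slope near 1 and intercept near 0 indicate an unbiased model.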

  5. Modelling nitrous oxide emissions from grazed grassland systems

    International Nuclear Information System (INIS)

    Wang Junye; Cardenas, Laura M.; Misselbrook, Tom H.; Cuttle, Steve; Thorman, Rachel E.; Li Changsheng

    2012-01-01

    Grazed grassland systems are an important component of the global carbon cycle and also influence global climate change through their emissions of nitrous oxide and methane. However, there are huge uncertainties and challenges in the development and parameterisation of process-based models for grazed grassland systems because of the wide diversity of vegetation and the impacts of grazing animals. A process-based biogeochemistry model, DeNitrification-DeComposition (DNDC), has been modified to describe N2O emissions under UK regional conditions. This paper reports a new development of UK-DNDC in which the animal grazing practices were modified to track their contributions to soil nitrogen (N) biogeochemistry. The new version of UK-DNDC was tested against datasets of N2O fluxes measured at three contrasting field sites. The results showed that the responses of the model to changes in grazing parameters were generally in agreement with observations, showing that N2O emissions increased as the grazing intensity increased. - Highlights: ► Parameterisation of the grazing system using grazing intensity. ► Modification of UK-DNDC for UK soil and weather conditions. ► Validation of UK-DNDC against measured N2O emission data at three UK sites. ► Estimation of the influence of animal grazing practices on N2O emissions. - The grazing system was parameterised using grazing intensity, and the UK-DNDC model was modified and validated against measured N2O emission data at three UK sites.

  6. NUMERICAL PREDICTION MODELS FOR AIR POLLUTION BY MOTOR VEHICLE EMISSIONS

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-12-01

    Purpose. This work involves: 1) development of 3D numerical models that allow calculating the process of air pollution by motor vehicle emissions; 2) creation of models for predicting the air pollution level in urban areas. Methodology. To assess the level of air pollution caused by motor vehicle emissions, the fundamental equations of aerodynamics and mass transfer are used, solved by finite-difference methods. For the numerical integration of the equation for the velocity potential, the method of conditional approximations is applied: the equation, written in differential form, splits into two equations, where at each splitting step an unknown value of the velocity potential is determined by an explicit running-computation scheme, while the difference scheme itself is implicit. For the numerical integration of the emission dispersion equation in the atmosphere, an implicit alternating-triangular splitting scheme is applied. Emissions from the road are modeled by a series of point sources of given intensity. The developed numerical models form the basis of the created software package. Findings. 3D numerical models were developed; they belong to the class of «diagnostic models». These models take into account the main physical factors that influence the dispersion of harmful substances in the atmosphere when emissions from vehicles occur in the city. Based on the constructed numerical models, a computational experiment was conducted to assess the level of air pollution in the street. Originality. The authors have developed numerical models that allow calculation of the 3D aerodynamics of the wind flow in urban areas and of the mass transfer of emissions from the highway. Calculations to determine the area of contamination formed near the buildings located along the highway were carried out.

  7. Interaction between combustion and turbulence in modelling of emissions

    International Nuclear Information System (INIS)

    Oksanen, A.; Maeki-Mantila, E.

    1995-01-01

    The aim of this work is to study combustion models that account for the coupling between gas-phase chemistry and turbulence in the modelling of emissions, especially of nitric oxide, when temperature and species concentrations fluctuate due to turbulence. The principal tools for modelling turbulent gas-phase combustion are the probability density function (pdf) and other models that take into account the effect of turbulence on the chemical reactions in flames. Many such models exist, e.g. the Eddy Dissipation Model (EDM), Eddy Dissipation Concept (EDC), Eddy Dissipation Kinetic model (EDK), Eddy Break-Up model (EBU), kinetic models, and combinations thereof. Besides these models, the effect of different turbulence models on the formation of emissions will also be studied. Similar modelling has also been done by teams in the Special Interest Group of ERCOFTAC (European Research Community On Flow, Turbulence And Combustion) under the name Aerodynamics and Steady State Combustion Chambers and Furnaces (A.S.C.F.). Combustion measurements will also be attempted where practical conditions permit. (author)

  8. Mathematical modeling of three-dimensional images in emission tomography

    International Nuclear Information System (INIS)

    Koblik, Yu.N.; Khugaev, A. V.; Mktchyan, G.A.; Ioannou, P.; Dimovasili, E.

    2002-01-01

    A model for processing the results of three-dimensional measurements in a positron emission tomograph is proposed in this work. An algorithm for the construction and visualization of phantom objects of arbitrary shape was developed, and its concrete realization as a program package for PC was carried out.

  9. Modeling of Particle Emission During Dry Orthogonal Cutting

    Science.gov (United States)

    Khettabi, Riad; Songmene, Victor; Zaghbani, Imed; Masounave, Jacques

    2010-08-01

    Because of the risks associated with exposure to metallic particles, efforts are being put into controlling and reducing them during the metalworking process. Recent studies by the authors involved in this project have presented the effects of cutting speed, workpiece material, and tool geometry on particle emission during dry machining; the authors have also proposed a new parameter, named the dust unit (Du), for use in evaluating the quantity of particle emissions relative to the quantity of chips produced during a machining operation. In this study, a model for predicting particle emission (dust unit) during orthogonal turning is proposed. This model, which is based on the energy approach combined with microfriction and the plastic deformation of the material, takes into account the tool geometry, the properties of the worked material, the cutting conditions, and the chip segmentation. The model is validated using experimental results obtained during the orthogonal turning of 6061-T6 aluminum alloy, AISI 1018 and AISI 4140 steels, and grey cast iron. Good agreement was found with experimental results. This model can help in designing strategies for reducing particle emission during machining processes, at the source.

  10. Evaluating terrestrial water storage variations from regionally constrained GRACE mascon data and hydrological models over Southern Africa – Preliminary results

    DEFF Research Database (Denmark)

    Krogh, Pernille Engelbredt; Andersen, Ole Baltazar; Michailovsky, Claire Irene B.

    2010-01-01

In this paper we explore an experimental set of regionally constrained mascon blocks over Southern Africa, where a system of 1.25° × 1.5° and 1.5° × 1.5° blocks has been designed. The blocks are divided into hydrological regions based on the drainage patterns of the largest river basins, and are constrained … Malawi with water level from altimetry. Results show that weak constraints across regions, in addition to intra-regional constraints, are necessary to reach reasonable mass variations.

  11. Predicting the emission from an incineration plant - a modelling approach

    International Nuclear Information System (INIS)

    Rohyiza Baan

    2004-01-01

    The emissions from the combustion of municipal solid waste (MSW) have become an important issue in incineration technology. Resulting from unstable combustion conditions, the formation of undesirable compounds such as CO, SO2, NOx, PM10 and dioxins becomes a source of pollutant concentrations in the atmosphere. The impact of emissions on criteria air pollutant concentrations can be obtained directly using ambient air monitoring equipment or predicted using dispersion modelling. The literature shows that the complicated atmospheric processes that occur in nature can be described using mathematical models. This paper highlights the air dispersion model as a tool to relate and simulate the release and dispersion of air pollutants in the atmosphere. The technique is based on a programming approach to develop a ground-level concentration air dispersion model using the Gaussian plume equation with Pasquill stability classes. This model is useful for studying the consequences of various sources of air pollutants and estimating the amount of pollutants released into the air from existing emission sources. From this model, it was found that the difference between actual conditions and the model's prediction is about 5%. (Author)
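
    A ground-level Gaussian plume calculation of the kind described can be sketched as follows; the source strength, stack height, and dispersion parameters are illustrative values, with the sigmas assumed to be read from Pasquill stability curves at the downwind distance of interest:

```python
import math

def ground_conc(q, u, y, sigma_y, sigma_z, h):
    """Ground-level concentration (g/m^3) from a continuous point source.

    Gaussian plume with total ground reflection: q source rate (g/s),
    u wind speed (m/s), y crosswind offset of the receptor (m),
    sigma_y/sigma_z dispersion parameters (m) evaluated at the chosen
    downwind distance from Pasquill stability curves, h effective
    stack height (m).
    """
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    # At z = 0 the direct and ground-reflected terms coincide, hence 2x.
    vertical = 2.0 * math.exp(-0.5 * (h / sigma_z) ** 2)
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers: 10 g/s source, 40 m effective stack height,
# sigma values of roughly 1 km downwind under neutral (class D) conditions.
c = ground_conc(q=10.0, u=4.0, y=0.0, sigma_y=70.0, sigma_z=32.0, h=40.0)
print(f"{c:.3e} g/m^3")
```

Summing this kernel over many point sources (or along a line of them) gives the multi-source estimate the abstract describes; comparing it with monitor readings is how the roughly 5% model-data difference would be assessed.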

  12. Methane emissions from rice paddies. Experiments and modelling

    International Nuclear Information System (INIS)

    Van Bodegom, P.M.

    2000-01-01

    This thesis describes model development and experimentation on the comprehension and prediction of methane (CH4) emissions from rice paddies. The large spatial and temporal variability in CH4 emissions and the dynamic non-linear relationships between the processes underlying CH4 emissions impair the applicability of empirical relations. Mechanistic concepts are therefore the starting point of analysis throughout the thesis. The process of CH4 production was investigated by soil slurry incubation experiments at different temperatures and with additions of different electron donors and acceptors. Temperature influenced conversion rates and the competitiveness of microorganisms. The experiments were used to calibrate and validate a mechanistic model of CH4 production that describes competition for acetate and H2/CO2, inhibition effects and chemolithotrophic reactions. The redox sequence leading eventually to CH4 production was well predicted by the model, calibrating only the maximum conversion rates. Gas transport through paddy soil and rice plants was quantified by experiments in which the transport of SF6 was monitored continuously by photoacoustics. A mechanistic model of gas transport in a flooded rice system based on diffusion equations was validated by these experiments and could explain why most gases are released via plant-mediated transport. Variability in root distribution led to highly variable gas transport. Experiments showed that CH4 oxidation in the rice rhizosphere was oxygen (O2) limited. Rice rhizospheric O2 consumption was dominated by chemical iron oxidation and by heterotrophic and methanotrophic respiration. The most abundant methanotrophs and heterotrophs were isolated and kinetically characterised. Based upon these experiments it was hypothesised that CH4 oxidation occurred mainly under microaerophilic, low-acetate conditions not very close to the root surface.
A mechanistic rhizosphere model that combined production and consumption of O2, carbon and iron

  13. Prediction/modelling of the neutron emission from JET discharges

    Energy Technology Data Exchange (ETDEWEB)

    Jarvis, O.N. [EURATOM-UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire (United Kingdom); Conroy, S. [INF, Uppsala University, EURATOM-VR, Uppsala (Sweden)

    2002-08-01

    The neutron emission from the JET tokamak is investigated using an extensive set of diagnostics, permitting the instantaneous neutron yield, the radial profile of the neutron emission and neutron energy spectra to be studied. Apart from their importance as an immediate indication of plasma fusion performance, the customary use for neutron measurements is as a test of the internal consistency of the non-neutron diagnostic data, from which the expected neutron production can be predicted. However, because contours of equal neutron emissivity are not necessarily coincident with magnetic flux surfaces, a fully satisfactory numerical analysis requires the application of highly complex transport codes such as TRANSP. In this paper, a far simpler approach is adopted wherein the neutron emission spatial profiles are used to define the plasma geometry. A two-volume model is used, with a core volume that encompasses about two-thirds of the neutron emission and a peripheral volume covering the remainder. The overall approach provides an interpretation of the measured neutron data, for both deuterium and deuterium-tritium (D-T) plasma discharges, that is as accurate as the basic non-nuclear plasma data warrant. The model includes the empirical assumption that particles, along with their energies and momenta, are transported macroscopically in accordance with classical conservation laws. This first-order estimate of cross-field transport (which, for D-T plasmas, determines the D : T fuel concentration ratio in the plasma core) is fine-tuned to reproduce the experimental ion and electron temperature data. The success of this model demonstrates that the observed plasma rotation rates, temperatures and the resulting neutron emission can be broadly explained in terms of macroscopic transport. (author)

  14. Electronic field emission models beyond the Fowler-Nordheim one

    Science.gov (United States)

    Lepetit, Bruno

    2017-12-01

    We propose several quantum mechanical models to describe electronic field emission from first principles. These models allow us to correlate quantitatively the electronic emission current with the electrode surface details at the atomic scale. They all rely on electronic potential energy surfaces obtained from three-dimensional density functional theory calculations. They differ in the quantum mechanical methods (exact or perturbative, time dependent or time independent) used to describe tunneling through the electronic potential energy barrier. Comparing these models with one another and with the standard Fowler-Nordheim model in the context of one-dimensional tunneling allows us to assess the impact of the approximations made in each model on the accuracy of the computed current. Among these methods, the time-dependent perturbative one provides a well-balanced trade-off between accuracy and computational cost.
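
    For reference, the standard Fowler-Nordheim baseline that the abstract compares against has a closed-form expression; the work function and field chosen below are illustrative:

```python
import math

# Elementary Fowler-Nordheim current density (no image-charge correction):
#   J = a * F^2 / phi * exp(-b * phi^(3/2) / F)
# with the standard constants a = 1.541434e-6 A eV V^-2 and
# b = 6.830890e9 eV^(-3/2) V m^-1; F in V/m, phi in eV, J in A/m^2.
A_FN = 1.541434e-6
B_FN = 6.830890e9

def fn_current_density(field_v_per_m, workfunc_ev):
    """Elementary FN current density in A/m^2."""
    f, phi = field_v_per_m, workfunc_ev
    return A_FN * f ** 2 / phi * math.exp(-B_FN * phi ** 1.5 / f)

# Illustrative case: tungsten-like work function of 4.5 eV at 5 V/nm.
j = fn_current_density(5e9, 4.5)
print(f"{j:.3e} A/m^2")
```

The exponential dependence on the barrier term phi^(3/2)/F is what makes the emitted current so sensitive to atomic-scale surface details, which is the motivation for the first-principles models the abstract develops.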

  15. Model of opacity and emissivity of non-equilibrium plasma

    International Nuclear Information System (INIS)

    Politov V Y

    2008-01-01

    In this work a model describing the absorption and emission properties of non-equilibrium plasma is presented. It is based on kinetics equations for the populations of the ground, singly excited and doubly excited states of multi-charged ions. After solving these equations, the state populations, together with the spectroscopic data supplied in a special database for many ionization stages, are used to build the spectral distributions of plasma opacity and emissivity in the STA approximation. Results of kinetics simulations are presented for gold, an important X-ray converter that is investigated intensively in ICF experiments.

  16. Modelling of pesticide emissions for Life Cycle Inventory analysis: Model development, applications and implications

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes

    with variations in the climates and soils present in Europe. Emissions of pesticides to surface water and groundwater calculated by PestLCI 2.0 were compared with models used for risk assessment. Compared to the MACRO module in SWASH 3.1 model, which calculates surface water emissions by runoff and drainage...... chromatographic flow of water through the soil), which was attributed to the omission of emissions via macropore flow in the latter model. The comparison was complicated by the fact that the scenarios used were not fully identical. In order to quantify the implications of using PestLCI 2.0, human toxicity......The work presented in this thesis deals with quantification of pesticide emissions in the Life Cycle Inventory (LCI) analysis phase of Life Cycle Assessment (LCA). The motivation to model pesticide emissions is that reliable LCA results not only depend on accurate impact assessment models, but also...

  17. Modelling nitrous oxide emissions from cropland at the regional scale

    Directory of Open Access Journals (Sweden)

    Gabrielle Benoît

    2006-11-01

Arable soils are a large source of nitrous oxide (N2O) emissions, making up half of the biogenic emissions worldwide. Estimating their source strength requires methods capable of capturing the spatial and temporal variability of N2O emissions, along with the effects of crop management. Here, we applied a process-based model, CERES, with geo-referenced input data on soils, weather, and land use to map N2O emissions from wheat-cropped soils in three agriculturally intensive regions in France. Emissions were mostly controlled by soil type and local climate conditions, and only to a minor extent by the doses of fertilizer nitrogen applied. As a result, the direct emission factors calculated at the regional level were much smaller (ranging from 0.0007 to 0.0033 kg N2O-N kg-1 N) than the value of 0.0125 kg N2O-N kg-1 N currently recommended in the IPCC Tier 1 methodology. Regional emissions were far more sensitive to the soil microbiological parameters governing denitrification and its fraction evolved as N2O, soil bulk density, and soil initial inorganic N content. Mitigation measures should therefore target a reduction in the amount of soil inorganic N upon sowing of winter crops, and a decrease of the soil N2O production potential itself. From a general perspective, taking into account the spatial variability of soils and climate thereby appears necessary to improve the accuracy of national inventories, and to tailor mitigation strategies to regional characteristics. The methodology and results presented here may easily be transferred to winter oilseed rape, whose growing cycle and fertiliser requirements are similar.

  18. Grey forecasting model for CO2 emissions: A Taiwan study

    International Nuclear Information System (INIS)

    Lin, Chiun-Sin; Liou, Fen-May; Huang, Chih-Pin

    2011-01-01

Highlights: → CO2 is the greenhouse gas most frequently implicated in global warming. → CARMA data indicate that the Taichung coal-fired power plant had the highest CO2 emissions in the world. → GM(1,1) prediction accuracy is fairly high. → The results show that the average residual error of the GM(1,1) was below 10%. -- Abstract: Among the various greenhouse gases associated with climate change, CO2 is the most frequently implicated in global warming. The latest data from Carbon Monitoring for Action (CARMA) show that the coal-fired power plant in Taichung, Taiwan emitted 39.7 million tons of CO2 in 2007 - the highest of any power plant in the world. Based on statistics from the Energy Information Administration, annual CO2 emissions in Taiwan increased 42% from 1997 to 2006. Taiwan has limited natural resources and relies heavily on imports to meet its energy needs, and the government must take serious measures to control energy consumption and reduce CO2 emissions. Because the latest available data were from 2009, this study applied the grey forecasting model to estimate future CO2 emissions in Taiwan from 2010 to 2012. Forecasts of CO2 emissions in this study show that the average residual error of the GM(1,1) was below 10%. Overall, the GM(1,1) predicted further increases in CO2 emissions over the next 3 years. Although Taiwan is not a member of the United Nations and is not bound by the Kyoto Protocol, the findings of this study provide a valuable reference with which the Taiwanese government could formulate measures to reduce CO2 emissions by curbing unnecessary energy consumption.
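The GM(1,1) model referenced in this record follows a standard recipe: accumulate the series (1-AGO), fit the whitened equation dx(1)/dt + a·x(1) = b by least squares on adjacent means, then difference the time-response function to recover forecasts. A self-contained sketch (the sample series below is illustrative, not Taiwan's emissions data):

```python
import math

def gm11_forecast(x0, steps):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values: adjacent means of the AGO series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    # least-squares fit of x0[k] = -a*z1[k] + b (2x2 normal equations)
    s_zz = sum(z * z for z in z1)
    s_z = sum(z1)
    s_zy = sum(z * y for z, y in zip(z1, x0[1:]))
    s_y = sum(x0[1:])
    m = n - 1
    det = s_zz * m - s_z * s_z
    a = (s_z * s_y - m * s_zy) / det          # development coefficient
    b = (s_zz * s_y - s_z * s_zy) / det       # grey input

    def x1_hat(k):
        # time-response function of the whitened equation (0-based index)
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # IAGO: difference the fitted AGO series to recover x0-scale values
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
    return a, b, fitted
```

For a near-exponential input (10% annual growth) the model recovers the trend closely, which is why GM(1,1) is popular for short, monotone series like national emissions totals.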

  19. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT; Webster, Mort [MIT

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.

  20. Constraining the Timing of Lobate Debris Apron Emplacement at Martian Mid-Latitudes Using a Numerical Model of Ice Flow

    Science.gov (United States)

    Parsons, R. A.; Nimmo, F.

    2010-03-01

    SHARAD observations constrain the thickness and dust content of lobate debris aprons (LDAs). Simulations of dust-free ice-sheet flow over a flat surface at 205 K for 10-100 m.y. give LDA lengths and thicknesses that are consistent with observations.

  1. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level-custodial technicolor-and argue that these models...

  2. A model for energy pricing with stochastic emission costs

    International Nuclear Information System (INIS)

    Elliott, Robert J.; Lyle, Matthew R.; Miao, Hong

    2010-01-01

We use a supply-demand approach to value energy products exposed to emission cost uncertainty. We find closed form solutions for a number of popularly traded energy derivatives such as: forwards, European call options written on spot prices and European call options written on forward contracts. Our modeling approach is to first construct noisy supply and demand processes and then equate them to find an equilibrium price. This approach is very general while still allowing for sensitivity analysis within a valuation setting. Our assumption is that, in the presence of emission costs, traditional supply growth will slow down, causing output prices of energy products to become more costly over time. However, emission costs do not immediately cause output price appreciation, but instead expose individual projects, particularly those with high emission outputs, to much more extreme risks through the cost side of their profit stream. Our results have implications for hedging and pricing for producers operating in areas facing a stochastic emission cost environment. (author)
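The qualitative mechanism described here (a stochastic emission cost shifting the supply curve and raising the market-clearing price) can be illustrated with a toy linear supply-demand simulation. All coefficients below are invented for illustration; this is a sketch of the equilibrium-price idea, not the paper's closed-form derivative-pricing model:

```python
import random

def simulate_equilibrium_prices(n_steps=250, seed=1):
    """Toy equilibrium pricing: equate noisy linear supply and demand,
    with a random-walk emission cost shifting supply over time."""
    rng = random.Random(seed)
    alpha, beta = 120.0, 1.5   # demand intercept/slope (illustrative)
    gamma, delta = 20.0, 1.0   # supply intercept/slope (illustrative)
    cost = 0.0                 # stochastic emission cost, starts at zero
    prices = []
    for _ in range(n_steps):
        cost = max(0.0, cost + rng.gauss(0.05, 0.2))  # drifting emission cost
        noise = rng.gauss(0.0, 1.0)                   # net demand shock
        # D(p) = alpha - beta*p + noise ; S(p) = gamma + delta*p - cost
        # equating D(p) = S(p) and solving gives the market-clearing price:
        p = (alpha - gamma + cost + noise) / (beta + delta)
        prices.append(p)
    return prices
```

With a positive cost drift, later prices are on average higher than early ones, mirroring the paper's assumption that emission costs make output prices more costly over time.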

  3. Modeling of methane emissions using artificial neural network approach

    Directory of Open Access Journals (Sweden)

    Stamenković Lidija J.

    2015-01-01

The aim of this study was to develop a model for forecasting CH4 emissions at the national level, using Artificial Neural Networks (ANN) with broadly available sustainability, economic and industrial indicators as inputs. ANN modelling was performed using two different architectures: a Backpropagation Neural Network (BPNN) and a General Regression Neural Network (GRNN). A conventional multiple linear regression (MLR) model was also developed in order to compare model performance and assess which model provides the best results. The ANN and MLR models were developed and tested using the same annual data for 20 European countries. The ANN model demonstrated very good performance, significantly better than the MLR model. It was shown that a forecast of CH4 emissions at the national level using the ANN model can be made successfully and accurately for a future period of up to two years, thereby opening the possibility of applying such a modelling technique to support the implementation of sustainable development strategies and environmental management policies. [Project of the Ministry of Science of the Republic of Serbia, no. 172007]
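Of the two ANN architectures, the GRNN is particularly compact: it is essentially Nadaraya-Watson kernel regression over the training set, with a single bandwidth parameter. A minimal sketch (the data and bandwidth are made up for illustration, not the study's 20-country indicator dataset):

```python
import math

def grnn_predict(train_x, train_y, query, sigma=1.0):
    """General Regression Neural Network prediction: a Gaussian-kernel
    distance-weighted average of the training targets (Nadaraya-Watson).
    train_x: list of feature tuples; train_y: list of targets."""
    weights = []
    for x in train_x:
        d2 = sum((a - b) ** 2 for a, b in zip(x, query))
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

A small sigma makes predictions follow the nearest training point; a large sigma smooths towards the global mean, which is the one tuning decision a GRNN requires.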

  4. A statistical model for field emission in superconducting cavities

    International Nuclear Information System (INIS)

    Padamsee, H.; Green, K.; Jost, W.; Wright, B.

    1993-01-01

A statistical model is used to account for several features of the performance of an ensemble of superconducting cavities. The input parameters are: the number of emitters per unit area, a distribution function for emitter β values, a distribution function for emissive areas, and a processing threshold. The power deposited by emitters is calculated from the field emission current and electron impact energy. The model can successfully account for the fraction of tests that reach the maximum field Epk in an ensemble of cavities, e.g., 1-cell cavities at 3 GHz or 5-cell cavities at 1.5 GHz. The model is used to predict the level of power needed to successfully process cavities of various surface areas with high pulsed power processing (HPP).
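The statistical idea can be sketched numerically: draw a random emitter population for each simulated cavity, evaluate a simplified Fowler-Nordheim current for each emitter, and count the fraction of cavities whose worst emitter stays below a threshold at Epk. All constants here (`b_fn`, `beta_mean`, `current_limit`) are illustrative assumptions, not the paper's fitted values:

```python
import math
import random

def fn_current(beta, e_field, b_fn=6.0e4):
    """Simplified Fowler-Nordheim current (arbitrary units):
    j ~ (beta*E)^2 * exp(-B / (beta*E)), with B lumping work-function terms.
    e_field in MV/m; b_fn is an illustrative constant."""
    e_eff = max(beta * e_field, 1e-12)  # guard against zero effective field
    return e_eff ** 2 * math.exp(-b_fn / e_eff)

def fraction_reaching(epk, n_emitters, beta_mean=100.0,
                      current_limit=1e-3, trials=2000, seed=7):
    """Monte Carlo over an ensemble of cavities: a cavity 'reaches' Epk
    if its strongest emitter stays below the current threshold."""
    rng = random.Random(seed)
    good = 0
    for _ in range(trials):
        betas = [rng.expovariate(1.0 / beta_mean) for _ in range(n_emitters)]
        worst = max(fn_current(b, epk) for b in betas)
        if worst < current_limit:
            good += 1
    return good / trials
```

The sketch reproduces the two qualitative trends the model explains: larger surfaces (more emitters) and higher target fields both reduce the fraction of cavities reaching Epk.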

  5. The Supercritical Pile Model: Prompt Emission Across the Electromagnetic Spectrum

    Science.gov (United States)

    Kazanas, Demos; Mastichiadis, A.

    2008-01-01

The "Supercritical Pile" GRB model is an economical model that provides the dissipation necessary to convert explosively the energy stored in relativistic protons in the blast wave of a GRB into radiation; at the same time it produces spectra whose luminosity peaks at 1 MeV in the lab frame, a result of the kinematics of the proton-photon pair production reaction that effects the conversion of proton energy to radiation. We outline the fundamental notions behind the "Supercritical Pile" model and discuss the resulting spectra of the prompt emission from optical energies to gamma-ray energies of order Γ^2 m_e c^2 (Γ is the Lorentz factor of the blast wave), present even in the absence of an accelerated particle distribution, and compare our results to bursts that cover this entire energy range. Particular emphasis is given to the emission in the GLAST energy range in both the prompt and afterglow stages of the burst.

  6. An Architecturally Constrained Model of Random Number Generation and its Application to Modelling the Effect of Generation Rate

    Directory of Open Access Journals (Sweden)

    Nicholas J. Sexton

    2014-07-01

Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage) the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behaviour (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control ('executive') processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes.

  7. A fuzzy chance-constrained programming model with type 1 and type 2 fuzzy sets for solid waste management under uncertainty

    Science.gov (United States)

    Ma, Xiaolin; Ma, Chi; Wan, Zhifang; Wang, Kewei

    2017-06-01

Effective management of municipal solid waste (MSW) is critical for urban planning and development. This study aims to develop an integrated type 1 and type 2 fuzzy sets chance-constrained programming (ITFCCP) model for tackling a regional MSW management problem under a fuzzy environment, where waste generation amounts are assumed to be type 2 fuzzy variables and the treatment capacities of facilities are modelled as type 1 fuzzy variables. This treatment of uncertainty avoids the drawback of describing fuzzy possibility distributions in oversimplified forms. The fuzzy constraints are converted to their crisp equivalents through chance-constrained programming under the same or different confidence levels. Regional waste management in the City of Dalian, China, was used as a case study for demonstration. The solutions under various confidence levels reflect the trade-off between system economy and reliability. It is concluded that the ITFCCP model is capable of helping decision makers generate reasonable waste-allocation alternatives under uncertainty.

  8. Development of an emissions inventory model for mobile sources

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A W; Broderick, B M [Trinity College, Dublin (Ireland). Dept. of Civil, Structural and Environmental Engineering

    2000-07-01

Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentrations of a wide range of pollutants. A common characteristic of most of these strategies is a requirement for accurate data on both the quantity and spatial distribution of emissions to air, in the form of an atmospheric emissions inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for a wide range of vehicle types. The majority of inventories are compiled using 'passive' data from either surveys or transportation models and by their very nature tend to be out-of-date by the time they are compiled. Current trends are towards integrating urban traffic control systems and assessments of the environmental effects of motor vehicles. In this paper, a methodology for estimating emissions from mobile sources using real-time data is described. This methodology is used to calculate emissions of sulphur dioxide (SO2), oxides of nitrogen (NOx), carbon monoxide (CO), volatile organic compounds (VOC), particulate matter less than 10 μm aerodynamic diameter (PM10), 1,3-butadiene (C4H6) and benzene (C6H6) at a test junction in Dublin. Traffic data, which are required on a street-by-street basis, are obtained from induction loops and closed-circuit television (CCTV) as well as statistical data. The observed traffic data are compared to simulated data from a travel demand model. As a test case, an emissions inventory is compiled for a heavily trafficked signalized junction in an urban environment using the measured data. In order that the model may be validated, the predicted emissions are employed in a dispersion model along with local meteorological conditions and site geometry. The resultant pollutant concentrations are compared to average ambient kerbside conditions.
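The core inventory arithmetic described here (activity statistics multiplied by per-vehicle emission factors, street by street) can be sketched as follows. The vehicle classes and factor values are illustrative assumptions, not the study's measured Dublin data:

```python
# Hypothetical NOx emission factors in g/km per vehicle class
# (illustrative placeholders, not the paper's values).
EF_NOX = {"car": 0.4, "bus": 6.0, "hgv": 5.5}

def street_emissions(counts_by_class, link_length_km, ef=EF_NOX):
    """Street-level inventory entry:
    emissions (g) = sum over classes of count * link length * factor."""
    return sum(n * link_length_km * ef[cls]
               for cls, n in counts_by_class.items())

# Example link: real-time counts from an induction loop on a 0.5 km street
nox_grams = street_emissions({"car": 1200, "bus": 40, "hgv": 60}, 0.5)
```

In the real-time methodology, the counts would be refreshed from induction-loop and CCTV data each interval, and the same calculation repeated per pollutant with pollutant-specific factors.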

  9. Interaction between combustion and turbulence in modelling of emissions

    International Nuclear Information System (INIS)

    Oksanen, A.; Maeki-Mantila, E.

    1996-01-01

The aim of the work was to study combustion models that take into account the coupling between gas-phase reactions and turbulence in the modelling of emissions, especially of nitric oxide, when temperature and species concentrations fluctuate because of turbulence. The principal tools for modelling turbulent gas-phase combustion were methods based on the probability density function (pdf) with β- and γ-distributions, which can take into consideration the stochastic nature of turbulence, and, on the other hand, models which also include the effect of turbulence on the reaction rates in the flames, e.g. the Eddy Dissipation Model (EDM), the Eddy Dissipation Concept (EDC), the kinetic model, and combinations of these. Besides these models, the effect of different turbulence models (standard, RNG and Chen-Kim k-ε models) on the combustion phenomena, especially on the formation of emissions, was also studied. Similar modelling has been done by the teams in the Special Interest Group of ERCOFTAC (European Research Community On Flow, Turbulence And Combustion) under the title of Aerodynamics and Steady State Combustion Chambers and Furnaces (A.S.C.F.), with which we have co-operated successfully for some years. (author)

  10. Hybrid Active/Passive Control of Sound Radiation from Panels with Constrained Layer Damping and Model Predictive Feedback Control

    Science.gov (United States)

    Cabell, Randolph H.; Gibbs, Gary P.

    2000-01-01

    make the controller adaptive. For example, a mathematical model of the plant could be periodically updated as the plant changes, and the feedback gains recomputed from the updated model. To be practical, this approach requires a simple plant model that can be updated quickly with reasonable computational requirements. A recent paper by the authors discussed one way to simplify a feedback controller, by reducing the number of actuators and sensors needed for good performance. The work was done on a tensioned aircraft-style panel excited on one side by TBL flow in a low speed wind tunnel. Actuation was provided by a piezoelectric (PZT) actuator mounted on the center of the panel. For sensing, the responses of four accelerometers, positioned to approximate the response of the first radiation mode of the panel, were summed and fed back through the controller. This single input-single output topology was found to have nearly the same noise reduction performance as a controller with fifteen accelerometers and three PZT patches. This paper extends the previous results by looking at how constrained layer damping (CLD) on a panel can be used to enhance the performance of the feedback controller thus providing a more robust and efficient hybrid active/passive system. The eventual goal is to use the CLD to reduce sound radiation at high frequencies, then implement a very simple, reduced order, low sample rate adaptive controller to attenuate sound radiation at low frequencies. Additionally this added damping smoothes phase transitions over the bandwidth which promotes robustness to natural frequency shifts. Experiments were conducted in a transmission loss facility on a clamped-clamped aluminum panel driven on one side by a loudspeaker. A generalized predictive control (GPC) algorithm, which is suited to online adaptation of its parameters, was used in single input-single output and multiple input-single output configurations. 
Because this was a preliminary look at the potential

  11. Modeling the Land Use/Cover Change in an Arid Region Oasis City Constrained by Water Resource and Environmental Policy Change using Cellular Automata Model

    Science.gov (United States)

    Hu, X.; Li, X.; Lu, L.

    2017-12-01

Land use/cover change (LUCC) is an important subject in research on global environmental change and sustainable development, while spatial simulation of land use/cover change is one of the key topics in LUCC and is also difficult due to the complexity of the system. The cellular automata (CA) model has an irreplaceable role in simulating land use/cover change processes due to its powerful spatial computing capability. However, the majority of current CA land use/cover models are binary-state models that cannot provide more general information about the overall spatial pattern of land use/cover change. Here, a multi-state logistic-regression-based Markov cellular automata (MLRMCA) model and a multi-state artificial-neural-network-based Markov cellular automata (MANNMCA) model were developed and used to simulate the complex land use/cover evolutionary process in an arid-region oasis city constrained by water resources and environmental policy change, the city of Zhangye, during the period 1990-2010. The results indicated that the MANNMCA model was superior to the MLRMCA model in simulation accuracy. This indicates that combining an artificial neural network with CA can more effectively capture the complex relationships between land use/cover change and a set of spatial variables. Although the MLRMCA model also has some advantages, the MANNMCA model is more appropriate for simulating complex land use/cover dynamics. The two proposed models are effective and reliable, and can reflect the spatial evolution of regional land use/cover changes. These results also have potential implications for the impact assessment of water resources, ecological restoration, and sustainable urban development in arid areas.
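A single transition step of a logistic-regression-based CA, in the spirit of the MLRMCA model, can be sketched as follows: the conversion probability of a non-urban cell is a logistic function of its neighbourhood urban density, and a suitability mask stands in for the water-resource and policy constraints. The coefficients and grids are illustrative, not the calibrated Zhangye values:

```python
import math
import random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ca_step(grid, suitability, b0=-3.0, b1=6.0, seed=0):
    """One CA iteration on a 2-D grid of states (0 = non-urban, 1 = urban).
    A cell converts with probability logistic(b0 + b1 * neighbourhood
    density); cells with zero suitability (e.g. water) never convert.
    b0, b1 are illustrative, uncalibrated coefficients."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == 1 or suitability[i][j] == 0:
                continue  # already urban, or constrained (water/policy)
            neigh = [grid[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))
                     if (a, b) != (i, j)]
            density = sum(neigh) / len(neigh)
            if rng.random() < logistic(b0 + b1 * density):
                new[i][j] = 1
    return new
```

A multi-state version would carry several land-cover classes and a Markov transition matrix instead of the single 0→1 rule, but the neighbourhood-driven probability is the same ingredient.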

  12. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    Science.gov (United States)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2016-02-01

Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions

  13. Modelling the flooding capacity of a Polish Carpathian river: A comparison of constrained and free channel conditions

    Science.gov (United States)

    Czech, Wiktoria; Radecki-Pawlik, Artur; Wyżga, Bartłomiej; Hajdukiewicz, Hanna

    2016-11-01

    The gravel-bed Biała River, Polish Carpathians, was heavily affected by channelization and channel incision in the twentieth century. Not only were these impacts detrimental to the ecological state of the river, but they also adversely modified the conditions of floodwater retention and flood wave passage. Therefore, a few years ago an erodible corridor was delimited in two sections of the Biała to enable restoration of the river. In these sections, short, channelized reaches located in the vicinity of bridges alternate with longer, unmanaged channel reaches, which either avoided channelization or in which the channel has widened after the channelization scheme ceased to be maintained. Effects of these alternating channel morphologies on the conditions for flood flows were investigated in a study of 10 pairs of neighbouring river cross sections with constrained and freely developed morphology. Discharges of particular recurrence intervals were determined for each cross section using an empirical formula. The morphology of the cross sections together with data about channel slope and roughness of particular parts of the cross sections were used as input data to the hydraulic modelling performed with the one-dimensional steady-flow HEC-RAS software. The results indicated that freely developed cross sections, usually with multithread morphology, are typified by significantly lower water depth but larger width and cross-sectional flow area at particular discharges than single-thread, channelized cross sections. They also exhibit significantly lower average flow velocity, unit stream power, and bed shear stress. The pattern of differences in the hydraulic parameters of flood flows apparent between the two types of river cross sections varies with the discharges of different frequency, and the contrasts in hydraulic parameters between unmanaged and channelized cross sections are most pronounced at low-frequency, high-magnitude floods. However, because of the deep

  14. Inversion of CO and NOx emissions using the adjoint of the IMAGES model

    Directory of Open Access Journals (Sweden)

    J.-F. Müller

    2005-01-01

over Europe and Asia. Our inversion results have been evaluated against independent observations from aircraft campaigns. This comparison shows that the optimization of CO emissions constrained by both CO and NO2 observations leads to better agreement between modelled and observed values, especially in the Tropics and the Southern Hemisphere, compared to the case where only CO observations are used. A posteriori estimation of errors on the control parameters shows that a significant error reduction is achieved for the majority of the anthropogenic source parameters, whereas biomass burning emissions are still subject to large errors after optimization. Nonetheless, the constraints provided by the GOME measurements make it possible to reduce the uncertainties on savanna burning emissions of both CO and NOx, thus suggesting that the incorporation of these data in the inversion yields more robust results for carbon monoxide.

  15. Development of a modal emissions model using data from the Cooperative Industry/Government Exhaust Emission test program

    Science.gov (United States)

    2003-06-22

The Environmental Protection Agency's (EPA's) recommended model, MOBILE5a, has been used extensively to predict emission factors based on average speeds for each fleet type. Because average speeds are not appropriate in modeling intersections...

  16. Innovations in projecting emissions for air quality modeling ...

    Science.gov (United States)

    Air quality modeling is used in setting air quality standards and in evaluating their costs and benefits. Historically, modeling applications have projected emissions and the resulting air quality only 5 to 10 years into the future. Recognition that the choice of air quality management strategy has climate change implications is encouraging longer modeling time horizons. However, for multi-decadal time horizons, many questions about future conditions arise. For example, will current population, economic, and land use trends continue, or will we see shifts that may alter the spatial and temporal pattern of emissions? Similarly, will technologies such as building-integrated solar photovoltaics, battery storage, electric vehicles, and CO2 capture emerge as disruptive technologies - shifting how we produce and use energy - or will these technologies achieve only niche markets and have little impact? These are some of the questions that are being evaluated by researchers within the U.S. EPA’s Office of Research and Development. In this presentation, Dr. Loughlin will describe a range of analytical approaches that are being explored. These include: (i) the development of alternative scenarios of the future that can be used to evaluate candidate management strategies over wide-ranging conditions, (ii) the application of energy system models to project emissions decades into the future and to assess the environmental implications of new technologies, (iii) and methodo

  17. A prognostic pollen emissions model for climate models (PECM1.0)

    Directory of Open Access Journals (Sweden)

    M. C. Wozniak

    2017-11-01

We develop a prognostic model called Pollen Emissions for Climate Models (PECM) for use within regional and global climate models to simulate pollen counts over the seasonal cycle based on geography, vegetation type, and meteorological parameters. Using modern surface pollen count data, empirical relationships between prior-year annual average temperature and pollen season start and end dates are developed for deciduous broadleaf trees (Acer, Alnus, Betula, Fraxinus, Morus, Platanus, Populus, Quercus, Ulmus), evergreen needleleaf trees (Cupressaceae, Pinaceae), grasses (Poaceae; C3, C4), and ragweed (Ambrosia). This regression model explains as much as 57 % of the variance in pollen phenological dates, and it is used to create a climate-flexible phenology that can be used to study the response of wind-driven pollen emissions to climate change. The emissions model is evaluated in the Regional Climate Model version 4 (RegCM4) over the continental United States by prescribing an emission potential from PECM and transporting pollen as aerosol tracers. We evaluate two different pollen emissions scenarios in the model using (1) a taxa-specific land cover database, phenology, and emission potential, and (2) a plant functional type (PFT) land cover, phenology, and emission potential. The simulated surface pollen concentrations for both simulations are evaluated against observed surface pollen counts in five climatic subregions. Given prescribed pollen emissions, RegCM4 simulates observed concentrations within an order of magnitude, although the performance of the simulations in any subregion is strongly related to the land cover representation and the number of observation sites used to create the empirical phenological relationship. The taxa-based model provides a better representation of the phenology of tree-based pollen counts than the PFT-based model; however, we note that the PFT-based version provides a useful and climate-flexible emissions
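The phenology component described here reduces to simple per-taxon linear regressions of season start (or end) date on prior-year mean temperature. A sketch with invented data (not the paper's surface pollen counts):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = m*x + c: an illustrative stand-in
    for PECM's taxon-specific phenology regressions."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, my - m * mx

# Hypothetical data: prior-year mean temperature (deg C) vs season start
# day-of-year for one taxon; warmer years start the season earlier.
temps = [8.0, 9.5, 11.0, 12.5, 14.0]
starts = [110, 104, 99, 93, 88]
slope, intercept = fit_linear(temps, starts)
```

With one such (slope, intercept) pair per taxon and climate variable, the phenology becomes "climate-flexible": feeding in a projected temperature shifts the simulated season without retuning the emissions model.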

  18. Radio emission from symbiotic stars: a binary model

    International Nuclear Information System (INIS)

    Taylor, A.R.; Seaquist, E.R.

    1985-01-01

    The authors examine a binary model for symbiotic stars to account for their radio properties. The system comprises a cool, mass-losing star and a hot companion. Radio emission arises in the portion of the stellar wind photo-ionized by the hot star. Computer simulations for the case of uniform mass loss at constant velocity show that when less than half the wind is ionized, optically thick spectral indices greater than +0.6 are produced. Model fits to radio spectra allow the binary separation, wind density, and ionizing photon luminosity to be calculated. They apply the model to the symbiotic star H1-36. (orig.)

  19. Bayesian modelling of the emission spectrum of the Joint European Torus Lithium Beam Emission Spectroscopy system.

    Science.gov (United States)

    Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C

    2016-02-01

    A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of Li line without doing a separate background subtraction through modulation of the Li beam.
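Because the likelihood and priors are Gaussian and the forward model is linear in the intensities, the posterior is available in closed form. A minimal sketch of such a Bayesian linear inversion, with a synthetic response matrix standing in for the instrumental and filter functions (all values below are assumptions, not the JET system's):

```python
import numpy as np

# Sketch of Bayesian linear inversion with Gaussian prior and likelihood,
# as used to infer Li-line, Bremsstrahlung, and offset intensities.
rng = np.random.default_rng(0)
n_pix, n_par = 50, 3                       # spectrum pixels, free intensities
A = rng.uniform(0.0, 1.0, (n_pix, n_par))  # assumed instrument/filter response
x_true = np.array([100.0, 20.0, 5.0])      # Li line, background, offset
sigma = 2.0                                # photon + electronic noise level
y = A @ x_true + rng.normal(0.0, sigma, n_pix)

prior_mean = np.zeros(n_par)
prior_cov = np.diag([1e4, 1e4, 1e4])       # broad Gaussian priors

# Analytic Gaussian posterior: cov = (A^T C^-1 A + P^-1)^-1
Cinv = np.eye(n_pix) / sigma**2
post_cov = np.linalg.inv(A.T @ Cinv @ A + np.linalg.inv(prior_cov))
post_mean = post_cov @ (A.T @ Cinv @ y + np.linalg.inv(prior_cov) @ prior_mean)
print(post_mean)                    # close to x_true
print(np.sqrt(np.diag(post_cov)))  # 1-sigma uncertainties
```

The posterior mean and covariance come out in one linear-algebra step, which is why no separate background subtraction is needed: the background is just another inferred amplitude.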

  20. Using finite element modelling to examine the flow process and temperature evolution in HPT under different constraining conditions

    International Nuclear Information System (INIS)

    Pereira, P H R; Langdon, T G; Figueiredo, R B; Cetlin, P R

    2014-01-01

    High-pressure torsion (HPT) is a metal-working technique used to impose severe plastic deformation into disc-shaped samples under high hydrostatic pressures. Different HPT facilities have been developed and they may be divided into three distinct categories depending upon the configuration of the anvils and the restriction imposed on the lateral flow of the samples. In the present paper, finite element simulations were performed to compare the flow process, temperature, strain and hydrostatic stress distributions under unconstrained, quasi-constrained and constrained conditions. It is shown there are distinct strain distributions in the samples depending on the facility configurations and a similar trend in the temperature rise of the HPT workpieces

  1. Modelling the ArH+ emission from the Crab nebula

    Science.gov (United States)

    Priestley, F. D.; Barlow, M. J.; Viti, S.

    2017-12-01

    We have performed combined photoionization and photodissociation region (PDR) modelling of a Crab nebula filament subjected to the synchrotron radiation from the central pulsar wind nebula, and to a high flux of charged particles; a greatly enhanced cosmic-ray ionization rate over the standard interstellar value, ζ0, is required to account for the lack of detected [C I] emission in published Herschel SPIRE FTS observations of the Crab nebula. The observed line surface brightness ratios of the OH+ and ArH+ transitions seen in the SPIRE FTS frequency range can only be explained with both a high cosmic-ray ionization rate and a reduced ArH+ dissociative recombination rate compared to that used by previous authors, although consistent with experimental upper limits. We find that the ArH+/OH+ line strengths and the observed H2 vibration-rotation emission can be reproduced by model filaments with nH = 2 × 10^4 cm^-3, ζ = 10^7 ζ0 and visual extinctions within the range found for dusty globules in the Crab nebula, although far-infrared emission from [O I] and [C II] is higher than the observational constraints. Models with nH = 1900 cm^-3 underpredict the H2 surface brightness, but agree with the ArH+ and OH+ surface brightnesses and predict [O I] and [C II] line ratios consistent with observations. These models predict HeH+ rotational emission above detection thresholds, but consideration of the formation time-scale suggests that the abundance of this molecule in the Crab nebula should be lower than the equilibrium values obtained in our analysis.

  2. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Vol. 122 (2013), pp. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords: Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  3. Inverse Modeling of Emissions and their Time Profiles

    Czech Academy of Sciences Publication Activity Database

    Resler, Jaroslav; Eben, Kryštof; Juruš, Pavel; Liczki, Jitka

    2010-01-01

    Vol. 1, No. 4 (2010), pp. 288-295 ISSN 1309-1042 R&D Projects: GA MŽP SP/1A4/107/07 Grant - others: COST(XE) ES0602 Institutional research plan: CEZ:AV0Z10300504 Keywords: 4DVar * inverse modeling * diurnal time profile of emission * CMAQ adjoint * satellite observations Subject RIV: DG - Atmosphere Sciences, Meteorology

  4. Development of odorous gas model using municipal solid waste emission

    International Nuclear Information System (INIS)

    Mohd Nahar bin Othman; Muhd Noor Muhd Yunus; Ku Halim Ku Hamid

    2010-01-01

    The impact of ambient odour in the vicinity of the Semenyih MSW processing plant, commonly known as the RDF plant, can be very negative for the nearby population, causing public restlessness and consequently affecting the business operation and sustainability of the plant. The precise source of the odour, its types, emission levels and the meteorological conditions are needed to predict and establish the ambient odour level at the perimeter fence of the plant and to address it with respect to the ambient standards. Developing an odour gas model for treatment purposes is essential because MSW odour contains many chemical components that contribute to the smell. Upon modelling using an established package as well as site measurements, the odour level at the perimeter fence of the plant was deduced and found to be marginally high, above the normal ambient level. Based on this issue, a study was made to model the odour using the Ausplume model. This paper addresses and discusses the measurement of ambient odour, the dispersion modelling to establish the critical ambient emission level, and experimental validation using a simulated odour. The focus is on exploring the use of Ausplume modelling to develop the pattern of odour concentrations for various conditions and times, as well as on adapting the model for MSW odour controls. (author)

  5. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations.

  6. Modelling lifestyle effects on energy demand and related emissions

    International Nuclear Information System (INIS)

    Weber, C.

    2000-01-01

    An approach to analyse and quantify the impact of lifestyle factors on current and future energy demand is developed. Not only directly environmentally relevant consumer activities such as car use or heating are analysed, but also expenditure patterns that induce environmental damage through the production of the consumed goods. The use of household survey data from the national statistical offices makes it possible to cover this wide range of activities. A variety of behavioural patterns are observed across the available socio-economic household characteristics. For evaluating the energy and emission consequences of the consumed goods, enhanced input-output models are used. The additions implemented - a mixed monetary-energetic approach for inter-industry flows and a separate treatment of transport-related emissions - improve the reliability of the obtained results. The developed approach has been used to analyse current emission profiles and distributions in West Germany, France and the Netherlands, as well as scenarios for future energy demand and related emissions. It therefore provides a comprehensive methodology for analysing environmental effects from a consumer and citizen perspective, and thus contributes to increased transparency of complex economic and ecological interconnections. (author)

  7. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
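The sparsity that premise exploits is the same property that makes soft thresholding (the proximal operator of the l1 norm) the basic building block of sparse estimators. This sketch illustrates that building block only; it is not the premise algorithm itself:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: zeroes small coefficients,
    shrinks large ones, thereby promoting sparse solutions."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Denoise a sparse coefficient vector: small entries (noise) are zeroed,
# large ones (real structure) survive, slightly shrunk.
coeffs = np.array([5.0, 0.2, -4.0, 0.1, 0.0, 3.0])
denoised = soft_threshold(coeffs, 0.5)
print(denoised)  # → [ 4.5  0.  -3.5  0.   0.   2.5]
```

In practice such an operator is applied in a transform domain where the dust signal is sparse, inside an iterative solver.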

  8. AGILE Observations of the Gravitational-wave Source GW170817: Constraining Gamma-Ray Emission from an NS-NS Coalescence

    Science.gov (United States)

    Verrecchia, F.; Tavani, M.; Donnarumma, I.; Bulgarelli, A.; Evangelista, Y.; Pacciani, L.; Ursi, A.; Piano, G.; Pilia, M.; Cardillo, M.; Parmiggiani, N.; Giuliani, A.; Pittori, C.; Longo, F.; Lucarelli, F.; Minervini, G.; Feroci, M.; Argan, A.; Fuschino, F.; Labanti, C.; Marisaldi, M.; Fioretti, V.; Trois, A.; Del Monte, E.; Antonelli, L. A.; Barbiellini, G.; Caraveo, P.; Cattaneo, P. W.; Colafrancesco, S.; Costa, E.; D'Amico, F.; Ferrari, A.; Giommi, P.; Morselli, A.; Paoletti, F.; Pellizzoni, A.; Picozza, P.; Rappoldi, A.; Soffitta, P.; Vercellone, S.; Baroncelli, L.; Zollino, G.

    2017-12-01

    The LIGO-Virgo Collaboration (LVC) detected, on 2017 August 17, an exceptional gravitational-wave (GW) event temporally consistent within ~1.7 s with GRB 170817A observed by Fermi-GBM and INTEGRAL. The event turns out to be compatible with a neutron star-neutron star (NS-NS) coalescence that subsequently produced a radio/optical/X-ray transient detected at later times. We report the main results of the observations by the AGILE satellite of the GW170817 localization region (LR) and its electromagnetic (EM) counterpart. At the LVC detection time T0, the GW170817 LR was occulted by the Earth. The AGILE instrument collected useful data before and after the GW/GRB event because in its spinning observation mode it can scan a given source many times per hour. The earliest exposure of the GW170817 LR by the gamma-ray imaging detector started about 935 s after T0. No significant X-ray or gamma-ray emission was detected from the LR, which was repeatedly exposed over timescales of minutes, hours, and days before and after GW170817, also considering Mini-Calorimeter and Super-AGILE data. Our measurements are among the earliest ones obtained by space satellites on GW170817 and provide useful constraints on the precursor and delayed emission properties of the NS-NS coalescence event. We can exclude with high confidence the existence of an X-ray/gamma-ray emitting magnetar-like object with a large magnetic field of 10^15 G. Our data are particularly significant during the early stage of evolution of the EM remnant.

  9. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted … by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes … inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user.
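The minimum gradient support functional penalizes the number of non-zero model gradients rather than their magnitude, which is why sharp layer boundaries survive the regularization. A toy illustration (the focusing parameter beta is an assumption):

```python
import numpy as np

def mgs_penalty(m, beta=1e-3):
    """Minimum gradient support functional: each term approaches 1 wherever
    the model gradient is non-zero, so it counts boundaries rather than
    penalizing their sharpness."""
    dm = np.diff(m)
    return np.sum(dm**2 / (dm**2 + beta**2))

blocky = np.array([10.0, 10.0, 10.0, 50.0, 50.0, 50.0])  # one sharp boundary
smooth = np.linspace(10.0, 50.0, 6)                       # same contrast, smeared

print(mgs_penalty(blocky), mgs_penalty(smooth))
```

The blocky model (one boundary, penalty near 1) is preferred over the smooth ramp (five gradient supports, penalty near 5), the opposite of what an Occam-type smoothness norm would choose.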

  10. Modeling methane emissions by cattle production systems in Mexico

    Science.gov (United States)

    Castelan-Ortega, O. A.; Ku Vera, J.; Molina, L. T.

    2013-12-01

    Methane emissions from livestock are one of the largest sources of methane in Mexico. The purpose of the present paper is to provide a realistic estimate of the national inventory of methane produced by the enteric fermentation of cattle, based on an integrated simulation model, and to provide estimates of CH4 produced by cattle fed typical diets from the tropical and temperate climates of Mexico. The Mexican cattle population of 23.3 million heads was divided into two groups. The first group (7.8 million heads) represents cattle of the tropical climate regions. The second group (15.5 million heads) represents cattle in the temperate climate regions. This approach allows incorporating the effect of diet on CH4 production into the analysis, because the quality of forages is lower in the tropics than in temperate regions. The cattle population in every group was subdivided into two categories: cows (COW) and other types of cattle (OTHE), which included calves, heifers, steers and bulls. The daily CH4 production by each category of animal along an average production cycle of 365 days was simulated, instead of using a default emission factor as in the Tier 1 approach. Daily milk yield, live weight changes associated with the lactation, and dry matter intake were simulated for the entire production cycle. The Moe and Tyrrell (1979) model was used to simulate CH4 production for the COW category, the linear model of Mills et al. (2003) for the OTHE category in temperate regions, and the Kurihara et al. (1999) model for the OTHE category in the tropical regions, as the latter was developed for cattle fed tropical diets. All models were integrated with a cow submodel to form an Integrated Simulation Model (ISM). The AFRC (1993) equations and the lactation curve model of Morant and Gnanasakthy (1989) were used to construct the cow submodel. The ISM simulates on a daily basis the CH4 production, milk yield, live weight changes associated with lactation, and dry matter intake. 
The total daily CH
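Although the record is truncated here, the aggregation step it describes amounts to scaling simulated per-head daily CH4 by population and days in the production cycle. A hedged sketch with illustrative emission rates (the per-head rates are assumptions, not the ISM's outputs; only the herd sizes match the abstract):

```python
# Tier-2-style aggregation: per-head daily CH4 (in the paper, simulated by
# the ISM submodels; fixed illustrative values here) scaled by population
# and a 365-day production cycle. Rates are assumed, not the paper's.
herd = {
    ("tropical", "COW"):   (3.0e6, 0.30),  # (heads, kg CH4 per head per day)
    ("tropical", "OTHE"):  (4.8e6, 0.18),
    ("temperate", "COW"):  (6.0e6, 0.35),
    ("temperate", "OTHE"): (9.5e6, 0.20),
}

total_gg = sum(heads * rate * 365 for heads, rate in herd.values()) / 1e6  # Gg CH4/yr
print(f"National enteric CH4: {total_gg:.0f} Gg/yr")
```

The tropical/temperate split (7.8 and 15.5 million heads) mirrors the two diet groups in the abstract; simulating the daily rate per category, rather than fixing it, is what distinguishes the ISM from a Tier 1 default-factor inventory.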

  11. Evaluation of HOx sources and cycling using measurement-constrained model calculations in a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated ecosystem

    Directory of Open Access Journals (Sweden)

    S. B. Henry

    2013-02-01

    Full Text Available We present a detailed analysis of OH observations from the BEACHON (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen)-ROCS (Rocky Mountain Organic Carbon Study) 2010 field campaign at the Manitou Forest Observatory (MFO), which is a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated forest environment. A comprehensive suite of measurements was used to constrain primary production of OH via ozone photolysis, OH recycling from HO2, and OH chemical loss rates, in order to estimate the steady-state concentration of OH. In addition, the University of Washington Chemical Model (UWCM) was used to evaluate the performance of a near-explicit chemical mechanism. The diurnal cycle in OH from the steady-state calculations is in good agreement with measurement. A comparison between the photolytic production rates and the recycling rates from the HO2 + NO reaction shows that recycling rates are ~20 times faster than the photolytic OH production rates from ozone. Thus, we find that direct measurement of the recycling rates and the OH loss rates can provide accurate predictions of OH concentrations. More importantly, we also conclude that a conventional OH recycling pathway (HO2 + NO) can explain the observed OH levels in this non-isoprene environment. This is in contrast to observations in isoprene-dominated regions, where investigators have observed significant underestimation of OH and have speculated that unknown sources of OH are responsible. The highly constrained UWCM calculation under-predicts observed HO2 by as much as a factor of 8. As HO2 maintains oxidation capacity by recycling to OH, UWCM underestimates observed OH by as much as a factor of 4. When the UWCM calculation is constrained by measured HO2, model-calculated OH is in better agreement with the observed OH levels. Conversely, constraining the model to observed OH only slightly reduces the model-measurement HO2 discrepancy, implying unknown HO2
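The steady-state budget described above can be written as [OH]ss = (P_primary + k[HO2][NO]) / k_loss. A small sketch with assumed concentrations and a representative HO2 + NO rate constant (~8.1e-12 cm^3 molec^-1 s^-1 near 298 K), with values chosen here to reproduce the ~20x recycling-to-primary ratio reported:

```python
# Steady-state OH from measured production and loss terms. All
# concentrations and the OH reactivity below are illustrative assumptions.
k_ho2_no = 8.1e-12   # HO2 + NO -> OH + NO2 rate constant (cm^3 molec^-1 s^-1)
ho2 = 3.0e8          # HO2 (molec/cm^3, assumed)
no = 8.0e8           # NO (molec/cm^3, assumed)
p_primary = 1.0e5    # primary OH from O3 photolysis (molec cm^-3 s^-1, assumed)
k_loss = 5.0         # total OH reactivity (s^-1, assumed)

p_recycle = k_ho2_no * ho2 * no          # OH recycling rate
oh_ss = (p_primary + p_recycle) / k_loss  # steady-state OH
print(f"recycling/primary = {p_recycle / p_primary:.0f}, [OH]ss = {oh_ss:.2e}")
```

With these inputs the recycling term dominates the budget by roughly a factor of 20, which is why measured HO2 and NO alone give a good OH prediction in this environment.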

  12. The Fire Locating and Modeling of Burning Emissions (FLAMBE) Project

    Science.gov (United States)

    Reid, J. S.; Prins, E. M.; Westphal, D.; Richardson, K.; Christopher, S.; Schmidt, C.; Theisen, M.; Eck, T.; Reid, E. A.

    2001-12-01

    The Fire Locating and Modeling of Burning Emissions (FLAMBE) project was initiated by NASA, the US Navy and NOAA to monitor biomass burning and burning emissions on a global scale. The idea behind the mission is to integrate remote sensing data with global and regional transport models in real time for the purpose of providing the scientific community with smoke and fire products for planning and research purposes. FLAMBE currently utilizes real-time satellite data from the GOES satellites; fire products based on the Wildfire Automated Biomass Burning Algorithm (WF_ABBA) are generated for the Western Hemisphere every 30 minutes with only a 90 minute processing delay. We are currently collaborating with other investigators to gain global coverage. Once generated, the fire products are used to input smoke fluxes into the NRL Aerosol Analysis and Prediction System, where advection forecasts are performed for up to 6 days. Subsequent radiative transfer calculations are used to estimate top-of-atmosphere and surface radiative forcing as well as surface layer visibility. Near real-time validation is performed using field data collected by Aerosol Robotic Network (AERONET) Sun photometers. In this paper we fully describe the FLAMBE project and data availability. Preliminary results from the previous year will also be presented, with an emphasis on the development of algorithms to determine smoke emission fluxes from individual fire products. Comparisons to AERONET Sun photometer data will be made.

  13. A constrained dispersive optical model for the neutron-nucleus interaction from -80 to +80 MeV for the mass region 27≤A≤32

    International Nuclear Information System (INIS)

    Al-Ohali, M.A.; Howell, C.R.; Tornow, W.; Walter, R.L.

    1995-01-01

    A Constrained Dispersive Optical Model (CDOM) analysis was performed for the neutron-nucleus interaction in the energy domain from -80 to 80 MeV for three nuclei in the center of the 2s-1d shell. The CDOM incorporates the dispersion relation which connects the real and imaginary parts of the nuclear mean field. Parameters for the model were derived by fitting the neutron differential elastic cross-section, the total cross-section, and the analyzing power data for 27Al, 28Si, and 32S. The parameters were also adjusted slightly to improve overall agreement with single-particle bound-state energies
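For reference, the dispersion relation invoked above is commonly written in subtracted form, anchoring the real part of the mean field at the Fermi energy E_F (a sketch of the standard expression, not necessarily the paper's exact parametrization):

```latex
% Dispersive correction linking the real mean field to the imaginary part W(E');
% \mathcal{P} denotes the principal value, E_F the Fermi energy.
\Delta V(E) \;=\; \frac{\mathcal{P}}{\pi}\int_{-\infty}^{\infty}
  W(E')\left[\frac{1}{E'-E}-\frac{1}{E'-E_F}\right]\mathrm{d}E',
\qquad
V(E) \;=\; V_{\mathrm{HF}}(E) + \Delta V(E).
```

This single relation is what couples the scattering data at positive energies to the bound-state properties at negative energies, allowing one fit across the full -80 to +80 MeV domain.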

  14. Estimation of landfill emission lifespan using process oriented modeling

    International Nuclear Information System (INIS)

    Ustohalova, Veronika; Ricken, Tim; Widmann, Renatus

    2006-01-01

    Depending on the particular pollutants emitted, landfills may require service activities lasting from hundreds to thousands of years. Flexible tools allowing long-term predictions of emissions are of key importance to determine the nature and expected duration of maintenance and post-closure activities. A highly capable option is prediction based on models verified by experiments: such predictions are fast, flexible, and allow the comparison of various possible operation scenarios in order to find the most appropriate one. The intention of the presented work was to develop an experimentally verified multi-dimensional predictive model capable of quantifying and estimating the processes taking place in landfill sites, where a coupled process description allows precise time and space resolution. This constitutive two-dimensional model is based on the macromechanical theory of porous media (TPM) for a saturated thermo-elastic porous body. The model was used to simulate simultaneously occurring processes - organic phase transition, gas emissions, heat transport, and settlement behavior - on a long time scale for municipal solid waste deposited in a landfill. The relationships between the properties (composition, pore structure) of a landfill and the conversion and multi-phase transport phenomena inside it were experimentally determined. In this paper, we present both the theoretical background of the model and the results of the simulations at one single point as well as in a vertical landfill cross-section
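The paper's TPM continuum model is far richer than any box model, but the notion of an "emission lifespan" can be illustrated with a deliberately simple first-order gas-generation sketch (the rate constant and generation potential below are assumptions):

```python
import math

# First-order sketch of long-term landfill gas generation; the TPM model
# in the paper is far richer. k and L0 are illustrative assumptions.
k = 0.05        # decay rate (1/yr, assumed)
L0 = 100.0      # CH4 generation potential (m^3 per Mg waste, assumed)
mass = 1.0e5    # deposited waste (Mg, assumed)

def gas_rate(t_years):
    """Annual gas generation rate Q(t) = k * L0 * M * exp(-k t)."""
    return k * L0 * mass * math.exp(-k * t_years)

# One possible 'emission lifespan': time until the rate falls to 1 % of its
# initial value, t = ln(100) / k.
t99 = math.log(100.0) / k
print(f"Q(0) = {gas_rate(0):.0f} m^3/yr, 1% threshold after {t99:.0f} years")
```

Even this crude exponential already shows why lifespans scale inversely with the decay rate: halving k doubles the post-closure horizon.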

  15. A model for neutrino emission from nuclear accretion disks

    Science.gov (United States)

    Deaton, Michael

    2015-04-01

    Compact object mergers involving at least one neutron star can produce short-lived black hole accretion engines. Over tens to hundreds of milliseconds such an engine consumes a disk of hot, nuclear-density fluid, and drives changes to its surrounding environment through luminous emission of neutrinos. The neutrino emission may drive an ultrarelativistic jet, may peel off the disk's outer layers as a wind, may irradiate those winds or other forms of ejecta and thereby change their composition, may change the composition and thermodynamic state of the disk itself, and may oscillate in its flavor content. We present the full spatial, angular, and energy dependence of the neutrino distribution function around a realistic model of a nuclear accretion disk, to inform future explorations of these types of behaviors. The disk model was produced with the Spectral Einstein Code (SpEC).

  16. Objective Characterization of Snow Microstructure for Microwave Emission Modeling

    Science.gov (United States)

    Durand, Michael; Kim, Edward J.; Molotch, Noah P.; Margulis, Steven A.; Courville, Zoe; Malzler, Christian

    2012-01-01

    Passive microwave (PM) measurements are sensitive to the presence and quantity of snow, a fact that has long been used to monitor snow cover from space. In order to estimate total snow water equivalent (SWE) within PM footprints (on the order of approx. 100 sq km), it is prerequisite to understand snow microwave emission at the point scale and how microwave radiation integrates spatially; the former is the topic of this paper. Snow microstructure is one of the fundamental controls on the propagation of microwave radiation through snow. Our goal in this study is to evaluate the prospects for driving the Microwave Emission Model of Layered Snowpacks with objective measurements of snow specific surface area (S) to reproduce measured brightness temperatures. This eliminates the need to treat the grain size as a free-fit parameter.
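Objective microstructure measurements usually enter emission models through a conversion from specific surface area S to an equivalent grain dimension; for spheres, the surface-to-volume relation gives d_opt = 6/(rho_ice * S). A small sketch of that conversion (the exact microstructure parameter the emission model consumes may differ):

```python
# Convert measured specific surface area S (m^2/kg) to the optical-
# equivalent grain diameter: d_opt = 6 / (rho_ice * S), from the
# surface-to-volume ratio of spheres.
RHO_ICE = 917.0  # density of ice (kg/m^3)

def optical_diameter_mm(ssa_m2_per_kg):
    return 6.0 / (RHO_ICE * ssa_m2_per_kg) * 1e3  # mm

for ssa in (10.0, 30.0, 60.0):
    print(f"S = {ssa:5.1f} m^2/kg -> d_opt = {optical_diameter_mm(ssa):.3f} mm")
```

Because S is directly measurable (e.g. by infrared reflectance or micro-CT), a relation like this is what lets the grain size be an observed input rather than a free-fit parameter.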

  17. Combustion optimization and HCCI modeling for ultra low emission

    Energy Technology Data Exchange (ETDEWEB)

    Koten, Hasan; Yilmaz, Mustafa; Zafer Gul, M. [Marmara University Mechanical Engineering Department (Turkey)], E-mail: hasan.koten@marmara.edu.tr

    2011-07-01

    With the coming shortage of fossil fuels and the rising concerns over the environment, it is important to develop new technologies both to reduce energy consumption and pollution at the same time. In the transportation sector, new combustion processes are under development to provide clean diesel combustion with no particulate or NOx emissions. However, these processes have issues such as limited power output, high levels of unburned hydrocarbons, and carbon monoxide emissions. The aim of this paper is to present a methodology for optimizing combustion performance. The methodology consists of the use of a multi-objective genetic algorithm optimization tool; homogeneous charge compression ignition engine cases were studied with the ECFM-3Z combustion model. Results showed that injected fuel mass led to a decrease in power output, a finding which is in keeping with previous research. This paper presents an optimization tool which can be useful in improving the combustion process.

  18. Impact of a highly detailed emission inventory on modeling accuracy

    Science.gov (United States)

    Taghavi, M.; Cautenet, S.; Arteta, J.

    2005-03-01

    During the Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions (ESCOMPTE) campaign (June 10 to July 14, 2001), two pollution events observed during an intensive measurement period (IOP2a and IOP2b) were simulated. The comprehensive Regional Atmospheric Modeling System (RAMS) model, version 4.3, coupled online with a chemical module including 29 species, is used to follow the chemistry of a polluted zone over Southern France. This online method takes advantage of a parallel code and of the powerful SGI 3800 computer. Runs are performed with two emission inventories: the Emission Pre Inventory (EPI) and the Main Emission Inventory (MEI). The latter is more recent and has a higher resolution. The redistribution of simulated chemical species (ozone and nitrogen oxides) is compared with aircraft and surface station measurements for both runs at regional scale. We show that the MEI inventory is more efficient than the EPI in retrieving the redistribution of chemical species in space (three-dimensional) and time. At surface stations, MEI is superior, especially for primary species like nitrogen oxides. The ozone pollution peaks obtained from an inventory such as EPI carry a large uncertainty. To understand the realistic geographical distribution of pollutants and to obtain a good order of magnitude of ozone concentration (in space and time), a high-resolution inventory like MEI is necessary. Coupling RAMS-Chemistry with MEI provides a very efficient tool able to simulate pollution plumes even in a region with complex circulations, such as the ESCOMPTE zone.

  19. Direct stratospheric injection of biomass burning emissions: a case study of the 2009 Australian bushfires using the NASA GISS ModelE2 composition-climate model

    Science.gov (United States)

    Field, Robert; Fromm, Mike; Voulgarakis, Apostolos; Shindell, Drew; Flannigan, Mike; Bernath, Peter

    2014-05-01

    Direct stratospheric injection (DSI) of forest fire smoke represents a direct biogeochemical link between the land surface and stratosphere. DSI events occur regularly in the northern and southern extratropics, and have been observed across a wide range of measurements, but their fate and effects are not well understood. DSIs result from explosive, short-lived fires, and their plumes stand out from background concentrations immediately. This makes it easier to associate detected DSIs with individual fires and their estimated emissions. Because the emission pulses are brief, chemical decay can be more clearly assessed, and because the emission pulses are so large, a wide range of rare chemical species can be detected. Observational evidence suggests that they can persist in the stratosphere for several months, enhance ozone production, and be self-lofted to the middle stratosphere through shortwave absorption and diabatic heating. None of these phenomena have been evaluated, however, with a physical model. To that end, we are simulating the smoke plumes from the February 2009 Australian 'Black Saturday' bushfires using the NASA GISS ModelE2 composition-climate model, nudged toward horizontal winds from reanalysis. To date, this is the best-observed DSI in the southern hemisphere. Chemical and aerosol signatures of the plume were observed in a wide array of limb and nadir satellite retrievals. Detailed estimates of fuel consumption and injection height have been made because of the severity of the fires. Uncommon among DSI events was a large segment of the plume that entrained into the upper equatorial easterlies. Preliminary modeling results show that the relative strengths of the equatorial and extratropical plume segments are sensitive to the plume's initial injection height. This highlights the difficulty in reconciling uncertainty in the reanalysis over the Southern Hemisphere with fairly well constrained estimates of fire location and injection height at the

  20. Challenges and Conundrums in Modeling Global Methane Emissions from Wetlands: An Empiricist's Viewpoint

    Science.gov (United States)

    Bridgham, S. D.

    2015-12-01

    Wetlands emit a third to half of the global CH4 flux and have the largest uncertainty of any emission source. Moreover, wetlands have provided an important radiative feedback to climate in the geologic and recent past. A number of large-scale wetland CH4 models have been developed recently, but intermodel comparisons show wide discrepancies in their predictions. I present an empiricist's overview of the current limitations and challenges of more accurately modeling wetland CH4 emissions. One of the largest limitations is simply the poor knowledge of wetland area, with estimated global values varying by more than a factor of three. The areas of seasonal and tropical wetlands are particularly poorly constrained. There are also few wetlands with complete, multi-year datasets for all of the input variables required by many models, and this lack of data is particularly alarming in tropical wetlands given that they are arguably the single largest natural or anthropogenic global CH4 source. Almost all large-scale CH4 models have little biogeochemical mechanistic detail and treat anaerobic carbon cycling in a highly simplified manner. The CH4:CO2 ratio in anaerobic carbon mineralization is a central parameter in many models, but is typically set at only a few values with no mechanistic underpinning. However, empirical data show that this ratio varies by five orders of magnitude in different wetlands, and tropical wetlands appear to be particularly methanogenic, all for reasons that are very poorly understood. The predominance of the acetoclastic pathway of methanogenesis appears to be related to total CH4 production, but different methanogenesis pathways are generally not incorporated into models. Other important anaerobic processes such as humic substances acting as terminal electron acceptors, fermentation, homoacetogenesis, and anaerobic CH4 oxidation are also not included in most models despite evidence of their importance in empirical studies. 

  1. Application of GIS to modified models of vehicle emission dispersion

    Science.gov (United States)

    Jin, Taosheng; Fu, Lixin

    This paper reports on a preliminary study of the forecasting and evaluation of transport-related air pollution dispersion in urban areas. The traditional Gaussian dispersion models are modified, and in particular a crossroad model is built that accounts for the large variation in vehicle emissions caused by different driving patterns at intersections. These models are combined with a self-developed geographic information system (GIS) platform, and a simulation system with graphical interfaces is built. The system aims at visually describing how urban traffic characteristics influence the urban environment, and thereby provides a reference for improving urban air quality. Owing to the self-developed GIS platform and the novel crossroad model, the system is effective, flexible and accurate. Finally, simulated (predicted) and observed hourly concentrations are compared, showing good agreement.
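    As a rough illustration of the Gaussian dispersion approach these models build on, the sketch below evaluates a steady-state plume with ground reflection. The Briggs rural coefficients for neutral stability and all source parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def briggs_sigmas(x):
    """Briggs rural dispersion coefficients for neutral stability
    (Pasquill class D); x is downwind distance in metres."""
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return sigma_y, sigma_z

def gaussian_plume(q, u, h, x, y, z):
    """Gaussian plume concentration (g/m^3) with full ground reflection.

    q: emission rate (g/s), u: wind speed (m/s), h: effective source
    height (m); x downwind, y crosswind, z vertical (m)."""
    sy, sz = briggs_sigmas(x)
    lateral = np.exp(-y**2 / (2.0 * sy**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sz**2))
                + np.exp(-(z + h)**2 / (2.0 * sz**2)))  # image source term
    return q / (2.0 * np.pi * u * sy * sz) * lateral * vertical

# Centreline concentration should exceed the off-axis value at the same x.
c_centre = gaussian_plume(10.0, 3.0, 2.0, 200.0, 0.0, 1.5)
c_offset = gaussian_plume(10.0, 3.0, 2.0, 200.0, 30.0, 1.5)
```

    A crossroad model of the kind described would replace the constant emission rate q with a driving-pattern-dependent rate per road link.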

  2. Urban scale air quality modelling using detailed traffic emissions estimates

    Science.gov (United States)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second-generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated by applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time-series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a one-week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation improves significantly when the background is added, with an average of 0.89 for the 24-h record. The results highlight the potential of detailed traffic and instantaneous exhaust emission estimates, together with a filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
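    The "within a factor of two" acceptance criterion quoted above is commonly formalized as the FAC2 metric. A minimal sketch, with made-up data for illustration:

```python
import numpy as np

def fac2(predicted, observed):
    """Fraction of predictions within a factor of two of the observations,
    i.e. 0.5 <= predicted/observed <= 2.0."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    ratio = predicted / observed
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs  = np.array([10.0, 20.0, 30.0, 40.0])
pred = np.array([12.0, 45.0, 28.0,  9.0])  # 45/20 and 9/40 fall outside
score = fac2(pred, obs)                    # 2 of 4 pass -> 0.5
```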

  3. Constraints on pulsed emission model for repeating FRB 121102

    Science.gov (United States)

    Kisaka, Shota; Enoto, Teruaki; Shibata, Shinpei

    2017-12-01

    Recent localization of the repeating fast radio burst (FRB) 121102 revealed the distance of its host galaxy and the luminosities of the bursts. We investigated constraints on the young neutron star (NS) model under the conditions that (a) the FRB's intrinsic luminosity is supplied by the spin-down energy, and (b) the FRB duration is shorter than the NS rotation period. For a circular-cone emission geometry, conditions (a) and (b) together confine the NS parameters to a much smaller range than condition (a) alone, as considered in previous works. Anisotropy of the pulsed emission does not affect the area of the allowed parameter region, by virtue of condition (b). The determined parameters are consistent with those independently limited by the properties of the possible persistent radio counterpart and by the circumburst environment, such as the surrounding material. Since the NS in the allowed parameter region is older than the spin-down timescale, the hypothetical GRP (giant radio pulse)-like model predicts a rapid radio flux decay of ≲1 Jy within a few years as the spin-down luminosity decreases. Continued monitoring will provide constraints on the young NS models. If no flux evolution is seen, an alternative model will be needed, e.g., magnetically powered flares.
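    Conditions (a) and (b) can be sketched numerically with the standard magnetic-dipole spin-down formula. The field strength, period, burst luminosity and duration below are illustrative assumptions, not the paper's fitted values.

```python
import math

C_LIGHT = 3.0e10    # speed of light, cm/s
NS_RADIUS = 1.0e6   # canonical neutron-star radius, cm (assumption)

def spin_down_luminosity(b_gauss, period_s):
    """Magnetic-dipole spin-down power L = B^2 R^6 Omega^4 / (6 c^3),
    in erg/s (orthogonal-rotator convention)."""
    omega = 2.0 * math.pi / period_s
    return b_gauss**2 * NS_RADIUS**6 * omega**4 / (6.0 * C_LIGHT**3)

def allowed(b_gauss, period_s, l_burst, t_burst):
    """Conditions from the abstract: (a) spin-down power covers the burst
    luminosity; (b) the burst is shorter than the rotation period."""
    return (spin_down_luminosity(b_gauss, period_s) >= l_burst
            and t_burst < period_s)

# A magnetar-strength field and millisecond period pass both conditions
# for an assumed 1e43 erg/s, 0.5 ms burst; a weaker, slower NS does not.
fast = allowed(1.0e13, 1.0e-3, 1.0e43, 5.0e-4)
slow = allowed(1.0e12, 1.0e-1, 1.0e43, 5.0e-4)
```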

  4. Modelling of non-thermal electron cyclotron emission during ECRH

    International Nuclear Information System (INIS)

    Tribaldos, V.; Krivenski, V.

    1990-01-01

    The existence of suprathermal electrons during Electron Cyclotron Resonance Heating experiments in tokamaks is today a well-established fact. At low densities the creation of large non-thermal electron tails affects the temperature profile measurements obtained by 2nd-harmonic, X-mode, low-field-side electron cyclotron emission. At higher densities suprathermal electrons can be detected by high-field-side emission. In electron cyclotron current drive experiments a high-energy suprathermal tail, asymmetric in v, is observed. Non-Maxwellian electron distribution functions are also typically observed during lower-hybrid current drive experiments. Fast electrons have been observed during ion heating by neutral beams as well. Two distinct approaches are currently used in the interpretation of the experimental results: simple analytical models that reproduce some of the expected non-Maxwellian characteristics of the electron distribution function are employed to get a qualitative picture of the phenomena, while sophisticated numerical Fokker-Planck calculations give the electron distribution function from which the emission spectra are computed. No algorithm is known to solve the inverse problem, i.e. to compute the electron distribution function from the emitted spectra. The proposed methods all rely on the basic assumption that the electron distribution function has a given functional dependence on a limited number of free parameters, which are then 'measured' by best-fitting the experimental results. Here we discuss the legitimacy of this procedure. (author) 7 refs., 5 figs

  5. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate emission measurements of on-road vehicles. To this end, it develops a failure-probability model of the vehicle emission test for passenger cars using a binomial logit model. The model takes failure of the CO and HC emission tests for the gasoline-car category and of the opacity emission test for the diesel-fuel-car category as dependent variables, with vehicle age, engine size, brand and car type as independent variables. In order to imp...
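    A binomial logit model of test failure of the kind described can be sketched as follows. The synthetic data and the single age covariate are assumptions for illustration (the survey data are not reproduced here); the fit uses Newton-Raphson maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: failure probability rises with vehicle age through
# a logit link, P(fail) = 1 / (1 + exp(-(b0 + b1 * age))).
n = 2000
age = rng.uniform(1.0, 20.0, n)                    # years
true_logit = -4.0 + 0.3 * age
fail = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

# Newton-Raphson (IRLS) maximum-likelihood fit of [b0, b1].
X = np.column_stack([np.ones(n), age])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))            # predicted probabilities
    w = p * (1.0 - p)                              # logistic weights
    grad = X.T @ (fail - p)                        # score vector
    hess = X.T @ (X * w[:, None])                  # observed information
    beta += np.linalg.solve(hess, grad)
```

    With more covariates (engine size, brand, car type), extra columns are appended to X in the same way.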

  6. Comparison of models used for national agricultural ammonia emission inventories in Europe

    DEFF Research Database (Denmark)

    Reidy, B; Dämmgen, U; Döhler, H

    2008-01-01

    and harmonized the available knowledge on emission factors (EFs) for nitrogen (N)-flow emission calculation models and initiated a new generation of emission inventories. As a first step in summarizing the available knowledge, six N-flow models, used to calculate national NH3 emissions from agriculture...... the variation in the results generated awareness and consensus concerning available scientific data and the importance of specific processes not yet included in some models...

  7. MODELING ATMOSPHERIC EMISSION FOR CMB GROUND-BASED OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Errard, J.; Borrill, J. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Ade, P. A. R. [School of Physics and Astronomy, Cardiff University, Cardiff CF10 3XQ (United Kingdom); Akiba, Y.; Chinone, Y. [High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801 (Japan); Arnold, K.; Atlas, M.; Barron, D.; Elleflot, T. [Department of Physics, University of California, San Diego, CA 92093-0424 (United States); Baccigalupi, C.; Fabbian, G. [International School for Advanced Studies (SISSA), Trieste I-34014 (Italy); Boettger, D. [Department of Astronomy, Pontifica Universidad Catolica de Chile (Chile); Chapman, S. [Department of Physics and Atmospheric Science, Dalhousie University, Halifax, NS, B3H 4R2 (Canada); Cukierman, A. [Department of Physics, University of California, Berkeley, CA 94720 (United States); Delabrouille, J. [AstroParticule et Cosmologie, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité (France); Dobbs, M.; Gilbert, A. [Physics Department, McGill University, Montreal, QC H3A 0G4 (Canada); Ducout, A.; Feeney, S. [Department of Physics, Imperial College London, London SW7 2AZ (United Kingdom); Feng, C. [Department of Physics and Astronomy, University of California, Irvine (United States); and others

    2015-08-10

    The atmosphere is one of the most important noise sources for ground-based cosmic microwave background (CMB) experiments. By increasing the optical loading on the detectors, it amplifies their effective noise, while its fluctuations introduce spatial and temporal correlations between detected signals. We present a physically motivated 3D model of the atmosphere's total intensity emission at millimeter and sub-millimeter wavelengths. We derive a new analytical estimate for the correlation between detectors' time-ordered data as a function of the instrument and survey design, as well as several atmospheric parameters such as wind, relative humidity, temperature and turbulence characteristics. Using an original numerical computation, we examine the effect of each physical parameter on the correlations in the time series of a given experiment. We then use a parametric-likelihood approach to validate the modeling and estimate atmospheric parameters from the POLARBEAR-I project's first-season data set. We derive a new 1.0% upper limit on the linear polarization fraction of atmospheric emission. We also compare our results to previous studies and weather station measurements. The proposed model can be used for realistic simulations of future ground-based CMB observations.
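    A toy version of the wind-driven correlation between detector time streams can be sketched under a frozen-screen (Taylor hypothesis) assumption: both detectors see the same atmospheric fluctuation, the downwind one delayed by separation/wind speed. The AR(1) fluctuation model, sampling rate, wind speed and separation are illustrative numbers, not POLARBEAR parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 100.0                 # sampling rate, Hz
n = 4096                   # samples per detector
wind_speed = 5.0           # m/s
separation = 1.0           # m  -> delay of 0.2 s = 20 samples
delay = int(round(separation / wind_speed * fs))

# Correlated (red) atmospheric fluctuation as an AR(1) process.
noise = rng.standard_normal(n + delay)
atm = np.empty(n + delay)
atm[0] = noise[0]
for t in range(1, n + delay):
    atm[t] = 0.9 * atm[t - 1] + noise[t]

# Upwind detector sees the screen first; downwind sees it `delay` later.
det_up = atm[delay:delay + n] + 0.1 * rng.standard_normal(n)
det_down = atm[:n] + 0.1 * rng.standard_normal(n)

def xcorr(a, b, lag):
    """Pearson correlation of a[lag + i] against b[i]."""
    if lag >= 0:
        a, b = a[lag:], b[:len(b) - lag]
    else:
        a, b = a[:len(a) + lag], b[-lag:]
    return np.corrcoef(a, b)[0, 1]

lags = np.arange(-50, 51)
cc = np.array([xcorr(det_down, det_up, k) for k in lags])
best_lag = lags[np.argmax(cc)]   # peaks at the wind-crossing delay
```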

  8. Puff models for simulation of fugitive radioactive emissions in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Camila P. da, E-mail: camila.costa@ufpel.edu.b [Universidade Federal de Pelotas (UFPel), RS (Brazil). Inst. de Fisica e Matematica. Dept. de Matematica e Estatistica; Pereira, Ledina L., E-mail: ledinalentz@yahoo.com.b [Universidade do Extremo Sul Catarinense (UNESC), Criciuma, SC (Brazil); Vilhena, Marco T., E-mail: vilhena@pq.cnpq.b [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Tirabassi, Tiziano, E-mail: t.tirabassi@isac.cnr.i [Institute of Atmospheric Sciences and Climate (CNR/ISAC), Bologna (Italy)

    2009-07-01

    A puff model for the dispersion of material from fugitive radioactive emissions is presented. For vertical diffusion the model is based on general techniques for solving the time-dependent advection-diffusion equation: the ADMM (Advection Diffusion Multilayer Method) and GILTT (Generalized Integral Laplace Transform Technique) techniques. The first is an analytical solution based on a discretization of the Atmospheric Boundary Layer (ABL) into sub-layers, in which the advection-diffusion equation is solved by the Laplace transform technique; the solution is given in integral form. The second is a well-known hybrid method that has solved a wide class of direct and inverse problems, mainly in the areas of heat transfer and fluid mechanics; its solution is given in series form. Comparisons between values predicted by the models and experimental ground-level concentrations are shown. (author)
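    For orientation, a minimal Gaussian puff sketch, assuming Fickian growth of the puff widths (sigma = sqrt(2 K t)) with illustrative eddy diffusivities rather than the ADMM/GILTT vertical solutions used in the paper:

```python
import numpy as np

def puff_concentration(q, u, t, x, y, z, k_h=5.0, k_z=1.0):
    """Instantaneous Gaussian puff from a ground-level release of mass
    q (g) at the origin, advected by wind u (m/s) along x.

    Puff widths grow as sigma_i = sqrt(2 K_i t); k_h, k_z are assumed
    horizontal/vertical eddy diffusivities (m^2/s). Ground reflection
    doubles the concentration for a ground-level source."""
    sx = sy = np.sqrt(2.0 * k_h * t)
    sz = np.sqrt(2.0 * k_z * t)
    norm = q / ((2.0 * np.pi)**1.5 * sx * sy * sz)
    return (2.0 * norm
            * np.exp(-(x - u * t)**2 / (2.0 * sx**2))
            * np.exp(-y**2 / (2.0 * sy**2))
            * np.exp(-z**2 / (2.0 * sz**2)))

# The puff centre travels with the wind and dilutes as it grows.
c_centre = puff_concentration(1000.0, 2.0, 10.0, 20.0, 0.0, 0.0)
c_ahead  = puff_concentration(1000.0, 2.0, 10.0, 60.0, 0.0, 0.0)
c_later  = puff_concentration(1000.0, 2.0, 100.0, 200.0, 0.0, 0.0)
```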