A Constrained Standard Model: Effects of Fayet-Iliopoulos Terms
International Nuclear Information System (INIS)
Barbieri, Riccardo; Hall, Lawrence J.; Nomura, Yasunori
2001-01-01
In (1) the one-Higgs-doublet standard model was obtained by an orbifold projection of a 5D supersymmetric theory in an essentially unique way, resulting in a prediction for the Higgs mass m_H = 127 ± 8 GeV and for the compactification scale 1/R = 370 ± 70 GeV. The dominant one-loop contribution to the Higgs potential was found to be finite, while the above uncertainties arose from quadratically divergent brane Z factors and from other higher-loop contributions. In (3), a quadratically divergent Fayet-Iliopoulos term was found at one loop in this theory. We show that the resulting uncertainties in the predictions for the Higgs boson mass and the compactification scale are small, about 25% of the uncertainties quoted above, and hence do not affect the original predictions. However, a tree-level brane Fayet-Iliopoulos term could, if large enough, modify these predictions, especially for 1/R.
McPherson, E. E.; Mattioli, G. S.
2013-05-01
Soufriere Hills Volcano (SHV), Montserrat, in the Lesser Antilles island arc, became active in 1995, and for nearly two decades groundbreaking geodetic surveys have been conducted using both continuous and campaign GPS sites. Data have been collected and processed using the latest and most advanced geodetic instruments and techniques available. The NSF-funded CALIPSO and SEA-CALIPSO projects have allowed for some of the most in-depth studies of the ongoing SHV eruptions to date, and many models for surface deformation and magma chamber configuration have resulted. Research for this study is constrained to data gathered from the early stages of the eruption in 1996 through 2010 from two continuous GPS sites, Hermitage Peak (HERM, located ~1.6 km from the vent) and Montserrat Volcano Observatory 1 (MVO1, located ~7.6 km from the vent); these data have been reprocessed using GIPSY-OASIS II (v. 6.1.2) with final, precise IGS08 orbits, clocks, and earth orientation parameters using an absolute point positioning (APP) strategy. Our study re-examines spatial and temporal changes in surface deformation, constrained by GPS, to better illuminate the short-term (i.e., sub-daily to weekly) deformation signals noted amongst the longer, cyclic deformation signals (i.e., monthly to annual) that have been previously reported and modeled. The reprocessed time series show lower variance for daily APP solutions over the entire temporal data set; trends in the long-term inflation and deflation patterns are similar to those previously published (e.g., Elsworth et al., 2008; Mattioli et al., 2010; Odbert et al., 2012), but superimposed, shorter-term signals are now more clearly visible. New elastic deformation models are being developed and will be presented for these short-term signals.
International Nuclear Information System (INIS)
Latorre, J.I.; Luetken, C.A.
1988-11-01
We construct a large new class of two-dimensional sigma models with Kaehler target spaces which are algebraic manifolds realized as complete intersections in weighted CP^n spaces. They are N=2 superconformally symmetric, and particular choices of constraints give Calabi-Yau target spaces which are nontrivial string vacua. (orig.)
Yumimoto, Keiya; Morino, Yu; Ohara, Toshimasa; Oura, Yasuji; Ebihara, Mitsuru; Tsuruta, Haruo; Nakajima, Teruyuki
2016-11-01
The amount of 137Cs released by the Fukushima Dai-ichi Nuclear Power Plant accident of 11 March 2011 was inversely estimated by integrating an atmospheric dispersion model, an a priori source term, and a map of deposition recorded by aircraft. The a posteriori source term resolved finer (hourly) variations than the a priori term, and estimated the 137Cs released from 11 March to 2 April to be 8.12 PBq. Although the time series of the a posteriori source term was generally similar to that of the a priori source term, notable modifications were found in the periods when the a posteriori source term was well constrained by the observations. The spatial pattern of 137Cs deposition obtained with the a posteriori source term showed better agreement with the 137Cs deposition monitored by aircraft. The a posteriori source term increased 137Cs deposition in the Naka-dori region (the central part of Fukushima Prefecture) by 32.9% and considerably improved the underestimated a priori 137Cs deposition. Observed deposition values measured at 16 stations and surface atmospheric concentrations collected on a filter tape of suspended particulate matter were used for validation of the a posteriori results. A great improvement was found in surface atmospheric concentration on 15 March; the a posteriori source term reduced the root mean square error, normalized mean error, and normalized mean bias by 13.4, 22.3, and 92.0% for the hourly values, respectively. However, limited improvements were observed in some periods and areas due to the difficulty of simulating accurate wind fields and the lack of observational constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
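The inversion logic described above (an a priori source term refined where observations constrain it) can be sketched with a toy linear model. This is a minimal illustration with invented dimensions and a random dispersion operator, not the actual dispersion model or accident data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear dispersion operator: deposition d = G @ q, where q is the hourly
# release rate (source term) and G maps releases to map cells. All names and
# dimensions here are illustrative, not from the paper.
n_hours, n_cells = 24, 60
G = rng.uniform(0.0, 1.0, size=(n_cells, n_hours))

q_true = np.exp(-0.5 * ((np.arange(n_hours) - 8.0) / 3.0) ** 2)  # "true" release
d_obs = G @ q_true + rng.normal(0.0, 0.05, size=n_cells)         # noisy deposition map

q_prior = np.full(n_hours, q_true.mean())  # crude a priori source term

# A posteriori estimate: minimize ||G q - d||^2 + lam * ||q - q_prior||^2,
# i.e. the observations refine the a priori term where they constrain it.
lam = 0.1
A = G.T @ G + lam * np.eye(n_hours)
b = G.T @ d_obs + lam * q_prior
q_post = np.linalg.solve(A, b)

rmse_prior = np.sqrt(np.mean((G @ q_prior - d_obs) ** 2))
rmse_post = np.sqrt(np.mean((G @ q_post - d_obs) ** 2))
print(rmse_prior, rmse_post)  # the a posteriori term should fit the map better
```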
Modeling the Microstructural Evolution During Constrained Sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini
2015-01-01
A numerical model able to simulate solid-state constrained sintering is presented. The model couples an existing kinetic Monte Carlo model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress as ...
Constrained optimization with a continuous Hopfield-Lagrange model
J.H. van den Berg (Jan); J.C. Bioch (Cor)
1993-01-01
In this paper, a generalized Hopfield model with continuous neurons using Lagrange multipliers, originally introduced by Wacholder, Han & Mann [1989], is thoroughly analysed. We have termed the model the Hopfield-Lagrange model. It can be used to resolve constrained optimization problems. In ...
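The Hopfield-Lagrange idea of pairing gradient descent on the neurons with gradient ascent on the multipliers can be illustrated on a toy equality-constrained problem. This sketch uses plain saddle-point dynamics rather than the paper's specific network energy:

```python
import numpy as np

# Minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0 via
# saddle-point dynamics on the Lagrangian L(x, lam) = f(x) + lam * g(x):
# descend in x (the "neurons"), ascend in lam (the multiplier).

def grad_f(x):
    return 2.0 * x

def g(x):
    return x.sum() - 1.0

x = np.zeros(2)
lam = 0.0
lr = 0.05
for _ in range(5000):
    x = x - lr * (grad_f(x) + lam * np.ones(2))  # neuron dynamics
    lam = lam + lr * g(x)                        # multiplier dynamics

print(x, lam)  # analytic optimum: x = (0.5, 0.5), lam = -1
```

For this strongly convex quadratic the discrete dynamics spiral into the saddle point for a small enough step size.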
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.
A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response...
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
1997-01-01
A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
2001-01-01
A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
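The item-selection rule both records describe (maximize test information at the current ability estimate subject to constraints on test content) can be sketched as a greedy loop over a 2PL item bank. The bank, content areas, and caps below are invented for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Item bank: 2PL items with discrimination a, difficulty b, and a content area.
n_items = 100
a = rng.uniform(0.5, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)
area = rng.integers(0, 4, n_items)          # 4 content areas
max_per_area = {0: 3, 1: 3, 2: 2, 3: 2}    # illustrative content constraints

def info(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)             # 2PL Fisher information

theta_hat = 0.3
selected, used = [], {k: 0 for k in max_per_area}
avail = set(range(n_items))
while len(selected) < 10 and avail:
    # feasible items: those whose content area still has room under the cap
    feas = [i for i in avail if used[area[i]] < max_per_area[area[i]]]
    if not feas:
        break
    i = max(feas, key=lambda j: info(theta_hat)[j])  # max information at theta_hat
    selected.append(i)
    used[area[i]] += 1
    avail.remove(i)

print(len(selected), used)
```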
Models of Flux Tubes from Constrained Relaxation
Indian Academy of Sciences (India)
J. Astrophys. Astr. (2000) 21, 299-302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam* & V. Krishan†, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. *e-mail: mangalam@iiap.ernet.in. †e-mail: vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...
Cosmogenic photons strongly constrain UHECR source models
Directory of Open Access Journals (Sweden)
van Vliet Arjen
2017-01-01
With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.
Constraining supergravity models from gluino production
International Nuclear Information System (INIS)
Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.
1988-01-01
The branching ratios for gluino decays g̃ → q q̄ χ and g̃ → g χ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e+e- colliders in the search for supersymmetry can be directly compared. (orig.)
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
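A minimal numerical sketch of a reflected SDE, assuming a one-dimensional Ornstein-Uhlenbeck-type movement model with a reflecting barrier at zero; parameters are illustrative, not estimates from the telemetry data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler scheme for a 1-D reflected SDE: dX = -gamma*(X - 1) dt + sigma dW,
# reflected at a barrier X >= 0 (e.g. a shoreline the animal cannot cross).
gamma, sigma, dt, n = 0.5, 1.0, 0.01, 5000
x = np.empty(n)
x[0] = 1.0
for t in range(1, n):
    step = x[t-1] - gamma * (x[t-1] - 1.0) * dt + sigma * np.sqrt(dt) * rng.normal()
    x[t] = abs(step)  # reflection: fold the unconstrained step back across the barrier

print(x.min() >= 0.0, x.mean())  # the path never crosses the barrier
```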
Constrained optimization via simulation models for new product innovation
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The review starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Constrained bayesian inference of project performance models
Sunmola, Funlade
2013-01-01
Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...
Constraining hadronic models of the Fermi bubbles
Razzaque, Soebur
2018-01-01
The origin of sub-TeV gamma rays detected by Fermi-LAT from the Fermi bubbles at the Galactic center is unknown. In a hadronic model, acceleration of protons and/or nuclei and their subsequent interactions with gas in the bubble volume can produce the observed gamma rays. Such interactions naturally produce high-energy neutrinos, and detection of those can discriminate between a hadronic and a leptonic origin of the gamma rays. Additional constraints on the Fermi bubbles' gamma-ray flux in the PeV range from recent HAWC observations restrict hadronic model parameters, which in turn disfavor the Fermi bubbles as the origin of a large fraction of the neutrino events detected by IceCube along the bubble directions. We revisit our hadronic model and discuss future constraints on parameters from observations in very high-energy gamma rays by CTA and in neutrinos.
Constraining composite Higgs models using LHC data
Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar
2018-03-01
We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5_L-5_R representation, to what we call here the 'extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the 'extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, as compared to f ≥ 1 TeV for the minimal 5_L-5_R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong-sector coset SO(6)/SO(5) for fermion embeddings up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.
Directory of Open Access Journals (Sweden)
Zhenggang Du
2015-03-01
To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information contents of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, reflecting different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also ...
Robust discriminative response map fitting with constrained local models
Asthana, Akshay; Asthana, Ashish; Zafeiriou, Stefanos; Cheng, Shiyang; Pantic, Maja
We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that,
3D facial geometric features for constrained local model
Cheng, Shiyang; Zafeiriou, Stefanos; Asthana, Ashish; Asthana, Akshay; Pantic, Maja
2014-01-01
We propose a 3D Constrained Local Model framework for deformable face alignment in depth image. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate the
Frequency Constrained ShiftCP Modeling of Neuroimaging Data
DEFF Research Database (Denmark)
Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.
2011-01-01
The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model, denoted ShiftCP, has proven useful for the modeling of latency changes in trial-based neuroimaging data [17]. In order to facilitate component interpretation we presently extend the shiftCP model such that the extracted... components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the shiftCP model...
Constraining new physics models with isotope shift spectroscopy
Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias
2017-07-01
Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation, such as models with B-L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported 8Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
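The King-plot construction behind this constraint can be sketched numerically: the modified shifts of two transitions lie on a line under standard theory, and an isotope-dependent new-physics term breaks the linearity. All constants below are invented for illustration, not real atomic data:

```python
import numpy as np

# For each isotope pair, the modified shift of transition i is K_i + F_i * r,
# with r = d<r^2>/mu shared between transitions, so two transitions lie on a
# line (King linearity). An isotope-dependent new-physics term breaks it.
mu = np.array([1.00, 1.08, 1.17, 1.30])   # mass factors, 4 isotope pairs
r = np.array([2.0, 2.6, 3.1, 3.9]) / mu   # shared charge-radius term
K1, F1, K2, F2 = 5.0, 100.0, 7.0, 80.0
m1 = K1 + F1 * r                           # modified shift, transition 1
m2 = K2 + F2 * r                           # modified shift, transition 2

def max_residual(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return np.max(np.abs(y - (slope * x + intercept)))

lin = max_residual(m2, m1)                 # ~0: King linearity holds

X = np.array([1.0, 0.7, 1.6, 0.9])        # isotope-dependent new-physics factors
m2_np = m2 + 0.5 * X / mu                  # hypothetical new-boson contribution
nonlin = max_residual(m2_np, m1)           # clearly nonzero: linearity violated

print(lin, nonlin)
```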
The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.
von Davier, Matthias
2014-02-01
The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models, since they solely require a linear, compensatory, general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
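The core of the equivalency (a conjunctive DINA item rewritten as an additive logistic model on a transformed skill) can be checked numerically. This sketch uses a single hypothetical item and Q-matrix row, not the paper's general construction:

```python
import itertools
import numpy as np

# A conjunctive DINA item with guess g and slip s gives P(X=1) = g if the
# required skills are not all mastered, and 1-s if they are. Mapping the skill
# vector alpha to one transformed skill eta = prod(alpha_k over required k)
# turns this into a compensatory (additive) logistic model on eta with
# intercept logit(g) and slope logit(1-s) - logit(g).

def logit(p):
    return np.log(p / (1.0 - p))

def dina_p(alpha, q, g, s):
    eta = int(all(a >= qk for a, qk in zip(alpha, q)))
    return g if eta == 0 else 1.0 - s

def compensatory_p(alpha, q, g, s):
    eta = int(all(a >= qk for a, qk in zip(alpha, q)))
    z = logit(g) + (logit(1.0 - s) - logit(g)) * eta   # linear in transformed skill
    return 1.0 / (1.0 + np.exp(-z))

q, g, s = (1, 1, 0), 0.2, 0.1
for alpha in itertools.product([0, 1], repeat=3):
    assert abs(dina_p(alpha, q, g, s) - compensatory_p(alpha, q, g, s)) < 1e-12
print("DINA reproduced by a compensatory model on the transformed skill")
```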
Toward Cognitively Constrained Models of Language Processing: A Review
Directory of Open Access Journals (Sweden)
Margreet Vogelzang
2017-09-01
Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, a model generates new predictions that can be tested in experiments, thus producing new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought-Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we ...
CONSTRAINING INTRACLUSTER GAS MODELS WITH AMiBA13
International Nuclear Information System (INIS)
Molnar, Sandor M.; Umetsu, Keiichi; Ho, Paul T. P.; Koch, Patrick M.; Victor Liao, Yu-Wei; Lin, Kai-Yang; Liu, Guo-Chin; Nishioka, Hiroaki; Birkinshaw, Mark; Bryan, Greg; Haiman, Zoltan; Shang, Cien; Hearn, Nathan; Huang, Chih-Wei Locutus; Wang, Fu-Cheng; Wu, Jiun-Huei Proty
2010-01-01
Clusters of galaxies have been extensively used to determine cosmological parameters. A major difficulty in making the best use of Sunyaev-Zel'dovich (SZ) and X-ray observations of clusters for cosmology is that using X-ray observations it is difficult to measure the temperature distribution and therefore determine the density distribution in individual clusters of galaxies out to the virial radius. Observations with the new generation of SZ instruments are a promising alternative approach. We use clusters of galaxies drawn from high-resolution adaptive mesh refinement cosmological simulations to study how well we should be able to constrain the large-scale distribution of the intracluster gas (ICG) in individual massive relaxed clusters using AMiBA in its configuration with 13 1.2 m diameter dishes (AMiBA13) along with X-ray observations. We show that non-isothermal β models provide a good description of the ICG in our simulated relaxed clusters. We use simulated X-ray observations to estimate the quality of constraints on the distribution of gas density, and simulated SZ visibilities (AMiBA13 observations) for constraints on the large-scale temperature distribution of the ICG. We find that AMiBA13 visibilities should constrain the scale radius of the temperature distribution to about 50% accuracy. We conclude that the upgraded AMiBA, AMiBA13, should be a powerful instrument to constrain the large-scale distribution of the ICG.
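As a toy companion to the β-model fitting described above, the sketch below recovers the scale radius and slope of an (isothermal) β-model density profile from a noisy synthetic profile by grid search; none of the numbers come from the simulations or AMiBA data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Beta-model for intracluster gas density: n(r) = n0 * (1 + (r/rc)^2)^(-3*beta/2).
# Generate a noisy "observed" profile and recover (rc, beta) by grid search.
def beta_model(r, n0, rc, beta):
    return n0 * (1.0 + (r / rc) ** 2) ** (-1.5 * beta)

r = np.linspace(0.05, 2.0, 80)                    # radii (e.g. in Mpc), illustrative
n_obs = beta_model(r, 1.0, 0.25, 0.7) * (1.0 + 0.03 * rng.normal(size=r.size))

best = None
for rc in np.linspace(0.05, 0.6, 56):
    for beta in np.linspace(0.3, 1.2, 91):
        shape = beta_model(r, 1.0, rc, beta)
        n0 = (shape @ n_obs) / (shape @ shape)     # optimal amplitude in closed form
        sse = np.sum((n_obs - n0 * shape) ** 2)
        if best is None or sse < best[0]:
            best = (sse, n0, rc, beta)

sse, n0_fit, rc_fit, beta_fit = best
print(rc_fit, beta_fit)  # should land near the true rc = 0.25, beta = 0.7
```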
Constraining viscous dark energy models with the latest cosmological data
Energy Technology Data Exchange (ETDEWEB)
Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Yan, Yang-Jie; Meng, Xin-He [Nankai University, Department of Physics, Tianjin (China)
2017-10-15
Based on the assumption that dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and that the scale-invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models, as in the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios. (orig.)
Modeling constrained sintering of bi-layered tubular structures
DEFF Research Database (Denmark)
Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei
2015-01-01
Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making up the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe... the densification and stress developments during sintering of tubular bi-layered samples. The correspondence between linear elastic and linear viscous theories is used as a basis for derivation of the model. The developed model is first verified by finite element simulation for sintering of a tubular bi-layer system... Furthermore, the model is validated using densification results from sintering of a bi-layered tubular ceramic oxygen membrane based on porous MgO and Ce0.9Gd0.1O1.95-δ layers. Model input parameters, such as the shrinkage kinetics and viscous parameters, are obtained experimentally using optical dilatometry...
Applying Atmospheric Measurements to Constrain Parameters of Terrestrial Source Models
Hyer, E. J.; Kasischke, E. S.; Allen, D. J.
2004-12-01
Quantitative inversions of atmospheric measurements have been widely applied to constrain atmospheric budgets of a range of trace gases. Experiments of this type have revealed persistent discrepancies between 'bottom-up' and 'top-down' estimates of source magnitudes. The most common atmospheric inversion uses the absolute magnitude as the sole parameter for each source, and returns the optimal value of that parameter. In order for atmospheric measurements to be useful for improving 'bottom-up' models of terrestrial sources, information about other properties of the sources must be extracted. As the density and quality of atmospheric trace gas measurements improve, examination of higher-order properties of trace gas sources should become possible. Our model of boreal forest fire emissions is parameterized to permit flexible examination of the key uncertainties in this source. Using output from this model together with the UM CTM, we examined the sensitivity of CO concentration measurements made by the MOPITT instrument to various uncertainties in the boreal source: geographic distribution of burned area, fire type (crown fires vs. surface fires), and fuel consumption in above-ground and ground-layer fuels. Our results indicate that carefully designed inversion experiments have the potential to help constrain not only the absolute magnitudes of terrestrial sources, but also the key uncertainties associated with 'bottom-up' estimates of those sources.
An inexact fuzzy-chance-constrained air quality management model.
Xu, Ye; Huang, Guohe; Qin, Xiaosheng
2010-07-01
Regional air pollution is a major concern for almost every country because it not only directly relates to economic development, but also poses significant threats to environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated through integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and could deal with uncertainties expressed as not only possibilistic distributions but also discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables could be satisfied at different confidence levels such that various solutions with different risk and cost considerations could be obtained. The developed model was applied to a hypothetical case of regional air quality management. Six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty were taken into consideration. The results demonstrated that IFAMM could help decision-makers generate cost-effective air quality management patterns, gain in-depth insights into effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme could achieve lower total abatement cost than a nontrading one.
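The chance-constrained ingredient of such models can be illustrated in one dimension: with a normally distributed removal efficiency, the probabilistic constraint has a closed-form deterministic equivalent. The cost and efficiency numbers are invented, not from the hypothetical case in the paper:

```python
from statistics import NormalDist

# Choose an abatement level x (cost c per unit) so that a random removal
# efficiency r ~ N(mu, sigma) meets demand D with probability >= alpha.
# Deterministic equivalent of P(r*x >= D) >= alpha: (mu - z_alpha*sigma)*x >= D.
c, mu, sigma, D = 2.0, 0.8, 0.1, 100.0

def min_cost(alpha):
    z = NormalDist().inv_cdf(alpha)        # quantile of the standard normal
    worst_eff = mu - z * sigma             # efficiency guaranteed at level alpha
    x = D / worst_eff                      # smallest feasible abatement level
    return c * x

for alpha in (0.80, 0.90, 0.95, 0.99):
    print(alpha, round(min_cost(alpha), 2))
# higher required confidence -> lower usable efficiency -> higher cost,
# which is the economy/reliability tradeoff the model exposes
```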
Modeling Atmospheric CO2 Processes to Constrain the Missing Sink
Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.
2005-01-01
We report on a NASA supported modeling effort to reduce uncertainty in carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations, including those from satellite remote sensing. The major components of this effort are: 1) Continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real-time data in both forward and inverse modes; 2) An advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) Improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) Evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability; 5) Testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and using the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon-climate interactions.
Computational Data Modeling for Network-Constrained Moving Objects
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.
2003-01-01
Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user's geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph representation. These capture aspects of the problem domain that are required in order to support the querying that underlies the envisioned location-based services.
Maximizing entropy of image models for 2-D constrained coding
DEFF Research Database (Denmark)
Forchhammer, Søren; Danieli, Matteo; Burini, Nino
2010-01-01
This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint.
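The entropy calculations behind such constrained-coding results can be illustrated with the transfer-matrix technique. The sketch below (not the paper's PRF construction; widths are illustrative) estimates the per-symbol entropy of width-w strips under the hard square constraint, which approaches the 2-D capacity of roughly 0.588 bits/symbol as the strip widens:

```python
import numpy as np
from itertools import product

def valid_rows(w):
    # rows of width w with no two horizontally adjacent 1s
    return [r for r in product([0, 1], repeat=w)
            if all(not (a and b) for a, b in zip(r, r[1:]))]

def hard_square_entropy(w):
    rows = valid_rows(w)
    n = len(rows)
    # transfer matrix: rows i and j are vertically compatible
    # if no 1 sits directly above another 1
    T = np.zeros((n, n))
    for i, r in enumerate(rows):
        for j, s in enumerate(rows):
            if all(not (a and b) for a, b in zip(r, s)):
                T[i, j] = 1.0
    lam = max(np.linalg.eigvalsh(T))  # T is symmetric
    # entropy per symbol of an infinitely tall strip of width w
    return np.log2(lam) / w

for w in (4, 8, 12):
    print(w, hard_square_entropy(w))
```

As the width grows, the per-site entropy decreases toward the hard square capacity, consistent with the constraint becoming effective in both directions.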
Sampling from stochastic reservoir models constrained by production data
Energy Technology Data Exchange (ETDEWEB)
Hegstad, Bjoern Kaare
1997-12-31
When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Upscaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and the Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.
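The two samplers compared in topic (8) can be contrasted on a toy history-matching problem. A minimal sketch, with a linear stand-in for the fluid flow simulator and a Gaussian prior (all numbers hypothetical), chosen so both samplers can be checked against the known conjugate posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "history matching": scalar reservoir parameter theta with a Gaussian
# prior, one production observation obs = g(theta) + noise; g is linear here
# (a stand-in for the fluid flow simulator), so the posterior is known.
prior_mu, prior_sd = 0.0, 1.0
obs, obs_sd = 1.2, 0.5
g = lambda th: th

def log_post(th):
    return -0.5 * ((th - prior_mu) / prior_sd) ** 2 \
           - 0.5 * ((obs - g(th)) / obs_sd) ** 2

def rejection(n):
    # propose from the prior, accept with likelihood / max-likelihood
    out = []
    while len(out) < n:
        th = rng.normal(prior_mu, prior_sd)
        if rng.random() < np.exp(-0.5 * ((obs - g(th)) / obs_sd) ** 2):
            out.append(th)
    return np.array(out)

def metropolis(n, step=0.8):
    # random-walk Metropolis-Hastings on the log posterior
    th, out = 0.0, []
    for _ in range(n):
        prop = th + rng.normal(0, step)
        if np.log(rng.random()) < log_post(prop) - log_post(th):
            th = prop
        out.append(th)
    return np.array(out)

# conjugate posterior mean for the Gaussian-Gaussian toy problem
exact_mean = (prior_mu / prior_sd**2 + obs / obs_sd**2) \
             / (1 / prior_sd**2 + 1 / obs_sd**2)
print(exact_mean, rejection(5000).mean(), metropolis(50000)[5000:].mean())
```

Rejection sampling produces independent draws but wastes proposals when the likelihood is peaked; Metropolis-Hastings accepts more often but yields correlated samples, the trade-off the thesis examines.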
Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model
Directory of Open Access Journals (Sweden)
Xiaoyang Zhou
2016-01-01
Full Text Available Hospital outpatient departments operate by selling fixed period appointments for different treatments. The challenge is to improve profit by determining the mix of full time and part time doctors and by optimally allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department). In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full time and part time doctors to maximize the total profit; each department, the follower in the hierarchy, makes its appointment scheduling decisions to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wages and demand are considered as fuzzy variables to better describe the real-life situation. We then use a chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single level programming, in order to make it solvable. Finally, numerical experiments are presented to demonstrate the efficiency and effectiveness of the proposed approaches.
Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares
Orr, Jeb S.
2012-01-01
A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
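The LSQI subproblem named above has a standard secular-equation solution: bisect on the Lagrange multiplier λ in x(λ) = (AᵀA + λQ)⁻¹Aᵀb until the quadratic constraint xᵀQx ≤ c is active. A minimal numpy sketch with random placeholder matrices (not the flight-dynamics FEM data; A, b, Q, and c are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(12, 5))   # placeholder least-squares operator
b = rng.normal(size=12)        # placeholder target state
Q = np.eye(5)                  # energy metric (identity for illustration)
c = 0.5                        # quadratic (energy) bound

def lsqi(A, b, Q, c, iters=100):
    # minimize ||A x - b||^2 subject to x^T Q x <= c,
    # via bisection on the Lagrange multiplier lam
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    if x @ Q @ x <= c:
        return x  # unconstrained solution already feasible
    xl = lambda lam: np.linalg.solve(A.T @ A + lam * Q, A.T @ b)
    lo, hi = 0.0, 1.0
    while xl(hi) @ Q @ xl(hi) > c:
        hi *= 2.0  # grow lam until the constraint is satisfied
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if xl(mid) @ Q @ xl(mid) > c:
            lo = mid
        else:
            hi = mid
    return xl(hi)  # feasible endpoint: constraint holds at (or inside) c

x = lsqi(A, b, Q, c)
print(x @ Q @ x)  # ~c when the bound is active
```

The bisection exploits that the constrained norm xᵀQx is monotone decreasing in λ, so the multiplier satisfying the active constraint is found cheaply, consistent with the negligible real-time burden claimed in the abstract.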
Bayesian Variable Selection on Model Spaces Constrained by Heredity Conditions.
Taylor-Rodriguez, Daniel; Womack, Andrew; Bliznyuk, Nikolay
2016-01-01
This paper investigates Bayesian variable selection when there is a hierarchical dependence structure on the inclusion of predictors in the model. In particular, we study the type of dependence found in polynomial response surfaces of orders two and higher, whose model spaces are required to satisfy weak or strong heredity conditions. These conditions restrict the inclusion of higher-order terms depending upon the inclusion of lower-order parent terms. We develop classes of priors on the model space, investigate their theoretical and finite sample properties, and provide a Metropolis-Hastings algorithm for searching the space of models. The tools proposed allow fast and thorough exploration of model spaces that account for hierarchical polynomial structure in the predictors and provide control of the inclusion of false positives in high posterior probability models.
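Weak and strong heredity are simple to state operationally: a higher-order term may enter the model only if all (strong) or at least one (weak) of its lower-order parent terms are present. A small sketch of this membership test (the term encoding is illustrative, not the authors'):

```python
def satisfies_heredity(terms, strong=True):
    # terms: set of tuples of parent variable names;
    # ("x1",) is a main effect, ("x1", "x2") an interaction,
    # ("x1", "x1") a quadratic term
    included = set(terms)
    for t in included:
        if len(t) < 2:
            continue  # main effects have no parents
        present = [(v,) in included for v in set(t)]
        if strong and not all(present):
            return False  # strong heredity: every parent required
        if not strong and not any(present):
            return False  # weak heredity: at least one parent required
    return True

m1 = {("x1",), ("x2",), ("x1", "x2")}
m2 = {("x1",), ("x1", "x2")}
print(satisfies_heredity(m1, strong=True),   # True
      satisfies_heredity(m2, strong=True),   # False: x2 missing
      satisfies_heredity(m2, strong=False))  # True: one parent present
```

A Metropolis-Hastings search over model space, as in the paper, would only propose moves whose destination model passes this test, so the chain never leaves the heredity-constrained space.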
Constraining Agricultural Irrigation Surface Energy Budget Feedbacks in Atmospheric Models
Aufforth, M. E.; Desai, A. R.; Suyker, A.
2017-12-01
The expansion and modernization of irrigation have increased the importance of understanding its effects on regional weather and climate feedbacks. We conducted a set of observationally-constrained simulations determining the effect irrigation has on the surface energy budget, the atmospheric boundary layer, and regional precipitation feedbacks. Eddy covariance flux tower observations were analyzed from two irrigated and one rain-fed corn/soybean rotation sites located near Mead, Nebraska. The evaluated time period covered the summer growing months of June, July, and August (JJA) during the years when corn grew at all three sites. As a product of higher continuous surface moisture availability, the irrigated crops had significantly more energy partitioned towards latent heating than the non-irrigated site. The daily average peak of latent heating at the rain-fed site occurred earlier than at the irrigated sites and was approximately 45 W/m2 lower. Land surface models were evaluated on their ability to reproduce these effects, including those used in numerical weather prediction and those used in agricultural carbon cycle projection. Model structures, mechanisms, and parameters that best represent irrigation-surface energy impacts will be compared and discussed.
Constrained variability of modeled T:ET ratio across biomes
Fatichi, Simone; Pappas, Christoforos
2017-07-01
A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence for the ongoing research on the patterns of T:ET and the factors influencing its magnitude across biomes.
Dark matter in a constrained E6 inspired SUSY model
International Nuclear Information System (INIS)
Athron, P.; Harries, D.; Nevzorov, R.; Williams, A.G.
2016-01-01
We investigate dark matter in a constrained E6 inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of E6 leads to an additional U(1)_N symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time we also show that there is a huge volume of parameter space, with a predominantly Higgsino dark matter candidate that explains all the relic abundance, that will be discoverable with XENON1T. Finally we demonstrate that for the E6 inspired model the exotic leptoquarks could still be light and within range of future LHC searches.
Li, Duo; Liu, Yajing
2017-04-01
Along-strike segmentation of slow-slip events (SSEs) and nonvolcanic tremors in Cascadia may reflect heterogeneities of the subducting slab or overlying continental lithosphere. However, the cause of this segmentation is not fully understood. We develop a 3-D model for episodic SSEs in northern and central Cascadia, incorporating both seismological and gravitational observations to constrain the heterogeneities in the megathrust fault properties. The 6 year automatically detected tremors are used to constrain the rate-state friction parameters. The effective normal stress at SSE depths is constrained by along-margin free-air and Bouguer gravity anomalies. The along-strike variation in the long-term plate convergence rate is also taken into consideration. Simulation results show five segments of ~Mw 6.0 SSEs spontaneously appear along strike, correlated with the distribution of tremor epicenters. Modeled SSE recurrence intervals are comparable to GPS observations using both types of gravity anomaly constraints. However, the model constrained by the free-air anomaly does a better job of reproducing the cumulative slip as well as surface displacements more consistent with GPS observations. The modeled along-strike segmentation represents the averaged slip release over many SSE cycles, rather than permanent barriers. Individual slow-slip events can still propagate across the boundaries, which may cause interactions between adjacent SSEs, as observed in time-dependent GPS inversions. In addition, the moment-duration scaling is sensitive to the selection of velocity criteria for determining when SSEs occur. Hence, the detection ability of the current GPS network should be considered in the interpretation of slow earthquake source parameter scaling relations.
Future sea level rise constrained by observations and long-term commitment.
Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda
2016-03-08
Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28-56 cm, 37-77 cm, and 57-131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The "constrained extrapolation" approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections.
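The flavor of a semi-empirical "constrained extrapolation" can be sketched with the classic rate law dS/dt = a·T + b, calibrated on a past record and then extrapolated under a stabilizing and a continued-warming pathway. All numbers below are invented for illustration; this is not the paper's multi-contribution model:

```python
import numpy as np

# Synthetic "observed" past: a linear warming path and the matching
# sea level rise rates under the assumed rate law dS/dt = a*T + b
years = np.arange(1900, 2016)
T_past = 0.008 * (years - 1900)          # warming above baseline (K)
true_a, true_b = 1.9, 0.3                # mm/yr per K, mm/yr offset
rate_past = true_a * T_past + true_b     # synthetic observed rates (mm/yr)

# Calibrate the rate law on the past record
a, b = np.polyfit(T_past, rate_past, 1)

def project(T_future):
    # integrate the calibrated rates over 1-year steps; mm -> cm
    return np.cumsum(a * T_future + b) / 10.0

fut = np.arange(2016, 2101)
T_low = np.minimum(0.008 * (fut - 1900), 1.2)        # stabilizing scenario
T_high = 0.008 * (fut - 1900) + 0.02 * (fut - 2016)  # continued warming
print(project(T_low)[-1], project(T_high)[-1])       # cm by 2100
```

Even when warming is stabilized (T_low), the integrated rate keeps sea level rising, the "commitment" the abstract refers to; the scenario spread plays the role of the RCP-dependent ranges.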
The affine constrained GNSS attitude model and its multivariate integer least-squares solution
Teunissen, P.J.G.
2012-01-01
A new global navigation satellite system (GNSS) carrier-phase attitude model and its solution are introduced in this contribution. This affine-constrained GNSS attitude model has the advantage that it avoids the computational complexity of the orthonormality-constrained GNSS attitude model, while it
Global velocity constrained cloud motion prediction for short-term solar forecasting
Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping
2016-09-01
Cloud motion is the primary reason for short-term solar power output fluctuation. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the widely used Particle Image Velocimetry (PIV) algorithm, which assumes the homogeneity of motion vectors, the proposed method can capture the accurate motion vector for each cloud block, including both the motional tendency and morphological changes. Specifically, the global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation is achieved by global-velocity-based cloud block searching and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially in a short prediction horizon.
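The two-stage idea, a coarse global velocity followed by per-block matching constrained to a small window around it, can be sketched with a block-matching search on a synthetic uniformly shifted frame. SAD matching here stands in for the paper's method; frame and block sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
f1 = rng.random((64, 64))
f2 = np.roll(f1, (3, 2), axis=(0, 1))  # cloud field moved by (3, 2) pixels

def match(block_a, frame_b, y, x, center, radius):
    # find the shift (dy, dx) within +-radius of `center` that minimizes
    # the sum of absolute differences against block_a taken at (y, x)
    h, w = block_a.shape
    best, arg = np.inf, center
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            cand = np.roll(frame_b, (-dy, -dx), axis=(0, 1))[y:y+h, x:x+w]
            err = np.abs(block_a - cand).sum()
            if err < best:
                best, arg = err, (dy, dx)
    return arg

# Stage 1: global velocity from a coarse whole-frame search (PIV-like)
g = match(f1, f2, 0, 0, (0, 0), 5)
# Stage 2: per-block vectors, searched only near the global velocity
vectors = [match(f1[y:y+16, x:x+16], f2, y, x, g, 1)
           for y in range(0, 64, 16) for x in range(0, 64, 16)]
print(g, set(vectors))
```

Constraining the per-block search to a small radius around the global velocity keeps each block's vector plausible while still allowing local deviations, which is the essence of the global velocity constraint.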
International Nuclear Information System (INIS)
Aghaei, Jamshid; Nikoobakht, Ahmad; Siano, Pierluigi; Nayeripour, Majid; Heidari, Alireza; Mardaneh, Mohammad
2016-01-01
This paper proposes a stochastic model for the scheduling of short-term AC security-constrained unit commitment (AC-SCUC) considering reliability and the value of lost load (VOLL). The uncertainty of load and wind power generation, active and reactive power losses, the voltage profile of the network's buses, and congestion management for different VOLLs are investigated in this paper. Furthermore, the random outages of generating units and transmission lines are modeled based on scenario trees in a Monte Carlo simulation, and the reserve requirements of the power system are implicitly scheduled based on the VOLL and by considering corrective actions of the generation units. A computationally efficient two-stage algorithm based on Benders decomposition is proposed to solve the problem. The first stage deals with the base case, where all the network components and the units' outputs and on/off status can be determined based on the forecast load and wind farms' output. The second stage investigates the stochastic part of the problem and runs the possible scenarios in parallel for all the network elements and the available units of the base case. In the case of any violation for a scenario, a Benders cut is added to the first stage, which modifies the commitment state and the power units' outputs in order to resolve the violation for that scenario. The method is applied to the IEEE 118/300-bus test systems to assess its applicability and capability. - Highlights: • This paper proposes a stochastic model for the scheduling of short-term AC security-constrained unit commitment (AC-SCUC). • The uncertainty of load and wind power generation is modeled. • A two-stage algorithm based on Benders decomposition is proposed to solve the problem. • AC-SCUC is formulated considering reliability and the value of lost load (VOLL).
Wickramasuriya, R.C.; Bregt, A.K.; Delden, van H.; Hagen-Zanker, A.
2009-01-01
This paper presents an extension to the Constrained Cellular Automata (CCA) land use model of White et al. [White, R., Engelen, G., Uljee, I., 1997. The use of constrained cellular automata for high-resolution modelling of urban land-use dynamics. Environment and Planning B: Planning and Design
Constraining RS Models by Future Flavor and Collider Measurements: A Snowmass Whitepaper
Energy Technology Data Exchange (ETDEWEB)
Agashe, Kaustubh [Maryland U.; Bauer, Martin [Chicago U., EFI; Goertz, Florian [Zurich, ETH; Lee, Seung J. [Korea Inst. Advanced Study, Seoul; Vecchi, Luca [Maryland U.; Wang, Lian-Tao [Chicago U., EFI; Yu, Felix [Fermilab
2013-10-03
Randall-Sundrum models are models of quark flavor, because they explain the hierarchies in the quark masses and mixings in terms of order one localization parameters of extra dimensional wavefunctions. The same small numbers which generate the light quark masses suppress contributions to flavor violating tree level amplitudes. In this note we update universal constraints from electroweak precision parameters and demonstrate how future measurements of flavor violation in ultra rare decay channels of Kaons and B mesons will constrain the parameter space of this type of models. We show how collider signatures are correlated with these flavor measurements and compute projected limits for direct searches at the 14 TeV LHC run, a 14 TeV LHC luminosity upgrade, a 33 TeV LHC energy upgrade, and a potential 100 TeV machine. We further discuss the effects of a warped model of leptons in future measurements of lepton flavor violation.
Constraining unified dark matter models with weak lensing
Energy Technology Data Exchange (ETDEWEB)
Camera, S. [Dipartimento di Fisica Generale Amedeo Avogadro, Universita degli Studi di Torino, Torino (Italy); Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Torino, Torino (Italy)
2010-04-15
Unified Dark Matter (UDM) models provide an intriguing alternative to Dark Matter (DM) and Dark Energy (DE) through only one exotic component, a classical scalar field φ(t,x). Thanks to a non-canonical kinetic term, this scalar field can mimic both the behaviour of the matter-dominated era at earlier times, as DM does, and the late-time acceleration, as a cosmological constant DE does. Thus, it has been shown that these models can reproduce the same expansion history as the ΛCDM concordance model. In this work I review the first prediction of a physical observable, the power spectrum of the weak lensing cosmic convergence (shear). I present the weak lensing signal as predicted by the standard ΛCDM model and by a family of viable UDM models parameterized by the late-time sound speed c_∞ of the scalar field, for sources ranging from the last-scattering surface to a series of background galaxies peaked at, and spread over, different redshifts as described by a functional form of their source distribution. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
Constraining the interacting dark energy models from weak gravity conjecture and recent observations
International Nuclear Information System (INIS)
Chen Ximing; Wang Bin; Pan Nana; Gong Yungui
2011-01-01
We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.
On using cold baryogenesis to constrain the two-Higgs doublet model
DEFF Research Database (Denmark)
Tranberg, A.; Wu, B.
2013-01-01
We consider the creation of the cosmological baryon asymmetry in the Two Higgs Doublet Model. We imagine a situation where the masses of the five Higgs particles and the two Higgs vevs are constrained by collider experiments, and demonstrate how the requirement of successful baryogenesis can be used to further constrain the remaining 4-dimensional parameter space of the model. We numerically compute the asymmetry within the scenario of Cold Electroweak Baryogenesis, which is particularly straightforward to simulate reliably.
High estimates of supply constrained emissions scenarios for long-term climate risk assessment
International Nuclear Information System (INIS)
Ward, James D.; Mohr, Steve H.; Myers, Baden R.; Nel, Willem P.
2012-01-01
The simulated effects of anthropogenic global warming have become important in many fields and most models agree that significant impacts are becoming unavoidable in the face of slow action. Improvements to model accuracy rely primarily on the refinement of parameter sensitivities and on plausible future carbon emissions trajectories. Carbon emissions are the leading cause of global warming, yet current projections of future emissions do not account for structural limits to fossil fuel supply, leaving a wide range of uncertainty. Moreover, outdated assumptions regarding the future abundance of fossil energy could contribute to misleading projections of both economic growth and climate change vulnerability. Here we present an easily replicable mathematical model that considers fundamental supply-side constraints and demonstrate its use in a stochastic analysis to produce a theoretical upper limit to future emissions. The results show a significant reduction in prior uncertainty around projected long term emissions, and even assuming high estimates of all fossil fuel resources and high growth of unconventional production, cumulative emissions tend to align with the current medium emissions scenarios in the second half of this century. This significant finding provides much-needed guidance on developing relevant emissions scenarios for long term climate change impact studies. - Highlights: ► GHG emissions from conventional and unconventional fossil fuels modelled nationally. ► Assuming worst-case: large resource, high growth, rapid uptake of unconventional. ► Long-term cumulative emissions align well with the SRES medium emissions scenario. ► High emissions are unlikely to be sustained through the second half of this century. ► Model designed to be easily extended to test other scenarios e.g. energy shortages.
Toyotarity. Term, model, range
Directory of Open Access Journals (Sweden)
Stanisław Borkowski
2013-04-01
Full Text Available The terms Toyotarity and BOST are presented in the chapter. The BOST method allows relations to be defined between material resources and human resources, and between human resources and human resources (TOYOTARITY). This term was also invented by the Author (and is legally protected). The methodology is the outcome of 12 years of work.
DEFF Research Database (Denmark)
Zhao, Yongning; Ye, Lin; Pinson, Pierre
2018-01-01
The ever-increasing number of wind farms has brought both challenges and opportunities in the development of wind power forecasting techniques that take advantage of interdependencies between tens or hundreds of spatially distributed wind farms, e.g., over a region. In this paper, a Sparsity-Controlled Vector Autoregressive (SC-VAR) model is introduced to obtain sparse model structures in a spatio-temporal wind power forecasting framework by reformulating the original VAR model into a constrained Mixed Integer Non-Linear Programming (MINLP) problem, which allows controlling the sparsity of the coefficient matrices. To incorporate spatial correlation information about the wind farms into model building and forecasting, the original SC-VAR is modified and a Correlation-Constrained SC-VAR (CCSC-VAR) is proposed. Our approach is evaluated in a case study of very-short-term forecasting for 25 wind farms in Denmark. Comparison is performed with a set …
Top ten models constrained by b → sγ
Energy Technology Data Exchange (ETDEWEB)
Hewett, J.L. [Stanford Univ., CA (United States)
1994-12-01
The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.
Top ten models constrained by b → sγ
Energy Technology Data Exchange (ETDEWEB)
Hewett, J.L.
1994-05-01
The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some of the models.
Dynamic modeling of a high-speed over-constrained press machine
International Nuclear Information System (INIS)
Li, Yejian; Sun Yu; Peng, Binbin; Hu, Fengfeng
2016-01-01
This paper presents a study on the dynamic modeling of a high-speed over-constrained press machine. The main contribution of the paper is the development of an efficient approach to perform the dynamic analysis of a planar over-constrained mechanism. The key idea is the establishment of a more general methodology, which is to derive a deformation compatibility equation for the over-constrained mechanism on the basis of the deformation compatibility analysis at each position of the mechanism. This equation is then used together with the force/moment equilibrium equations obtained by the D'Alembert principle to form a total equation for the dynamics of the over-constrained mechanism. The approach is applied to a particular press machine to validate its effectiveness, and at the same time to provide some useful information for improving the design of this press machine.
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Meldrum, Andrew
… pricing factors using the sequential regression approach. Our findings suggest that the two models largely provide the same in-sample fit, but loadings from ordinary and risk-adjusted Campbell-Shiller regressions are generally best matched by the shadow rate models. We also find that the shadow rate models perform better than the QTSMs when forecasting bond yields out of sample.
Directory of Open Access Journals (Sweden)
H. C. Winsemius
2008-12-01
Full Text Available In this study, land surface related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150,000 km²) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte-Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation), which is easily moisture-stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought-resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of …
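The Monte-Carlo conditioning step, sampling parameters and retaining "behavioural" sets that reproduce the satellite evaporation series within tolerance, can be sketched with a one-parameter linear-reservoir soil store. This is a stand-in for the actual hydrological model; all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy dry-season soil store: S(t+1) = S(t) - E(t), with E(t) = k * S(t).
# Sample k by Monte Carlo and keep "behavioural" sets whose simulated
# evaporation matches a synthetic satellite-based series.
k_true = 0.05
S0, n = 100.0, 30

def simulate(k):
    S, E = S0, []
    for _ in range(n):
        e = k * S
        E.append(e)
        S -= e
    return np.array(E)

# synthetic "satellite" evaporation with 5% multiplicative noise
E_sat = simulate(k_true) * (1 + 0.05 * rng.standard_normal(n))

samples = rng.uniform(0.0, 0.2, 2000)
rmse = np.array([np.sqrt(np.mean((simulate(k) - E_sat) ** 2))
                 for k in samples])
# behavioural sets: best 5% of sampled parameters
behavioural = samples[rmse < np.quantile(rmse, 0.05)]
print(behavioural.min(), behavioural.max())  # clusters around k_true
```

In the study this clustering is done per land cover class, and the resulting behavioural clusters are then turned into fuzzy membership functions for further calibration.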
Constraining new physics with collider measurements of Standard Model signatures
Energy Technology Data Exchange (ETDEWEB)
Butterworth, Jonathan M. [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom); Grellscheid, David [IPPP, Department of Physics, Durham University,Durham, DH1 3LE (United Kingdom); Krämer, Michael; Sarrazin, Björn [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, 52056 Aachen (Germany); Yallup, David [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom)
2017-03-14
A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented. The method, ‘Constraints On New Theories Using Rivet’, CONTUR, exploits the fact that particle-level differential measurements made in fiducial regions of phase-space have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. The CONTUR approach should be seen as complementary to the discovery potential of direct searches, being designed to eliminate inconsistent BSM proposals in a context where many (but perhaps not all) measurements are consistent with the Standard Model. We demonstrate, using a competitive simplified dark matter model, the power of this approach. The CONTUR method is highly scalable to other models and future measurements.
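The core CONTUR idea, testing whether a BSM signal injected on top of the Standard Model expectation is still compatible with fiducial measurements, reduces to a goodness-of-fit test over measured bins. A minimal sketch, not the actual Rivet/CONTUR machinery; the chi-square critical value is left to the caller since it depends on the number of bins and the confidence level chosen:

```python
def bsm_excluded(measured, sm_expected, bsm_signal, sigma, chi2_crit):
    """CONTUR-style consistency test (sketch): add the BSM signal on top of
    the Standard Model expectation in each fiducial bin and ask whether the
    combined prediction remains compatible with the measurement.  The model
    point is excluded when chi-square exceeds the chosen critical value."""
    chi2 = sum(((m - (s + b)) / e) ** 2
               for m, s, b, e in zip(measured, sm_expected, bsm_signal, sigma))
    return chi2 > chi2_crit
```

A model point contributing nothing beyond the SM is never excluded this way, which is exactly the "measurements consistent with the Standard Model" limit the abstract describes.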
Li, Lianfa; Lurmann, Fred; Habre, Rima; Urman, Robert; Rappaport, Edward; Ritz, Beate; Chen, Jiu-Chiuan; Gilliland, Frank D; Wu, Jun
2017-09-05
Spatiotemporal models to estimate ambient exposures at high spatiotemporal resolutions are crucial in large-scale air pollution epidemiological studies that follow participants over extended periods. Previous models typically rely on central-site monitoring data and/or covered short periods, limiting their applications to long-term cohort studies. Here we developed a spatiotemporal model that can reliably predict nitrogen oxide concentrations with a high spatiotemporal resolution over a long time span (>20 years). Leveraging the spatially extensive highly clustered exposure data from short-term measurement campaigns across 1-2 years and long-term central site monitoring in 1992-2013, we developed an integrated mixed-effect model with uncertainty estimates. Our statistical model incorporated nonlinear and spatial effects to reduce bias. Identified important predictors included temporal basis predictors, traffic indicators, population density, and subcounty-level mean pollutant concentrations. Substantial spatial autocorrelation (11-13%) was observed between neighboring communities. Ensemble learning and constrained optimization were used to enhance reliability of estimation over a large metropolitan area and a long period. The ensemble predictions of biweekly concentrations resulted in an R² of 0.85 (RMSE: 4.7 ppb) for NO₂ and 0.86 (RMSE: 13.4 ppb) for NOₓ. Ensemble learning and constrained optimization generated stable time series, which notably improved the results compared with those from initial mixed-effects models.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high-resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high-resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped that these measurements will improve both holistic and process-based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
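The multiplicative form of SPPT mentioned above perturbs the net parametrised tendency with a bounded, space-time correlated random pattern. A minimal single-grid-point sketch, assuming an AR(1) process stands in for ECMWF's actual spectral pattern generator:

```python
import math
import random

def sppt_tendency(tendencies, r):
    """SPPT in a nutshell (sketch): multiply the net parametrised tendency
    by (1 + r), where r is a bounded correlated random number."""
    return [(1.0 + r) * t for t in tendencies]

def ar1_pattern(n, tau_steps, sigma, seed=0):
    """AR(1) series standing in for the correlated pattern at one grid
    point; values are clipped to [-1, 1] so a perturbed tendency can never
    flip sign."""
    rng = random.Random(seed)
    phi = math.exp(-1.0 / tau_steps)
    r, out = 0.0, []
    for _ in range(n):
        r = phi * r + sigma * math.sqrt(1.0 - phi * phi) * rng.gauss(0.0, 1.0)
        out.append(max(-1.0, min(1.0, r)))
    return out
```

Setting r = 0 recovers the unperturbed deterministic tendency, which is the control forecast limit.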
Constrained Optimization Approaches to Estimation of Structural Models
DEFF Research Database (Denmark)
Iskhakov, Fedor; Rust, John; Schjerning, Bertel
2015-01-01
We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...
Constrained Optimization Approaches to Estimation of Structural Models
DEFF Research Database (Denmark)
Iskhakov, Fedor; Jinhyuk, Lee; Rust, John
2016-01-01
We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...
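The contrast drawn in the two records above, successive approximations versus an efficient Newton-based solver for the inner fixed-point problem in NFXP, can be illustrated on a scalar contraction mapping (a toy example, not the actual dynamic-programming operator of the estimation problem):

```python
def fixed_point_sa(f, x0, tol=1e-10, max_iter=100000):
    """Successive approximations: converges linearly at the modulus of the
    contraction, which is slow when that modulus is close to one."""
    x, n = x0, 0
    while abs(f(x) - x) > tol and n < max_iter:
        x, n = f(x), n + 1
    return f(x), n

def fixed_point_newton(f, x0, tol=1e-10, max_iter=100, h=1e-7):
    """Newton-Kantorovich iteration on g(x) = x - f(x): converges
    quadratically near the solution, which is the point of the efficient
    NFXP polyalgorithm."""
    x, n = x0, 0
    while abs(f(x) - x) > tol and n < max_iter:
        g = x - f(x)
        gp = 1.0 - (f(x + h) - f(x)) / h   # numerical derivative of g
        x, n = x - g / gp, n + 1
    return x, n
```

For a contraction with modulus 0.95 the successive-approximation count runs into the hundreds while Newton needs only a couple of steps, which is the inefficiency the authors attribute to the SJ implementation.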
Modeling Power-Constrained Optimal Backlight Dimming for Color Displays
DEFF Research Database (Denmark)
Burini, Nino; Nadernejad, Ehsan; Korhonen, Jari
2013-01-01
In this paper, we present a framework for modeling color liquid crystal displays (LCDs) having local light-emitting diode (LED) backlight with dimming capability. The proposed framework includes critical aspects like leakage, clipping, light diffusion and human perception of luminance and allows...... adjustable penalization of power consumption. Based on the framework, we have designed a set of optimization-based backlight dimming algorithms providing a perceptual optimal balance of clipping and leakage, if necessary. The novel algorithms are compared with several other schemes known from the literature...
Slow Solar Wind: Observable Characteristics for Constraining Modelling
Ofman, L.; Abbo, L.; Antiochos, S. K.; Hansteen, V. H.; Harra, L.; Ko, Y. K.; Lapenta, G.; Li, B.; Riley, P.; Strachan, L.; von Steiger, R.; Wang, Y. M.
2015-12-01
The origin of the Slow Solar Wind (SSW) is an open issue in the post-SOHO era and forms a major objective for planned future missions such as Solar Orbiter and Solar Probe Plus. Results from spacecraft data, combined with theoretical modeling, have helped to investigate many aspects of the SSW. Fundamental physical properties of the coronal plasma have been derived from spectroscopic and imaging remote-sensing data and in-situ data, and these results have provided crucial insights for a deeper understanding of the origin and acceleration of the SSW. Advanced models of the SSW in coronal streamers and other structures have been developed using 3D MHD and multi-fluid equations. Nevertheless, there are still debated questions, such as: What are the source regions of the SSW, and what are their contributions to the SSW? What is the role of the magnetic topology in the corona for the origin, acceleration and energy deposition of the SSW? What are the possible acceleration and heating mechanisms for the SSW? The aim of this study is to present the insights on the SSW origin and formation that arose during the discussions of the International Space Science Institute (ISSI) Team entitled 'Slow solar wind sources and acceleration mechanisms in the corona', held in Bern (Switzerland) in March 2014 and March 2015. The attached figure will be presented to summarize the different hypotheses of SSW formation.
Quan, Lulin; Yang, Zhixin
2010-05-01
To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation, and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is becoming a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model, and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail, and run a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, emphasizing the role of the constraints during assessment. Limited initial experimental results demonstrate that our algorithm outperforms the three other algorithms. A clear direction for future development is also drawn at the end of the paper.
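A shape histogram of the kind compared above can be sketched as a radial distance distribution around the model centroid, with histogram intersection as the similarity score. This is an illustrative simplification; the paper's descriptors, constraint handling and adaptive weighting are richer:

```python
import math

def shape_histogram(points, centroid, n_bins=16, r_max=1.0):
    """Shape histogram (sketch): bin point-to-centroid distances into
    concentric shells and normalise the counts to a distribution."""
    hist = [0] * n_bins
    for p in points:
        d = math.dist(p, centroid)
        hist[min(n_bins - 1, int(n_bins * d / r_max))] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for
    disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Comparing the histogram of a deformed design against that of the initial sample model then gives a single score that can be thresholded, which is the "difference level within the limitation" check the abstract describes.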
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
Improved Modeling Approaches for Constrained Sintering of Bi-Layered Porous Structures
DEFF Research Database (Denmark)
Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Esposito, Vincenzo
2012-01-01
Shape instabilities during constrained sintering experiment of bi-layer porous and dense cerium gadolinium oxide (CGO) structures have been analyzed. An analytical and a numerical model based on the continuum theory of sintering has been implemented to describe the evolution of bow and densificat...
Macho, Jorge Berzosa; Montón, Luis Gardeazabal; Rodriguez, Roberto Cortiñas
2017-08-01
The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that may overload the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C) and is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices.
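The template half of the CTC idea, sharing structure out-of-band so that only values travel on the wire, can be sketched in a few lines. This is illustrative only; the actual CTC scheme operates on compact binary encodings and context identifiers, not JSON arrays:

```python
import json

def ctc_compress(obj, template_keys):
    """Sender side (sketch): both ends share the template (the ordered key
    list) out-of-band, so only the values are serialized and transmitted."""
    return json.dumps([obj[k] for k in template_keys])

def ctc_decompress(payload, template_keys):
    """Receiver side (sketch): re-attach the shared keys to the values."""
    return dict(zip(template_keys, json.loads(payload)))
```

Even in this naive form the payload shrinks, because the repetitive key strings that make XML/JSON verbose never leave the device.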
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital-to-risk-asset-ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital-to-risk-asset-ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures in order to draw valuable insights from a financial perspective. Our results suggest that our capital-to-risk-asset-ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
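The chance constraint at the heart of the model above, holding the capital-to-risk-asset ratio above a threshold with 95% probability, has a standard deterministic equivalent when the uncertain quantity is normally distributed. A sketch under that normality assumption; the 10.5% threshold corresponds to the Basel III minimum plus conservation buffer, and all figures are illustrative:

```python
def meets_capital_requirement(mu_capital, sigma_capital, rwa,
                              k=0.105, z=1.6449):
    """Deterministic equivalent (sketch) of the chance constraint
    P(capital / RWA >= k) >= 0.95 when future capital is normally
    distributed with mean mu and std sigma:
        mu - z * sigma >= k * RWA,
    with z the 95% standard-normal quantile."""
    return mu_capital - z * sigma_capital >= k * rwa
```

This linear inequality is what makes the chance constraint tractable inside a convex optimization, the same structural move the paper performs via its CreditMetrics-based counterpart.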
On the use of remotely sensed data to constrain process modeling and ecological forecasting
Serbin, S.; Greenberg, J. A.
2014-12-01
The ability to seamlessly integrate information on vegetation structure, function, and dynamics across a continuum of scales, from the field to satellite observations, greatly enhances our ability to understand how terrestrial vegetation-atmosphere interactions change over time and in response to disturbances and global change. For example, ecosystem process models require detailed information on the state (e.g. structure, leaf area index), surface properties (e.g. albedo), and dynamics (e.g. phenology, succession) of ecosystems in order to properly simulate the fluxes of carbon (C), water, and energy from the land to the atmosphere as well as address the vulnerability of ecosystems to environmental, pest and pathogen, and other anthropogenic perturbations. Other activities such as species distribution and environmental niche modeling (SDENM) require not only presence/absence information but also detailed spatial and temporal datasets, including climate and remotely sensed observations, to accurately project species ranges under current and often future climatic scenarios. Despite the many challenges of adequately initializing and parameterizing models, the last several decades have shown a substantial increase in the amount of available data useful for improving ecological predictions. Specifically, remote sensing data provide an important data constraint for projecting species and successional changes as well as vegetation dynamics and the fluxes of C, water and energy and the storage of C in ecosystems, principally as a synoptic observational dataset for capturing short- to long-term plant-climate interactions. In this talk we will highlight the current and potential uses of various remotely sensed data sources in constraining process modeling and SDENM activities. We will pay particular attention to the uses of remote sensing as a direct constraint on ecological forecasts or a key observational dataset used to capture plant-climate interactions.
Franz, Silvio; Gradenigo, Giacomo; Spigler, Stefano
2016-03-01
We study how the thermodynamic properties of the triangular plaquette model (TPM) are influenced by the addition of extra interactions. The thermodynamics of the original TPM is trivial, while its dynamics is glassy, as usual in kinetically constrained models. As soon as we generalize the model to include additional interactions, a thermodynamic phase transition appears in the system. The additional interactions we consider are either short ranged, forming a regular lattice in the plane, or long ranged of the small-world kind. In the case of long-range interactions we call the new model the random-diluted TPM. We provide arguments that the model so modified should undergo a thermodynamic phase transition, and that in the long-range case this is a glass transition of the "random first-order" kind. Finally, we give support to our conjectures studying the finite-temperature phase diagram of the random-diluted TPM in the Bethe approximation. This corresponds to the exact calculation on the random regular graph, where free energy and configurational entropy can be computed by means of the cavity equations.
Lifetime of dynamic heterogeneity in strong and fragile kinetically constrained spin models
International Nuclear Information System (INIS)
Leonard, Sebastien; Berthier, Ludovic
2005-01-01
Kinetically constrained spin models are schematic coarse-grained models for the glass transition which represent an efficient theoretical tool to study detailed spatio-temporal aspects of dynamic heterogeneity in supercooled liquids. Here, we study how spatially correlated dynamic domains evolve with time and compare our results to various experimental and numerical investigations. We find that strong and fragile models yield different results. In particular, the lifetime of dynamic heterogeneity remains constant and roughly equal to the alpha relaxation time in strong models, while it increases more rapidly in fragile models when the glass transition is approached
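A concrete example of such a kinetically constrained spin model is the East model, the canonical fragile case, in which a spin may be updated only when its left neighbour is up. A minimal Monte-Carlo sweep, a generic textbook-style sketch rather than the authors' code:

```python
import math
import random

def east_model_sweep(spins, T, rng):
    """One Monte-Carlo sweep of the East model (sketch): site i may be
    updated only if its left neighbour is up (the kinetic constraint).
    Heat-bath updates obey detailed balance for the trivial Hamiltonian
    H = sum_i n_i, so the thermodynamics stays that of free spins while
    the dynamics becomes glassy at low temperature."""
    n = len(spins)
    c = 1.0 / (1.0 + math.exp(1.0 / T))   # equilibrium up-spin concentration
    for _ in range(n):
        i = rng.randrange(n)
        if spins[(i - 1) % n] == 1:       # facilitating neighbour present?
            spins[i] = 1 if rng.random() < c else 0
    return spins
```

Tracking which sites actually flip over a time window yields exactly the spatially correlated dynamic domains whose lifetime the abstract compares between strong and fragile models.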
Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong
2017-07-01
In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with further additional tuning is first proposed. Then a subsequent controller design is formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method enables the controller design to bear more degrees of tuning so that improved tracking control can be acquired, which is very important since uncertainties exist inevitably in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC.
Fuzzy chance constrained linear programming model for scrap charge optimization in steel production
DEFF Research Database (Denmark)
Rong, Aiying; Lahdelma, Risto
2008-01-01
the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product......, the crisp equivalent of the fuzzy constraints should be less relaxed than that purely based on the concept of soft constraints. Based on the application context, we adopt a strengthened version of soft constraints to interpret fuzzy constraints and form a crisp model with consistent and compact constraints...... for solution. Simulation results based on realistic data show that the failure risk can be managed by a proper combination of aspiration levels and confidence factors for defining fuzzy numbers. There is a tradeoff between failure risk and material cost. The presented approach also applies to other scrap...
A METHOD TO CONSTRAIN MASS AND SPIN OF GRB BLACK HOLES WITHIN THE NDAF MODEL
Energy Technology Data Exchange (ETDEWEB)
Liu, Tong; Xue, Li [Department of Astronomy, Xiamen University, Xiamen, Fujian 361005 (China); Zhao, Xiao-Hong; Zhang, Fu-Wen [Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming, Yunnan 650011 (China); Zhang, Bing, E-mail: lixue@xmu.edu.cn, E-mail: tongliu@xmu.edu.cn, E-mail: zhang@physics.unlv.edu [Department of Physics and Astronomy, University of Nevada, Las Vegas, NV 89154 (United States)
2016-04-20
Black holes (BHs) hide themselves behind various astronomical phenomena and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched due to neutrino-anti-neutrino annihilations. Such a model gives rise to a matter-dominated fireball, and is suitable to interpret GRBs with a dominant thermal component with a photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply the method to the thermally dominant GRB 101219B, whose initial jet launching radius, r_0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass M_BH ∼ 5–9 M_⊙, spin parameter a_* ≳ 0.6, and disk mass 3 M_⊙ ≲ M_disk ≲ 4 M_⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.
Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping
2018-01-01
An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.
International Nuclear Information System (INIS)
Nam, Young Woo; Yoon, Yong Tae; Park, Jong-Keun; Hur, Don; Kim, Sung-Soo
2006-01-01
Electricity markets with only a few large firms are often vulnerable to less competitive behavior than desired. The presence of transmission constraints further restricts competition among firms and provides more opportunities for firms to exercise market power. While it is generally acknowledged that long-term contracts provide good measures for mitigating market power in the spot market (thus reducing undesired price spikes), it is far less clear how effective these contracts are when the market is severely limited by transmission constraints. In this paper, an analytical approach based on finding a Nash equilibrium is presented to investigate the effects of long-term contracts on firms exercising market power in a bid-based pool with transmission constraints. Surprisingly, the analysis in this paper shows that the presence of long-term contracts may result in reduced expected social welfare. A straightforward consequence of the analysis presented in this paper will be helpful for regulators in Korea in reconsidering the offering of vesting contracts to generating companies in the near future. (author)
International Nuclear Information System (INIS)
Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J
2011-01-01
In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5–4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.
A two-stage fuzzy chance-constrained water management model.
Xu, Jiaxuan; Huang, Guohe; Li, Zoe; Chen, Jiapei
2017-05-01
In this study, an inexact two-stage fuzzy gradient chance-constrained programming (ITSFGP) method is developed and applied to the water resources management in the Heshui River Basin, Jiangxi Province, China. The optimization model is established by incorporating interval programming, two-stage stochastic programming, and fuzzy gradient chance-constrained programming within an optimization framework. The hybrid model can address uncertainties represented as fuzzy sets, probability distributions, and interval numbers. It can effectively tackle the interactions between pre-regulated economic targets and the associated environmental penalties attributed to water allocation schemes and reflect the tradeoffs between economic revenues and system-failure risk. Furthermore, uncertainties associated with the decision makers' preferences are considered in decision-making processes. The obtained results can provide decision support for the local sustainable economic development and water resources allocation strategies under multiple uncertainties.
Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform
Gato-Rivera, Beatriz
1992-01-01
A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\prime/p$ and the level $l$.
Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors
Mikel Uriarte Itzazelaia; Jasone Astorga; Eduardo Jacob; Maider Huarte; Pedro Romaña
2018-01-01
Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control model...
Constraining Marsh Carbon Budgets Using Long-Term C Burial and Contemporary Atmospheric CO2 Fluxes
Forbrich, I.; Giblin, A. E.; Hopkinson, C. S.
2018-03-01
Salt marshes are sinks for atmospheric carbon dioxide that respond to environmental changes related to sea level rise and climate. Here we assess how climatic variations affect marsh-atmosphere exchange of carbon dioxide in the short term and compare it to long-term burial rates based on radiometric dating. The 5 years of atmospheric measurements show a strong interannual variation in atmospheric carbon exchange, varying from -104 to -233 g C m⁻² a⁻¹ with a mean of -179 ± 32 g C m⁻² a⁻¹. Variation in these annual sums was best explained by differences in rainfall early in the growing season. In the two years with below average rainfall in June, both net uptake and Normalized Difference Vegetation Index were less than in the other three years. Measurements in 2016 and 2017 suggest that the mechanism behind this variability may be rainfall decreasing soil salinity which has been shown to strongly control productivity. The net ecosystem carbon balance was determined as burial rate from four sediment cores using radiometric dating and was lower than the net uptake measured by eddy covariance (mean: 110 ± 13 g C m⁻² a⁻¹). The difference between these estimates was significant and may be because the atmospheric measurements do not capture lateral carbon fluxes due to tidal exchange. Overall, it was smaller than values reported in the literature for lateral fluxes and highlights the importance of investigating lateral C fluxes in future studies.
Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models
Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.
2017-06-01
The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Monte Carlo Markov chain approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how the future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
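The MCMC constraint step described above can be illustrated with a bare Metropolis sampler. This is a generic sketch; the paper's likelihood combines star formation rate functions, the optical depth and the ionizing emissivity, which are here replaced by a toy Gaussian:

```python
import math
import random

def metropolis(log_like, x0, n_steps, step, seed=0):
    """Bare Metropolis sampler (sketch): propose a Gaussian step and accept
    with probability min(1, exp(new log-likelihood - old log-likelihood))."""
    rng = random.Random(seed)
    x, ll = x0, log_like(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        llp = log_like(xp)
        if llp - ll > math.log(rng.random() + 1e-300):
            x, ll = xp, llp
        chain.append(x)
    return chain
```

The posterior samples of the source parameters, obtained this way against current data, are then what drives the forecast 21 cm power spectra for LOFAR, HERA and SKA.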
Network-constrained Cournot models of liberalized electricity markets: the devil is in the details
International Nuclear Information System (INIS)
Neuhoff, Karsten; Barquin, Julian; Vazquez, Miguel; Boots, Maroeska; Rijkers, Fieke A.M.; Ehrenmann, Andreas; Hobbs, Benjamin F.
2005-01-01
Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)
Olusanya, Bolajoko O; Ogunlesi, Tinuade A; Kumar, Praveen; Boo, Nem-Yun; Iskander, Iman F; de Almeida, Maria Fernanda B; Vaucher, Yvonne E; Slusher, Tina M
2015-04-12
Hyperbilirubinaemia is a ubiquitous transitional morbidity in the vast majority of newborns and a leading cause of hospitalisation in the first week of life worldwide. While timely and effective phototherapy and exchange transfusion are well proven treatments for severe neonatal hyperbilirubinaemia, inappropriate or ineffective treatment of hyperbilirubinaemia, at secondary and tertiary hospitals, still prevails in many poorly-resourced countries accounting for a disproportionately high burden of bilirubin-induced mortality and long-term morbidity. As part of the efforts to curtail the widely reported risks of frequent but avoidable bilirubin-induced neurologic dysfunction (acute bilirubin encephalopathy (ABE) and kernicterus) in low and middle-income countries (LMICs) with significant resource constraints, this article presents a practical framework for the management of late-preterm and term infants (≥ 35 weeks of gestation) with clinically significant hyperbilirubinaemia in these countries particularly where local practice guidelines are lacking. Standard and validated protocols were followed in adapting available evidence-based national guidelines on the management of hyperbilirubinaemia through a collaboration among clinicians and experts on newborn jaundice from different world regions. Tasks and resources required for the comprehensive management of infants with or at risk of severe hyperbilirubinaemia at all levels of healthcare delivery are proposed, covering primary prevention, early detection, diagnosis, monitoring, treatment, and follow-up. Additionally, actionable treatment or referral levels for phototherapy and exchange transfusion are proposed within the context of several confounding factors such as widespread exclusive breastfeeding, infections, blood group incompatibilities and G6PD deficiency, which place infants at high risk of severe hyperbilirubinaemia and bilirubin-induced neurologic dysfunction in LMICs, as well as the limited facilities
Dong, Yao-Jun
2017-10-29
The Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first-principles methods. The first uses the constrained moment method to build a spin spiral in real space, while the second uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of the DMI on the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, which determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is better suited to modelling DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.
Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model
Energy Technology Data Exchange (ETDEWEB)
Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)
2016-04-15
This paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function that plays two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They demonstrate the ability of the scheme to deliver remarkably accurate binding energies for both even- and odd-mass nuclei, irrespective of the interaction considered. (orig.)
Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems
Directory of Open Access Journals (Sweden)
Man Hong
2013-01-01
To precisely track the reactor temperature over the entire working range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes such as the continuous stirred tank reactor (CSTR) is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The magnitude of the reaction heat produced and its influence on the output are estimated by the Kalman filter. The observation and estimation results are computed by a multistep predictive approach. Actual control variables are computed, subject to the constraints of the optimal control problem over a finite horizon, through the receding horizon. A simulation example of the CSTR shows the effectiveness and feasibility of the proposed algorithm.
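The Kalman filter at the heart of the algorithm above can be sketched as a single predict/update cycle for a linear subsystem. This is a generic textbook sketch under assumed notation (x state, P covariance; A, B, C, Q, R system and noise matrices), not the paper's implementation:

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle for x+ = A x + B u + w, y = C x + v."""
    # Predict: propagate state and covariance through the model
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: correct with the measurement innovation
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    innov = y - C @ x_pred
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

In the abstract's setting, repeated calls to such a step over the prediction horizon would supply the "observation and estimation results" that the receding-horizon optimization then uses.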
The global economic long-term potential of modern biomass in a climate-constrained world
Klein, David; Humpenöder, Florian; Bauer, Nico; Dietrich, Jan Philipp; Popp, Alexander; Bodirsky, Benjamin Leon; Bonsch, Markus; Lotze-Campen, Hermann
2014-07-01
Low-stabilization scenarios consistent with the 2 °C target project large-scale deployment of purpose-grown lignocellulosic biomass. If a GHG price regime integrates emissions from energy conversion and from land-use/land-use change, the strong demand for bioenergy and the pricing of terrestrial emissions are likely to coincide. We explore the global potential of purpose-grown lignocellulosic biomass and ask how the supply prices of biomass depend on prices for greenhouse gas (GHG) emissions from the land-use sector. Using the spatially explicit global land-use optimization model MAgPIE, we construct bioenergy supply curves for ten world regions and a global aggregate in two scenarios, with and without a GHG tax. We find that the implementation of GHG taxes is crucial for the slope of the supply function and the GHG emissions from the land-use sector. Global supply prices start at $5 GJ-1 and increase almost linearly, doubling at 150 EJ (in 2055 and 2095). The GHG tax increases bioenergy prices by $5 GJ-1 in 2055 and by $10 GJ-1 in 2095, since it effectively stops deforestation and thus excludes large amounts of high-productivity land. Prices additionally increase due to costs for N2O emissions from fertilizer use. The GHG tax decreases global land-use change emissions by one-third. However, the carbon emissions due to bioenergy production increase by more than 50% from conversion of land that is not under emission control. Average yields required to produce 240 EJ in 2095 are roughly 600 GJ ha-1 yr-1 with and without tax.
The global economic long-term potential of modern biomass in a climate-constrained world
International Nuclear Information System (INIS)
Klein, David; Humpenöder, Florian; Bauer, Nico; Dietrich, Jan Philipp; Popp, Alexander; Leon Bodirsky, Benjamin; Bonsch, Markus; Lotze-Campen, Hermann
2014-01-01
Low-stabilization scenarios consistent with the 2 °C target project large-scale deployment of purpose-grown lignocellulosic biomass. In case a GHG price regime integrates emissions from energy conversion and from land-use/land-use change, the strong demand for bioenergy and the pricing of terrestrial emissions are likely to coincide. We explore the global potential of purpose-grown lignocellulosic biomass and ask the question how the supply prices of biomass depend on prices for greenhouse gas (GHG) emissions from the land-use sector. Using the spatially explicit global land-use optimization model MAgPIE, we construct bioenergy supply curves for ten world regions and a global aggregate in two scenarios, with and without a GHG tax. We find that the implementation of GHG taxes is crucial for the slope of the supply function and the GHG emissions from the land-use sector. Global supply prices start at $5 GJ −1 and increase almost linearly, doubling at 150 EJ (in 2055 and 2095). The GHG tax increases bioenergy prices by $5 GJ −1 in 2055 and by $10 GJ −1 in 2095, since it effectively stops deforestation and thus excludes large amounts of high-productivity land. Prices additionally increase due to costs for N 2 O emissions from fertilizer use. The GHG tax decreases global land-use change emissions by one-third. However, the carbon emissions due to bioenergy production increase by more than 50% from conversion of land that is not under emission control. Average yields required to produce 240 EJ in 2095 are roughly 600 GJ ha −1 yr −1 with and without tax. (letter)
A distance constrained synaptic plasticity model of C. elegans neuronal network
Badhwar, Rahul; Bagler, Ganesh
2017-03-01
Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' identified are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised with respect to the number of feed forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains the small world nature, the saturation of feed forward motifs and the observed number of driver neurons. The distance constrained model suggests optimal long distance synaptic connections as a key feature specifying control of the network.
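The distance-constrained rewiring idea can be illustrated with a toy model: start from a 1D ring lattice (the abstract's ring model) and rewire edges with a probability whose target choice decays exponentially with ring distance, so long-range shortcuts are permitted but rare. The node counts and the exponential decay form below are illustrative assumptions, not taken from the paper:

```python
import math
import random

def ring_lattice(n, k):
    """Ring of n neurons, each linked to its k nearest neighbours on either side."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add(frozenset((i, (i + j) % n)))
    return edges

def rewire_distance_constrained(edges, n, p, lam, rng):
    """Rewire each edge with probability p; the new target is drawn with weight
    exp(-d / lam), where d is the ring distance, so short connections dominate."""
    new_edges = set()
    for e in edges:
        a, b = tuple(e)
        if rng.random() < p:
            weights = [math.exp(-min((a - t) % n, (t - a) % n) / lam)
                       for t in range(n)]
            weights[a] = 0.0                      # forbid self-loops
            t = rng.choices(range(n), weights=weights)[0]
            new_edges.add(frozenset((a, t)))
        else:
            new_edges.add(e)
    return new_edges
```

Sweeping `p` and `lam` in such a model and measuring clustering, path length and motif counts is the kind of experiment the abstract describes.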
Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.
2017-12-01
Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10^6 m^3/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.
Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.
2016-01-01
Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or a constant fraction of it is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (a risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results concern the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, the passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.
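For model I without investment, the infinite-horizon ruin probability with exponential claims has a classical closed form that a direct Monte Carlo simulation of the surplus process can be checked against. A hedged sketch (the parameter names lam for claim rate, c for premium rate, and m for mean claim size are conventions of this illustration, not the paper's notation):

```python
import math
import random

def ruin_prob_exact(u, lam, c, m):
    """Classical Cramér-Lundberg with Exp(mean m) claims:
    psi(u) = (lam*m/c) * exp(-u*(c - lam*m)/(c*m)), valid for c > lam*m."""
    assert c > lam * m  # positive safety loading required
    return (lam * m / c) * math.exp(-u * (c - lam * m) / (c * m))

def ruin_prob_mc(u, lam, c, m, paths=5000, max_claims=100, seed=1):
    """Monte Carlo estimate: simulate surplus at claim epochs until ruin
    or until max_claims claims have occurred (truncation of infinite horizon)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(paths):
        surplus = u
        for _ in range(max_claims):
            dt = rng.expovariate(lam)              # inter-claim time
            surplus += c * dt - rng.expovariate(1 / m)  # premiums in, claim out
            if surplus < 0:
                ruined += 1
                break
    return ruined / paths
```

The survival probability studied in the abstract is 1 - psi(u); with investment in risky assets no such closed form exists, which is why the singular IDE problems arise.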
Combining Multi-Sensor Measurements and Models to Constrain Time-Varying Aerosol Fire Emissions
Cohen, J. B.
2013-12-01
A significant portion of global Black Carbon (BC) and Organic Carbon (OC) aerosols are emitted into the atmosphere by fires. However, because of their spatially and temporally heterogeneous nature, quantifying these emissions has proven difficult. Some of the problems stem from variability over multiple spatial and temporal scales: from kilometers in size to thousands of kilometers in impact, and from month-to-month variations in the burning season to interannual variation in overall fire strength, which follows global phenomena such as El Niño. Yet because these aerosols have unique absorbing properties, leaving a distinct imprint on the regional and global climate system and intensely affecting human health in downwind areas, proper quantification of their emissions is absolutely essential. To achieve such a critical understanding of their emissions in space and time, a state-of-the-art modelling system of their chemical and physical processing, transport, and removal is implemented. This system is capable of effectively and uniquely simulating many processes important in the atmosphere, including: enhanced absorption associated with internal mixing, mass and number conservation, the direct and semi-direct effects on atmospheric dynamics and circulation, and appropriate non-linear treatment of urban-scale chemical and physical processing. This modelling system has been used in connection with three separate sources of data, to achieve an end product that is heavily dependent on both models and measurements. First, the model has been run in a data-assimilation mode to constrain the annual-average emissions of BC using the Kalman filter technique. This global constraint, the first of its type, relies heavily on ground-based sensors from NASA as well as other organizations. Second, data on the decadal-scale variation in aerosol optical depth, surface reflectance, and radiative power have been obtained from the MODIS and MISR sensors
Input-constrained model predictive control via the alternating direction method of multipliers
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.
2014-01-01
This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP). Its computational complexity is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation time grows with the problem size.
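The ADMM iteration for a box-constrained quadratic program, the prototypical subproblem in input-constrained MPC, can be sketched in a few lines. Note the matrix inverse is factored once and reused across iterations, which is where the favorable per-iteration cost comes from. This is a generic sketch of the method, not the paper's LQCP-specialized implementation:

```python
import numpy as np

def admm_box_qp(Q, q, lo, hi, rho=1.0, iters=200):
    """ADMM for: minimize 0.5*x'Qx + q'x  subject to  lo <= x <= hi."""
    n = len(q)
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = np.linalg.inv(Q + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(iters):
        x = M @ (rho * (z - u) - q)          # unconstrained quadratic step
        z = np.clip(x + u, lo, hi)           # projection onto the input bounds
        u = u + x - z                        # dual (running residual) update
    return z
```

In MPC the decision variable stacks the inputs over the horizon, and exploiting the banded structure of Q (as the paper does for the LQCP) replaces the dense inverse with a Riccati-style factorization.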
Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors
Directory of Open Access Journals (Sweden)
Mikel Uriarte Itzazelaia
2018-02-01
Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.
Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.
Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro
2018-02-13
Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.
Kim, Young-Hoo; Park, Jang-Won; Kim, Jun-Shik; Oh, Hyun-Keun
2015-10-01
The purpose of this study was to determine the long-term clinical and radiographic results of revision total knee arthroplasty (TKA). One hundred and ninety-four patients (228 knees) underwent revision TKA with use of a constrained condylar knee prosthesis. The mean duration of follow-up was 14.6 years (range, 11 to 16 years). The mean pre-revision Knee Society knee score (43.5 points), function score (47.0 points), and Western Ontario and McMaster Universities Osteoarthritis index score (88 points) improved significantly (P=0.002) to 85.6, 68.5, and 25 points, respectively, at 14.6 years of follow-up. Eighteen knees (8%) required re-revision, four of them for infection. Kaplan-Meier survivorship analysis revealed that the 16-year survival rate of the components was 94.7% with loosening as the end point and 92% with revision as the end point. Copyright © 2015 Elsevier Inc. All rights reserved.
Zhang, Runtong; Chen, Donghua; Shang, Xiaopu; Zhu, Xiaomin; Liu, Kecheng
2017-04-24
Current access control mechanisms in hospital information systems can hardly identify the real access intention of system users, and relaxed access control increases the risk of compromising patient privacy. To reduce unnecessary access to patient information by hospital staff, this paper proposes a Knowledge-Constrained Role-Based Access Control (KC-RBAC) model in which a variety of medical domain knowledge is considered in access control. Based on the proposed Purpose Tree and knowledge-involved algorithms, the model can dynamically define the boundary of access to patient information according to the context, which helps protect patient privacy by controlling access. The experimental results show that, compared with the Role-Based Access Control model, KC-RBAC can protect patient information more effectively.
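The Purpose Tree idea can be illustrated with a minimal sketch: access is granted only if the declared purpose, or one of its more general ancestor purposes, is among the purposes permitted for the (role, resource) pair. All names and the tree layout below are hypothetical illustrations, not details from the paper:

```python
class PurposeTree:
    """Purposes arranged in a tree, from general (root) to specific (leaves)."""
    def __init__(self):
        self.parent = {}

    def add(self, purpose, parent=None):
        self.parent[purpose] = parent

    def implied(self, purpose):
        """A purpose together with all of its ancestors (more general purposes)."""
        out = []
        while purpose is not None:
            out.append(purpose)
            purpose = self.parent[purpose]
        return out

def allowed(role_permissions, role, declared_purpose, tree, resource):
    """Grant access only if the role may use the resource for the declared
    purpose or for one of its more general ancestor purposes."""
    permitted = role_permissions.get((role, resource), set())
    return any(p in permitted for p in tree.implied(declared_purpose))
```

A context-aware deployment would additionally derive the declared purpose from the clinical context (ward, diagnosis, active order) rather than trusting a user-supplied label.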
Chance-constrained programming models for capital budgeting with NPV as fuzzy parameters
Huang, Xiaoxia
2007-01-01
In an uncertain economic environment, experts' knowledge about the outlays and cash inflows of available projects involves much vagueness rather than randomness. Investment outlays and annual net cash flows of a project are usually predicted by using experts' knowledge, and fuzzy variables can overcome the difficulties in predicting these parameters. In this paper, the capital budgeting problem with fuzzy investment outlays and fuzzy annual net cash flows is studied based on the credibility measure. The net present value (NPV) method is employed, and two fuzzy chance-constrained programming models for the capital budgeting problem are provided. A fuzzy simulation-based genetic algorithm is provided for solving the proposed models. Two numerical examples are also presented to illustrate the modelling idea and the effectiveness of the proposed algorithm.
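For triangular fuzzy variables, the credibility measure has a simple piecewise form, which makes a toy chance-constrained project selection easy to sketch. The brute-force enumeration below stands in for the paper's fuzzy-simulation genetic algorithm (which is needed for larger instances and general fuzzy variables); the data and the triangular restriction are assumptions of this illustration:

```python
from itertools import combinations

def cr_geq(tri, x):
    """Credibility that a triangular fuzzy variable (a, b, c) is >= x."""
    a, b, c = tri
    if x <= a:
        return 1.0
    if x <= b:
        return (2 * b - a - x) / (2 * (b - a))
    if x <= c:
        return (c - x) / (2 * (c - b))
    return 0.0

def pick_projects(npvs, outlays, budget, target, alpha):
    """Maximise the most-likely total NPV subject to the budget and the chance
    constraint Cr{ total NPV >= target } >= alpha (triangular NPVs add
    componentwise)."""
    best, best_mid = None, float("-inf")
    idx = range(len(npvs))
    for r in range(1, len(npvs) + 1):
        for combo in combinations(idx, r):
            if sum(outlays[i] for i in combo) > budget:
                continue
            tot = tuple(sum(npvs[i][k] for i in combo) for k in range(3))
            if cr_geq(tot, target) >= alpha and tot[1] > best_mid:
                best, best_mid = combo, tot[1]
    return best
```

The chance constraint plays the role of the credibility constraint in the paper's models; replacing enumeration with a genetic algorithm over 0/1 selection vectors recovers the proposed solution scheme in spirit.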
The Balance-of-Payments-Constrained Growth Model and the Limits to Export-Led Growth
Directory of Open Access Journals (Sweden)
Robert A. Blecker
2000-12-01
This paper discusses how A. P. Thirlwall's model of balance-of-payments-constrained growth can be adapted to analyze the idea of a "fallacy of composition" in the export-led growth strategy of many developing countries. The Deaton-Muellbauer model of the Almost Ideal Demand System (AIDS) is used to represent the adding-up constraints on individual countries' exports when they are all trying to export competing products to the same foreign markets (i.e., newly industrializing countries exporting similar types of manufactured goods to the OECD countries). The relevance of the model to the recent financial crises in developing countries and policy alternatives for redirecting development strategies are also discussed.
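Thirlwall's balance-of-payments-constrained growth rate itself is a one-line formula: y_B = x / pi, where x is export growth and pi the income elasticity of import demand; in the form using world income growth z and export income elasticity eps, y_B = eps * z / pi. A minimal sketch (the numbers in the usage are illustrative):

```python
def bop_constrained_growth(x, pi):
    """Thirlwall's rule: long-run growth consistent with balanced trade,
    y_B = x / pi, with x export growth and pi the import income elasticity."""
    return x / pi

def bop_constrained_growth_world(eps, z, pi):
    """Equivalent form using world income growth z and export income
    elasticity eps: y_B = eps * z / pi."""
    return eps * z / pi
```

The fallacy-of-composition point of the paper is that eps for any one exporter falls when many similar countries expand exports into the same markets, so y_B computed country by country overstates what all can achieve simultaneously.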
A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems
Directory of Open Access Journals (Sweden)
Sitek Pawel
2014-08-01
The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to exploit the best of both. The hybrid method is no worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.
A Method to Constrain Genome-Scale Models with 13C Labeling Data.
Directory of Open Access Journals (Sweden)
Héctor García Martín
2015-09-01
Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems.
Efficient non-negative constrained model-based inversion in optoacoustic tomography
Ding, Lu; Luís Deán-Ben, X.; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis
2015-09-01
The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues and imperfections of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitative accuracy with respect to the unconstrained approach. The study validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency.
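A simple way to impose the non-negativity constraint in a model-based inversion is projected gradient descent on the least-squares objective, clipping negative entries to zero after every step. This generic stand-in is not the paper's conjugate-gradient scheme; it only illustrates the projection idea, with A playing the role of the discretized forward model and y the measured signals:

```python
import numpy as np

def nn_least_squares(A, y, iters=500):
    """Projected gradient for: minimize ||A x - y||^2  subject to  x >= 0."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - grad / L, 0.0)  # gradient step, then project onto x >= 0
    return x
```

In tomographic practice A is large and sparse or matrix-free, so the products A @ x and A.T @ r are implemented as forward/adjoint operators rather than explicit matrices.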
Relativistic Disc Line: A Tool to Constrain Neutron Star Equation of State Models
Bhattacharyya, Sudip
2017-09-01
Relativistic iron Kα spectral emission line from the inner disc of a neutron star Low-Mass X-ray Binary (LMXB) was first detected in 2007. This discovery opened up new ways to probe strong gravity and dense matter. The past decade has seen detections of such a line from many neutron star LMXBs, and confirmation of this line from the same source with several X-ray satellites. These have firmly established the new field of relativistic disc lines from neutron star systems in only a decade or so. Fitting the shape of such a line with an appropriate general relativistic model provides the ratio of the accretion disc inner edge radius to the stellar mass. In this review, we briefly discuss how an accurate measurement of this ratio with a future larger-area X-ray instrument can be used to constrain neutron star equation of state models.
Directory of Open Access Journals (Sweden)
P. Gasparini
1997-06-01
The results of about 120 magnetotelluric soundings carried out in the Vulsini, Vico and Sabatini volcanic areas were modeled along with Bouguer and aeromagnetic anomalies to reconstruct a model of the structure of the shallow crust (less than 5 km depth). The interpretations were constrained by the information gathered from the deep boreholes drilled for geothermal exploration. MT and aeromagnetic anomalies allow the depth to the top of the sedimentary basement and the thickness of the volcanic layer to be inferred. Gravity anomalies are strongly affected by the variations in morphology of the top of the sedimentary basement, consisting of a Tertiary flysch, and of the interface with the underlying Mesozoic carbonates. Gravity data have also been used to extrapolate the thickness of the Neogene unit indicated by some boreholes. There is no evidence for other important density and susceptibility heterogeneities or for deeper sources of magnetic and/or gravity anomalies in the surveyed area.
Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation
Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill
2012-06-01
Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probability of the hidden Markov model. Experimental results demonstrate the effectiveness of the approach.
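Once per-frame class likelihoods (here, from the SVM) and constrained transition probabilities are in hand, the segmentation reduces to Viterbi decoding of the most likely state path through the HMM. A minimal sketch with illustrative probabilities; the left-to-right transition topology mimics the fixed anatomical ordering of the four GI segments but is an assumption of this example:

```python
import math

def viterbi(obs_loglik, trans, init):
    """Most likely state path. obs_loglik[t][s] is the log-likelihood of frame t
    under state s; trans and init are ordinary probabilities."""
    n = len(init)
    safe_log = lambda p: math.log(p) if p > 0 else -math.inf
    dp = [[-math.inf] * n for _ in obs_loglik]    # best log-score ending in state s
    back = [[0] * n for _ in obs_loglik]          # backpointers for path recovery
    for s in range(n):
        dp[0][s] = safe_log(init[s]) + obs_loglik[0][s]
    for t in range(1, len(obs_loglik)):
        for s in range(n):
            prev = max(range(n), key=lambda p: dp[t - 1][p] + safe_log(trans[p][s]))
            dp[t][s] = dp[t - 1][prev] + safe_log(trans[prev][s]) + obs_loglik[t][s]
            back[t][s] = prev
    path = [max(range(n), key=lambda s: dp[-1][s])]
    for t in range(len(obs_loglik) - 1, 0, -1):   # trace backpointers
        path.append(back[t][path[-1]])
    return path[::-1]
```

In the paper's setting the transition probabilities would additionally be modulated by the Poisson segment-length prior and the Gaussian-fitted inter-frame difference rather than held fixed.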
Lognonne, P.; Gudkova, T.; Le Feuvre, M.; Garcia, R. F.; Kawamura, T.; Banerdt, B.; Kobayashi, N.
2011-12-01
and model the differences of seismic propagation properties between Mars and the Moon, and use this modeling to estimate the seismic response of impacts on Mars, as a function of both the impactor characteristics (mass and velocity) and epicentral distance. We then use statistical models of impactors, confirmed by both the Apollo seismic observations and the Mars Orbiter impact observations, to estimate the present flux on Mars and to constrain the rate of seismic impact detection, as well as the expected probability of further locating these events by differential remote sensing. This analysis is performed by taking into account both the expected performance of the VBB seismometer of GEMS and the expected environmental noise after its deployment on the Martian surface. The perspectives in terms of crustal and upper mantle seismic imaging are finally provided in conclusion for both GEMS on Mars and SELENE-2 on the Moon.
Impact of the latest measurement of Hubble constant on constraining inflation models
Zhang, Xin
2017-06-01
We investigate how the constraint results for inflation models are affected by including the latest local measurement of $H_0$ in the global fit. We use the observational data, including the Planck CMB full data, the BICEP2 and Keck Array CMB B-mode data, the BAO data, and the latest measurement of the Hubble constant, to constrain the $\Lambda$CDM+$r$+$N_{\rm eff}$ model, and the obtained 1$\sigma$ and 2$\sigma$ contours of $(n_s, r)$ are compared to the theoretical predictions of selected inflationary models. We find that, in this fit, scale invariance is excluded at only the 3.3$\sigma$ level, and $\Delta N_{\rm eff}>0$ is favored at the 1.6$\sigma$ level. The natural inflation model is now excluded at more than the 2$\sigma$ level; the Starobinsky $R^2$ model becomes favored at only around the 2$\sigma$ level; the most favored model becomes the spontaneously broken SUSY inflation model; and the brane inflation model is also well consistent with the current data in this case.
Benetos, Emmanouil; Dixon, Simon
2013-03-01
A method for automatic transcription of polyphonic music is proposed in this work that models the temporal evolution of musical tones. The model extends the shift-invariant probabilistic latent component analysis method by supporting the use of spectral templates that correspond to sound states such as attack, sustain, and decay. The order of these templates is controlled using hidden Markov model-based temporal constraints. In addition, the model can exploit multiple templates per pitch and instrument source. The shift-invariant aspect of the model makes it suitable for music signals that exhibit frequency modulations or tuning changes. Pitch-wise hidden Markov models are also utilized in a postprocessing step for note tracking. For training, sound state templates were extracted for various orchestral instruments using isolated note samples. The proposed transcription system was tested on multiple-instrument recordings from various datasets. Experimental results show that the proposed model is superior to a non-temporally constrained model and also outperforms various state-of-the-art transcription systems for the same experiment.
Constraining the dark energy models with H(z) data: An approach independent of H0
Anagnostopoulos, Fotios K.; Basilakos, Spyros
2018-03-01
We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and Type Ia supernova (SNIa) data, we find that the joint H(z)/SNIa statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis alone. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and SNIa probes correctly reveals the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when the sample is increased to 100 H(z) measurements. Such a goal can be achieved in the future, especially in light of the next generation of surveys.
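The idea of a chi-square estimator unaffected by the Hubble constant can be illustrated with H(z) data: since H_th(z) = H0 E(z), the chi-square is quadratic in H0 and the minimization over H0 can be done analytically. A minimal sketch, not the authors' code (the data below are synthetic):

```python
def chi2_marg_h0(E, H_obs, sigma):
    """chi-square for H(z) data with H0 analytically minimized out.
    With H_th(z) = H0 * E(z), chi2(H0) = A*H0**2 - 2*B*H0 + C is quadratic
    in H0, so its minimum over H0 is C - B**2 / A (requires A > 0)."""
    A = sum(e * e / s ** 2 for e, s in zip(E, sigma))
    B = sum(e * h / s ** 2 for e, h, s in zip(E, H_obs, sigma))
    C = sum(h * h / s ** 2 for h, s in zip(H_obs, sigma))
    return C - B * B / A

# Synthetic check: data generated with H0 = 70 exactly give chi2 ~ 0,
# whatever H0 value one might otherwise have assumed.
E = [1.0, 1.5, 2.3]
H_obs = [70.0 * e for e in E]
print(chi2_marg_h0(E, H_obs, [2.0, 2.0, 2.0]))
```

The best-fit normalization H0* = B/A never needs to be quoted, which is what makes the estimator independent of the adopted Hubble constant.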
A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery
Directory of Open Access Journals (Sweden)
WANG Shaoyu
2016-12-01
Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby exploiting spatial correlation information. Spectral information and spatial correlation information are considered simultaneously when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of each pixel's posterior classification probability can be obtained using loopy belief propagation. The experiment shows that the proposed algorithm can effectively maintain the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.
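How a neighbourhood prior can tilt a fuzzy-c-means membership update might be sketched as below. This is an illustrative stand-in for the paper's CRF term, not its actual formulation; `beta` is a hypothetical coupling weight.

```python
def constrained_memberships(dist2, prior, beta=1.0, eps=1e-12):
    """One fuzzy-c-means membership update (fuzzifier m = 2) tilted by a
    neighbourhood prior, an illustrative stand-in for the CRF term:
    u_k is proportional to prior_k**beta / dist2_k, then normalized.
    dist2[k] is the squared spectral distance to cluster center k;
    prior[k] is the fraction of neighbours currently labeled k."""
    w = [(p ** beta) / max(d, eps) for p, d in zip(prior, dist2)]
    z = sum(w)
    return [x / z for x in w]

# A pixel equidistant from both cluster centers: the neighbourhood prior
# breaks the tie in favour of the class its neighbours carry.
print(constrained_memberships([1.0, 1.0], [0.8, 0.2]))  # [0.8, 0.2]
```

With `beta = 0` the update reduces to plain FCM, so the parameter interpolates between purely spectral and spatially constrained clustering.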
Directory of Open Access Journals (Sweden)
Yin Wang
2014-01-01
Full Text Available We present a nonparametric shape-constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale-selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of a circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation represents a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.
Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization
Reyes, Juan Carlos De los
2013-11-01
We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
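For orientation, a smoothed 1-D total-variation denoiser by plain gradient descent is sketched below. The bilevel problem in the abstract learns the fidelity/regularization weights from data; here `weight` is simply fixed by hand, and all parameter values are illustrative.

```python
import math

def tv_denoise_1d(f, weight=0.1, n_iter=1500, tau=0.02, eps=1e-4):
    """Gradient descent on 0.5*sum (u-f)**2 + weight*sum sqrt(diff**2 + eps),
    a smoothed 1-D total-variation model. The step size tau is chosen small
    enough for the smoothing parameter eps to keep the descent stable."""
    u = list(f)
    n = len(u)
    for _ in range(n_iter):
        g = [u[i] - f[i] for i in range(n)]          # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            t = weight * d / math.sqrt(d * d + eps)  # smoothed TV gradient
            g[i] -= t
            g[i + 1] += t
        for i in range(n):
            u[i] -= tau * g[i]
    return u

def total_variation(v):
    return sum(abs(v[i + 1] - v[i]) for i in range(len(v) - 1))

noisy = [0.0, 0.12, -0.08, 0.05, 1.1, 0.93, 1.04, 1.0]
smooth = tv_denoise_1d(noisy)
print(total_variation(smooth) < total_variation(noisy))  # True
```

The learned-weight version replaces the hand-picked `weight` with the solution of the outer optimization problem described in the abstract.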
Numerical modeling of a remote Himalayan glacier constrained by satellite data
Scherler, Dirk; Farinotti, Daniel; Anderson, Robert S.; Strecker, Manfred R.
2010-05-01
Himalayan glaciers are amongst the least studied and understood glaciers on Earth, yet their future behavior and water budget are crucial for densely populated areas in south, central, and eastern Asia. Here, we use remotely sensed glacier-surface velocities from a glacier in the upper Tons valley of western Uttaranchal in the Indian Himalaya to evaluate the results of a numerical glacier model. We estimate the present-day ice thickness distribution from surface topography with an approach based on mass conservation and principles of ice-flow dynamics. The numerical glacier model is based on the shallow ice approximation and requires a mass-balance profile and glacier-bedrock topography as input. Optical satellite imagery is used for mapping glacier extents and deriving glacier-surface velocities from cross-correlation of multi-temporal ASTER and SPOT images. Modeling results, employing a mass-balance profile from nearby monitored glaciers with a better observational database, indicate good agreement between observed and modeled glacier extents and surface velocities. Discrepancies between model and observation in the lower part of the glacier are likely related to (1) poorly constrained effects of debris cover, and (2) the present disequilibrium and down-wasting of the glacier. Our technique will be useful for comparative analyses of glacial behavior worldwide, as most data for our study were obtained from analysis of remote-sensing data, available for virtually any region on Earth.
A dispersal-constrained habitat suitability model for predicting invasion of alpine vegetation.
Williams, Nicholas S G; Hahs, Amy K; Morgan, John W
2008-03-01
Developing tools to predict the location of new biological invasions is essential if exotic species are to be controlled before they become widespread. Currently, alpine areas in Australia are largely free of exotic plant species but face increasing pressure from invasive species due to global warming and intensified human use. To predict the potential spread of the highly invasive orange hawkweed (Hieracium aurantiacum) from existing founder populations on the Bogong High Plains in southern Australia, we developed an expert-based, spatially explicit, dispersal-constrained habitat suitability model. The model combines a habitat suitability index, developed from disturbance, site wetness, and vegetation community parameters, with a phenomenological dispersal kernel that uses wind direction and observed dispersal distances. After generating risk maps that defined the relative suitability for H. aurantiacum establishment across the study area, we intensively searched several locations to evaluate the model. The highest relative suitability for H. aurantiacum establishment was southeast of the initial infestations. Native tussock grasslands and disturbed areas had high suitability for H. aurantiacum establishment. Extensive field searches failed to detect new populations. Time-step evaluation using the locations of populations known in 1998-2000 accurately assigned high relative suitability to locations where H. aurantiacum had established post-2003 (AUC [area under curve] = 0.855 +/- 0.035). This suggests our model has good predictive power and will improve the ability to detect populations and prioritize areas for ongoing monitoring.
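The product structure of such a risk map (a habitat suitability index multiplied by a wind-biased dispersal kernel) can be sketched as below. The kernel form and every parameter value here are assumptions for illustration, not the authors' calibration.

```python
import math

def establishment_risk(suitability, dist_m, bearing_deg,
                       wind_deg=135.0, mean_dist=500.0, kappa=1.0):
    """Relative establishment risk = habitat suitability (0..1)
    x exponential distance kernel x von-Mises-style wind-direction bias.
    wind_deg is the prevailing dispersal direction; kappa sets how
    strongly the kernel is concentrated downwind. Illustrative only."""
    distance_term = math.exp(-dist_m / mean_dist)
    angle = math.radians(bearing_deg - wind_deg)
    direction_term = math.exp(kappa * (math.cos(angle) - 1.0))
    return suitability * distance_term * direction_term

# Downwind sites at the same distance score higher than upwind ones.
downwind = establishment_risk(0.9, 500.0, 135.0)
upwind = establishment_risk(0.9, 500.0, 315.0)
print(downwind > upwind)  # True
```

Evaluating this product over a grid of candidate sites yields exactly the kind of relative-suitability surface the abstract describes, with unsuitable habitat (suitability 0) scoring zero risk regardless of distance.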
Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial
Directory of Open Access Journals (Sweden)
A. Robinson
2011-04-01
Full Text Available Using a new approach to force an ice sheet model, we performed an ensemble of simulations of the Greenland Ice Sheet evolution during the last two glacial cycles, with emphasis on the Eemian Interglacial. This ensemble was generated by perturbing four key parameters in the coupled regional climate-ice sheet model and by introducing additional uncertainty in the prescribed "background" climate change. The sensitivity of the surface melt model to climate change was determined to be the dominant driver of ice sheet instability, as reflected by simulated ice sheet loss during the Eemian Interglacial period. To eliminate unrealistic parameter combinations, constraints from present-day and paleo information were applied. The constraints include (i) the diagnosed present-day surface mass balance partition between surface melting and ice discharge at the margin; (ii) the modeled present-day elevation at GRIP; and (iii) the modeled elevation reduction at GRIP during the Eemian. Using these three constraints, a total of 360 simulations with 90 different model realizations were filtered down to 46 simulations and 20 model realizations considered valid. The paleo constraint eliminated the more sensitive melt parameter values, in agreement with the surface mass balance partition assumption. The constrained simulations resulted in a range of Eemian ice loss of 0.4–4.4 m sea level equivalent, with a more likely range of about 3.7–4.4 m sea level equivalent if the GRIP δ^{18}O isotope record can be considered an accurate proxy for the precipitation-weighted annual mean temperatures.
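The three-constraint sieve that reduces an ensemble of simulations to the valid subset can be expressed as a simple filter over diagnostic windows. The diagnostic names and numerical windows below are made-up placeholders, not the study's actual values.

```python
def filter_ensemble(runs, constraints):
    """Keep simulations whose diagnostics fall inside every (lo, hi)
    window: the same sieve logic as the paper's three constraints."""
    return [run for run in runs
            if all(lo <= run[key] <= hi
                   for key, (lo, hi) in constraints.items())]

# Hypothetical ensemble members and constraint windows (illustrative).
runs = [
    {"melt_fraction": 0.55, "grip_elev_m": 3200.0, "eemian_drop_m": 400.0},
    {"melt_fraction": 0.90, "grip_elev_m": 3150.0, "eemian_drop_m": 900.0},
    {"melt_fraction": 0.50, "grip_elev_m": 2800.0, "eemian_drop_m": 350.0},
]
constraints = {
    "melt_fraction": (0.4, 0.7),      # SMB partition window
    "grip_elev_m": (3100.0, 3300.0),  # present-day GRIP elevation
    "eemian_drop_m": (200.0, 600.0),  # Eemian elevation reduction
}
print(len(filter_ensemble(runs, constraints)))  # 1
```

Applying the windows jointly, rather than one at a time, is what lets a paleo constraint remove parameter combinations that each present-day constraint would have accepted on its own.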
Internet gaming disorder: Inadequate diagnostic criteria wrapped in a constraining conceptual model.
Starcevic, Vladan
2017-06-01
Background and aims The paper "Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field" by Kuss, Griffiths, and Pontes (in press) critically examines the DSM-5 diagnostic criteria for Internet gaming disorder (IGD) and addresses the issue of whether IGD should be reconceptualized as gaming disorder, regardless of whether video games are played online or offline. This commentary provides additional critical perspectives on the concept of IGD. Methods The focus of this commentary is on the addiction model on which the concept of IGD is based, the nature of the DSM-5 criteria for IGD, and the inclusion of withdrawal symptoms and tolerance as the diagnostic criteria for IGD. Results The addiction framework on which the DSM-5 concept of IGD is based is not without problems and represents only one of multiple theoretical approaches to problematic gaming. The polythetic, non-hierarchical DSM-5 diagnostic criteria for IGD make the concept of IGD unacceptably heterogeneous. There is no support for maintaining withdrawal symptoms and tolerance as the diagnostic criteria for IGD without their substantial revision. Conclusions The addiction model of IGD is constraining and does not contribute to a better understanding of the various patterns of problematic gaming. The corresponding diagnostic criteria need a thorough overhaul, which should be based on a model of problematic gaming that can accommodate its disparate aspects.
An Anatomically Constrained Model for Path Integration in the Bee Brain.
Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley
2017-10-23
Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task-a celestial-cue-based visual compass and an optic-flow-based visual odometer-but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings-suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.
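The compass-plus-odometer integration at the heart of path integration reduces, in its simplest reading, to accumulating a displacement vector. A minimal sketch of that bookkeeping, not of the anatomically constrained circuit model itself:

```python
import math

def home_vector(segments):
    """Integrate (heading_deg, speed, duration) legs into a home vector:
    the compass supplies the heading, the odometer supplies speed*duration.
    Heading 0 deg is along +x; returning home means flying the negated
    vector, i.e. steering toward bearing_home."""
    x = y = 0.0
    for heading_deg, speed, duration in segments:
        h = math.radians(heading_deg)
        x += speed * duration * math.cos(h)
        y += speed * duration * math.sin(h)
    bearing_home = math.degrees(math.atan2(-y, -x)) % 360.0
    distance_home = math.hypot(x, y)
    return bearing_home, distance_home

# 100 units out along +x: home lies at bearing 180, distance 100.
print(home_vector([(0.0, 1.0, 100.0)]))  # (180.0, 100.0)
```

The holonomic-motion point in the abstract corresponds to measuring the velocity vector directly rather than assuming it is aligned with the body axis; in this sketch that simply means the (speed, heading) pair describes ground velocity, not body orientation.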
Collatz, G. J.; Kawa, S. R.; Liu, Y.; Ivanoff, A.
2012-12-01
Terrestrial net carbon fluxes play a dominant role in the seasonality, interannual variability, and long-term accumulation of CO2 in the atmosphere. The expansion of atmospheric CO2 measurements, including those from satellite-based observations, should provide strong constraints on process models that attempt to explain these observed variabilities. Here we evaluate the ability of the current surface CO2 observation network to distinguish between different model formulations, and we identify the locations and timing of CO2 observations needed to resolve important carbon cycle processes. The standard CASA-GFEDv3 terrestrial carbon flux model is driven by NDVI and MERRA meteorology, and CO2 is distributed in the atmosphere using transport from MERRA. The standard model is then modified to include lags in the seasonal cycle of gross fluxes, different magnitudes of gross fluxes, imposition of a global 2 PgC/yr carbon sink, and the absence of fire emissions. Comparisons of the predicted CO2 mixing ratios with observations show that the standard model does a good job of capturing the daily variability and seasonal cycles but not the observed interannual variability. Lagged gross fluxes and increased magnitudes of the gross fluxes have large impacts on the CO2 seasonal cycle, while the imposed net carbon sink is difficult to discern. Global fires are not detectable in the current surface observation network. Maps of modeled surface and column CO2 mixing ratio differences help to identify where, when, and at what precision and accuracy observations need to be made in order to constrain modeled processes.
Drake, J. E.; Darby, B. A.; Giasson, M.-A.; Kramer, M. A.; Phillips, R. P.; Finzi, A. C.
2013-02-01
Plant roots release a wide range of chemicals into soils. This process, termed root exudation, is thought to increase the activity of microbes and the exoenzymes they synthesize, leading to accelerated rates of carbon (C) mineralization and nutrient cycling in rhizosphere soils relative to bulk soils. The nitrogen (N) content of microbial biomass and exoenzymes may introduce a stoichiometric constraint on the ability of microbes to effectively utilize the root exudates, particularly if the exudates are rich in C but low in N. We combined a theoretical model of microbial activity with an exudation experiment to test the hypothesis that the ability of soil microbes to utilize root exudates for the synthesis of additional biomass and exoenzymes is constrained by N availability. The field experiment simulated exudation by automatically pumping solutions of chemicals often found in root exudates ("exudate mimics") containing C alone or C in combination with N (C : N ratio of 10) through microlysimeter "root simulators" into intact forest soils in two 50-day experiments. The delivery of C-only exudate mimics increased microbial respiration but had no effect on microbial biomass or exoenzyme activities. By contrast, experimental delivery of exudate mimics containing both C and N significantly increased microbial respiration, microbial biomass, and the activity of exoenzymes that decompose low molecular weight components of soil organic matter (SOM, e.g., cellulose, amino sugars), while decreasing the activity of exoenzymes that degrade high molecular weight SOM (e.g., polyphenols, lignin). The modeling results were consistent with the experiments; simulated delivery of C-only exudates induced microbial N-limitation, which constrained the synthesis of microbial biomass and exoenzymes. Exuding N as well as C alleviated this stoichiometric constraint in the model, allowing for increased exoenzyme production, the priming of decomposition, and a net release of N from SOM.
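The stoichiometric constraint in the theoretical model can be caricatured as a Liebig minimum: biomass built from exudate C is capped by the N required at the microbial C:N ratio, with surplus C respired. This is a sketch of the general principle, not the paper's model; `cn_biomass` and `cue` are illustrative values.

```python
def microbial_growth(c_exudate, n_exudate, cn_biomass=8.0, cue=0.3):
    """Liebig-minimum caricature of the stoichiometric constraint:
    biomass C built from exudate C is capped by the N needed at the
    microbial C:N ratio; C that cannot be assimilated is respired.
    cn_biomass and cue (carbon-use efficiency) are illustrative."""
    c_assimilable = cue * c_exudate          # C usable for biomass + enzymes
    c_n_supported = n_exudate * cn_biomass   # C that the supplied N can match
    growth_c = min(c_assimilable, c_n_supported)
    respired_c = c_exudate - growth_c
    return growth_c, respired_c

# C-only exudates: no net biomass, everything respired (N-limited).
print(microbial_growth(10.0, 0.0))  # (0.0, 10.0)
# C plus N at C:N = 10: growth is now possible.
print(microbial_growth(10.0, 1.0))
```

This reproduces the qualitative experimental contrast: C-only additions raise respiration without building biomass, while C plus N additions support new biomass and enzyme synthesis.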
Constraining the top-Higgs sector of the standard model effective field theory
Cirigliano, V.; Dekens, W.; de Vries, J.; Mereghetti, E.
2016-08-01
Working in the framework of the Standard Model effective field theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization-group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model effective field theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings.
Commitment Versus Persuasion in the Three-Party Constrained Voter Model
Mobilia, Mauro
2013-04-01
In the framework of the three-party constrained voter model, where voters of two radical parties (A and B) interact with "centrists" (C and Cζ), we study the competition between a persuasive majority and a committed minority. In this model, A's and B's are incompatible voters that can convince centrists or be swayed by them. Here, radical voters are more persuasive than centrists, whose sub-population comprises susceptible agents C and a fraction ζ of centrist zealots Cζ. Whereas C's may adopt the opinions A and B with respective rates 1+δA and 1+δB (with δA ≥ δB > 0), Cζ's are committed individuals that always remain centrists. Furthermore, A and B voters can become (susceptible) centrists C with rate 1. The resulting competition between commitment and persuasion is studied in the mean-field limit and for a finite population on a complete graph. At the mean-field level, there is a continuous transition from a coexistence phase when ζ < Δc to a phase where centrism prevails when ζ ≥ Δc. In a finite population, when ζ < Δc, consensus is reached much more slowly; in this regime persuasive voters and centrists coexist when δA > δB, whereas all species coexist when δA = δB. When ζ ≥ Δc and the initial density of centrists is low, one finds τ ~ ln N (when N ≫ 1). Our analytical findings are corroborated by stochastic simulations.
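One plausible reading of the interaction rates gives the mean-field equations sketched below. This is a hedged reconstruction, not the paper's exact system: susceptible centrists are assumed to adopt A (resp. B) at rate 1+δA (1+δB) on meeting a radical, while radicals turn centrist at rate 1 on meeting any centrist, zealot or susceptible.

```python
def mean_field_voter(a0, b0, zeta, dA, dB, dt=0.01, steps=20000):
    """Euler integration of a plausible mean-field reading of the rates.
    a, b are radical densities; c = 1 - zeta - a - b are susceptible
    centrists; zeta is the zealot fraction. The net drift per radical
    simplifies to (dA*c - zeta), since gains (1+dA)*c offset losses
    (c + zeta)."""
    a, b = a0, b0
    for _ in range(steps):
        c = 1.0 - zeta - a - b
        da = a * (dA * c - zeta)
        db = b * (dB * c - zeta)
        a += dt * da
        b += dt * db
    return a, b

# High zealotry wipes out the radicals; low zealotry lets them dominate.
a_hi, b_hi = mean_field_voter(0.3, 0.3, zeta=0.3, dA=0.2, dB=0.1)
a_lo, b_lo = mean_field_voter(0.2, 0.2, zeta=0.01, dA=0.5, dB=0.5)
print(a_hi + b_hi < 1e-3, a_lo + b_lo > 0.9)  # True True
```

The sign of dA*c - zeta is what produces the commitment-versus-persuasion transition in this sketch: once zealots outweigh the persuasion advantage at the attainable centrist density, the radical densities can only decay.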
Root, Bart; Tarasov, Lev; van der Wal, Wouter
2014-05-01
The global ice budget is still under discussion because the observed 120-130 m eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With the increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain glacial isostatic adjustment (GIA) in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regional calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be concentrated more toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.
Constraining Early Cenozoic exhumation of the British Isles with vertical profile modelling
Doepke, Daniel; Cogné, Nathan; Chew, David
2016-04-01
Despite decades of research, the Early Cenozoic exhumation history of Ireland and Britain is still poorly understood and subject to contentious debate (e.g., Davis et al., 2012 and subsequent comments). One reason for this debate is the difficulty of constraining the evolution of the onshore parts of the British Isles in both time and space. The paucity of Mesozoic and Cenozoic onshore outcrops makes direct analysis of this time span difficult. Furthermore, Ireland and Britain are situated at a passive margin, where the amount of post-rift exhumation is generally very low. Classical thermochronological tools are therefore near the edge of their resolution and make precise dating of post-rift cooling events challenging. In this study we used the established apatite fission track and (U-Th-Sm)/He techniques, but took advantage of the vertical profile approach of Gallagher et al. (2005), implemented in the QTQt modelling package (Gallagher, 2012), to better constrain the thermal histories. This method allowed us to define the geographical extent of a Late Cretaceous - Early Tertiary cooling event and to show that it was centered on the Irish Sea. Thus, we argue that this cooling event is linked to the underplating of hot material below the crust centered on the Irish Sea (Jones et al., 2002; Al-Kindi et al., 2003), and demonstrate that such a conclusion would have been harder, if not impossible, to draw by modelling the samples individually without the use of the vertical profile approach. References Al-Kindi, S., White, N., Sinha, M., England, R., and Tiley, R., 2003, Crustal trace of a hot convective sheet: Geology, v. 31, no. 3, p. 207-210. Davis, M.W., White, N.J., Priestley, K.F., Baptie, B.J., and Tilmann, F.J., 2012, Crustal structure of the British Isles and its epeirogenic consequences: Geophysical Journal International, v. 190, no. 2, p. 705-725. Jones, S.M., White, N., Clarke, B.J., Rowley, E., and Gallagher, K., 2002, Present and past influence of the Iceland
Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven
2016-04-01
The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (i.e. CPOM stand-alone, RASM high resolution regional ice-ocean model, MetOffice fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics, with a substantial change of the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM, see Wilchinsky, 2010) have shown that individual floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing, thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use for the first time in the context of sea ice research a mathematical metric, the tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery, and to the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy as well as its temporal evolution against the EAP model we are able to constrain the
Constraining soil C cycling with strategic, adaptive action for data and model reporting
Harden, J. W.; Swanston, C.; Hugelius, G.
2015-12-01
Regional to global carbon assessments include a variety of models, data sets, and conceptual structures. This includes strategies for representing the role and capacity of soils to sequester, release, and store carbon. Traditionally, many soil carbon data sets emerged from agricultural missions focused on mapping and classifying soils to enhance and protect production of food and fiber. More recently, soil carbon assessments have allowed for more strategic measurement to address the functional and spatially explicit role that soils play in land-atmosphere carbon exchange. While soil data sets are increasingly inter-comparable and increasingly sampled to accommodate global assessments, soils remain poorly constrained or understood with regard to their role in spatio-temporal variations in carbon exchange. A more deliberate approach to rapid improvement in our understanding involves a community-based activity that embraces both a nimble data repository and a dynamic structure for prioritization. Data input and output can be transparent and retrievable as data-derived products, while also being subjected to rigorous queries for merging and harmonization into a searchable, comprehensive, transparent database. Meanwhile, adaptive action groups can prioritize data and modeling needs that emerge through workshops, meta-data analyses or model testing. Our continual renewal of priorities should address soil processes, mechanisms, and feedbacks that significantly influence global C budgets and/or significantly impact the needs and services of regional soil resources that are impacted by C management. In order to refine the International Soil Carbon Network, we welcome suggestions for such groups to be led on topics such as but not limited to manipulation experiments, extreme climate events, post-disaster C management, past climate-soil interactions, or water-soil-carbon linkages. We also welcome ideas for a business model that can foster and promote idea and data sharing.
CA-Markov Analysis of Constrained Coastal Urban Growth Modeling: Hua Hin Seaside City, Thailand
Directory of Open Access Journals (Sweden)
Rajendra Shrestha
2013-04-01
Full Text Available Thailand, a developing country in Southeast Asia, is experiencing rapid development, particularly urban growth as a response to the expansion of the tourism industry. Hua Hin city provides an excellent example of an area where urbanization has flourished due to tourism. This study focuses on how the dynamic urban horizontal expansion of the seaside city of Hua Hin is constrained by the coast, thus making sustainability for this popular tourist destination—managing and planning for its local inhabitants, its visitors, and its sites—an issue. The study examines the association of land use type and land use change by integrating Geo-Information technology, a statistical model, and CA-Markov analysis for sustainable land use planning. The study identifies that land use types and land use changes from 1999 to 2008 resulted from increased mobility; this trend, in turn, is closely tied to urban horizontal expansion. The sequence of land use change has progressed from forest to agriculture, from agriculture to grassland, and then to bare land and built-up areas. Coastal urban growth has, for a decade, been expanding horizontally from the downtown center along the beach to the western area around the golf course, the southern area along the beach, the southwestern grassland area, and then the northern area near the airport.
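The projection step of a CA-Markov analysis rests on a land-use transition matrix estimated from two classified maps (here, 1999 and 2008). A minimal sketch of the Markov component, with hypothetical classes and transition probabilities rather than values from the study:

```python
import numpy as np

# Hypothetical 4-class transition matrix (rows sum to 1):
# classes = forest, agriculture, grassland, built-up.
# T[i, j] is the probability a cell in class i converts to class j per step.
T = np.array([
    [0.80, 0.15, 0.03, 0.02],   # forest mostly persists, some -> agriculture
    [0.00, 0.70, 0.20, 0.10],   # agriculture -> grassland / built-up
    [0.00, 0.05, 0.60, 0.35],   # grassland -> built-up
    [0.00, 0.00, 0.00, 1.00],   # built-up is absorbing
])

def project(shares, steps):
    """Propagate land-use area shares forward `steps` Markov transitions."""
    shares = np.asarray(shares, dtype=float)
    for _ in range(steps):
        shares = shares @ T
    return shares

shares_1999 = np.array([0.40, 0.30, 0.20, 0.10])  # illustrative initial shares
shares_2008 = project(shares_1999, 1)
```

The cellular-automaton component of the method would additionally allocate these projected class shares in space using neighborhood suitability rules; only the Markov bookkeeping is shown here.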
Supporting the search for the CEP location with nonlocal PNJL models constrained by lattice QCD
Energy Technology Data Exchange (ETDEWEB)
Contrera, Gustavo A. [IFLP, UNLP, CONICET, Facultad de Ciencias Exactas, La Plata (Argentina); Gravitation, Astrophysics and Cosmology Group, FCAyG, UNLP, La Plata (Argentina); CONICET, Buenos Aires (Argentina); Grunfeld, A.G. [CONICET, Buenos Aires (Argentina); Comision Nacional de Energia Atomica, Departamento de Fisica, Buenos Aires (Argentina); Blaschke, David [University of Wroclaw, Institute of Theoretical Physics, Wroclaw (Poland); Joint Institute for Nuclear Research, Moscow Region (Russian Federation); National Research Nuclear University (MEPhI), Moscow (Russian Federation)
2016-08-15
We investigate the possible location of the critical endpoint in the QCD phase diagram based on nonlocal covariant PNJL models including a vector interaction channel. The form factors of the covariant interaction are constrained by lattice QCD data for the quark propagator. The comparison of our results for the pressure including the pion contribution and the scaled pressure shift ΔP/T^4 vs. T/T_c with lattice QCD results shows a better agreement when Lorentzian form factors for the nonlocal interactions and the wave function renormalization are considered. The strength of the vector coupling is used as a free parameter which influences results at finite baryochemical potential. It is used to adjust the slope of the pseudocritical temperature of the chiral phase transition at low baryochemical potential and the scaled pressure shift accessible in lattice QCD simulations. Our study, albeit presently performed at the mean-field level, supports the very existence of a critical point and favors its location within a region that is accessible in experiments at the NICA accelerator complex. (orig.)
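The nonlocal interaction enters through a momentum-dependent form factor. A minimal sketch comparing a Gaussian and a Lorentzian parameterization, illustrating the heavier momentum tail of the Lorentzian form favored by the lattice comparison (the scale Λ is hypothetical, not a fitted value from the paper):

```python
import numpy as np

Lam = 0.9                         # GeV; hypothetical regulator scale
p = np.linspace(0.0, 3.0, 301)    # momentum grid (GeV)

g_gauss = np.exp(-(p / Lam) ** 2)          # Gaussian form factor
g_lorentz = 1.0 / (1.0 + (p / Lam) ** 2)   # Lorentzian form factor

# Both are normalized to 1 at p = 0, but the Lorentzian falls off only as
# a power law, retaining far more strength at large momentum.
```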
Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model
Daout, S.; Barbot, S.; Peltzer, G.; Doin, M.-P.; Liu, Z.; Jolivet, R.
2016-11-01
Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below the metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, with 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology to estimate the tectonic loading and seismic potential of buried fault networks.
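The conservation-of-motion idea can be illustrated by treating observed horizontal motion as a linear combination of slip on faults with known orientations and solving for the rates. A toy two-fault sketch with hypothetical azimuths and a synthetic observation (the study itself inverts for geometry and rates jointly in a Bayesian framework):

```python
import numpy as np

def unit_vec(azimuth_deg):
    """Horizontal unit vector (east, north) for a given azimuth."""
    a = np.deg2rad(azimuth_deg)
    return np.array([np.sin(a), np.cos(a)])

u_saf = unit_vec(118)      # hypothetical San Andreas strike direction
u_thrust = unit_vec(205)   # hypothetical updip direction of a thrust ramp

# Synthetic "observed" block motion: 25.5 mm/yr strike-slip + 2.7 mm/yr updip
v_obs = 25.5 * u_saf + 2.7 * u_thrust

# Conservation of motion as a linear system: recover the two slip rates.
G = np.column_stack([u_saf, u_thrust])          # design matrix (2 x 2)
rates, *_ = np.linalg.lstsq(G, v_obs, rcond=None)
```

Because motion conservation ties the rates to one observed vector, the number of free parameters drops, which is the methodological point made in the abstract.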
Directory of Open Access Journals (Sweden)
Qunyi Xie
2016-01-01
Full Text Available Content-based image retrieval has recently become an important research topic and has been widely used for managing images from repositories. In this article, we address an efficient technique, called MNGS, which integrates multiview constrained nonnegative matrix factorization (NMF) and Gaussian mixture model (GMM)-based spectral clustering for image retrieval. In the proposed methodology, the multiview NMF scheme provides competitive sparse representations of underlying images through decomposition of a similarity-preserving matrix that is formed by fusing multiple features from different visual aspects. In particular, the proposed method merges manifold constraints into the standard NMF objective function to impose an orthogonality constraint on the basis matrix and satisfy the structure preservation requirement of the coefficient matrix. To manipulate the clustering method on sparse representations, this paper develops a GMM-based spectral clustering method in which the Gaussian components are regrouped in spectral space, which significantly improves the retrieval effectiveness. In this way, image retrieval of the whole database translates to a nearest-neighbour search in the cluster containing the query image. Simultaneously, this study investigates the proof of convergence of the objective function and the analysis of the computational complexity. Experimental results on three standard image datasets reveal the advantages that can be achieved with the proposed retrieval scheme.
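At the core of the method is nonnegative matrix factorization. A minimal sketch of the plain Lee-Seung multiplicative updates, without the paper's multiview fusion or manifold/orthogonality constraints:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=200, eps=1e-9):
    """Plain multiplicative-update NMF minimizing ||V - W H||_F^2.
    The paper's variant adds multiview and manifold terms; this sketch
    shows only the core factorization step."""
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

V = rng.random((20, 30))        # synthetic nonnegative data matrix
W, H = nmf(V, k=5)
err = np.linalg.norm(V - W @ H)
```

The multiplicative form guarantees that W and H stay nonnegative, which is what yields the sparse, parts-based representations the abstract refers to.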
Directory of Open Access Journals (Sweden)
J. P. Werner
2015-03-01
Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
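The key idea, formally updating age-model probabilities rather than weighting all ensemble members equally, can be sketched with a toy ensemble: each candidate age model places the proxy on a time axis, and its posterior weight is proportional to the likelihood of the resulting series given an independent estimate of the regional signal. All series and parameters here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(50)
climate = np.sin(2 * np.pi * t / 25)   # independent estimate of regional signal

def likelihood(proxy, sigma=0.3):
    """Gaussian likelihood of a dated proxy series given the regional signal."""
    resid = proxy - climate
    return np.exp(-0.5 * np.sum((resid / sigma) ** 2))

# Three candidate age models: correctly dated, shifted by 2, shifted by 5 steps.
proxies = [np.sin(2 * np.pi * (t - s) / 25) + 0.1 * rng.standard_normal(t.size)
           for s in (0, 2, 5)]

prior = np.ones(3) / 3                 # a priori, all age models equal
post = prior * np.array([likelihood(p) for p in proxies])
post /= post.sum()                     # Bayesian update of age-model weights
```

As in the abstract, the update concentrates weight on age models consistent with other data in the region, shrinking the reconstruction uncertainty relative to equal weighting.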
International Nuclear Information System (INIS)
Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua
2017-01-01
An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable for important applications in the field of aerospace (e.g., space docking tests). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81% to 2.23% full scale (FS), and the largest class II error is reduced from 3.425% to 1.871% FS. (paper)
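With more measurement branches than wrench components, recovering the six-dimensional force from branch readings is an over-determined linear problem, and the redundancy lets least squares average out branch noise. In the sketch below, the 8×6 sensitivity matrix is random, standing in for a real calibrated structure; the paper's model additionally distinguishes tensile from compressive branch stiffness, which is not represented here:

```python
import numpy as np

rng = np.random.default_rng(2)

# 8 measurement branches sense a 6-component wrench (Fx, Fy, Fz, Mx, My, Mz).
G = rng.standard_normal((8, 6))                 # hypothetical sensitivity matrix
wrench_true = np.array([10.0, -4.0, 25.0, 0.5, -1.2, 0.8])
branch_readings = G @ wrench_true               # ideal (noise-free) readings

# Over-determined system: recover the wrench by least squares.
wrench_est, *_ = np.linalg.lstsq(G, branch_readings, rcond=None)
```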
DEFF Research Database (Denmark)
Øjelund, Henrik; Sadegh, Payman
2000-01-01
be obtained. This paper presents a new approach for system modelling under partial (global) information (or the so-called Gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations......Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation...... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under a prior linear time invariant structure where local regression fails as a result of high dimensionality.
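The local-approximation baseline the paper builds on can be sketched as a locally weighted linear fit: a Gaussian kernel weights data near the evaluation point, and weighted normal equations give the local intercept as the function estimate. This omits the paper's gray-box coupling to global prior structure:

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Locally weighted linear fit at x0 with Gaussian kernel bandwidth h.
    Returns the estimated function value at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted normal equations
    return beta[0]                               # intercept = estimate at x0

x = np.linspace(0, 1, 101)
y = x ** 2
y0 = local_linear(x, y, 0.5, h=0.05)             # should be close to 0.25
```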
Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.
Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F
2009-11-01
Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criteria, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criteria for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.
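The ELS idea of splitting one complex warp into N simple local displacements recombined by weighted least squares can be caricatured in its simplest case: for a pure-translation warp, the weighted least-squares combination of local proposals reduces to a confidence-weighted mean. All landmark proposals and weights below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

true_shift = np.array([2.0, -1.0])               # the "complex" warp (translation)
# N = 15 patch-experts each propose a noisy local displacement...
local = true_shift + 0.2 * rng.standard_normal((15, 2))
# ...with a confidence weight per patch-expert (synthetic here).
w = rng.uniform(0.5, 1.0, 15)

# Weighted least-squares combination of the N simple displacements:
global_shift = (w[:, None] * local).sum(axis=0) / w.sum()
```

For richer warps (similarity, piecewise affine), the same weighted least-squares step solves a small linear system instead of a weighted mean, but the decomposition logic is identical.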
International Nuclear Information System (INIS)
Alipour, Manijeh; Mohammadi-Ivatloo, Behnam; Zare, Kazem
2014-01-01
Highlights: • Short-term self-scheduling problem of customers with CHP units is conducted. • Power demand and pool prices are forecasted using ARIMA models. • Risk management is conducted by implementing the CVaR methodology. • The demand response program is implemented in the self-scheduling problem of CHP units. • The non-convex feasible operation region of different types of CHP units is modeled. - Abstract: This paper presents a stochastic programming framework for solving the scheduling problem faced by an industrial customer with cogeneration facilities, a conventional power production system, and heat-only units. The power and heat demands of the customer are supplied considering demand response (DR) programs. In the proposed DR program, the responsive load can vary in different time intervals. In the paper, the heat-power dual dependency characteristic of different types of CHP units is taken into account. In addition, a heat buffer tank, with the ability to store heat, has been incorporated in the proposed framework. The impact of the market and load uncertainties on the scheduling problem is characterized through a stochastic programming formulation. An autoregressive integrated moving average (ARIMA) technique is used to generate the electricity price and customer demand scenarios. The daily and weekly seasonalities of demand and market prices are taken into account in the scenario generation procedure. The conditional value-at-risk (CVaR) methodology is implemented in order to limit the risk of expected profit due to market price and load forecast volatilities.
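For equiprobable scenarios, CVaR at confidence level α is the average profit over the worst (1−α) fraction of scenarios, which is how it caps downside risk in a scheduling objective. A minimal sketch with synthetic scenario profits:

```python
import numpy as np

def cvar(profits, alpha=0.95):
    """CVaR for equally probable profit scenarios: the mean of the worst
    (1 - alpha) fraction of outcomes."""
    profits = np.sort(np.asarray(profits, dtype=float))   # worst outcomes first
    n_tail = max(1, int(round((1 - alpha) * len(profits))))
    return profits[:n_tail].mean()

rng = np.random.default_rng(4)
profits = rng.normal(1000.0, 200.0, 2000)   # synthetic scenario profits
risk = cvar(profits, alpha=0.95)
```

In the paper's framework this quantity enters the objective with a risk-aversion weight, trading expected profit against the tail average.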
Time-constrained mother and expanding market: emerging model of under-nutrition in India.
Chaturvedi, S; Ramji, S; Arora, N K; Rewal, S; Dasgupta, R; Deshmukh, V
2016-07-25
Persistent high levels of under-nutrition in India despite economic growth continue to challenge political leadership and policy makers at the highest level. The present inductive enquiry was conducted to map the perceptions of mothers and other key stakeholders, to identify emerging drivers of childhood under-nutrition. We conducted a multi-centric qualitative investigation in six empowered action group states of India. The study sample included 509 in-depth interviews with mothers of undernourished and normal nourished children, policy makers, district level managers, implementer and facilitators. Sixty six focus group discussions and 72 non-formal interactions were conducted in two rounds with primary caretakers of undernourished children, Anganwadi Workers and Auxiliary Nurse Midwives. Based on the perceptions of the mothers and other key stakeholders, a model evolved inductively showing core themes as drivers of under-nutrition. The most forceful emerging themes were: multitasking, time constrained mother with dwindling family support; fragile food security or seasonal food paucity; child targeted market with wide availability and consumption of ready-to-eat market food items; rising non-food expenditure, in the context of rising food prices; inadequate and inappropriate feeding; delayed recognition of under-nutrition and delayed care seeking; and inadequate responsiveness of health care system and Integrated Child Development Services (ICDS). The study emphasized that the persistence of child malnutrition in India is also tied closely to the high workload and consequent time constraint of mothers who are increasingly pursuing income generating activities and enrolled in paid labour force, without robust institutional support for childcare. The emerging framework needs to be further tested through mixed and multiple method research approaches to quantify the contribution of time limitation of the mother on the current burden of child under-nutrition.
International Nuclear Information System (INIS)
Liao, Ho-Tang; Chou, Charles C.-K.; Chow, Judith C.; Watson, John G.; Hopke, Philip K.; Wu, Chang-Fu
2015-01-01
This study was conducted to identify and quantify the sources of selected volatile organic compounds (VOCs) and fine particulate matter (PM2.5) by using a partially constrained source apportionment model suitable for multiple-time-resolution data. Hourly VOC, 12-h and 24-h PM2.5 speciation data were collected during three seasons in 2013. Eight factors were retrieved from the Positive Matrix Factorization solutions, and adding source profile constraints enhanced the interpretability of the source profiles. Results showed that the evaporative emission factor was the largest contributor (25%) to VOC mass concentration, while the largest contributor to PM2.5 mass concentration was the soil dust/regional transport-related factor (26%). In terms of risk prioritization, the traffic/industry-related factor was the major contributor to risk from benzene, ethylbenzene, Cr, and polycyclic aromatic hydrocarbons (29–69%), while the petrochemical-related factor contributed most to the Ni risk (36%). This indicates that a larger contributor to mass concentration may not correspond to a higher risk. - Highlights: • We applied a partially constrained receptor model to multiple-time-resolution data. • Hourly VOC, 12-h and 24-h PM2.5 speciation data were combined in the model. • Adding constraints to the model enhanced the interpretability of source profiles. • We applied a risk apportionment approach to obtain the source-specific risk values. • A larger contributor to mass concentration may not correspond to a higher risk. - Combining a constrained receptor model and a risk apportionment approach could provide valuable information for designing effective control strategies.
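The study's central observation, that a factor's share of mass need not match its share of risk, follows directly from weighting species contributions by species-specific unit risks. A toy sketch with invented concentrations and unit-risk values (not the study's numbers):

```python
import numpy as np

# Species: benzene, Cr, Ni; unit risks (risk per ug/m3) are illustrative.
unit_risk = np.array([7.8e-6, 1.2e-2, 2.4e-4])

# Rows: source factors (traffic-like, petrochemical-like);
# columns: species concentrations (ug/m3) apportioned to each factor.
contrib = np.array([
    [2.0, 0.001, 0.00],    # lots of benzene mass, little Cr
    [0.2, 0.003, 0.01],    # little mass overall, but more Cr and Ni
])

mass_share = contrib.sum(axis=1) / contrib.sum()
risk = contrib @ unit_risk          # factor-specific risk
risk_share = risk / risk.sum()
```

Here the first factor dominates mass while the second dominates risk, the same inversion the abstract highlights.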
Models of Short-Term Synaptic Plasticity.
Barroso-Flores, Janet; Herrera-Valdez, Marco A; Galarraga, Elvira; Bargas, José
2017-01-01
We focus on dynamical descriptions of short-term synaptic plasticity. Instead of focusing on the molecular machinery that has been reviewed recently by several authors, we concentrate on the dynamics and functional significance of synaptic plasticity, and review some mathematical models that reproduce different properties of the dynamics of short term synaptic plasticity that have been observed experimentally. The complexity and shortcomings of these models point to the need of simple, yet physiologically meaningful models. We propose a simplified model to be tested in synapses displaying different types of short-term plasticity.
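A standard minimal model of the kind the review surveys is the Tsodyks-Markram resource model of short-term depression: each presynaptic spike consumes a fraction U of an available resource x, which recovers exponentially between spikes, so response amplitudes decline through a spike train. A depression-only sketch with illustrative parameters:

```python
import numpy as np

def tm_depression(spike_times, U=0.5, tau_rec=0.8):
    """Tsodyks-Markram-style depression: returns the response amplitude of
    each spike in a train (times in seconds)."""
    x, t_prev, amps = 1.0, None, []
    for t in spike_times:
        if t_prev is not None:
            # exponential recovery of the resource since the previous spike
            x = 1.0 - (1.0 - x) * np.exp(-(t - t_prev) / tau_rec)
        amps.append(U * x)     # response amplitude of this spike
        x -= U * x             # fraction of the resource consumed
        t_prev = t
    return np.array(amps)

amps = tm_depression(np.arange(0, 1.0, 0.1))   # 10 Hz train, 10 spikes
```

Facilitating synapses are modeled analogously with a second, spike-incremented variable scaling U; the review discusses both regimes.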
Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF
DEFF Research Database (Denmark)
Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J
2018-01-01
PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES...
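As background to AIF-based DCE analysis: in the standard Tofts model, the tissue concentration curve is the arterial input function convolved with Ktrans·exp(−kep·t); the cL-AIF of this work additionally estimates a voxel-specific amplitude correction to the AIF, which is not represented here. A sketch of the baseline forward model with hypothetical parameters:

```python
import numpy as np

t = np.linspace(0, 5, 501)                  # time (minutes)
dt = t[1] - t[0]

# Simple gamma-variate-like bolus standing in for a measured AIF.
aif = (t / 0.5) * np.exp(1 - t / 0.5)

Ktrans, kep = 0.25, 0.6                     # hypothetical rates (1/min)
irf = Ktrans * np.exp(-kep * t)             # tissue impulse response

# Standard Tofts forward model: tissue curve = AIF (*) impulse response.
ct = np.convolve(aif, irf)[: t.size] * dt
```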
Robust and Efficient Constrained DFT Molecular Dynamics Approach for Biochemical Modeling
Czech Academy of Sciences Publication Activity Database
Řezáč, Jan; Levy, B.; Demachy, I.; de la Lande, A.
2012-01-01
Roč. 8, č. 2 (2012), s. 418-427 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords : constrained density functional theory * electron transfer * density fitting Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.389, year: 2012
McKenna, Sean A.; Selroos, Jan-Olof
Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in
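The infinite-medium limit invoked above can be made concrete with the characteristic diffusion time t_char = L²/D_e: matrix diffusion behaves as if the matrix were infinite only while the penetration depth ~sqrt(D_e·t) remains small compared with the block half-width L. With illustrative values (not the study's), a month-scale tracer test sits below this limit while the performance-assessment horizon does not:

```python
# Characteristic time for finite matrix capacity to saturate.
# All parameter values are illustrative.
D_e = 1e-12                    # effective diffusion coefficient (m^2/s)
L = 0.05                       # matrix block half-width (m)
t_char = L**2 / D_e            # seconds (~79 years here)

year = 365.25 * 24 * 3600
t_test = 0.1 * year            # ~month-scale tracer test
t_pa = 1e5 * year              # performance-assessment horizon

infinite_at_test = t_test < t_char   # matrix looks infinite at test scale
infinite_at_pa = t_pa < t_char       # ...but not at the PA scale
```

This is why parameters fitted to tracer tests alone can extrapolate poorly: the test never samples the regime in which the finite diffusive capacity matters.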
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data across conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open
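The selection rule underlying JuPOETs is Pareto dominance across training objectives. A minimal Python sketch (JuPOETs itself is written in Julia) of the dominance test and a non-dominated filter for a two-objective minimization problem, with synthetic objective values:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a is at least as good as b everywhere and
    strictly better somewhere (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Synthetic (objective1, objective2) values for five candidate parameter sets.
pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = pareto_front(pts)   # the non-dominated tradeoff surface
```

In JuPOETs, simulated annealing proposes new parameter sets and this style of dominance ranking decides which candidates join the ensemble near the tradeoff surface.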
Rajaram, H.; Birdsell, D.; Lackey, G.; Karra, S.; Viswanathan, H. S.; Dempsey, D.
2015-12-01
The dramatic increase in the extraction of unconventional oil and gas resources using horizontal wells and hydraulic fracturing (fracking) technologies has raised concerns about potential environmental impacts. Large volumes of hydraulic fracturing fluids are injected during fracking. Incidents of stray gas occurrence in shallow aquifers overlying shale gas reservoirs have been reported; whether these are in any way related to fracking continues to be debated. Computational models serve as useful tools for evaluating potential environmental impacts. We present modeling studies of hydraulic fracturing fluid and gas migration during the various stages of well operation, production, and subsequent plugging. The fluid migration models account for overpressure in the gas reservoir, density contrast between injected fluids and brine, imbibition into partially saturated shale, and well operations. Our results highlight the importance of representing the different stages of well operation consistently. Most importantly, well suction and imbibition both play a significant role in limiting upward migration of injected fluids, even in the presence of permeable connecting pathways. In an overall assessment, our fluid migration simulations suggest very low risk to groundwater aquifers when the vertical separation from a shale gas reservoir is of the order of 1000' or more. Multi-phase models of gas migration were developed to couple flow and transport in compromised wellbores and subsurface formations. These models are useful for evaluating both short-term and long-term scenarios of stray methane release. We present simulation results to evaluate mechanisms controlling stray gas migration, and explore relationships between bradenhead pressures and the likelihood of methane release and transport.
Washington, K.; West, A. J.; Hartmann, J.; Amann, T.; Hosono, T.; Ide, K.
2017-12-01
While analyzing geochemical archives and carbon cycle modelling can further our understanding of the role of silicate weathering as a sink in the long-term carbon cycle, it is necessary to study modern weathering processes to inform these efforts. A recent compilation of data from rivers draining basaltic catchments estimates that rock weathering in active volcanic fields (AVFs) consumes atmospheric CO2 approximately three times faster than in inactive volcanic fields (IVFs), suggesting that the eruption and subsequent weathering of large igneous provinces likely played a major role in the carbon cycle in the geologic past [1]. The study demonstrates a significant correlation between catchment mean annual temperature (MAT) and atmospheric CO2 consumption rate for IVFs. However, CO2 consumption due to weathering of AVFs is not correlated with MAT, as the relationship is complicated by variability in hydrothermal fluxes, reactive surface area, and groundwater flow paths. To investigate the controls on weathering processes in AVFs, we present data for dissolved and solid weathering products from Mount Aso Caldera, Japan. Aso Caldera is an ideal site for studying how the chemistry of rivers draining an AVF is impacted by high-temperature water/rock interactions, volcanic ash weathering, and varied groundwater flow paths and residence times. Samples were collected over five field seasons from two rivers and their tributaries, cold groundwater springs, and thermal springs. These samples capture the region's temperature and precipitation seasonality. Solid samples of unaltered volcanic rocks, hydrothermally-altered materials, volcanic ash, a soil profile, and suspended and bedload river sediments were also collected. The hydrochemistry of the dissolved phases was analyzed at the University of Hamburg, while the mineralogy and geochemical compositions of solid phases were analyzed at the Natural History Museum of Los Angeles. This work will be discussed in the context of
Houborg, Rasmus
2013-07-01
In terrestrial biosphere models, key biochemical controls on carbon uptake by vegetation canopies are typically assigned fixed literature-based values for broad categories of vegetation types although in reality significant spatial and temporal variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite based retrievals of leaf chlorophyll for estimating leaf photosynthetic capacity and for constraining model simulations of water and carbon fluxes. © 2013 IEEE.
Dudley-Javoroski, S; Petrie, M A; McHenry, C L; Amelon, R E; Saha, P K; Shields, R K
2016-03-01
This study examined the effect of a controlled dose of vibration upon bone density and architecture in people with spinal cord injury (who eventually develop severe osteoporosis). Very sensitive computed tomography (CT) imaging revealed no effect of vibration after 12 months, but other doses of vibration may still be useful to test. The purposes of this report were to determine the effect of a controlled dose of vibratory mechanical input upon individual trabecular bone regions in people with chronic spinal cord injury (SCI) and to examine the longitudinal bone architecture changes in both the acute and chronic state of SCI. Participants with SCI received unilateral vibration of the constrained lower limb segment while sitting in a wheelchair (0.6g, 30 Hz, 20 min, three times weekly). The opposite limb served as a control. Bone mineral density (BMD) and trabecular micro-architecture were measured with high-resolution multi-detector CT. For comparison, one participant was studied from the acute (0.14 year) to the chronic state (2.7 years). Twelve months of vibration training did not yield adaptations of BMD or trabecular micro-architecture for the distal tibia or the distal femur. BMD and trabecular network length continued to decline at several distal femur sub-regions, contrary to previous reports suggesting a "steady state" of bone in chronic SCI. In the participant followed from acute to chronic SCI, BMD and architecture decline varied systematically across different anatomical segments of the tibia and femur. This study supports the conclusion that vibration training, using this study's dose parameters, is not an effective anti-osteoporosis intervention for people with chronic SCI. Using a high-spatial-resolution CT methodology and segmental analysis, we illustrate novel longitudinal changes in bone that occur after spinal cord injury.
Watson, K. A.; Masarik, M. T.; Flores, A. N.
2016-12-01
Mountainous, snow-dominated basins are often referred to as the water towers of the world because they store precipitation in seasonal snowpacks, which gradually melt and provide water supplies to downstream communities. Yet significant uncertainties remain in terms of quantifying the stores and fluxes of water in these regions as well as the associated energy exchanges. Constraining these stores and fluxes is crucial for advancing process understanding and managing these water resources in a changing climate. Remote sensing data are particularly important to these efforts due to the remoteness of these landscapes and high spatial variability in water budget components. We have developed a high resolution regional climate dataset extending from 1986 to the present for the Snake River Basin in the northwestern USA. The Snake River Basin is the largest tributary of the Columbia River by volume and a critically important basin for regional economies and communities. The core of the dataset was developed using a regional climate model, forced by reanalysis data. Specifically, the Weather Research and Forecasting (WRF) model was used to dynamically downscale the North American Regional Reanalysis (NARR) over the region at 3 km horizontal resolution for the period of interest. A suite of satellite remote sensing products provides independent, albeit uncertain, constraints on a number of components of the water and energy budgets for the region across a range of spatial and temporal scales. For example, GRACE data are used to constrain basin-wide terrestrial water storage and MODIS products are used to constrain the spatial and temporal evolution of evapotranspiration and snow cover. The joint use of both models and remote sensing products allows for better understanding of water cycle dynamics and associated hydrometeorologic processes, and identification of limitations in both the remote sensing products and regional climate simulations.
Doyle, Jessica M.; Gleeson, Tom; Manning, Andrew H.; Mayer, K. Ulrich
2015-01-01
Environmental tracers provide information on groundwater age, recharge conditions, and flow processes which can be helpful for evaluating groundwater sustainability and vulnerability. Dissolved noble gas data have proven particularly useful in mountainous terrain because they can be used to determine recharge elevation. However, tracer-derived recharge elevations have not been utilized as calibration targets for numerical groundwater flow models. Herein, we constrain and calibrate a regional groundwater flow model with noble-gas-derived recharge elevations for the first time. Tritium and noble gas tracer results improved the site conceptual model by identifying a previously uncertain contribution of mountain block recharge from the Coast Mountains to an alluvial coastal aquifer in humid southwestern British Columbia. The revised conceptual model was integrated into a three-dimensional numerical groundwater flow model and calibrated to hydraulic head data in addition to recharge elevations estimated from noble gas recharge temperatures. Recharge elevations proved to be imperative for constraining hydraulic conductivity, recharge location, and bedrock geometry, and thus minimizing model nonuniqueness. Results indicate that 45% of recharge to the aquifer is mountain block recharge. A similar match between measured and modeled heads was achieved in a second numerical model that excludes the mountain block (no mountain block recharge), demonstrating that hydraulic head data alone are incapable of quantifying mountain block recharge. This result has significant implications for understanding and managing source water protection in recharge areas, potential effects of climate change, the overall water budget, and ultimately ensuring groundwater sustainability.
DEFF Research Database (Denmark)
Zhao, Yongning; Ye, Lin; Pinson, Pierre
2018-01-01
and forecasting, the original SC-VAR is modified and a Correlation-Constrained SC-VAR (CCSC-VAR) is proposed based on spatial correlation information about wind farms. Our approach is evaluated based on a case study of very-short-term forecasting for 25 wind farms in Denmark. Comparison is performed with a set...... of traditional local methods and spatio-temporal methods. The results obtained show the proposed CCSC-VAR has better overall performance than both the original SC-VAR and other benchmark methods, taking into account all evaluation indicators, including sparsity-control ability, sparsity, accuracy and efficiency......
Constraining Roche-Lobe Overflow Models Using the Hot-Subdwarf Wide Binary Population
Vos, Joris; Vučković, Maja
2017-12-01
One of the important issues regarding the final evolution of stars is the impact of binarity. A rich zoo of peculiar, evolved objects are born from the interaction between the loosely bound envelope of a giant, and the gravitational pull of a companion. However, binary interactions are not understood from first principles, and the theoretical models are subject to many assumptions. It is currently agreed upon that hot subdwarf stars can only be formed through binary interaction, either through common envelope ejection or stable Roche-lobe overflow (RLOF) near the tip of the red giant branch (RGB). These systems are therefore an ideal testing ground for binary interaction models. With our long term study of wide hot subdwarf (sdB) binaries we aim to improve our current understanding of stable RLOF on the RGB by comparing the results of binary population synthesis studies with the observed population. In this article we describe the current model and possible improvements, and which observables can be used to test different parts of the interaction model.
Fielding-Miller, Rebecca; Dunkle, Kristin
2017-12-01
Women who engage in transactional sex are more likely to experience intimate partner violence (IPV) and are at higher risk of HIV. However, women engage in transactional sex for a variety of reasons and the precise mechanism linking transactional sex and IPV is not fully understood. We conducted a behavioural survey with a cross-sectional sample of 401 women attending 1 rural and 1 urban public antenatal clinic in Swaziland between February and June 2014. We used structural equation modelling to identify and measure constrained relationship agency (CRA) as a latent variable, and then tested the hypothesis that CRA plays a significant role in the pathway between IPV and transactional sex. After controlling for CRA, receiving more material goods from a sexual partner was not associated with higher levels of physical or sexual IPV and was protective against emotional IPV. CRA was the single largest predictor of IPV, and more education was associated with decreased levels of constrained relationship agency. Policies and interventions that target transactional sex as a driver of IPV and HIV may be more successful if they instead target the broader social landscape that constrains women's agency and drives the harmful aspects of transactional sex.
Rybizki, Jan; Just, Andreas; Rix, Hans-Walter
2017-09-01
Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/-1.6)% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar
Gholami, A.; Siahkoohi, H. R.
2009-01-01
In this paper, a new approach is introduced to solve ill-posed linear inverse problems in geophysics. Our method combines classical quadratic regularization and data smoothing by imposing constraints on model and data smoothness simultaneously. When imposing a quadratic penalty term in the data space to control smoothness of the data predicted by classical zero-order regularization, the method leads to a direct regularization in standard form, which is simple to implement and ensures that the estimated model is smooth. In addition, by enforcing Tikhonov's predicted data to be sparse in a wavelet domain, the idea leads to an efficient regularization algorithm with two superior properties. First, the algorithm ensures the smoothness of the estimated model while substantially preserving its edges, so it is well suited for recovering piecewise smooth/constant models. Second, parsimony of wavelets on the columns of the forward operator and the existence of a fast wavelet transform algorithm provide an efficient sparse representation of the forward operator matrix. The reduced size of the forward operator makes the solution of large-scale problems straightforward, because during the inversion process only sparse matrices need to be stored, which reduces the memory required. Additionally, all matrix-vector multiplications are carried out in sparse form, reducing CPU time. Applications on both synthetic and real 1-D seismic-velocity estimation experiments illustrate the idea. The performance of the method is compared with that of classical quadratic regularization, total-variation regularization and a two-step, wavelet-based, inversion method.
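As a minimal illustration of the zero-order (quadratic) regularization this abstract builds on — not the authors' wavelet-accelerated algorithm — the following sketch solves a nearly singular 2x2 system with a ridge penalty. All names and numbers are illustrative.

```python
# Zero-order Tikhonov (ridge) regularization sketch for an ill-posed
# 2x2 linear system G m = d; solved via the normal equations
# (G^T G + lam^2 I) m = G^T d, using Cramer's rule.

def ridge_solve(G, d, lam):
    a11 = G[0][0]**2 + G[1][0]**2 + lam**2
    a12 = G[0][0]*G[0][1] + G[1][0]*G[1][1]
    a22 = G[0][1]**2 + G[1][1]**2 + lam**2
    b1 = G[0][0]*d[0] + G[1][0]*d[1]
    b2 = G[0][1]*d[0] + G[1][1]*d[1]
    det = a11*a22 - a12*a12
    return [(b1*a22 - a12*b2)/det, (a11*b2 - a12*b1)/det]

# Nearly singular operator: unregularized inversion amplifies noise,
# while a small lam keeps the estimate bounded and close to m = [1, 1].
G = [[1.0, 1.0], [1.0, 1.001]]
d = [2.0, 2.001]  # consistent with the exact solution m = [1, 1]
m_reg = ridge_solve(G, d, lam=0.1)
print(m_reg)
```

The wavelet-domain sparsity described in the abstract would additionally compress `G`, but the standard-form penalty above is the core of the quadratic part.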
Alhadhrami, Fathiya Mohammed
This study examines the use of rock physics modeling for quantitative interpretation of seismic data in the context of microbial growth and biofilm formation in unconsolidated sediment. The impetus for this research comes from geophysical experiments by Davis et al. (2010) and Kwon and Ajo-Franklin (2012). These studies observed that microbial growth has a small effect on P-wave velocities (VP) but a large effect on seismic amplitudes. Davis et al. (2010) and Kwon and Ajo-Franklin (2012) speculated that the amplitude variations were due to a combination of rock mechanical changes from accumulation of microbial growth related features such as biofilms. A more definite conclusion can be drawn by developing rock physics models that connect rock properties to seismic amplitudes. The primary objective of this work is to provide an explanation for high amplitude attenuation due to biofilm growth. The results suggest that biofilm formation in the Davis et al. (2010) experiment exhibits two growth styles: a loadbearing style where biofilm behaves like an additional mineral grain and a non-loadbearing mode where the biofilm grows into the pore spaces. In the loadbearing mode, the biofilms contribute to the stiffness of the sediments. We refer to this style as "filler." In the non-loadbearing mode, the biofilms contribute only to a change in density of the sediments without affecting their strength. We refer to this style of microbial growth as "mushroom." Both growth styles appear to change permeability more than the moduli or the density. As a result, while the VP velocity remains relatively unchanged, the amplitudes can change significantly depending on biofilm saturation. Interpreting seismic data from biofilm growth in terms of rock physics models provides greater insight into the sediment-fluid interaction. The models in turn can be used to understand microbial enhanced oil recovery and to assist in solving environmental issues such as creating bio
Zhu, Xinjun; Li, Jing; Thomas, John C; Song, Limei; Guo, Qinghua; Shen, Jin
2017-07-01
In particle size measurement using dynamic light scattering (DLS), noise makes the estimation of the particle size distribution (PSD) from the autocorrelation function data unreliable, and a regularization technique is usually required to estimate a reasonable PSD. In this paper, we propose an Lp-norm-residual constrained regularization model for the estimation of the PSD from DLS data based on the Lp norm of the fitting residual. Our model is a generalization of the existing, commonly used L2-norm-residual-based regularization methods such as CONTIN and constrained Tikhonov regularization. The estimation of PSDs by the proposed model, using different Lp norms of the fitting residual for p=1, 2, 10, and ∞, is studied and their performance is determined using simulated and experimental data. Results show that our proposed model with p=1 is less sensitive to noise and improves stability and accuracy in the estimation of PSDs for unimodal and bimodal systems. The model with p=1 is particularly applicable to the noisy or bimodal PSD cases.
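A toy sketch of the Lp-residual idea, reduced to a scalar unknown and an iteratively reweighted least squares (IRLS) approximation rather than the authors' actual CONTIN-style solver; it shows why p=1 is less sensitive to a noisy outlier than p=2.

```python
# Illustrative IRLS sketch: minimize sum|a_i*m - b_i|^p + lam*m^2 for a
# single unknown m. Residuals are reweighted each pass; p=1 downweights
# large residuals, p=2 reduces to ordinary least squares.

def lp_fit(a, b, p, lam=0.0, iters=50, eps=1e-8):
    m = 0.0
    for _ in range(iters):
        w = [max(abs(ai*m - bi), eps)**(p - 2) for ai, bi in zip(a, b)]
        num = sum(wi*ai*bi for wi, ai, bi in zip(w, a, b))
        den = sum(wi*ai*ai for wi, ai in zip(w, a)) + lam
        m = num/den
    return m

a = [1.0]*6
b = [1.0, 1.02, 0.98, 1.01, 0.99, 5.0]   # last point is an outlier
m2 = lp_fit(a, b, p=2)   # L2: dragged toward the outlier (the mean)
m1 = lp_fit(a, b, p=1)   # L1: robust, stays near the cluster at ~1
print(m2, m1)
```

The actual PSD problem is a constrained multi-parameter inversion, but the robustness contrast between the residual norms carries over.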
Lundgren, Paul; Nikkhoo, Mehdi; Samsonov, Sergey V.; Milillo, Pietro; Gil-Cruz, Fernando; Lazo, Jonathan
2017-07-01
Copahue volcano straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10⁶ m³/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano-tectonic seismicity with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.
Quadratic Term Structure Models in Discrete Time
Marco Realdon
2006-01-01
This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...
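For intuition, here is a one-factor discrete-time sketch of the kind of recursion described; the notation is hypothetical and far simpler than the paper's multi-factor setting. With short rate r(x) = ψx² + βx + α and a zero-mean Gaussian AR(1) state x' = φx + σz, bond prices stay exponential-quadratic, P_n(x) = exp(-(A_n x² + B_n x + C_n)), and the coefficients follow from the Gaussian integral E[exp(-a z² - b z)] = exp(b²/(2(1+2a)))/√(1+2a).

```python
import math

def qtsm_bond_coeffs(psi, beta, alpha, phi, sigma, N):
    """Coefficient recursion for P_n(x) = exp(-(A_n x^2 + B_n x + C_n))
    in a toy one-factor discrete-time quadratic term structure model."""
    A, B, C = 0.0, 0.0, 0.0          # P_0 = 1 (maturity)
    coeffs = []
    for _ in range(N):
        k = 1.0 + 2.0*A*sigma**2     # must stay positive for the integral
        A, B, C = (psi + A*phi**2/k,
                   beta + B*phi/k,
                   alpha + C + 0.5*math.log(k) - sigma**2*B**2/(2.0*k))
        coeffs.append((A, B, C))
    return coeffs

coeffs = qtsm_bond_coeffs(psi=0.01, beta=0.0, alpha=0.03,
                          phi=0.9, sigma=0.1, N=10)
# One-period bond just discounts at today's short rate: P_1(x) = exp(-r(x)),
# so the first coefficients equal (psi, beta, alpha).
print(coeffs[0])
```

This is the "recursive closed form" flavor of the result; the paper's multi-factor, possibly time-dependent version replaces the scalars with matrices.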
Directory of Open Access Journals (Sweden)
Ingrid Paine
2016-04-01
Mathematics is often used to model biological systems. In mammary gland development, mathematical modeling has been limited to acinar and branching morphogenesis and breast cancer, without reference to normal duct formation. We present a model of ductal elongation that exploits the geometrically-constrained shape of the terminal end bud (TEB), the growing tip of the duct, and incorporates morphometrics and region-specific proliferation and apoptosis rates. Iterative model refinement and behavior analysis, compared with biological data, indicated that the traditional metric of nipple-to-ductal-front distance, or percent fat pad filled, used to evaluate ductal elongation rate can be misleading, as it disregards branching events that can reduce its magnitude. Further, model-driven investigations of the fates of specific TEB cell types confirmed migration of cap cells into the body cell layer, but showed their subsequent preferential elimination by apoptosis, thus minimizing their contribution to the luminal lineage and the mature duct.
Niri, Mohammad Emami; Lumley, David E.
2017-10-01
Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise is included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.
De Martino, Daniele
2017-12-01
In this work, maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control of the average growth (optimization) and its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli, where it quantifies the metabolic activity of slow-growing phenotypes and provides a quantitative map with metabolic fluxes, opening the possibility of detecting coexistence from flux data. A preliminary analysis of data for E. coli cultures in standard conditions shows degeneracy for the inferred parameters that extends into the coexistence region.
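A schematic of the maximum-entropy construction, reduced to a first-moment constraint on a discrete set of candidate flux states (the work also constrains the second moment, which adds a second multiplier); the Lagrange multiplier is found by bisection. All values here are illustrative, not from the study.

```python
import math

# Maximum-entropy weights over states with growth rates g_i:
# p_i proportional to exp(lam * g_i), with lam chosen so that the
# ensemble mean growth hits a prescribed target.

def maxent_weights(g, lam):
    w = [math.exp(lam*gi) for gi in g]
    s = sum(w)
    return [wi/s for wi in w]

def solve_multiplier(g, target_mean, lo=-50.0, hi=50.0):
    # The mean is monotonically increasing in lam, so bisection works.
    for _ in range(200):
        mid = 0.5*(lo + hi)
        p = maxent_weights(g, mid)
        m = sum(pi*gi for pi, gi in zip(p, g))
        if m < target_mean:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

g = [0.1*i for i in range(11)]          # growth rates of candidate states
lam = solve_multiplier(g, target_mean=0.8)
p = maxent_weights(g, lam)
mean = sum(pi*gi for pi, gi in zip(p, g))
print(round(mean, 6))  # → 0.8
```

Constraining the variance as well (a term lam2*g_i² in the exponent) is what allows the bimodal fast/slow coexistence described in the abstract.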
Murine model of long term obstructive jaundice
Aoki, Hiroaki; Aoki, Masayo; Yang, Jing; Katsuta, Eriko; Mukhopadhyay, Partha; Ramanathan, Rajesh; Woelfel, Ingrid A.; Wang, Xuan; Spiegel, Sarah; Zhou, Huiping; Takabe, Kazuaki
2016-01-01
Background With the recent emergence of conjugated bile acids as signaling molecules in cancer, a murine model of obstructive jaundice by cholestasis with long-term survival is needed. Here, we investigated the characteristics of 3 murine models of obstructive jaundice. Methods C57BL/6J mice were used for total ligation of the common bile duct (tCL), partial common bile duct ligation (pCL), and ligation of left and median hepatic bile duct with gallbladder removal (LMHL) models. Survival was assessed by Kaplan-Meier method. Fibrotic change was determined by Masson-Trichrome staining and Collagen expression. Results 70% (7/10) of tCL mice died by Day 7, whereas the majority (67%, 10/15) of pCL mice survived with loss of jaundice. 19% (3/16) of LMHL mice died; however, jaundice continued beyond Day 14, with survival of more than a month. Compensatory enlargement of the right lobe was observed in both pCL and LMHL models. The pCL model demonstrated acute inflammation due to obstructive jaundice 3 days after ligation, but jaundice rapidly decreased by Day 7. The LMHL group developed portal hypertension as well as severe fibrosis by Day 14 in addition to prolonged jaundice. Conclusion The standard tCL model is too unstable with high mortality for long-term studies. pCL may be an appropriate model for acute inflammation with obstructive jaundice, but long-term survivors are no longer jaundiced. The LMHL model was identified to be the most feasible model to study the effect of long-term obstructive jaundice. PMID:27916350
Pearce, A R; Rastetter, E B; Kwiatkowski, B L; Bowden, W B; Mack, M C; Jiang, Y
2015-07-01
We calibrated the Multiple Element Limitation (MEL) model to Alaskan arctic tundra to simulate recovery of thermal erosion features (TEFs) caused by permafrost thaw and mass wasting. TEFs could significantly alter regional carbon (C) and nutrient budgets because permafrost soils contain large stocks of soil organic matter (SOM) and TEFs are expected to become more frequent as the climate warms. We simulated recovery following TEF stabilization and did not address initial, short-term losses of C and nutrients during TEF formation. To capture the variability among and within TEFs, we modeled a range of post-stabilization conditions by varying the initial size of SOM stocks and nutrient supply rates. Simulations indicate that nitrogen (N) losses after the TEF stabilizes are small, but phosphorus (P) losses continue. Vegetation biomass recovered 90% of its undisturbed C, N, and P stocks in 100 years using nutrients mineralized from SOM. Because of low litter inputs but continued decomposition, younger SOM continued to be lost for 10 years after the TEF began to recover, but recovered to about 84% of its undisturbed amount in 100 years. The older recalcitrant SOM in mineral soil continued to be lost throughout the 100-year simulation. Simulations suggest that biomass recovery depended on the amount of SOM remaining after disturbance. Recovery was initially limited by the photosynthetic capacity of vegetation but became co-limited by N and P once a plant canopy developed. Biomass and SOM recovery was enhanced by increasing nutrient supplies, but the magnitude, source, and controls on these supplies are poorly understood. Faster mineralization of nutrients from SOM (e.g., by warming) enhanced vegetation recovery but delayed recovery of SOM. Taken together, these results suggest that although vegetation and surface SOM on TEFs recovered quickly (25 and 100 years, respectively), the recovery of deep, mineral soil SOM took centuries and represented a major
Directory of Open Access Journals (Sweden)
A. Frepoli
1997-06-01
We computed one-dimensional (1D) velocity models and station corrections for Central and Southern Italy, inverting re-picked P-wave arrival times recorded by the Istituto Nazionale di Geofisica seismic network. The re-picked data yield resolved P-wave velocity results and proved to be more suited than bulletin data for detailed tomographic studies. Using the improved velocity models, we relocated the most significant earthquakes which occurred in the Apennines in the past 7 years, achieving constrained hypocentral determinations for events within most of the Apenninic belt. The interpretation of the obtained 1D velocity models allows us to infer interesting features of the deep structure of the Apennines. Smooth velocity gradients with depth and low P-wave velocities are observed beneath the Apennines. We believe that our results are effective in constraining hypocentral locations in Italy and may represent a first step towards more detailed seismotectonic analyses.
International Nuclear Information System (INIS)
Ayres, Fabio J.; Rangayyan, Rangaraj M.
2007-01-01
Objective One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated for the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer.
Constraining the Physics of AM Canum Venaticorum Systems with the Accretion Disk Instability Model
Cannizzo, John K.; Nelemans, Gijs
2015-01-01
Recent work by Levitan et al. has expanded the long-term photometric database for AM CVn stars. In particular, their outburst properties are well correlated with orbital period and allow constraints to be placed on the secular mass transfer rate between secondary and primary if one adopts the disk instability model for the outbursts. We use the observed range of outbursting behavior for AM CVn systems as a function of orbital period to place a constraint on mass transfer rate versus orbital period. We infer a rate of approximately 5 × 10⁻⁹ M⊙ yr⁻¹ (P_orb/1000 s)^(-5.2). We show that the functional form so obtained is consistent with the recurrence time-orbital period relation found by Levitan et al. using a simple theory for the recurrence time. Also, we predict that their steep dependence of outburst duration on orbital period will flatten considerably once the longer orbital period systems have more complete observations.
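The quoted scaling can be wrapped in a small helper (coefficients are taken directly from the abstract; the 1000 s normalization is the stated reference period):

```python
def mdot(p_orb_s):
    """Approximate secular mass transfer rate (Msun/yr) as a function of
    orbital period (s), per the inferred relation 5e-9 * (P/1000 s)^-5.2."""
    return 5e-9 * (p_orb_s/1000.0)**(-5.2)

print(mdot(1000.0))   # → 5e-09, the rate at the 1000 s reference period
print(mdot(2000.0))   # steeply lower at longer periods
```

The steep -5.2 exponent is what drives the rapid drop in accretion rate, and hence outburst behavior, toward longer-period systems.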
Exploring Constrained Creative Communication
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk
2017-01-01
Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do...... these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface- and technology problem, we examine the creative communication on an open-ended task in a highly constrained setting, a design game. Via an experiment the relation...... between communicative constraints and participants’ perception of dialogue and creativity is examined. Four sessions with students preparing to form semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except......
Yang, E. G.; Kort, E. A.; Ware, J.; Ye, X.; Lauvaux, T.; Wu, D.; Lin, J. C.; Oda, T.
2017-12-01
Anthropogenic carbon dioxide (CO2) emissions are greatly perturbing the Earth's carbon cycle. Rising emissions from the developing world are increasing uncertainties in global CO2 emissions. With the rapid urbanization of developing regions, methods of constraining urban CO2 emissions in these areas can address critical uncertainties in the global carbon budget. In this study, we work toward constraining urban CO2 emissions in the Middle East by comparing top-down observations and bottom-up simulations of total column CO2 (XCO2) in four cities (Riyadh, Cairo, Baghdad, and Doha), both separately and in aggregate. This comparison involves quantifying the relationship for all available data in the period of September 2014 until March 2016 between observations of XCO2 from the Orbiting Carbon Observatory-2 (OCO-2) satellite and simulations of XCO2 using the Stochastic Time-Inverted Lagrangian Transport (STILT) model coupled with Global Data Assimilation System (GDAS) reanalysis products and multiple CO2 emissions inventories. We discuss the extent to which our observation/model framework can distinguish between the different emissions representations and determine optimized emissions estimates for this domain. We also highlight the implications of our comparisons on the fidelity of the bottom-up inventories used, and how these implications may inform the use of OCO-2 data for urban regions around the world.
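One simple way such a model-observation comparison can yield an "optimized" emissions estimate — a hypothetical sketch, not necessarily the study's actual scheme — is a single least-squares scale factor on the simulated XCO2 enhancements:

```python
# Fit alpha minimizing sum((obs_i - alpha*sim_i)^2), which has the
# closed form alpha = <obs, sim> / <sim, sim>. alpha > 1 would suggest
# the prior (bottom-up) inventory underestimates emissions.

def emission_scale_factor(obs, sim):
    num = sum(o*s for o, s in zip(obs, sim))
    den = sum(s*s for s in sim)
    return num/den

obs = [1.2, 0.8, 1.5, 1.1]   # observed XCO2 enhancements (ppm), illustrative
sim = [1.0, 0.7, 1.2, 1.0]   # simulated with a prior inventory, illustrative
alpha = emission_scale_factor(obs, sim)
print(alpha)
```

A real inversion would weight by observation and transport-model uncertainty and solve per-city or per-sector, but the scale-factor view captures how OCO-2 soundings constrain inventory totals.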
DETAILED ABUNDANCES OF THE SOLAR TWINS 16 CYGNI A AND B: CONSTRAINING PLANET FORMATION MODELS
International Nuclear Information System (INIS)
Schuler, Simon C.; Cunha, Katia; Smith, Verne V.; Ghezzi, Luan; King, Jeremy R.; Deliyannis, Constantine P.; Boesgaard, Ann Merchant
2011-01-01
Results of a detailed abundance analysis of the solar twins 16 Cyg A and 16 Cyg B based on high-resolution, high signal-to-noise ratio echelle spectroscopy are presented. 16 Cyg B is known to host a giant planet while no planets have yet been detected around 16 Cyg A. Stellar parameters are derived directly from our high-quality spectra, and the stars are found to be physically similar, with ΔT_eff = +43 K, Δlog g = -0.02 dex, and Δξ = +0.10 km s⁻¹ (in the sense of A - B), consistent with previous findings. Abundances of 15 elements are derived and are found to be indistinguishable between the two stars. The abundances of each element differ by ≤0.026 dex, and the mean difference is +0.003 ± 0.015 (σ) dex. Aside from Li, which has been previously shown to be depleted by a factor of at least 4.5 in 16 Cyg B relative to 16 Cyg A, the two stars appear to be chemically identical. The abundances of each star demonstrate a positive correlation with the condensation temperature of the elements (T_c); the slopes of the trends are also indistinguishable. In accordance with recent suggestions, the positive slopes of the [m/H]-T_c relations may imply that terrestrial planets have not formed around either 16 Cyg A or 16 Cyg B. The physical characteristics of the 16 Cyg system are discussed in terms of planet formation models, and plausible mechanisms that can account for the lack of detected planets around 16 Cyg A, the disparate Li abundances of 16 Cyg A and B, and the eccentricity of the planet 16 Cyg B b are suggested.
Murphy, E. M.; Schramke, J. A.
1998-11-01
Changes in geochemistry and stable isotopes along a well-established groundwater flow path were used to estimate in situ microbial respiration rates in the Middendorf aquifer in the southeastern United States. Respiration rates were determined for individual terminal electron acceptors including O₂, MnO₂, Fe³⁺, and SO₄²⁻. The extent of biotic reactions was constrained by the fractionation of stable isotopes of carbon and sulfur. Sulfur isotopes and the presence of sulfur-oxidizing microorganisms indicated that sulfate is produced through the oxidation of reduced sulfur species in the aquifer and not by the dissolution of gypsum, as previously reported. The respiration rates varied along the flow path as the groundwater transitioned from primarily oxic to anoxic conditions. Iron-reducing microorganisms were the largest contributors to the oxidation of organic matter along the portion of the groundwater flow path investigated in this study. The transition zone between oxic and anoxic groundwater contained a wide range of terminal electron acceptors and showed the greatest diversity and numbers of culturable microorganisms and the highest respiration rates. A comparison of respiration rates measured from core samples and pumped groundwater suggests that variability in respiration rates may often reflect the measurement scales, both in the sample volume and the time-frame over which the respiration measurement is averaged. Chemical heterogeneity may create a wide range of respiration rates when the scale of the observation is below the scale of the heterogeneity.
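The underlying rate estimate can be sketched as a simple mass balance along the flow path — illustrative numbers, not the study's data:

```python
# Back-of-envelope in situ respiration rate from the change in a terminal
# electron acceptor between two wells: rate = (C_upgradient - C_downgradient)
# divided by the groundwater travel time between them.

def respiration_rate(c_up, c_down, travel_time_yr):
    """Mean consumption rate (mmol/L/yr) between two points on the flow path."""
    return (c_up - c_down)/travel_time_yr

# Hypothetical dissolved O2 drop of 0.20 mmol/L over a 400-year travel time.
rate_o2 = respiration_rate(c_up=0.25, c_down=0.05, travel_time_yr=400.0)
print(rate_o2)
```

The study's isotope fractionation constraints serve to separate such concentration changes caused by respiration from those caused by abiotic reactions like mineral dissolution.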
Model documentation report: Short-Term Hydroelectric Generation Model
International Nuclear Information System (INIS)
1993-08-01
The purpose of this report is to define the objectives of the Short-Term Hydroelectric Generation Model (STHGM), describe its basic approach, and provide details on the model structure. This report is intended as a reference document for model analysts, users, and the general public. Documentation of the model is in accordance with the Energy Information Administration's (EIA) legal obligation to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). The STHGM performs a short-term (18- to 27-month) forecast of hydroelectric generation in the United States using an autoregressive integrated moving average (ARIMA) time series model with precipitation as an explanatory variable. The model results are used as input for the Short-Term Energy Outlook.
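The ARIMA-with-precipitation idea can be illustrated, in heavily simplified form, by an ARX regression fit with ordinary least squares. Everything below (coefficients, synthetic data, variable names) is illustrative, not the STHGM's actual specification:

```python
import numpy as np

def fit_arx(y, x):
    """Fit y[t] = a*y[t-1] + b*x[t] + c by ordinary least squares:
    a toy stand-in for an ARIMA model with an exogenous precipitation term."""
    A = np.column_stack([y[:-1], x[1:], np.ones(len(y) - 1)])
    coef, *_ = np.linalg.lstsq(A, y[1:], rcond=None)
    return coef  # (a, b, c)

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 1.0, size=200)            # synthetic monthly precipitation
gen = np.empty(200)
gen[0] = 10.0
for t in range(1, 200):                           # "true" dynamics: a=0.6, b=1.5, c=2.0
    gen[t] = 0.6 * gen[t-1] + 1.5 * precip[t] + 2.0 + rng.normal(0, 0.1)

a, b, c = fit_arx(gen, precip)
```

A real ARIMA implementation would difference the series and model the moving-average error structure as well; the sketch shows only how the autoregressive and explanatory terms enter.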
Directory of Open Access Journals (Sweden)
H. Kettle
2009-08-01
Biogeochemical models of the ocean carbon cycle are frequently validated by, or tuned to, satellite chlorophyll data. However, ocean carbon cycle models are required to accurately model the movement of carbon, not chlorophyll, and due to the high variability of the carbon to chlorophyll ratio in phytoplankton, chlorophyll is not a robust proxy for carbon. Using inherent optical property (IOP) inversion algorithms, it is now possible to also derive the amount of light backscattered by the upper ocean (b_{b}), which is related to the amount of particulate organic carbon (POC) present. Using empirical relationships between POC and b_{b}, a 1-D marine biogeochemical model is used to simulate b_{b} at 490 nm, thereby allowing the model to be compared with both remotely-sensed chlorophyll and b_{b} data. Here I investigate the possibility of using b_{b} in conjunction with chlorophyll data to help constrain the parameters in a simple 1-D NPZD model. The parameters of the biogeochemical model are tuned with a genetic algorithm, so that the model is fitted to either chlorophyll data or to both chlorophyll and b_{b} data at three sites in the Atlantic with very different characteristics. Several IOP algorithms are available for estimating b_{b}, three of which are used here. The effect of the different b_{b} datasets on the behaviour of the tuned model is examined to ascertain whether the uncertainty in b_{b} is significant. The results show that the addition of b_{b} data does not consistently alter the same model parameters at each site and in fact can lead to some parameters becoming less well constrained, implying there is still much work to be done on the mechanisms relating chlorophyll to POC and b_{b} within the model. However, this study does indicate that
International Nuclear Information System (INIS)
Beretta, Gian Paolo
2006-01-01
We discuss a nonlinear model for relaxation by energy redistribution within an isolated, closed system composed of noninteracting identical particles with energy levels e_i, i = 1, 2, ..., N. The time-dependent occupation probabilities p_i(t) are assumed to obey the nonlinear rate equations τ dp_i/dt = -p_i ln p_i - α(t)p_i - β(t)e_i p_i, where α(t) and β(t) are functionals of the p_i(t)'s that maintain invariant the mean energy E = Σ_{i=1}^{N} e_i p_i(t) and the normalization condition 1 = Σ_{i=1}^{N} p_i(t). The entropy S(t) = -k_B Σ_{i=1}^{N} p_i(t) ln p_i(t) is a nondecreasing function of time until the initially nonzero occupation probabilities reach a Boltzmann-like canonical distribution over the occupied energy eigenstates. Initially zero occupation probabilities, instead, remain zero at all times. The solutions p_i(t) of the rate equations are unique and well defined for arbitrary initial conditions p_i(0) and for all times. The existence and uniqueness both forward and backward in time allows the reconstruction of the ancestral or primordial lowest entropy state. By casting the rate equations in terms not of the p_i's but of their positive square roots √(p_i), they unfold from the assumption that time evolution is at all times along the local direction of steepest entropy ascent or, equivalently, of maximal entropy generation. These rate equations have the same mathematical structure and basic features as the nonlinear dynamical equation proposed in a series of papers ending with G. P. Beretta, Found. Phys. 17, 365 (1987) and recently rediscovered by S. Gheorghiu-Svirschevski [Phys. Rev. A 63, 022105 (2001); 63, 054102 (2001)]. Numerical results illustrate the features of the dynamics and the differences from the rate equations recently considered for the same problem by M. Lemanska and Z. Jaeger [Physica D 170, 72 (2002)]. We also interpret the functionals k_B α(t) and k_B β(t) as nonequilibrium generalizations of the thermodynamic-equilibrium Massieu
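A minimal numerical sketch of these rate equations, with τ absorbed into the time step: α(t) and β(t) are obtained at each step by solving the 2×2 linear system that enforces the two invariants, so normalization and mean energy are conserved exactly while the entropy climbs:

```python
import numpy as np

def sea_step(p, e, dt):
    """One explicit Euler step of tau*dp_i/dt = -p_i ln p_i - alpha p_i - beta e_i p_i
    (tau absorbed into dt). alpha and beta solve the 2x2 system that makes
    sum(dp) = 0 and sum(e*dp) = 0, preserving normalization and mean energy."""
    plnp = p * np.log(p)
    s1, se = -plnp.sum(), -(e * plnp).sum()
    E, m2 = (e * p).sum(), (e**2 * p).sum()
    alpha, beta = np.linalg.solve([[1.0, E], [E, m2]], [s1, se])
    dp = -plnp - alpha * p - beta * e * p
    return p + dt * dp

e = np.array([0.0, 1.0, 2.0, 3.0])       # illustrative energy levels
p = np.array([0.4, 0.3, 0.2, 0.1])       # arbitrary initial occupation probabilities
E0 = (e * p).sum()
entropy = [-(p * np.log(p)).sum()]
for _ in range(2000):
    p = sea_step(p, e, 1e-3)
    entropy.append(-(p * np.log(p)).sum())
```

With these levels and E0 = 1 the canonical end state is the uniform distribution, and the entropy trace is monotonically nondecreasing, illustrating the steepest-entropy-ascent property stated in the abstract.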
Evaluating transit operator efficiency: An enhanced DEA model with constrained fuzzy-AHP cones
Xin Li; Yue Liu; Yaojun Wang; Zhigang Gao
2016-01-01
This study addresses efforts to combine the Analytic Hierarchy Process (AHP) with Data Envelopment Analysis (DEA) to deliver a robust enhanced DEA model for transit operator efficiency assessment. The proposed model is designed to better capture inherent preference information over input and output indicators by adding constraint cones to the conventional DEA model. A revised fuzzy-AHP model is employed to generate cones, where the proposed model features the integration of the fuzzy logic with...
Directory of Open Access Journals (Sweden)
Zongsheng Zheng
2015-01-01
Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes. CA models focusing on the neighbor effect often fail to account for the influence of environmental factors. This paper proposes a CCA model that enhances the CA model by integrating constraint factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in the Chongming Dongtan wetland, China. Meanwhile, a positive feedback loop between vegetation and sedimentation is also considered in the CCA model by altering the tidal accretion rate in different vegetation communities. After being validated and calibrated, the CCA model is more accurate than a CA model that accounts only for the neighbor effect: by overlaying the remote sensing classification on the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulations, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical idea and method for research on salt marsh species expansion and control strategies.
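A toy version of such a constrained cellular automaton might look as follows. The spread probability and the suitability map (standing in for the tidal-elevation and vegetation constraint factors) are invented for illustration, not taken from the paper:

```python
import numpy as np

def cca_step(occ, suitability, p_spread, rng):
    """One step of a constrained CA: a cell becomes occupied with probability
    p_spread * (fraction of occupied 8-neighbors) * local suitability.
    Suitability = 0 encodes an environmental constraint (e.g. too low an elevation)."""
    n = np.zeros(occ.shape, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                n += np.roll(np.roll(occ, dr, axis=0), dc, axis=1)
    p = p_spread * (n / 8.0) * suitability
    return occ | (rng.random(occ.shape) < p)

rng = np.random.default_rng(1)
occ = np.zeros((50, 50), dtype=bool)
occ[25, 25] = True                         # a single founding patch
suit = np.ones((50, 50))
suit[:, :10] = 0.0                         # e.g. below the tidal-elevation threshold
for _ in range(40):
    occ = cca_step(occ, suit, 0.9, rng)
```

The patch expands by neighbor effect but never invades the zero-suitability strip, which is the qualitative behavior the constraint factors are meant to impose. (np.roll wraps at the edges; a real model would handle boundaries explicitly.)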
CONSTRAINING THE GRB-MAGNETAR MODEL BY MEANS OF THE GALACTIC PULSAR POPULATION
Energy Technology Data Exchange (ETDEWEB)
Rea, N. [Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam (Netherlands); Gullón, M.; Pons, J. A.; Miralles, J. A. [Departament de Fisica Aplicada, Universitat d’Alacant, Ap. Correus 99, E-03080 Alacant (Spain); Perna, R. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Dainotti, M. G. [Physics Department, Stanford University, Via Pueblo Mall 382, Stanford, CA (United States); Torres, D. F. [Instituto de Ciencias de l’Espacio (ICE, CSIC-IEEC), Campus UAB, Carrer Can Magrans s/n, E-08193 Barcelona (Spain)
2015-11-10
A large fraction of Gamma-ray bursts (GRBs) displays an X-ray plateau phase within <10^5 s from the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GRB-magnetar scenario. We re-analyze the X-ray plateaus of all Swift GRBs with known redshift, between 2005 January and 2014 August. From the derived initial magnetic field distribution for the possible magnetars left behind by the GRBs, we study the evolution and properties of a simulated GRB-magnetar population using numerical simulations of magnetic field evolution, coupled with Monte Carlo simulations of Pulsar Population Synthesis in our Galaxy. We find that if the GRB X-ray plateaus are powered by the rotational energy of a newly formed magnetar, the current observational properties of the Galactic magnetar population are not compatible with being formed within the GRB scenario (regardless of the GRB type or rate at z = 0). Direct consequences would be that we should allow the existence of magnetars and “super-magnetars” having different progenitors, and that Type Ib/c SNe related to Long GRBs form systematically neutron stars with higher initial magnetic fields. We put an upper limit of ≤16 “super-magnetars” formed by a GRB in our Galaxy in the past Myr (at 99% c.l.). This limit is somewhat smaller than what is roughly expected from Long GRB rates, although the very large uncertainties do not allow us to draw strong conclusions in this respect.
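The plateau-from-spin-down picture rests on vacuum-dipole braking, which can be sketched directly. The field strength, spin period, and neutron-star parameters below are round illustrative values (CGS units), not fitted to any burst:

```python
import numpy as np

# Vacuum-dipole spin-down of a newborn magnetar: the candidate engine for
# GRB X-ray plateaus. All numbers are illustrative round values in CGS.
c, I, R = 3e10, 1e45, 1e6          # light speed, moment of inertia, NS radius
B, P0 = 1e15, 1e-3                 # dipole field [G], initial spin period [s]

omega0 = 2 * np.pi / P0
L0 = B**2 * R**6 * omega0**4 / (6 * c**3)       # initial spin-down luminosity [erg/s]
tau = 3 * c**3 * I / (B**2 * R**6 * omega0**2)  # spin-down timescale [s]

def luminosity(t):
    """L(t) = L0 / (1 + t/tau)^2: flat plateau for t << tau, t^-2 decay after."""
    return L0 / (1 + t / tau)**2
```

By construction L0*tau equals the initial rotational energy (1/2) I omega0^2, so the plateau luminosity and duration together encode B and P0, which is what allows the paper to map observed plateaus onto an initial magnetic field distribution.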
Sparse model selection via integral terms
Schaeffer, Hayden; McCalla, Scott G.
2017-08-01
Model selection and parameter estimation are important for the effective integration of experimental data, scientific theory, and precise simulations. In this work, we develop a learning approach for the selection and identification of a dynamical system directly from noisy data. The learning is performed by extracting a small subset of important features from an overdetermined set of possible features using a nonconvex sparse regression model. The sparse regression model is constructed to fit the noisy data to the trajectory of the dynamical system while using the smallest number of active terms. Computational experiments detail the model's stability, robustness to noise, and recovery accuracy. Examples include nonlinear equations, population dynamics, chaotic systems, and fast-slow systems.
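One common concrete realization of sparse regression over a feature library is sequentially thresholded least squares; the sketch below is a generic version of that idea (not necessarily the authors' exact nonconvex formulation), recovering a linear system from noisy derivative data:

```python
import numpy as np

def stlsq(Theta, dXdt, lam=0.1, iters=10):
    """Sequentially thresholded least squares: repeatedly fit, zero out
    coefficients below lam, and refit on the surviving (active) terms."""
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(iters):
        small = np.abs(Xi) < lam
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)
    return Xi

# Recover dx/dt = -2x + 3y from noisy samples using a cubic feature library.
rng = np.random.default_rng(2)
x, y = rng.normal(size=400), rng.normal(size=400)
dxdt = -2 * x + 3 * y + rng.normal(0, 0.01, size=400)
Theta = np.column_stack([np.ones(400), x, y, x * y, x**2, y**2, x**3, y**3])
Xi = stlsq(Theta, dxdt[:, None], lam=0.2)
```

Only the two active terms survive thresholding; the other six library coefficients are driven to zero, which is the "smallest number of active terms" property the abstract describes.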
Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.
2014-01-01
A method for transitioning linear time invariant (LTI) models in time varying simulation is proposed that utilizes both quadratically constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to the simulation of the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets that are propagated throughout time. The time invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents a problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions as well as maintaining elastic motion from the prior state.
Minimal constrained supergravity
Directory of Open Access Journals (Sweden)
N. Cribiori
2017-01-01
We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.
2013-09-30
picture of ENSO-driven autoregressive models for North Pacific SST variability, providing evidence that intermittent processes, such as variability of...intermittent aspects (i) and (ii) are achieved by developing a simple stochastic parameterization for the unresolved details of synoptic -scale...stochastic parameterization of synoptic scale activity to build a stochastic skeleton model for the MJO; this is the first low order model of the MJO which
A comparison of flood extent modelling approaches through constraining uncertainties on gauge data
Directory of Open Access Journals (Sweden)
M. G. F. Werner
2004-01-01
A comparison is made of 1D, 2D and integrated 1D-2D hydraulic models in predicting flood stages in a 17 km reach of the River Saar in Germany. The models perform comparably when calibrated against limited data available from a single gauge in the reach for three low to medium flood events. In validation against a larger event than those used in calibration, extrapolation with the 1D and particularly the integrated 1D-2D model is reliable, if uncertain, while the 2D model is unreliable. The difference stems from the way in which the models deal with flow in the main channel and in the floodplain and with turbulent momentum interchange between the two domains. The importance of using spatial calibration data for testing models giving spatial predictions is shown. Even simple binary (eye-witness) observations on the presence or absence of flooding can be very valuable in establishing a reliable model structure to predict flood extent. Keywords: floods, hydraulic modelling, model calibration, uncertainty analysis
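Comparisons of predicted and observed flood extent are commonly summarized with a binary skill score over wet/dry cells; a minimal sketch of one widely used measure (the paper's specific objective function is not reproduced here):

```python
import numpy as np

def flood_skill(model, obs):
    """Binary flood-extent skill F = A / (A + B + C), where A = cells wet in
    both maps (hits), B = wet only in the model (false alarms), and
    C = wet only in the observations (misses). F = 1 is a perfect match."""
    model, obs = model.astype(bool), obs.astype(bool)
    a = np.sum(model & obs)
    b = np.sum(model & ~obs)
    c = np.sum(~model & obs)
    return a / (a + b + c)

obs = np.zeros((10, 10), dtype=bool); obs[2:8, 2:8] = True   # observed inundation
mod = np.zeros((10, 10), dtype=bool); mod[3:9, 3:9] = True   # shifted model prediction
score = flood_skill(mod, obs)
```

Because the score is computed cell by cell, it exploits exactly the kind of spatial (even eye-witness, presence/absence) information the abstract argues is valuable for discriminating between model structures.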
Shafii, Mahyar; Basu, Nandita; Craig, James R.; Schiff, Sherry L.; Van Cappellen, Philippe
2017-04-01
Hydrologic models are often tasked with replicating historical hydrographs but may do so without accurately reproducing the internal hydrological functioning of the watershed, including the flow partitioning, which is critical for predicting solute movement through the catchment. Here we propose a novel partitioning-focused calibration technique that utilizes flow-partitioning coefficients developed based on the pioneering work of L'vovich (1979). Our hypothesis is that inclusion of the L'vovich partitioning relations in calibration increases model consistency and parameter identifiability and leads to superior model performance with respect to flow partitioning than using traditional hydrological signatures (e.g., flow duration curve indices) alone. The L'vovich approach partitions the annual precipitation into four components (quick flow, soil wetting, slow flow, and evapotranspiration) and has been shown to work across a range of climatic and landscape settings. A new diagnostic multicriteria model calibration methodology is proposed that first quantifies four calibration measures for watershed functions based on the L'vovich theory, and then utilizes them as calibration criteria. The proposed approach is compared with a traditional hydrologic signature-based calibration for two conceptual bucket models. Results reveal that the proposed approach not only improves flow partitioning in the model compared to signature-based calibration but is also capable of diagnosing flow-partitioning inaccuracy and suggesting relevant model improvements. Furthermore, the proposed partitioning-based calibration approach is shown to increase parameter identifiability. This model calibration approach can be readily applied to other models.
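The four-way L'vovich partition described above can be written down directly. The annual values below are illustrative, and the two ratios stand in for the partitioning coefficients used as calibration criteria:

```python
def lvovich_partition(P, S, U):
    """Annual L'vovich water-balance partitioning: precipitation P splits into
    quick flow S and soil wetting W = P - S; wetting splits into slow flow U
    and evapotranspiration E = W - U. Returns the derived components and the
    two partitioning coefficients (all fluxes in consistent units, e.g. mm/yr)."""
    W = P - S
    E = W - U
    return {"W": W, "E": E,
            "quickflow_ratio": S / P,   # fraction of precipitation leaving quickly
            "slowflow_ratio": U / W}    # fraction of wetting that becomes baseflow

out = lvovich_partition(P=1000.0, S=200.0, U=150.0)   # illustrative annual totals
```

Constraining a model to reproduce these ratios, rather than only the hydrograph, is what the proposed calibration adds over traditional signature-based fitting.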
Gray, William G; Miller, Cass T
2010-12-01
This work is the eighth in a series that develops the fundamental aspects of the thermodynamically constrained averaging theory (TCAT) that allows for a systematic increase in the scale at which multiphase transport phenomena are modeled in porous medium systems. In these systems, the explicit locations of interfaces between phases and common curves, where three or more interfaces meet, are not considered at scales above the microscale. Rather, the densities of these quantities arise as areas per volume or length per volume. Modeling of the dynamics of these measures is an important challenge for robust models of flow and transport phenomena in porous medium systems, as the extent of these regions can have important implications for mass, momentum, and energy transport between and among phases, and formulation of a capillary pressure relation with minimal hysteresis. These densities do not exist at the microscale, where the interfaces and common curves correspond to particular locations. Therefore, it is necessary for a well-developed macroscale theory to provide evolution equations that describe the dynamics of interface and common curve densities. Here we point out the challenges and pitfalls in producing such evolution equations, develop a set of such equations based on averaging theorems, and identify the terms that require particular attention in experimental and computational efforts to parameterize the equations. We use the evolution equations developed to specify a closed two-fluid-phase flow model.
Induced (N,0) supergravity as a constrained Osp(N|2) WZWN model and its effective action
Delius, G W; van Nieuwenhuizen, P
1993-01-01
A chiral $(N,0)$ supergravity theory in d=2 dimensions for any $N$ and its induced action can be obtained by constraining the currents of an Osp(N$|$2) WZWN model. The underlying symmetry algebras are the nonlinear SO(N) superconformal algebras of Knizhnik and Bershadsky. The case $N=3$ is worked out in detail. We show that by adding quantum corrections to the classical transformation rules, the gauge algebra on gauge fields and currents closes. Integrability conditions on Ward identities are derived. The effective action is computed at one loop. It is finite, and can be obtained from the induced action by rescaling the central charge and fields by finite Z factors.
The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...
How to constrain multi-objective calibrations of the SWAT model using water balance components
Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...
Uncertainty and the Social Cost of Methane Using Bayesian Constrained Climate Models
Errickson, F. C.; Anthoff, D.; Keller, K.
2016-12-01
Social cost estimates of greenhouse gases are important for the design of sound climate policies and are also plagued by uncertainty. One major source of uncertainty stems from the simplified representation of the climate system used in the integrated assessment models that provide these social cost estimates. We explore how uncertainty over the social cost of methane varies with the way physical processes and feedbacks in the methane cycle are modeled by (i) coupling three different methane models to a simple climate model, (ii) using MCMC to perform a Bayesian calibration of the three coupled climate models that simulates direct sampling from the joint posterior probability density function (pdf) of model parameters, and (iii) producing probabilistic climate projections that are then used to calculate the Social Cost of Methane (SCM) with the DICE and FUND integrated assessment models. We find that including a temperature feedback in the methane cycle acts as an additional constraint during the calibration process and results in a correlation between the tropospheric lifetime of methane and several climate model parameters. This correlation is not seen in the models lacking this feedback. Several of the estimated marginal pdfs of the model parameters also exhibit different distributional shapes and expected values depending on the methane model used. As a result, probabilistic projections of the climate system out to the year 2300 exhibit different levels of uncertainty and magnitudes of warming for each of the three models under an RCP8.5 scenario. We find these differences in climate projections result in differences in the distributions and expected values for our estimates of the SCM. We also examine uncertainty about the SCM by performing a Monte Carlo analysis using a distribution for the climate sensitivity while holding all other climate model parameters constant. Our SCM estimates using the Bayesian calibration are lower and exhibit less uncertainty
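Step (ii), the Bayesian calibration by MCMC, can be sketched with a toy one-parameter climate model and a random-walk Metropolis sampler. Everything here (the linear model T = λF, the flat prior, the synthetic observations) is illustrative, not the paper's coupled methane-climate setup:

```python
import numpy as np

def log_post(lam, forcing, obs, sigma=0.1):
    """Log-posterior for a toy zero-dimensional climate model T = lam * F,
    Gaussian observation error, and a flat prior on lam in (0, 3)."""
    if not 0.0 < lam < 3.0:
        return -np.inf
    resid = obs - lam * forcing
    return -0.5 * np.sum((resid / sigma)**2)

rng = np.random.default_rng(3)
forcing = np.linspace(0.0, 2.0, 50)                 # synthetic forcing series
obs = 0.8 * forcing + rng.normal(0, 0.1, 50)        # "truth": lam = 0.8

chain, lam = [], 1.5
lp = log_post(lam, forcing, obs)
for _ in range(5000):                               # random-walk Metropolis
    prop = lam + rng.normal(0, 0.1)
    lp_prop = log_post(prop, forcing, obs)
    if np.log(rng.random()) < lp_prop - lp:         # accept/reject
        lam, lp = prop, lp_prop
    chain.append(lam)
posterior = np.array(chain[1000:])                  # discard burn-in
```

The retained samples approximate the marginal posterior of λ; in the paper the same machinery is applied jointly to many climate-model parameters, and the resulting posterior ensembles drive the probabilistic SCM estimates.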
Gallart, X; Gomez, J C; Fernández-Valencia, J A; Combalía, A; Bori, G; García, S; Rios, J; Riba, J
2014-01-01
To evaluate the short-term results of an ultra-high-molecular-weight polyethylene retentive cup in patients at high risk of dislocation, in either primary or revision surgery. Retrospective review of 38 cases in order to determine the survival rate and failure analysis of a constrained cemented cup, with a mean follow-up of 27 months. We studied demographic data and complications, especially re-dislocations of the prosthesis, and also analyzed the likely causes of system failure. 21.05% (8 cases) were primary surgeries and 78.95% (30 cases) were revision surgeries. The overall survival by the Kaplan-Meier method was 70.7 months. During follow-up, 3 patients died of causes unrelated to surgery and 2 infections occurred. Twelve hips had undergone at least two previous surgeries. There was no case of aseptic loosening. Four patients presented with dislocation, all with a 22 mm head (P=.008). Our statistical analysis did not find a relationship between the cup abduction angle and implant failure (P=.22). The ultra-high-molecular-weight polyethylene retentive cup evaluated in this series has provided satisfactory short-term results in hip arthroplasty patients at high risk of dislocation. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.
Jones, Emlyn M.; Baird, Mark E.; Mongin, Mathieu; Parslow, John; Skerratt, Jenny; Lovell, Jenny; Margvelashvili, Nugzar; Matear, Richard J.; Wild-Allen, Karen; Robson, Barbara; Rizwi, Farhan; Oke, Peter; King, Edward; Schroeder, Thomas; Steven, Andy; Taylor, John
2016-12-01
Skillful marine biogeochemical (BGC) models are required to understand a range of coastal and global phenomena such as changes in nitrogen and carbon cycles. The refinement of BGC models through the assimilation of variables calculated from observed in-water inherent optical properties (IOPs), such as phytoplankton absorption, is problematic. Empirically derived relationships between IOPs and variables such as chlorophyll-a concentration (Chl a), total suspended solids (TSS) and coloured dissolved organic matter (CDOM) have been shown to have errors that can exceed 100 % of the observed quantity. These errors are greatest in shallow coastal regions, such as the Great Barrier Reef (GBR), due to the additional signal from bottom reflectance. Rather than assimilate quantities calculated using IOP algorithms, this study demonstrates the advantages of assimilating quantities calculated directly from the less error-prone satellite remote-sensing reflectance (RSR). To assimilate the observed RSR, we use an in-water optical model to produce an equivalent simulated RSR and calculate the mismatch between the observed and simulated quantities to constrain the BGC model with a deterministic ensemble Kalman filter (DEnKF). The traditional assumption that simulated surface Chl a is equivalent to the remotely sensed OC3M estimate of Chl a resulted in a forecast error of approximately 75 %. We show this error can be halved by instead using simulated RSR to constrain the model via the assimilation system. When the analysis and forecast fields from the RSR-based assimilation system are compared with the non-assimilating model, a comparison against independent in situ observations of Chl a, TSS and dissolved inorganic nutrients (NO3, NH4 and DIP) showed that errors are reduced by up to 90 %. In all cases, the assimilation system improves the simulation compared to the non-assimilating model. Our approach allows for the incorporation of vast quantities of remote-sensing observations
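The deterministic ensemble Kalman filter (DEnKF) analysis step, which updates the ensemble mean with the full Kalman gain but the anomalies with only half the gain, can be sketched in a generic textbook form (not the paper's operational implementation; the observation operator here is a simple selection, whereas in the paper it is the in-water optical model mapping BGC state to reflectance):

```python
import numpy as np

def denkf_analysis(E, y, H, R):
    """DEnKF analysis (Sakov & Oke style): mean updated with the full Kalman
    gain K, anomalies deflated with half the gain, so no perturbed obs needed."""
    n = E.shape[1]
    xm = E.mean(axis=1, keepdims=True)
    A = E - xm                                   # ensemble anomalies
    P = A @ A.T / (n - 1)                        # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
    xa = xm + K @ (y - H @ xm)                   # analysis mean
    Aa = A - 0.5 * K @ (H @ A)                   # analysis anomalies (half gain)
    return xa + Aa

rng = np.random.default_rng(4)
truth = np.array([[1.0], [2.0]])
E = truth + rng.normal(0, 1.0, size=(2, 50))     # prior ensemble, 50 members
H = np.array([[1.0, 0.0]])                       # observe first state component only
R = np.array([[0.01]])                           # small observation-error variance
y = np.array([[1.0]])                            # accurate observation
Ea = denkf_analysis(E, y, H, R)
```

The analysis pulls the observed component toward the observation and shrinks its ensemble spread, which is how assimilating simulated-vs-observed reflectance mismatches constrains the BGC state.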
Tran, Kenneth
2010-01-01
We present a metabolically regulated model of cardiac active force generation with which we investigate the effects of ischemia on maximum force production. Our model, based on a model of cross-bridge kinetics that was developed by others, reproduces many of the observed effects of MgATP, MgADP, Pi, and H(+) on force development while retaining the force/length/Ca(2+) properties of the original model. We introduce three new parameters to account for the competitive binding of H(+) to the Ca(2+) binding site on troponin C and the binding of MgADP within the cross-bridge cycle. These parameters, along with the Pi and H(+) regulatory steps within the cross-bridge cycle, were constrained using data from the literature and validated using a range of metabolic and sinusoidal length perturbation protocols. The placement of the MgADP binding step between two strongly-bound and force-generating states leads to the emergence of an unexpected effect on the force-MgADP curve, where the trend of the relationship (positive or negative) depends on the concentrations of the other metabolites and [H(+)]. The model is used to investigate the sensitivity of maximum force production to changes in metabolite concentrations during the development of ischemia.
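The competitive binding of H+ at the Ca2+ site on troponin C, one of the three new mechanisms the model introduces, can be sketched as single-site competitive equilibrium binding. The dissociation constants below are illustrative placeholders, not the model's fitted parameters:

```python
def ca_bound_fraction(ca, h, K_ca=1e-6, K_h=2.5e-7):
    """Equilibrium fraction of troponin-C sites occupied by Ca2+ when H+
    competes for the same site: f = (Ca/K_ca) / (1 + Ca/K_ca + H/K_h).
    K_ca, K_h are illustrative dissociation constants [M]."""
    return (ca / K_ca) / (1.0 + ca / K_ca + h / K_h)

f_rest = ca_bound_fraction(ca=1e-7, h=10**-7.1)   # same [Ca2+], pH 7.1
f_acid = ca_bound_fraction(ca=1e-7, h=10**-6.8)   # acidotic pH 6.8
```

At fixed Ca2+, acidosis lowers the Ca-bound fraction, reproducing qualitatively the depressive effect of H+ on force activation that the abstract describes for ischemic conditions.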
Directory of Open Access Journals (Sweden)
Olav Slupphaug
1999-07-01
In this paper a method for nonlinear robust stabilization based on solving a bilinear matrix inequality (BMI) feasibility problem is developed. Robustness against model uncertainty is handled. In different non-overlapping regions of the state-space, called clusters, the plant is assumed to be an element of a polytope whose vertices (local models) are affine systems. In the clusters containing the origin in their closure, the local models are restricted to be linear systems. The clusters cover the region of interest in the state-space. An affine state-feedback is associated with each cluster. By utilizing the affinity of the local models and the state-feedback, a set of linear matrix inequalities (LMIs) combined with a single nonconvex BMI are obtained which, if feasible, guarantee quadratic stability of the origin of the closed-loop. The feasibility problem is attacked by a branch-and-bound based global approach. If the feasibility check is successful, the Lyapunov matrix and the piecewise affine state-feedback are given directly by the feasible solution. Control constraints are shown to be representable by LMIs or BMIs, and an application of the control design method to robustify constrained nonlinear model predictive control is presented. Also, the control design method is applied to a simple example.
International Nuclear Information System (INIS)
Gómez, Facundo A.; O'Shea, Brian W.; Coleman-Smith, Christopher E.; Tumlinson, Jason; Wolpert, Robert L.
2014-01-01
We present an application of a statistical tool known as sensitivity analysis to characterize the relationship between input parameters and observational predictions of semi-analytic models of galaxy formation coupled to cosmological N-body simulations. We show how a sensitivity analysis can be performed on our chemo-dynamical model, ChemTreeN, to characterize and quantify its relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. It can also be used to simplify models by identifying input parameters that have no effect on the outputs (i.e., observational predictions) of interest. Conversely, sensitivity analysis allows us to identify which model parameters can be most efficiently constrained by the given observational data set. We have applied this technique to real observational data sets associated with the Milky Way, such as the luminosity function of the dwarf satellites. The results from the sensitivity analysis are used to train specific model emulators of ChemTreeN, only involving the most relevant input parameters. This allowed us to efficiently explore the input parameter space. A statistical comparison of model outputs and real observables is used to obtain a 'best-fitting' parameter set. We consider different Milky-Way-like dark matter halos to account for the dependence of the best-fitting parameter selection process on the underlying merger history of the models. For all formation histories considered, running ChemTreeN with best-fitting parameters produced luminosity functions that tightly fit their observed counterpart. However, only one of the resulting stellar halo models was able to reproduce the observed stellar halo mass within 40 kpc of the Galactic center. On the basis of this analysis, it is possible to disregard certain models, and their
Evaluating transit operator efficiency: An enhanced DEA model with constrained fuzzy-AHP cones
Directory of Open Access Journals (Sweden)
Xin Li
2016-06-01
This study addresses efforts to combine the Analytic Hierarchy Process (AHP) with Data Envelopment Analysis (DEA) to deliver a robust enhanced DEA model for transit operator efficiency assessment. The proposed model is designed to better capture inherent preference information over input and output indicators by adding constraint cones to the conventional DEA model. A revised fuzzy-AHP model is employed to generate the cones, where the proposed model features the integration of fuzzy logic with a hierarchical AHP structure to: (1) normalize the scales of different evaluation indicators, (2) construct the matrix of pair-wise comparisons with fuzzy sets, and (3) optimize the weight of each criterion with a non-linear programming model. With the introduction of cone-based constraints, the new system offers advantages in accounting for the interaction among indicators when evaluating the performance of transit operators. To illustrate the applicability of the proposed approach, a real case in Nanjing City, the capital of China's Jiangsu Province, has been selected to assess the efficiencies of seven bus companies based on 2009 and 2010 datasets. A comparison between the conventional DEA and the enhanced DEA was also conducted to clarify the new system's superiority. Results reveal that the proposed model is more applicable in evaluating transit operator efficiency, thus encouraging a broader range of applications.
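The DEA building block can be illustrated in its simplest special case, one input and one output, where CCR efficiency reduces to comparing each unit's output/input ratio against the best in the set. The full model of the paper solves a linear program per unit and adds AHP-derived cone constraints on the weights, both omitted in this sketch; the data are invented:

```python
import numpy as np

def dea_ccr_single(inputs, outputs):
    """CCR (constant returns to scale) efficiency in the one-input/one-output
    special case: each unit's output/input ratio relative to the best ratio.
    The general multi-indicator case requires solving an LP per unit."""
    ratio = np.asarray(outputs, dtype=float) / np.asarray(inputs, dtype=float)
    return ratio / ratio.max()

# Illustrative bus companies: input = fleet size, output = daily passenger trips.
eff = dea_ccr_single(inputs=[100, 120, 80, 150], outputs=[500, 540, 480, 600])
```

The efficient unit scores 1.0 and defines the frontier; the others are scored by their distance to it. The cone constraints in the enhanced model restrict the admissible weightings so that units cannot appear efficient by leaning entirely on a single favorable indicator.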
Thermo-magnetic effects in quark matter: Nambu-Jona-Lasinio model constrained by lattice QCD
Energy Technology Data Exchange (ETDEWEB)
Farias, Ricardo L.S. [Universidade Federal de Santa Maria, Departamento de Fisica, Santa Maria, RS (Brazil); Kent State University, Physics Department, Kent, OH (United States); Timoteo, Varese S. [Universidade Estadual de Campinas (UNICAMP), Grupo de Optica e Modelagem Numerica (GOMNI), Faculdade de Tecnologia, Limeira, SP (Brazil); Avancini, Sidney S.; Pinto, Marcus B. [Universidade Federal de Santa Catarina, Departamento de Fisica, Florianopolis, Santa Catarina (Brazil); Krein, Gastao [Universidade Estadual Paulista, Instituto de Fisica Teorica, Sao Paulo, SP (Brazil)
2017-05-15
The phenomenon of inverse magnetic catalysis of chiral symmetry in QCD predicted by lattice simulations can be reproduced within the Nambu-Jona-Lasinio model if the coupling G of the model decreases with the strength B of the magnetic field and temperature T. The thermo-magnetic dependence of G(B, T) is obtained by fitting recent lattice QCD predictions for the chiral transition order parameter. Different thermodynamic quantities of magnetized quark matter evaluated with G(B, T) are compared with the ones obtained at constant coupling, G. The model with G(B, T) predicts a more dramatic chiral transition as the field intensity increases. In addition, the pressure and magnetization always increase with B for a given temperature. Being parametrized by four magnetic-field-dependent coefficients and having a rather simple exponential thermal dependence, our accurate ansatz for the coupling constant can be easily implemented to improve typical model applications to magnetized quark matter. (orig.)
Takács, Gergely
2012-01-01
Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC such as: · the implementation of ...
Constraining neutrinoless double beta decay
International Nuclear Information System (INIS)
Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.
2012-01-01
A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most cases to a lower bound on the neutrinoless double beta decay effective amplitude.
Directory of Open Access Journals (Sweden)
Christopher J. Somes
2017-05-01
Nitrogen is a key limiting nutrient that influences marine productivity and carbon sequestration in the ocean via the biological pump. In this study, we present the first estimates of nitrogen cycling in a coupled 3D ocean-biogeochemistry-isotope model forced with realistic boundary conditions from the Last Glacial Maximum (LGM, ~21,000 years before present) and constrained by nitrogen isotopes. The model predicts a large decrease in nitrogen loss rates due to higher oxygen concentrations in the thermocline and sea level drop, and, as a response, reduced nitrogen fixation. Model experiments are performed to evaluate effects of hypothesized increases of atmospheric iron fluxes and oceanic phosphorus inventory relative to present-day conditions. Enhanced atmospheric iron deposition, which is required to reproduce observations, fuels export production in the Southern Ocean, causing increased deep ocean nutrient storage. This reduces transport of preformed nutrients to the tropics via mode waters, thereby decreasing productivity, oxygen deficient zones, and water column N-loss there. A larger global phosphorus inventory of up to 15% cannot be excluded from the currently available nitrogen isotope data. It stimulates additional nitrogen fixation that increases the global oceanic nitrogen inventory, productivity, and water column N-loss. Among our sensitivity simulations, the best agreement with nitrogen isotope data from LGM sediments indicates that water column and sedimentary N-loss were reduced by 17–62% and 35–69%, respectively, relative to preindustrial values. Our model demonstrates that multiple processes alter the nitrogen isotopic signal in most locations, which creates large uncertainties when quantitatively constraining individual nitrogen cycling processes. One key uncertainty is nitrogen fixation, which decreases by 25–65% in the model during the LGM mainly in response to reduced N-loss, due to the lack of observations in the open ocean most
Constraining snowmelt in a temperature-index model using simulated snow densities
Bormann, Kathryn J.
2014-09-01
Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve, reducing biases from -0.027 g cm-3 to -0.004 g cm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE to depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the
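A density-informed temperature-index melt step of the general kind described above can be sketched as follows. The melt-factor form and all coefficients here are hypothetical, invented for illustration, and are not taken from the paper.

```python
# Minimal sketch of a temperature-index (degree-day) snowmelt step whose
# melt factor depends on simulated snow density, loosely in the spirit of
# the scheme described above. Coefficients are hypothetical.
def melt_factor(density):            # mm degC^-1 day^-1
    # denser (older, wetter) snow melts faster per degree; linear toy form
    return 2.0 + 6.0 * density       # density in g cm^-3

def step_swe(swe, temp, density, t_melt=0.0):
    """One daily update of snow water equivalent (mm)."""
    melt = melt_factor(density) * max(temp - t_melt, 0.0)
    return max(swe - melt, 0.0)

swe = 300.0                          # mm snow water equivalent
daily = [(-5, 0.25), (2, 0.30), (6, 0.35), (1, 0.40)]  # (degC, g cm^-3)
for temp, rho in daily:
    swe = step_swe(swe, temp, rho)
print(round(swe, 1))                 # → 263.4
```

The point of the density dependence is that two days with identical air temperature can melt different amounts of water as the pack ripens.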
Constraining the uncertainty in emissions over India with a regional air quality model evaluation
Karambelas, Alexandra; Holloway, Tracey; Kiesewetter, Gregor; Heyes, Chris
2018-02-01
To evaluate uncertainty in the spatial distribution of air emissions over India, we compare satellite and surface observations with simulations from the U.S. Environmental Protection Agency (EPA) Community Multi-Scale Air Quality (CMAQ) model. Seasonally representative simulations were completed for January, April, July, and October 2010 at 36 km × 36 km using anthropogenic emissions from the Greenhouse Gas-Air Pollution Interaction and Synergies (GAINS) model following version 5a of the Evaluating the Climate and Air Quality Impacts of Short-Lived Pollutants project (ECLIPSE v5a). We use both tropospheric columns from the Ozone Monitoring Instrument (OMI) and surface observations from the Central Pollution Control Board (CPCB) to closely examine modeled nitrogen dioxide (NO2) biases in urban and rural regions across India. Spatial average evaluation with satellite retrievals indicates a low bias in the modeled tropospheric column (-63.3%), which reflects broad low biases in the majority of non-urban regions (-70.1% in rural areas) across the subcontinent, with somewhat smaller low biases in semi-urban areas (-44.7%), the threshold between semi-urban and rural being defined as 400 people per km2. In contrast, modeled surface NO2 concentrations exhibit a slight high bias of +15.6% when compared to surface CPCB observations predominantly located in urban areas. Conversely, in examining extremely population-dense urban regions with more than 5000 people per km2 (dense-urban), we find model overestimates in both the column (+57.8%) and at the surface (+131.2%) compared to observations. Based on these results, we find that existing emission fields for India may overestimate urban emissions in densely populated regions and underestimate rural emissions. However, if we rely on model evaluation with predominantly urban surface observations from the CPCB, comparisons reflect model high biases, contradictory to the knowledge gained using satellite observations. Satellites thus
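The percentage biases quoted above are presumably normalized mean biases of the standard form; a minimal sketch of that computation, with made-up values, is:

```python
# Hedged sketch: a normalized mean bias (NMB) of the kind presumably behind
# the percentages quoted above. All data values here are invented.
def nmb(modeled, observed):
    """Normalized mean bias in percent: 100 * sum(M - O) / sum(O)."""
    return 100.0 * sum(m - o for m, o in zip(modeled, observed)) / sum(observed)

obs = [4.0, 6.0, 5.0, 8.0]      # e.g. NO2 columns, arbitrary units
mod = [1.5, 2.0, 1.8, 3.0]      # a model that underestimates everywhere
print(round(nmb(mod, obs), 1))  # → -63.9
```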
Modelling Near-IR polarization to constrain stellar wind bow shocks
Neilson, Hilding R.; Ignace, R.; Shrestha, M.; Hoffman, J. L.; Mackey, J.
2013-06-01
Bow shocks formed from stellar winds are common phenomena observed around massive and intermediate-mass stars such as zeta Oph, Betelgeuse and delta Cep. These bow shocks provide information about the motion of the star, the stellar wind properties and the density of the ISM. Because bow shocks are asymmetric structures, they also produce polarized light that is a function of their shape and density. We present preliminary work modeling dust polarization from a Wilkin (1996) analytic bow shock model and explore how the polarization changes as a function of stellar wind properties.
Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine
2017-04-01
In many countries, data is scarce, incomplete and often not easily shared. In these cases, global satellite and reanalysis data provide an alternative to assess water resources. To assess water resources in Azerbaijan, a completely distributed and physically based hydrological wflow-sbm model was set up for the entire Kura basin. We used SRTM elevation data, a locally available river map and one from OpenStreetMap to derive the drainage direction network at the model resolution of approximately 1x1 km. OpenStreetMap data was also used to derive the fraction of paved area per cell to account for the reduced infiltration capacity (cf. Schellekens et al. 2014). We used the results of a global study to derive root zone capacity based on climate data (Wang-Erlandsson et al., 2016). To account for the variation in vegetation cover over the year, monthly averages of Leaf Area Index, based on MODIS data, were used. For the soil-related parameters, we used global estimates as provided by Dai et al. (2013). This enabled the rapid derivation of a first estimate of parameter values for our hydrological model. Digitized local meteorological observations were scarce and available only for a limited time period. Therefore several sources of global meteorological data were evaluated: (1) EU-WATCH global precipitation, temperature and derived potential evaporation for the period 1958-2001 (Harding et al., 2011), (2) WFDEI precipitation, temperature and derived potential evaporation for the period 1979-2014 (Weedon et al., 2014), (3) MSWEP precipitation (Beck et al., 2016) and (4) local precipitation data from more than 200 stations in the Kura basin, available from the NOAA website for a period up to 1991. The latter, together with data archives from Azerbaijan, were used as a benchmark to evaluate the global precipitation datasets for the overlapping period 1958-1991. By comparing the datasets, we found that monthly mean precipitation of EU-WATCH and WFDEI coincided well
Lund, M. T.; Samset, B. H.; Skeie, R. B.; Berntsen, T.
2017-12-01
Several recent studies have used observations from the HIPPO flight campaigns to constrain the modeled vertical distribution of black carbon (BC) over the Pacific. Results indicate a relatively linear relationship between global-mean atmospheric BC residence time, or lifetime, and bias in current models. A lifetime of less than 5 days is necessary for models to reasonably reproduce these observations. This is shorter than what many global models predict, which will in turn affect their estimates of BC climate impacts. Here we use the chemistry-transport model OsloCTM to examine whether this relationship between global BC lifetime and model skill also holds for a broader set of flight campaigns from 2009-2013 covering both remote marine and continental regions at a range of latitudes. We perform four sets of simulations with varying scavenging efficiency to obtain a spread in the modeled global BC lifetime and calculate the model error and bias for each campaign and region. Vertical BC profiles are constructed using an online flight simulator, as well as by averaging and interpolating monthly mean model output, allowing us to quantify sampling errors arising when measurements are compared with model output at different spatial and temporal resolutions. Using the OsloCTM coupled with a microphysical aerosol parameterization, we investigate the sensitivity of modeled BC vertical distribution to uncertainties in the aerosol aging and scavenging processes in more detail. From this, we can quantify how model uncertainties in the BC life cycle propagate into uncertainties in its climate impacts. For most campaigns and regions, a short global-mean BC lifetime corresponds with the lowest model error and bias. On an aggregated level, sampling errors appear to be small, but larger differences are seen in individual regions. However, we also find that model-measurement discrepancies in BC vertical profiles cannot be uniquely attributed to uncertainties in a single process or
Electron-capture Isotopes Could Constrain Cosmic-Ray Propagation Models
Benyamin, David; Shaviv, Nir J.; Piran, Tsvi
2017-12-01
Electron capture (EC) isotopes are known to provide constraints on the low-energy behavior of cosmic rays (CRs), such as reacceleration. Here, we study the EC isotopes within the framework of the dynamic spiral-arms CR propagation model in which most of the CR sources reside in the galactic spiral arms. The model was previously used to explain the B/C and sub-Fe/Fe ratios. We show that the known inconsistency between the 49Ti/49V and 51V/51Cr ratios remains also in the spiral-arms model. On the other hand, unlike the general wisdom that says the isotope ratios depend primarily on reacceleration, we find here that the ratio also depends on the halo size (Z_h) and, in spiral-arms models, also on the time since the last spiral-arm passage (τ_arm). Namely, EC isotopes can, in principle, provide interesting constraints on the diffusion geometry. However, with the present uncertainties in the lab measurements of both the electron attachment rate and the fragmentation cross sections, no meaningful constraint can be placed.
DEFF Research Database (Denmark)
Kaplan, Sigal; Prato, Carlo Giacomo
2010-01-01
A behavioural and a modelling framework are proposed for representing route choice from a path set that satisfies travellers’ spatiotemporal constraints. Within the proposed framework, travellers’ master sets are constructed by path generation, consideration sets are delimited according to spatio...... constraints are related to travellers’ socio-economic characteristics and that path choice is related to minimizing time and avoiding congestion....
Wagener, Thorsten
2017-04-01
We increasingly build and apply hydrologic models that simulate systems beyond the catchment scale. Such models run at regional, national or even continental scales. They therefore offer opportunities for new scientific insights, for example by enabling comparative hydrology or connectivity studies, and for water management, where we might better understand changes to water resources from larger scale activities like agriculture or from hazards such as droughts. However, these models also require us to rethink how we build and evaluate them given that some of the unsolved problems from the catchment scale have not gone away. So what role should such models play in scientific advancement in hydrology? What problems do we still have to resolve before they can fulfill their role? What opportunities for solving these problems are there, but have not yet been utilized? I will provide some thoughts on these issues in the context of the IAHS Panta Rhei initiative and the scientific challenges it has set out for hydrology (Montanari et al., 2013, Hydrological Sciences Journal; McMillan et al., 2016, Hydrological Sciences Journal).
Physics Constrained Stochastic-Statistical Models for Extended Range Environmental Prediction
2014-09-30
incorporated: multiple variables (wind, geopotential height, water vapor, and, as a proxy for convective activity, outgoing longwave radiation); multiple ... a possible limitation in the water vapor formulation that will require further attention in the future. Finally, a simple interpretation is given ... observational data and climate model data (Stechmann, Majda). 5. Identification of outgoing longwave radiation (OLR) satellite data as a measure of
The SWAT model is a helpful tool to predict hydrological processes in a study catchment and their impact on the river discharge at the catchment outlet. For reliable discharge predictions, a precise simulation of hydrological processes is required. Therefore, SWAT has to be calibrated accurately to ...
Khalil, K.; Rabouille, C.; Gallinari, M.; Soetaert, K.E.R.; DeMaster, D.J.; Ragueneau, O.
2007-01-01
The processes controlling preservation and recycling of particulate biogenic silica in sediments must be understood in order to calculate oceanic silica mass balances. The new contribution of this work is the coupled use of advanced models including reprecipitation and different phases of biogenic
Effects of time-varying β in SNLS3 on constraining interacting dark energy models
International Nuclear Information System (INIS)
Wang, Shuang; Wang, Yong-Zhen; Geng, Jia-Jia; Zhang, Xin
2014-01-01
It has been found that, for the Supernova Legacy Survey three-year (SNLS3) data, there is strong evidence for the redshift evolution of the color-luminosity parameter β. In this paper, adopting the w-cold-dark-matter (wCDM) model and considering its interacting extensions (with three kinds of interaction between dark sectors), we explore the evolution of β and its effects on parameter estimation. In addition to the SNLS3 data, we also use the latest Planck distance priors data, the galaxy clustering data extracted from the Sloan Digital Sky Survey Data Release 7 and the Baryon Oscillation Spectroscopic Survey, as well as the direct measurement of the Hubble constant H0 from the Hubble Space Telescope observation. We find that, for all the interacting dark energy (IDE) models, adding a parameter of β can reduce χ2 by ∼34, indicating that a constant β is ruled out at 5.8σ confidence level. Furthermore, it is found that varying β can significantly change the fitting results of various cosmological parameters: for all the dark energy models considered in this paper, varying β yields a larger fractional CDM density Ωc0 and a larger equation of state w; on the other hand, varying β yields a smaller reduced Hubble constant h for the wCDM model, but it has no impact on h for the three IDE models. This implies that there is a degeneracy between h and the coupling parameter γ. Our work shows that the evolution of β is insensitive to the interaction between dark sectors, and thus highlights the importance of considering β's evolution in cosmology fits. (orig.)
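The two headline numbers above are mutually consistent under the usual Gaussian approximation: for one extra free parameter, an improvement Δχ2 ≈ 34 corresponds to roughly √34 standard deviations. A quick check:

```python
import math

# Consistency check of the abstract's numbers: one extra parameter with
# Delta-chi^2 ~ 34 corresponds to about sqrt(34) sigma significance.
delta_chi2 = 34.0
sigma = math.sqrt(delta_chi2)
print(round(sigma, 1))  # → 5.8
```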
Carozza, David A; Bianchi, Daniele; Galbraith, Eric D
2017-01-01
Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change.
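The biomass-effort coupling at the heart of BOATS can be caricatured by a far simpler Gordon-Schaefer open-access model, in which effort grows in proportion to profit and the system settles where profit is dissipated. This is an analogue for illustration only, not the BOATS size-spectrum model, and every parameter value is hypothetical.

```python
# Drastically simplified analogue of the biomass-effort coupling in BOATS
# (not the actual model): logistic biomass, harvest q*E*B, and effort that
# grows in proportion to profit. All parameter values are hypothetical.
r, K = 0.5, 1.0                   # growth rate, carrying capacity
q, price, cost = 0.5, 10.0, 2.0   # catchability, ex-vessel price, cost/effort
k_e, dt = 0.2, 0.1                # effort responsiveness, time step

B, E = 0.8, 0.1
for _ in range(20000):            # Euler integration to steady state
    harvest = q * E * B
    profit = price * harvest - cost * E
    B += dt * (r * B * (1 - B / K) - harvest)
    E = max(E + dt * k_e * profit, 0.0)

# Open-access equilibrium: profit -> 0, so B* = cost/(price*q) = 0.4
print(round(B, 3), round(E, 3), round(abs(price * q * E * B - cost * E), 4))
```

The analytic equilibrium here is B* = cost/(price·q) and E* = r(1 − B*/K)/q, which the simulation spirals into; in BOATS the same economic forcings (catchability, price, cost per unit effort) set the equilibrium biomass, as the abstract notes.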
A hydrodynamical model of Kepler's supernova remnant constrained by x-ray spectra
International Nuclear Information System (INIS)
Ballet, J.; Arnaud, M.; Rothenflug, R.; Chieze, J.P.; Magne, B.
1988-01-01
The remnant of the historical supernova observed by Kepler in 1604 was recently observed in x-rays by the EXOSAT satellite up to 10 keV. A strong Fe K emission line around 6.5 keV is readily apparent in the spectrum. From an analysis of the light curve of the SN, reconstructed from historical descriptions, a previous study proposed to classify it as type I. Standard models of SN I based on carbon deflagration of a white dwarf predict the synthesis of about 0.5 M☉ of iron in the ejecta. Observing the iron line is a crucial check for such models. It has been argued that the light curve of SN II-L is very similar to that of SN I and that the original observations are compatible with either type. In view of this uncertainty the authors have run a hydrodynamics-ionization code for both SN II and SN I remnants
Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta
Directory of Open Access Journals (Sweden)
J. H. Nienhuis
2017-09-01
The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.
Kelly, N. M.; Marchi, S.; Mojzsis, S. J.; Flowers, R. M.; Metcalf, J. R.; Bottke, W. F., Jr.
2017-12-01
Impacts have a significant physical and chemical influence on the surface conditions of a planet. The cratering record is used to understand a wide array of impact processes, such as the evolution of the impact flux through time. However, the relationship between impactor size and a resulting impact crater remains controversial (e.g., Bottke et al., 2016). Likewise, small variations in the impact velocity are known to significantly affect the thermal-mechanical disturbances in the aftermath of a collision. Development of more robust numerical models for impact cratering has implications for how we evaluate the disruptive capabilities of impact events, including the extent and duration of thermal anomalies, the volume of ejected material, and the resulting landscape of impacted environments. To address uncertainties in crater scaling relationships, we present an approach and methodology that integrates numerical modeling of the thermal evolution of terrestrial impact craters with low-temperature, (U-Th)/He thermochronometry. The approach uses time-temperature (t-T) paths of crust within an impact crater, generated from numerical simulations of an impact. These t-T paths are then used in forward models to predict the resetting behavior of (U-Th)/He ages in the mineral chronometers apatite and zircon. Differences between the predicted and measured (U-Th)/He ages from a modeled terrestrial impact crater can then be used to evaluate parameters in the original numerical simulations, and refine the crater scaling relationships. We expect our methodology to additionally inform our interpretation of impact products, such as lunar impact breccias and meteorites, providing robust constraints on their thermal histories. In addition, the method is ideal for sample return mission planning - robust "prediction" of ages we expect from a given impact environment enhances our ability to target sampling sites on the Moon, Mars or other solar system bodies where impacts have strongly
International Nuclear Information System (INIS)
Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid
2016-01-01
Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response as the cost-effective solution is used for energy service provider. • The market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, its various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since market price affects DR and vice versa, a new two-step sequential framework is proposed, in which unit commitment problem (UC) is solved to forecast the expected locational marginal prices (LMPs), and successively DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of purchased power from the market and distributed generation (DG) units, incentive cost paid to the customers, and compensation cost of power interruptions. To obtain compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMP prices are modeled as the uncertainty parameters using the robust optimization technique, which is more practical compared to the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.
Model predictive control of constrained systems with non-linear stochastic parameters
Dombrovskii, V.; Obyedko, T.
2011-01-01
In this paper we consider the model predictive control problem for discrete-time systems with non-linear random dependent parameters, for which only the first and second conditional distribution moments, the conditional autocorrelations and the mutual cross-correlations are known. The open-loop feedback control strategy is derived subject to hard constraints on the control variables. The approach is advantageous because the rich arsenal of methods of non-linear estimation or the results of nonpa...
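The receding-horizon idea with hard input constraints can be shown on a deliberately tiny deterministic example. This is not the paper's stochastic-parameter formulation: it regulates an open-loop-unstable scalar plant with a bounded input, optimizing by brute force over a short horizon and applying only the first move at each step.

```python
from itertools import product

# Minimal sketch of constrained MPC (not the paper's stochastic-parameter
# formulation): scalar plant x+ = 1.05*x + 0.1*u, hard bound |u| <= 1,
# brute-force optimization over a short horizon, receding-horizon application.
A, B = 1.05, 0.1                   # open-loop unstable dynamics
U = [-1.0, -0.5, 0.0, 0.5, 1.0]    # admissible (constrained) input levels
H = 4                              # prediction horizon

def cost(x, us):
    """Quadratic cost of an input sequence simulated from state x."""
    c = 0.0
    for u in us:
        x = A * x + B * u
        c += x * x + 0.01 * u * u
    return c

x = 1.0                            # without control, x would grow unboundedly
for _ in range(100):
    best = min(product(U, repeat=H), key=lambda us: cost(x, us))
    x = A * x + B * best[0]        # apply only the first move, then re-plan

print(abs(x) < 0.1)                # regulated near the origin
```

Brute-force enumeration stands in for the constrained optimization that a real MPC solver performs; the constraint is respected by construction because only admissible inputs are enumerated.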
Merlis, Timothy M.
2014-10-01
Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1%yr-1 CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.
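Why a short-time-scale response underestimates the TCR can be seen in a one-box energy balance toy. This is not GFDL CM2.1, and the heat capacity and feedback values are hypothetical: the point is only that a ~10-yr response to an abrupt forcing falls short of the year-70 response to a 1%/yr ramp.

```python
import math

# Toy one-box energy balance model (not GFDL CM2.1): C dT/dt = F - lam*T.
# Compare the ~10-yr response to an abrupt CO2 doubling with the year-70
# response to a 1%/yr ramp (the TCR definition). Parameters hypothetical.
C = 8.0        # effective heat capacity, W yr m^-2 K^-1 (illustrative)
lam = 1.2      # climate feedback parameter, W m^-2 K^-1
F2x = 3.7      # 2xCO2 radiative forcing, W m^-2

# Abrupt doubling: analytic solution T(t) = (F/lam) * (1 - exp(-lam*t/C))
T_fast = (F2x / lam) * (1 - math.exp(-lam * 10 / C))

# 1%/yr CO2 increase: forcing is ~linear in time, reaching F2x at year 70
T, dt = 0.0, 0.01
for k in range(int(70 / dt)):
    F = F2x * (k * dt) / 70.0
    T += dt * (F - lam * T) / C
tcr = T

print(round(T_fast, 2), T_fast < tcr)
```

With these illustrative numbers the fast response is roughly 14% below the TCR, of the same order as the 5%-15% shortfall the study reports for volcanically inferred sensitivity.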
Modeling and analysis of strategic forward contracting in transmission constrained power markets
Energy Technology Data Exchange (ETDEWEB)
Yu, C.W.; Chung, T.S. [Department of Electrical Engineering, The Hong Kong Polytechnic University, Hong Kong (China); Zhang, S.H.; Wang, X. [Department of Automation, Shanghai University, Shanghai 200072 (China)
2010-03-15
Taking the effects of the transmission network into account, strategic forward contracting induced by the interaction of generation firms' strategies in the spot and forward markets is investigated. A two-stage game model is proposed to describe generation firms' strategic forward contracting and spot market competition. In the spot market, generation firms behave strategically by submitting bids at their nodes in the form of a linear supply function (LSF), and there are arbitrageurs who buy and resell power at different nodes where price differences exceed the costs of transmission. The owner of the grid is assumed to ration limited transmission line capacity to maximize the value of the transmission services in the spot market. Cournot-type competition is assumed for the strategic forward contract market. This two-stage model is formulated as an equilibrium problem with equilibrium constraints (EPEC), in which each firm's optimization problem in the forward market is a mathematical program with equilibrium constraints (MPEC) with the parameter-dependent spot market equilibrium as the inner problem. A nonlinear complementarity method is employed to solve this EPEC model. (author)
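The basic economics behind the two-stage structure can be shown with the classic Allaz-Vila setup, which is far simpler than the paper's transmission-constrained EPEC: two identical Cournot firms with inverse demand p = a − b(q1+q2) and marginal cost c, where forward positions shift spot best responses and depress the equilibrium price. All numbers are hypothetical.

```python
# Stylized Allaz-Vila illustration (far simpler than the paper's EPEC):
# forward positions f_i shift the spot Cournot best responses
#   q_i = (a - c - b*q_j + b*f_i) / (2b),
# raising total output and lowering the spot price. Numbers hypothetical.
a, b, c = 100.0, 1.0, 20.0

def spot_equilibrium(f1, f2):
    """Exact solution of the two linear best-response equations."""
    q1 = (a - c + 2*b*f1 - b*f2) / (3*b)
    q2 = (a - c + 2*b*f2 - b*f1) / (3*b)
    p = a - b * (q1 + q2)
    return q1, q2, p

_, _, p_cournot = spot_equilibrium(0.0, 0.0)   # no forward contracting

# Allaz-Vila subgame-perfect forward position for two firms: f = (a-c)/(5b)
f = (a - c) / (5 * b)
_, _, p_forward = spot_equilibrium(f, f)
print(round(p_cournot, 2), round(p_forward, 2))  # → 46.67 36.0
```

Forward contracting acts as a pro-competitive commitment device here; the paper's contribution is to ask how this interaction changes once transmission constraints, arbitrageurs, and supply-function bidding are layered on top.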
Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran
Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa
2017-02-01
The main purpose of this study is to introduce geological controlling factors into an intelligence-based model that estimates shear wave velocity from seismic attributes. The proposed method comprises three main steps, framed by the geological events of a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from the extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS) such as Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine the previous predictions into an enhanced solution. To show the effect of geology on improving the prediction, the main classes of predominant lithofacies in the reservoir of interest, namely shale, sand, and carbonate, were selected, and the proposed algorithm was performed with and without the lithofacies constraint. The results showed better agreement between measured and predicted shear wave velocity for the lithofacies-constrained model than for the unconstrained model, especially in the sand and carbonate facies.
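A minimal sketch of the committee step, with ordinary least squares standing in for the bat-inspired optimizer and three synthetic noisy predictors standing in for the SFIS/ANFIS/OFIS outputs (all names and numbers are assumptions):

```python
import numpy as np

# Committee machine: combine several imperfect predictors of shear wave
# velocity with weights fitted on training data.
rng = np.random.default_rng(0)
vs_true = rng.uniform(2.0, 4.0, 200)            # "true" Vs, km/s (synthetic)
preds = np.column_stack([
    vs_true + rng.normal(0, 0.30, 200),         # SFIS-like output
    vs_true + rng.normal(0, 0.20, 200),         # ANFIS-like output
    vs_true + rng.normal(0, 0.25, 200),         # OFIS-like output
])
w, *_ = np.linalg.lstsq(preds, vs_true, rcond=None)   # committee weights
committee = preds @ w

def rmse(x):
    return float(np.sqrt(np.mean((x - vs_true) ** 2)))

# In-sample, the fitted combination cannot do worse than any single member.
errs = [rmse(preds[:, j]) for j in range(3)] + [rmse(committee)]
```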
White, Jeremy T.; Karakhanian, Arkadi; Connor, Chuck; Connor, Laura; Hughes, Joseph D.; Malservisi, Rocco; Wetmore, Paul
2015-01-01
An appreciable challenge in volcanology and geothermal resource development is to understand the relationships between volcanic systems and low-enthalpy geothermal resources. The enthalpy of an undeveloped geothermal resource in the Karckar region of Armenia is investigated by coupling geophysical and hydrothermal modeling. The results of 3-dimensional inversion of gravity data provide key inputs into a hydrothermal circulation model of the system and associated hot springs, which is used to evaluate possible geothermal system configurations. Hydraulic and thermal properties are specified using maximum a priori estimates. Limited constraints provided by temperature data collected from an existing down-gradient borehole indicate that the geothermal system can most likely be classified as low-enthalpy and liquid dominated. We find the heat source for the system is likely cooling quartz monzonite intrusions in the shallow subsurface and that meteoric recharge in the pull-apart basin circulates to depth, rises along basin-bounding faults and discharges at the hot springs. While other combinations of subsurface properties and geothermal system configurations may fit the temperature distribution equally well, we demonstrate that the low-enthalpy system is reasonably explained based largely on interpretation of surface geophysical data and relatively simple models.
Modeling and Economic Analysis of Power Grid Operations in a Water Constrained System
Zhou, Z.; Xia, Y.; Veselka, T.; Yan, E.; Betrie, G.; Qiu, F.
2016-12-01
The power sector is the largest water user in the United States. Depending on the cooling technology employed at a facility, steam-electric power stations withdraw and consume large amounts of water for each megawatt hour of electricity generated. The amounts depend on many factors, including ambient air and water temperatures. Water demands from most economic sectors are typically highest during summertime. For most systems, this coincides with peak electricity demand and consequently a high demand for thermal power plant cooling water. Supplies, however, are sometimes limited due to seasonal precipitation fluctuations, including sporadic droughts that lead to water scarcity. When this occurs, there is an impact on both unit commitments and the real-time dispatch. In this work, we model the cooling efficiency of several different types of thermal power generation technologies as a function of power output level and daily temperature profiles. Unit-specific relationships are then integrated in a power grid operational model that minimizes total grid production cost while reliably meeting hourly loads. Grid operation is subject to power plant physical constraints, transmission limitations, water availability and environmental constraints such as power plant water exit temperature limits. The model is applied to a standard IEEE-118 bus system under various water availability scenarios. Results show that water availability has a significant impact on power grid economics.
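The core optimization can be sketched as a toy economic dispatch with a water-availability constraint (illustrative two-unit numbers, not the IEEE-118 study):

```python
from scipy.optimize import linprog

# Two thermal units meet a 150 MWh load at least cost; each MWh withdraws
# cooling water, and a drought cap on water can bind (rates are assumed).
cost = [20.0, 35.0]      # $/MWh for units 1 and 2
water = [2.0, 0.5]       # m^3 of cooling water per MWh

def dispatch(water_cap):
    res = linprog(
        c=cost,
        A_ub=[water], b_ub=[water_cap],    # water availability
        A_eq=[[1.0, 1.0]], b_eq=[150.0],   # load balance
        bounds=[(0, 100), (0, 100)],       # unit capacities
    )
    return res.x, res.fun

gen_wet, cost_wet = dispatch(water_cap=1e9)    # water not binding
gen_dry, cost_dry = dispatch(water_cap=150.0)  # scarcity forces the pricier unit
```

Under scarcity the water-hungry cheap unit is backed down and the total production cost rises, the qualitative effect the abstract reports.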
Zurek, Jeffrey; William-Jones, Glyn; Johnson, Dan; Eggers, Al
2012-10-01
Microgravity data were collected between 2002 and 2009 at the Three Sisters Volcanic Complex, Oregon, to investigate the causes of an ongoing deformation event west of South Sister volcano. Three different conceptual models have been proposed as the causal mechanism for the deformation event: (1) hydraulic uplift due to continual injection of magma at depth, (2) pressurization of hydrothermal systems and (3) viscoelastic response to an initial pressurization at depth. The gravitational effect of continual magma injection was modeled to be 20 to 33 μGal at the center of the deformation field with volumes based on previous deformation studies. The gravity time series, however, did not detect a mass increase, suggesting that a viscoelastic response of the crust is the most likely cause of the deformation from 2002 to 2009. The crust deeper than 3 km in the Three Sisters region was modeled as a Maxwell viscoelastic material, and the results suggest a dynamic viscosity between 10^18 and 5 × 10^19 Pa s. This low crustal viscosity suggests that magma emplacement or stall depth is controlled by density and not by the brittle-ductile transition zone. Furthermore, these crustal properties and the observed geochemical composition gaps at Three Sisters can be best explained by different melt sources and limited magma mixing rather than by fractional crystallization. More generally, low intrusion rates, low crustal viscosity, and multiple melt sources could also explain the whole rock compositional gaps observed at other arc volcanoes.
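The scale of the modeled 20-33 μGal signal can be checked with simple point-mass gravity; a sketch with assumed round numbers (a Mogi-style approximation, not the study's inversion values):

```python
# Gravity change of continued magma injection, modelled as a buried point mass.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def delta_g_microgal(volume_m3, density, depth_m):
    """Peak surface gravity change directly above a point mass anomaly."""
    dM = volume_m3 * density
    dg = G * dM / depth_m**2    # m s^-2
    return dg * 1e8             # 1 m s^-2 = 1e8 microGal

# ~0.02 km^3 of basalt (2700 kg/m^3) emplaced at 5 km depth (assumed values):
signal = delta_g_microgal(0.02e9, 2700.0, 5000.0)
```

With these assumed inputs the predicted signal is of order 10 μGal, comparable to the 20-33 μGal range quoted in the abstract, which is why the absence of a gravity increase is diagnostic.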
Slack, W.; Murdoch, L.
2016-12-01
Hydraulic fractures can be created in shallow soil or bedrock to promote processes that destroy or remove chemical contaminants. The form of the fracture plays an important role in how it is used in such applications. We have created more than 4500 environmental hydraulic fractures at approximately 300 sites since 1990, and we measured surface deformation at many of them. Several of these sites were subsequently excavated to evaluate fracture form in detail. In one recent example, six hydraulic fractures were created at 1.5 m depth while we measured upward displacement and tilt at 15 overlying locations. We excavated in the vicinities of two of the fractures and mapped the exposed fractures. Tilt vectors were initially symmetric about the borehole but radiated from a point that moved southwest with time. Upward displacement of as much as 2.5 cm covered a region 5 m to 6 m across. The maximum displacement was roughly at the center of the deformed region but was 2 m southwest of the borehole, consistent with the tilt data. Excavation revealed an oblong, proppant-filled fracture over 4.2 m in length with a maximum thickness of 1 cm, so the proppant covers a region that is smaller than the uplifted area and the proppant thickness is roughly half of the uplift. The fracture was shaped like a shallow saucer with maximum dips of approximately 15° at the southwestern end. The pattern of tilt and uplift generally reflects the aperture of the underlying pressurized fracture, but the deformation extends beyond the extent of the sand proppant, so a quantitative interpretation requires inversion. Inversion of the tilt data using a simple double dislocation model underestimates the extent but correctly predicts the depth, orientation, and off-centered location. Inversion of uplift using a model that assumes the overburden deforms like a plate overestimates the extent. Neither can characterize the curved shape. A forward model using FEM analysis capable of representing 3D shapes is capable of
Present mantle flow in North China Craton constrained by seismic anisotropy and numerical modelling
Qu, W.; Guo, Z.; Zhang, H.; Chen, Y. J.
2017-12-01
The North China Craton (NCC) has undergone complicated geodynamic processes during the Cenozoic, including the westward subduction of the Pacific plate to its east and the collision of the India-Eurasia plates to its southwest. Shear wave splitting measurements in the NCC reveal distinct seismic anisotropy patterns in different tectonic blocks, that is, a predominantly NW-SE trending alignment of fast directions in the western and eastern NCC, weak anisotropy within the Ordos block, and N-S fast polarization beneath the Trans-North China Orogen (TNCO). To better understand the origin of the seismic anisotropy from SKS splitting in the NCC, we construct a high-resolution dynamic model that assimilates multiple geophysical observations with state-of-the-art numerical methods. We calculate the mantle flow using an up-to-date version of the software ASPECT (Kronbichler et al., 2012) with high-resolution temperature and density structures from a recent 3-D thermal-chemical model by Guo et al. (2016). The thermal-chemical model was obtained by multi-observable probabilistic inversion using high-quality surface wave measurements, potential fields, topography, and surface heat flow (Guo et al., 2016). The viscosity is then estimated by combining dislocation creep, diffusion creep, and plasticity, and depends on temperature, pressure, and chemical composition. We then calculate the seismic anisotropy from the shear deformation of the mantle flow with DREX, and predict the fast direction and delay time of SKS splitting. We find that when complex boundary conditions are applied, including the far-field effects of the deep subduction of the Pacific plate and the eastward escape of the Tibetan Plateau, our model can successfully predict the observed shear wave splitting patterns. Our model indicates that the seismic anisotropy revealed by SKS splitting results primarily from the lattice-preferred orientation (LPO) of olivine due to shear deformation by asthenospheric flow. We suggest that two branches of mantle flow may contribute to the
Constrained minimization problems for the reproduction number in meta-population models.
Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N
2018-02-14
The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ) reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of partial derivatives of Rv with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions are obtained in the case of two sub-populations, and bounds for the optimal solutions are derived for more than two sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that, for general mixing schemes, both R0 and Rv are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
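The role of heterogeneity in raising the basic reproduction number can be sketched with the next-generation matrix of a two-group SIR model under proportionate mixing (all parameters are illustrative assumptions, not the San Diego estimates):

```python
import numpy as np

# Next-generation-matrix sketch: heterogeneous contact rates raise R0
# relative to homogeneous mixing with the same mean contact rate.
beta, gamma = 0.05, 0.1        # transmission per contact, recovery rate
N = np.array([6e5, 4e5])       # group sizes
c = np.array([15.0, 5.0])      # contact rates per group

# Proportionate mixing: a contact is with group j w.p. c_j N_j / sum(c_k N_k)
p = c * N / np.sum(c * N)
K = (beta / gamma) * np.outer(c, p)          # next-generation matrix
R0_het = np.max(np.abs(np.linalg.eigvals(K)))

# Homogeneous mixing with the same population-mean contact rate:
c_bar = np.sum(c * N) / np.sum(N)
R0_hom = beta / gamma * c_bar
```

Because K is rank one, R0_het equals (beta/gamma) times the contact-weighted mean contact rate, which always exceeds the homogeneous value when contact rates vary.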
Constraining Gamma-Ray Pulsar Gap Models with a Simulated Pulsar Population
Pierbattista, Marco; Grenier, I. A.; Harding, A. K.; Gonthier, P. L.
2012-01-01
With the large sample of young gamma-ray pulsars discovered by the Fermi Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four gamma-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying gamma-ray and radio visibility criteria, we normalise the simulation to the number of radio pulsars detected by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of Fermi detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of 10 of the predicted luminosity to produce a reasonable number of gamma-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power (Ė), we skew the initial magnetic field and period distributions in an attempt to account for the high-Ė Fermi pulsars. While we compromise the agreement between simulated and detected distributions of radio pulsars, the simulations fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high Ė, and they cannot explain the high probability of detecting both the radio and gamma-ray beams at high Ė. The beaming factor remains close to 1.0 over 4 decades of Ė evolution for the slot gap, whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with Ė is compatible with the large dispersion of gamma-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to polar-cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of gamma-ray pulsars
Constraining Carbonaceous Aerosol Climate Forcing by Bridging Laboratory, Field and Modeling Studies
Dubey, M. K.; Aiken, A. C.; Liu, S.; Saleh, R.; Cappa, C. D.; Williams, L. R.; Donahue, N. M.; Gorkowski, K.; Ng, N. L.; Mazzoleni, C.; China, S.; Sharma, N.; Yokelson, R. J.; Allan, J. D.; Liu, D.
2014-12-01
Biomass and fossil fuel combustion emits black carbon (BC) and brown carbon (BrC) aerosols that absorb sunlight to warm climate, and organic carbon (OC) aerosols that scatter sunlight to cool climate. The net forcing depends strongly on the composition, mixing state and transformations of these carbonaceous aerosols. Complexities from the large variability of fuel types, combustion conditions and aging processes have confounded their treatment in models. We analyse recent laboratory and field measurements to uncover the fundamental mechanisms that control the chemical, optical and microphysical properties of carbonaceous aerosols, elaborated below. The wavelength dependence of absorption and the single scattering albedo (ω) of fresh biomass burning aerosols produced from many fuels during FLAME-4 was analysed to determine the factors that control the variability in ω. Results show that ω varies strongly with fire-integrated modified combustion efficiency (MCEFI): higher MCEFI results in lower ω values and a greater spectral dependence of ω (Liu et al GRL 2014). A parameterization of ω as a function of MCEFI for fresh BB aerosols is derived from the laboratory data and is evaluated against field data, including BBOP. Our laboratory studies also demonstrate that BrC production correlates with BC, indicating that they are produced by a common mechanism that is driven by MCEFI (Saleh et al NGeo 2014). We show that BrC absorption is concentrated in the extremely low volatility component that favours long-range transport. We observe substantial absorption enhancement for internally mixed BC from diesel and wood combustion near London during ClearFlo. While the absorption enhancement is due to BC particles coated by co-emitted OC in urban regions, it increases with photochemical age in rural areas and is simulated by core-shell models. We measure BrC absorption that is concentrated in the extremely low volatility components and attribute it to wood burning. Our results support
DNA and dispersal models highlight constrained connectivity in a migratory marine megavertebrate
Naro-Maciel, Eugenia; Hart, Kristen M.; Cruciata, Rossana; Putman, Nathan F.
2016-01-01
Population structure and spatial distribution are fundamentally important fields within ecology, evolution, and conservation biology. To investigate pan-Atlantic connectivity of globally endangered green turtles (Chelonia mydas) from two National Parks in Florida, USA, we applied a multidisciplinary approach comparing genetic analysis and ocean circulation modeling. The Everglades (EP) is a juvenile feeding ground, whereas the Dry Tortugas (DT) is used for courtship, breeding, and feeding by adults and juveniles. We sequenced two mitochondrial segments from 138 turtles sampled there from 2006-2015, and simulated oceanic transport to estimate their origins. Genetic and ocean connectivity data revealed northwestern Atlantic rookeries as the major natal sources, while southern and eastern Atlantic contributions were negligible. However, specific rookery estimates differed between genetic and ocean transport models. The combined analyses suggest that post-hatchling drift via ocean currents poorly explains the distribution of neritic juveniles and adults, but juvenile natal homing and population history likely play important roles. DT and EP were genetically similar to feeding grounds along the southern US coast, but highly differentiated from most other Atlantic groups. Despite expanded mitogenomic analysis and correspondingly increased ability to detect genetic variation, no significant differentiation between DT and EP, or among years, sexes or stages was observed. This first genetic analysis of a North Atlantic green turtle courtship area provides rare data supporting local movements and male philopatry. The study highlights the applications of multidisciplinary approaches for ecological research and conservation.
International Nuclear Information System (INIS)
Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher
2009-01-01
We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
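The NFW rotation curve used in such comparisons follows from the standard enclosed-mass formula; a short sketch (the halo parameters below are illustrative, not fits to the LSB sample):

```python
import numpy as np

# Circular velocity of an NFW halo:
#   V(r)^2 = V200^2 * mu(c*x) / (x * mu(c)),  x = r/r200,
#   mu(y) = ln(1 + y) - y / (1 + y).
def v_nfw(r, v200=100.0, r200=100.0, c=10.0):
    """Circular velocity (km/s) at radius r (kpc) for assumed halo parameters."""
    x = r / r200
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)
    return v200 * np.sqrt(mu(c * x) / (x * mu(c)))

r = np.array([1.0, 5.0, 20.0, 100.0])
v = v_nfw(r)
# The curve rises steeply at small radii and peaks near ~2.2 scale radii --
# the steep inner rise that conflicts with the slowly rising rotation
# curves observed for LSB galaxies.
```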
Constraining SUSY models with Fittino using measurements before, with and beyond the LHC
Energy Technology Data Exchange (ETDEWEB)
Bechtle, Philip [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Desch, Klaus; Uhlenbrock, Mathias; Wienemann, Peter [Bonn Univ. (Germany). Physikalisches Inst.
2009-07-15
We investigate the constraints on Supersymmetry (SUSY) arising from available precision measurements using a global fit approach. When interpreted within minimal supergravity (mSUGRA), the data provide significant constraints on the masses of supersymmetric particles (sparticles), which are predicted to be light enough for an early discovery at the Large Hadron Collider (LHC). We provide predicted mass spectra including, for the first time, full uncertainty bands. The most stringent constraint is from the measurement of the anomalous magnetic moment of the muon. Using the results of these fits, we investigate to which precision mSUGRA and more general MSSM parameters can be measured by the LHC experiments with three different integrated luminosities, for a parameter point which approximately lies in the region preferred by current data. The impact of the already available measurements on these precisions, when combined with LHC data, is also studied. We develop a method to treat ambiguities arising from different interpretations of the data within one model and provide a way to differentiate between values of discrete parameters of a model (e.g., sign(μ) within mSUGRA). Finally, we show how measurements at a linear collider with up to 1 TeV centre-of-mass energy will help to improve the precision by an order of magnitude. (orig.)
Wang, Lifeng
2015-11-11
The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment was almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes at Parkfield. For the Tohoku-Oki earthquake, we estimate a recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be applied generally to any earthquake for which dense geodetic measurements are available.
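At the peak-slip patch, the slip-balance idea reduces to a back-of-envelope relation, slip = release × rate × T; a sketch with roughly Parkfield-like but assumed numbers (not the paper's inversion):

```python
# If the peak coseismic slip fully releases the locally accumulated deficit,
# then slip = release * rate * T, so T = slip / (release * rate).
slip_max = 0.55      # m, peak coseismic slip (assumed)
plate_rate = 0.025   # m/yr, long-term loading rate (assumed)
release = 1.0        # fraction of the deficit released at the peak-slip patch

T_accum = slip_max / (release * plate_rate)   # years of strain accumulation
```

With these inputs T is ~22 years, of the same order as the ~25-year estimate and the observed ~22-year M6 recurrence interval quoted in the abstract.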
Lightning NOx emissions over the USA constrained by TES ozone observations and the GEOS-Chem model
Directory of Open Access Journals (Sweden)
K. E. Pickering
2010-01-01
Improved estimates of NOx from lightning sources are required to understand tropospheric NOx and ozone distributions, the oxidising capacity of the troposphere and corresponding feedbacks between chemistry and climate change. In this paper, we report new satellite ozone observations from the Tropospheric Emission Spectrometer (TES) instrument that can be used to test and constrain the parameterization of the lightning source of NOx in global models. Using the National Lightning Detection Network (NLDN) and Long Range Lightning Detection Network (LRLDN) data as well as the HYSPLIT transport and dispersion model, we show that TES provides direct observations of ozone-enhanced layers downwind of convective events over the USA in July 2006. We find that the GEOS-Chem global chemistry-transport model with a parameterization based on cloud top height, scaled regionally and monthly to the OTD/LIS (Optical Transient Detector/Lightning Imaging Sensor) climatology, captures the ozone enhancements seen by TES. We show that the model's ability to reproduce the location of the enhancements is due to the fact that it reproduces the pattern of convective event occurrence on a daily basis during the summer of 2006 over the USA, even though it does not represent the relative distribution of lightning intensities well. However, this model with a value of 6 Tg N/yr for the lightning source (i.e., with a mean production of 260 moles NO/flash over the USA in summer) underestimates the intensities of the ozone enhancements seen by TES. By imposing a production of 520 moles NO/flash for lightning occurring in midlatitudes, which agrees better with the values proposed by the most recent studies, we decrease the bias between TES and GEOS-Chem ozone over the USA in July 2006 by 40%. However, our conclusion on the strength of the lightning source of NOx is limited by the fact that the contribution from the stratosphere is underestimated in the GEOS-Chem simulations.
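The per-flash numbers can be checked with a unit-conversion sketch (the 44 flashes/s global flash rate is the commonly cited OTD/LIS figure; the USA-summer value in the abstract differs because of the regional and monthly scaling):

```python
# Convert a global lightning-NOx source strength (Tg N/yr) into a mean
# per-flash NO production (mol NO per flash).
TG = 1e12                  # grams per teragram
M_N = 14.0                 # g/mol, molar mass of nitrogen
source_tg_per_yr = 6.0     # assumed global source, Tg N / yr
flashes_per_s = 44.0       # global mean flash rate (OTD/LIS)

mol_n_per_yr = source_tg_per_yr * TG / M_N
flashes_per_yr = flashes_per_s * 365.25 * 86400.0
mol_no_per_flash = mol_n_per_yr / flashes_per_yr   # ~300 mol NO/flash globally
```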
Konrad-Schmolke, M.; Schildhauer, H.
2013-12-01
The growth and chemical composition of garnet in metamorphic rocks faithfully record thermodynamic as well as kinetic properties of the host rock during garnet growth. This valuable information can be extracted from preserved compositional growth zoning patterns in garnet. However, metamorphic rocks often contain multiple garnet generations that commonly develop as corona textures with distinct compositional core-overgrowth features. This circumstance can lead to a misinterpretation of information extracted from such grains if the age and metamorphic relations between the different garnet generations are unclear. Garnets from high-pressure (HP) and ultra-high-pressure (UHP) rocks in particular often preserve textures that show multiple growth stages, reflected in core-overgrowth differences in both main and trace element composition and in the inclusion assemblage. Distinct growth zones often have sharp boundaries with strong compositional gradients and/or inclusion- and trace-element-enriched zones. Such growth patterns indicate episodic garnet growth as well as growth interruptions during the garnet evolution. A quantitative understanding of these distinct growth pulses enables the relationships between reaction path, age determinations in spatially controlled garnet domains and temperature-time constraints to be fully characterised. In this study we apply thermodynamic forward models to simulate garnet growth along a series of HP and UHP P-T paths representative of subducted oceanic crust. We study garnet growth in different basaltic rock compositions and under different element fractionation scenarios in order to detect path-dependent P-T regions of limited or ceased garnet growth. Modeled data along P-T trajectories involving fractional crystallisation are assembled in P-T diagrams reflecting garnet growth in a changing bulk rock composition. Our models show that in all investigated rock compositions garnet growth along most P-T trajectories is discontinuous and pulsed.
Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.
2015-12-01
As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable a good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question regarding the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of squared residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling cross-validation of the measurements from these two instruments. Acknowledgements: Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, NASA for supporting this research. VIRTIS was built
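The least-squares step for the spherical-harmonic activity map can be sketched with synthetic data and a low-degree basis written out explicitly (degree ≤ 1, in Cartesian form; the data and the "true" map are assumptions, not ROSINA coefficients):

```python
import numpy as np

# Fit an activity distribution on a sphere by least squares in a real
# spherical-harmonic basis: constant term plus the three degree-1 terms,
# which are proportional to x, y, z on the unit sphere.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)   # azimuth of each sample
u = rng.uniform(-1, 1, 300)              # cos(colatitude), uniform on the sphere
s = np.sqrt(1 - u**2)

A = np.column_stack([np.ones_like(u),    # l = 0
                     s * np.cos(theta),  # l = 1 terms (x, y, z)
                     s * np.sin(theta),
                     u])

truth = np.array([1.0, 0.0, 0.0, 0.4])   # assumed map: f = 1 + 0.4 z
data = A @ truth + rng.normal(0, 0.01, u.size)   # noisy "measurements"
coef, *_ = np.linalg.lstsq(A, data, rcond=None)  # recovered coefficients
```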
Zhao, Meng; Ding, Baocang
2015-03-01
This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is low: only the current states before optimization and the optimized input variables after optimization are transferred. Each local controller adopts the quasi-infinite-horizon MPC algorithm, and the global closed-loop system is proven to be exponentially stable.
Long-lasting context dependence constrains neural encoding models in rodent auditory cortex.
Asari, Hiroki; Zador, Anthony M
2009-11-01
Acoustic processing requires integration over time. We have used in vivo intracellular recording to measure neuronal integration times in anesthetized rats. Using natural sounds and other stimuli, we found that synaptic inputs to auditory cortical neurons showed a rather long context dependence, up to ≥4 s (τ ≈ 1 s), even though sound-evoked excitatory and inhibitory conductances per se rarely lasted ≳100 ms. Thalamic neurons showed only a much faster form of adaptation. Restricting the model's stimulus history to only a few hundred milliseconds reduced the predictable response component to about half that of the optimal infinite-history model. Our results demonstrate the importance of long-range temporal effects in auditory cortex and suggest a potential neural substrate for auditory processing that requires integration over timescales of seconds or longer, such as stream segregation.
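The effect of truncating a model's stimulus history can be illustrated with a toy linear encoding model; the exponential kernel, time constants, and noise level below are assumptions for illustration, not fits to the recorded data:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau = 0.01, 1.0          # 10 ms bins, 1 s context time constant
T = 20000                    # 200 s of white-noise "stimulus"

stim = rng.normal(size=T)
# Response with long-range context: exponential history kernel (tau = 1 s).
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
resp = np.convolve(stim, kernel)[:T] + 0.1 * rng.normal(size=T)

def explained_variance(history_s):
    """Fit a linear stimulus-history model and return its predictive R^2."""
    L = int(history_s / dt)
    X = np.column_stack([np.roll(stim, k) for k in range(L)])[L:]
    y = resp[L:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ w) / np.var(y)

r2_short = explained_variance(0.3)   # a few hundred milliseconds of history
r2_long = explained_variance(4.0)    # >= 4 s of history
```

With these parameters the short-history model captures roughly half the predictable variance of the long-history model, qualitatively mirroring the abstract's observation.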
Dynamical phase diagrams of a love capacity constrained prey-predator model
Simin, P. Toranj; Jafari, Gholam Reza; Ausloos, Marcel; Caiafa, Cesar Federico; Caram, Facundo; Sonubi, Adeyemi; Arcagni, Alberto; Stefani, Silvana
2018-02-01
One interesting question in love relationships is: finally, what and when is the end of this love relationship? Using a prey-predator Verhulst-Lotka-Volterra (VLV) model, we incorporate cooperation and competition tendencies between people in order to describe a "love dilemma game". We select the simplest but already highly complex case for studying the resulting set of nonlinear differential equations, i.e. that involving three persons, each being at the same time prey and predator. We describe four different scenarios in such a love game, containing either a one-way love or a love triangle. Our results show that it is hard to love more than one person simultaneously; moreover, loving several people simultaneously is an unstable state. We find conditions under which persons tend to have a friendly relationship and love someone in spite of their antagonistic interaction. We demonstrate the dynamics by displaying flow diagrams.
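A minimal numerical sketch of a three-person VLV-type system is given below; the interaction matrix, growth rates, and initial "feelings" are hypothetical and do not reproduce the paper's four scenarios:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three persons, each simultaneously prey and predator: logistic (Verhulst)
# self-limitation plus pairwise Lotka-Volterra coupling.  Signs of A encode
# who preys on whom in the chosen (hypothetical) scenario.
A = np.array([[ 0.0, -0.5,  0.3],
              [ 0.4,  0.0, -0.6],
              [-0.3,  0.5,  0.0]])   # illustrative interaction matrix
r = np.array([1.0, 0.8, 0.9])        # intrinsic growth of each "feeling"

def vlv(t, x):
    # Verhulst term r*x*(1 - x) plus Lotka-Volterra coupling x*(A @ x)
    return x * (r * (1.0 - x) + A @ x)

sol = solve_ivp(vlv, (0.0, 50.0), [0.5, 0.6, 0.4], dense_output=True)
final_state = sol.y[:, -1]
```

Sweeping the entries of `A` and plotting trajectories in phase space would produce the kind of flow diagrams the paper uses to classify stable and unstable relationship states.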
Source term modelling parameters for Project-90
International Nuclear Information System (INIS)
Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.
1992-04-01
This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)
Directory of Open Access Journals (Sweden)
Jiekun Song
2016-01-01
Harmonious development of the 3Es (economy-energy-environment) system is the key to realizing regional sustainable development. The structure and components of the 3Es system are analyzed. Based on the analysis of a causality diagram, GDP and industrial structure are selected as the target parameters of the economy subsystem, energy consumption intensity is selected as the target parameter of the energy subsystem, and the emissions of COD, ammonia nitrogen, SO2 and NOX, together with CO2 emission intensity, are selected as the target parameters of the environment subsystem. Fixed-asset investment in the three industries, total energy consumption, and investment in environmental pollution control are selected as the decision variables. By regarding the parameters of 3Es system optimization as fuzzy numbers, a fuzzy chance-constrained goal programming (FCCGP) model is constructed, and a hybrid intelligent algorithm combining fuzzy simulation and a genetic algorithm is proposed for solving it. The results of an empirical analysis of Shandong province, China, show that the FCCGP model can reflect the inherent relationships and evolution law of the 3Es system and provide effective decision-making support for its optimization.
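The fuzzy-simulation step used inside such hybrid algorithms can be sketched as follows, here estimating the credibility of a single chance constraint with a triangular fuzzy parameter; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def tri_membership(x, a, b, c):
    """Membership function of a triangular fuzzy number (a, b, c)."""
    return np.where(x < b, (x - a) / (b - a), (c - x) / (c - b)).clip(0.0, 1.0)

def credibility(constraint_ok, mu):
    """Fuzzy simulation of Cr{constraint holds} = (Pos + Nec) / 2."""
    sup_in = mu[constraint_ok].max(initial=0.0)    # possibility of holding
    sup_out = mu[~constraint_ok].max(initial=0.0)  # possibility of violating
    return 0.5 * (sup_in + 1.0 - sup_out)

# Hypothetical fuzzy energy-intensity coefficient (per unit of GDP).
a, b, c = 0.8, 1.0, 1.3
xi = rng.uniform(a, c, 20000)          # sample the support of the fuzzy number
mu = tri_membership(xi, a, b, c)

# Chance constraint: total energy use xi * GDP <= cap, at credibility >= 0.9.
GDP, cap = 100.0, 115.0
cr = credibility(xi * GDP <= cap, mu)
constraint_satisfied = cr >= 0.9
```

In the full FCCGP model, a genetic algorithm would propose candidate decision vectors and this fuzzy-simulation check would accept or penalize them according to the required credibility levels.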
Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.
2017-12-01
Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations that are available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" that extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions, combined with extremely heterogeneous land-use, that results in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, hydrological modelling also benefited greatly from constraining parameters using such distributed, citizen-science-generated streamflow data. This study demonstrates regional hydrological modelling and prediction that integrate citizen science with a land surface model. In the context of hydrological studies, data scarcity can indeed be mitigated by using this framework. Improved predictions could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.
Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James
2017-10-01
Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide location and strength for BB aerosol sources. Our previous work shows that, to first order, satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the smoke source strength. We now refine the satellite-snapshot method and investigate where applying simple multiplicative emission adjustment factors alone to the widely used Global Fire Emission Database version 3 emission inventory can achieve regional-scale consistency between Moderate Resolution Imaging Spectroradiometer (MODIS) AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. The model and satellite AOD are compared globally, over a set of BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. Regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. We refine our approach to address physically based limitations of our earlier work (1) by expanding the number of fire cases from 124 to almost 900, (2) by using scaled reanalysis-model simulations to fill missing AOD retrievals in the MODIS observations, (3) by distinguishing the BB components of the total aerosol load from background aerosol in the near-source regions, and (4) by including emissions from fires too small to be identified explicitly in the satellite observations. The small-fire emission adjustment shows the complementary nature of correcting for source strength and adding geographically distinct missing sources. Our analysis indicates that the method works best for fire cases where the BB fraction of total AOD is high, primarily evergreen or deciduous forests. In heavily polluted or agricultural burning regions, where smoke and background AOD values tend to be comparable, this approach
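The basic idea of a multiplicative emission adjustment factor can be sketched as follows; the AOD values and the simple background subtraction are illustrative, not actual MODIS or GOCART numbers:

```python
import numpy as np

# Hypothetical per-case AODs for one ecosystem region: satellite snapshot
# AOD, model AOD, and a shared background (non-BB) AOD estimate.
aod_modis = np.array([0.45, 0.60, 0.38, 0.52])
aod_model = np.array([0.20, 0.28, 0.18, 0.25])
aod_background = np.array([0.05, 0.06, 0.05, 0.05])

# Compare only the biomass-burning component by removing the background,
# then derive one multiplicative adjustment factor for the region.
bb_modis = aod_modis - aod_background
bb_model = aod_model - aod_background
adjustment_factor = bb_modis.sum() / bb_model.sum()

# Scaling model emissions by this factor raises the simulated BB AOD to
# match the satellite total over the region (to first order).
bb_model_scaled = bb_model * adjustment_factor
```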
Cardenas, M. B.; Cook, P. L.; Jiang, H.; Traykovski, P.
2009-12-01
Permeable marine sediments are ubiquitous, complex environments whose biogeochemistry is strongly coupled to hydrodynamic processes above and within the sediment. The biogeochemical processes in these settings have global-scale implications but are poorly understood and challenging to quantify. We present the first simulation linking turbulent oscillatory flow of the water column, porous media flow, and solute transport in the sediment with oxygen consumption, nitrification, denitrification, and ammonification, informed by field- and/or experimentally derived parameters. Nitrification and denitrification were significantly impacted by advective pore water exchange between the sediment and the water column. Denitrification rates showed a maximum at intermediate permeabilities, and were negligible at high permeabilities. Denitrification rates were low, with only ~15% of total N mineralized being denitrified, although this may be increased temporarily following sediment resuspension events. Our model-estimated denitrification rates are about half of previous estimates, which do not consider solute advection through the sediment. Given the critical role of sediment permeability, topography, and bottom currents in controlling denitrification rates, an improved knowledge of these factors is vital for obtaining better estimates of denitrification taking place in shelf sediments. Broad application of our approach to myriad conditions will lead to improved predictive capacity, better informed experimental and sampling design, and a more holistic understanding of the biogeochemistry of permeable sediments.
Water-Constrained Electric Sector Capacity Expansion Modeling Under Climate Change Scenarios
Cohen, S. M.; Macknick, J.; Miara, A.; Vorosmarty, C. J.; Averyt, K.; Meldrum, J.; Corsi, F.; Prousevitch, A.; Rangwala, I.
2015-12-01
Over 80% of U.S. electricity generation uses a thermoelectric process, which requires significant quantities of water for power plant cooling. This water requirement exposes the electric sector to vulnerabilities related to shifts in water availability driven by climate change as well as reductions in power plant efficiencies. Electricity demand is also sensitive to climate change, which in most of the United States leads to warming temperatures that increase total cooling-degree days. The resulting demand increase is typically greater for peak demand periods. This work examines the sensitivity of the development and operations of the U.S. electric sector to the impacts of climate change using an electric sector capacity expansion model that endogenously represents seasonal and local water resource availability as well as climate impacts on water availability, electricity demand, and electricity system performance. Capacity expansion portfolios and water resource implications from 2010 to 2050 are shown at high spatial resolution under a series of climate scenarios. Results demonstrate the importance of water availability for future electric sector capacity planning and operations, especially under more extreme hotter and drier climate scenarios. In addition, region-specific changes in electricity demand and water resources require region-specific responses that depend on local renewable resource availability and electricity market conditions. Climate change and the associated impacts on water availability and temperature can affect the types of power plants that are built, their location, and their impact on regional water resources.
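A toy version of water-constrained capacity planning can be written as a linear program; the technologies, costs, and water intensities below are illustrative assumptions, not the inputs of the actual capacity expansion model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-region choice among three technology/cooling options.
techs = ["coal_once_through", "gas_recirculating", "wind_dry"]
cost = np.array([1.0, 1.4, 1.8])         # relative build+operate cost per MWh
water = np.array([100.0, 2.0, 0.0])      # water withdrawal per MWh

demand = 1000.0        # MWh that must be served
water_cap = 30000.0    # water available under a hot, dry climate scenario

# Minimize cost subject to meeting demand within the water budget.
res = linprog(
    c=cost,
    A_ub=[water], b_ub=[water_cap],        # water availability constraint
    A_eq=[np.ones(3)], b_eq=[demand],      # demand balance
    bounds=[(0, None)] * 3,
)
mix = dict(zip(techs, res.x.round(1)))
```

Even in this toy setting, tightening `water_cap` shifts generation away from the cheapest but most water-intensive option, which is the qualitative mechanism the abstract describes at much higher spatial and temporal resolution.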
A transition-constrained discrete hidden Markov model for automatic sleep staging
Directory of Open Access Journals (Sweden)
Pan Shing-Tai
2012-08-01
Background: Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings, including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable. Method: The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized, including temporal and spectrum analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, 2-fold cross validation was performed during this experiment. Results: Overall agreement between the expert and the results presented is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurate stage was SWS (94.9%), and the least-accurately classified stage was S1 (<34%). Conclusion: The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies.
An Observationally Constrained Model of a Flux Rope that Formed in the Solar Corona
James, Alexander W.; Valori, Gherardo; Green, Lucie M.; Liu, Yang; Cheung, Mark C. M.; Guo, Yang; van Driel-Gesztelyi, Lidia
2018-03-01
Coronal mass ejections (CMEs) are large-scale eruptions of plasma from the coronae of stars. Understanding the plasma processes involved in CME initiation has applications for space weather forecasting and laboratory plasma experiments. James et al. used extreme-ultraviolet (EUV) observations to conclude that a magnetic flux rope formed in the solar corona above NOAA Active Region 11504 before it erupted on 2012 June 14 (SOL2012-06-14). In this work, we use data from the Solar Dynamics Observatory (SDO) to model the coronal magnetic field of the active region one hour prior to eruption using a nonlinear force-free field extrapolation, and find a flux rope reaching a maximum height of 150 Mm above the photosphere. Estimates of the average twist of the strongly asymmetric extrapolated flux rope are between 1.35 and 1.88 turns, depending on the choice of axis, although the erupting structure was not observed to kink. The decay index near the apex of the axis of the extrapolated flux rope is comparable to typical critical values required for the onset of the torus instability, so we suggest that the torus instability drove the eruption.
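The decay index criterion mentioned above can be computed numerically from any field-versus-height profile; the power-law field used here is synthetic, chosen so the index is exactly 1.6, and is not the SDO extrapolation of the paper:

```python
import numpy as np

# Decay index n = -d ln(B_ext) / d ln(h): how quickly the external
# (strapping) field drops with height h above the photosphere.
h = np.linspace(20e6, 200e6, 100)          # height in meters
B = 50.0 * (h / 20e6) ** -1.6              # synthetic field, n = 1.6 exactly

n = -np.gradient(np.log(B), np.log(h))     # numerical decay index profile

# Torus instability is typically expected above a critical value of ~1.5.
n_apex = n[len(n) // 2]
torus_unstable = n_apex >= 1.5
```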
Constrained Vapor Bubble Experiment
Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi
2002-11-01
Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.
A transition-constrained discrete hidden Markov model for automatic sleep staging.
Pan, Shing-Tai; Kuo, Chih-En; Zeng, Jian-Hong; Liang, Sheng-Fu
2012-08-21
Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable. The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized including temporal and spectrum analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, the 2-fold cross validation was performed during this experiment. Overall agreement between the expert and the results presented is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurate stage was SWS (94.9%), and the least-accurately classified stage was S1 (<34%). In the majority of cases, S1 was classified as Wake (21%), S2 (33%) or REM sleep (12%), consistent with previous studies. However, the total time of S1 in the 20 all-night sleep recordings was less than 4%. The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies.
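The core of a transition-constrained discrete HMM is a Viterbi decoder whose transition matrix contains structural zeros that forbid implausible stage jumps (e.g., Wake directly to SWS). The stages, probabilities, and three-symbol observation codebook below are illustrative, not the trained model of the paper:

```python
import numpy as np

stages = ["Wake", "S1", "S2", "SWS", "REM"]
A = np.array([            # transition probabilities; 0 = forbidden transition
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.10, 0.60, 0.30, 0.00, 0.00],
    [0.00, 0.05, 0.70, 0.15, 0.10],
    [0.00, 0.00, 0.30, 0.70, 0.00],
    [0.05, 0.05, 0.10, 0.00, 0.80],
])
B = np.array([            # P(observation symbol | stage), 3 symbols
    [0.70, 0.20, 0.10],
    [0.40, 0.40, 0.20],
    [0.10, 0.60, 0.30],
    [0.05, 0.15, 0.80],
    [0.30, 0.50, 0.20],
])
pi = np.array([0.9, 0.1, 0.0, 0.0, 0.0])

def viterbi(obs):
    """Most likely stage sequence under the constrained transition matrix."""
    with np.errstate(divide="ignore"):      # log(0) -> -inf is intended here
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + logA      # all predecessor choices
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + logB[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(back):               # trace best predecessors back
        path.append(int(bp[path[-1]]))
    path.reverse()
    return [stages[s] for s in path]

decoded = viterbi([0, 0, 1, 1, 2, 2, 2, 1])
```

Because forbidden transitions have log-probability of negative infinity, the decoded sequence can never contain a stage jump that the constraint matrix disallows.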
Comfort constrains graphic workspace: test results of a 3D forearm model.
Schillings, J J; Thomassen, A J; Meulenbroek, R G
2000-01-01
Human movement performance is subject to many physical and psychological constraints. Analyses of these constraints may not only improve our understanding of the performance aspects that subjects need to keep under continuous control, but may also shed light on the possible origins of specific behavioral preferences that people display in motor tasks. The goal of the present paper is to make an empirical contribution here. In a recent simulation study, we reported effects of pen-grip and forearm-posture constraints on the spatial characteristics of the pen tip's workspace in drawing. The effects concerned changes in the location, size, and orientation of the reachable part of the writing plane, as well as variations in the computed degree of comfort in the hand and finger postures required to reach the various parts of this area. The present study is aimed at empirically evaluating to what extent these effects influence subjects' graphic behavior in a simple, free line-drawing task. The task involved the production of small back-and-forth drawing movements in various directions, to be chosen randomly under three forearm-posture and five pen-grip conditions. The observed variations in the subjects' choice of starting positions showed a high level of agreement with those of the simulated graphic-area locations, showing that biomechanically defined comfort of starting postures is indeed a determinant of the selection of starting points. Furthermore, between-condition rotations in the frequency distributions of the realized stroke directions corresponded to the simulation results, which again confirms the importance of comfort in directional preferences. It is concluded that postural rather than spatial constraints primarily affect subjects' preferences for starting positions and stroke directions in graphic motor performance. The relevance of the present modelling approach and its results for the broader field of complex motor behavior, including the manipulation of
Ding, J.; Johnson, E. A.; Martin, Y. E.
2017-12-01
The leaf is the basic production unit of plants, and water is their most critical resource: its availability controls primary productivity by affecting the leaf carbon budget. To avoid cavitation damage from the lowering of vein water potential caused by evapotranspiration, the leaf must increase its stomatal resistance to reduce the evapotranspiration rate. This comes at the cost of a reduced carbon-fixation rate, since increasing stomatal resistance also slows the carbon intake rate. Studies suggest that stomata operate at an optimal resistance that maximizes carbon gain with respect to water. Different plant species have different leaf shapes, a genetically determined trait. Furthermore, leaf size on the same plant can vary severalfold in relation to soil moisture, an indicator of water availability. According to metabolic scaling theory, increasing leaf size increases the total xylem resistance of the vein network, which may also constrain the leaf carbon budget. We present a Constrained Maximization Model of the leaf (leaf CMM) that incorporates metabolic theory into the coupling of evapotranspiration and carbon fixation to examine how leaf size, stomatal resistance, and maximum net leaf primary productivity change with petiole xylem water potential. The model connects vein network structure to leaf shape and uses the difference between the petiole xylem water potential and the critical minor-vein cavitation-forming water potential as the budget. The CMM shows that both maximum net leaf primary production and optimal leaf size increase with petiole xylem water potential, while optimal stomatal resistance decreases. A narrow leaf has an overall smaller optimal leaf size, lower maximum net leaf carbon gain, and higher optimal stomatal resistance than a broad leaf: with a small width-to-length ratio, the total xylem resistance of a narrow leaf increases faster with leaf size, causing a higher average and marginal cost of xylem water
Bland, Michael T.; McKinnon, William B; Schenk, Paul M.
2015-01-01
The Cassini spacecraft’s Composite Infrared Spectrometer (CIRS) has observed at least 5 GW of thermal emission at Enceladus’ south pole. The vast majority of this emission is localized on the four long, parallel, evenly-spaced fractures dubbed tiger stripes. However, the thermal emission from regions between the tiger stripes has not been determined. These spatially localized regions have a unique morphology consisting of short-wavelength (∼1 km) ridges and troughs with topographic amplitudes of ∼100 m, and a generally ropy appearance that has led to them being referred to as “funiscular terrain.” Previous analysis pursued the hypothesis that the funiscular terrain formed via thin-skinned folding, analogous to that occurring on a pahoehoe flow top (Barr, A.C., Preuss, L.J. [2010]. Icarus 208, 499–503). Here we use finite element modeling of lithospheric shortening to further explore this hypothesis. Our best-case simulations reproduce funiscular-like morphologies, although our simulated fold wavelengths after 10% shortening are 30% longer than those observed. Reproducing short-wavelength folds requires high effective surface temperatures (∼185 K), an ice lithosphere (or high-viscosity layer) with a low thermal conductivity (one-half to one-third that of intact ice or lower), and very high heat fluxes (perhaps as great as 400 mW m−2). These conditions are driven by the requirement that the high-viscosity layer remain extremely thin (≲200 m). Whereas the required conditions are extreme, they can be met if a layer of fine grained plume material 1–10 m thick, or a highly fractured ice layer >50 m thick insulates the surface, and the lithosphere is fractured throughout as well. The source of the necessary heat flux (a factor of two greater than previous estimates) is less obvious. We also present evidence for an unusual color/spectral character of the ropy terrain, possibly related to its unique surface texture. Our simulations demonstrate
Energy Technology Data Exchange (ETDEWEB)
Pianelo, L.
2001-09-01
Matching procedures are often used in reservoir production to improve geological models. In reservoir engineering, history matching leads to updating the petrophysical parameters of fluid flow simulators to fit the results of the calculations with observed data. In the same way, seismic parameters are inverted to allow the numerical recovery of seismic acquisitions. However, it is well known that these inverse problems are poorly constrained. The idea of this original work is to simultaneously match both the permeability and the acoustic impedance of the reservoir, for an enhancement of the resulting geological model. To do so, both parameters are linked using either observed relations and/or the classic Wyllie (porosity-impedance) and Carman-Kozeny (porosity-permeability) relationships. Hence production data are added to the seismic match, and seismic observations are used for the permeability recovery. The work consists of developing numerical prototypes of a 3-D fluid flow simulator and a 3-D seismic acquisition simulator, and then implementing the coupled inversion loop for the permeability and the acoustic impedance of the two models. We can hence test our theory on a realistic 3-D case. Comparison of the coupled matching with the two classical approaches demonstrates the efficiency of our method: we significantly reduce the number of possible solutions, and thus the number of scenarios. In addition, the extra information leads to a natural improvement of the obtained models, especially in the spatial localization of the permeability contrasts. The improvement is significant, both in the distribution of the two inverted parameters and in the speed of the operation. This work is an important step toward data integration, and leads to a better reservoir characterization. This original algorithm could also be useful in reservoir monitoring, history matching and in optimization of production. This new and original method is patented and
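The Wyllie- and Carman-Kozeny-type links between impedance, porosity, and permeability can be sketched as follows; the constants and the simple linear mixing form are illustrative assumptions, not the relations calibrated in the paper:

```python
import numpy as np

# Link acoustic impedance and permeability through porosity, so that a
# seismic update also constrains the flow model.

def porosity_from_impedance(z, z_matrix=12.0e6, z_fluid=2.2e6):
    """Wyllie-style linear mixing: z = phi*z_fluid + (1 - phi)*z_matrix."""
    return (z_matrix - z) / (z_matrix - z_fluid)

def kozeny_carman(phi, grain_d=1e-4, c=180.0):
    """Carman-Kozeny permeability (m^2) from porosity and grain diameter."""
    return grain_d ** 2 * phi ** 3 / (c * (1.0 - phi) ** 2)

# Three hypothetical impedance values along a seismic trace (kg m^-2 s^-1).
z = np.array([10.5e6, 9.8e6, 9.0e6])
phi = porosity_from_impedance(z)
k = kozeny_carman(phi)
```

In the coupled inversion, a change to impedance proposed by the seismic match propagates through porosity into the permeability field seen by the flow simulator, and vice versa, which is what narrows the solution space.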
Energy Technology Data Exchange (ETDEWEB)
Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy
2011-07-01
The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100±20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ∼15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles
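The sensitivity diagnostic quoted above (the per-level sum of averaging-kernel elements) can be computed from any averaging-kernel matrix; the Gaussian-shaped kernel below is synthetic, not a real GBMS retrieval:

```python
import numpy as np

# Sensitivity near 1 means the retrieved value at that altitude comes mostly
# from the measurement rather than the a priori profile.
alt = np.arange(20.0, 46.0, 1.0)                 # retrieval grid, km
width = 8.0                                      # km, illustrative resolution

# Synthetic averaging-kernel matrix: Gaussian rows, normalized, then scaled
# to mimic 95% measurement sensitivity at every level.
AK = np.exp(-0.5 * ((alt[:, None] - alt[None, :]) / (width / 2.355)) ** 2)
AK /= AK.sum(axis=1, keepdims=True)
AK *= 0.95

sensitivity = AK.sum(axis=1)                     # ~0.95 at all levels here
```

For a real retrieval the sensitivity varies with altitude (here it is flat by construction), and values well below 1, as in the Antarctic winter lower stratosphere, flag levels dominated by the a priori.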
Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin
2016-05-01
Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.
Luijendijk, Elco; von Hagke, Christoph; Hindle, David
2017-04-01
Due to a wealth of geological and thermochronology data, the northern foreland basin of the European Alps is an ideal natural laboratory for understanding the dynamics of foreland basins and their interaction with surface and geodynamic processes. The northern foreland basin of the Alps has been exhumed since the Miocene. The timing, rate and cause of this phase of exhumation are still enigmatic. We compile all available thermochronology and organic maturity data and use a new thermal history model, PyBasin, to quantify the rate and timing of exhumation that can explain these data. In addition, we quantify the amount of tectonic exhumation using a new kinematic model for the part of the basin that is passively moved above the detachment of the Jura Mountains. Our results show no clear difference in vitrinite reflectance, apatite fission track data or cooling rates between the thrusted and folded part of the foreland basin and its undeformed part. The undeformed plateau Molasse shows strong cooling during the Neogene, of 40 to 100 °C, which is equivalent to >1.0 km of exhumation. Calculated rates of exhumation suggest that drainage reorganization can only explain a small part of the observed exhumation and cooling. Similarly, tectonic transport over a detachment ramp cannot explain the magnitude, timing and wavelength of the observed cooling signal. We conclude that the observed cooling rates suggest large-wavelength exhumation that is probably caused by lithospheric-scale processes. In contrast to previous studies, we find that the timing of exhumation is poorly constrained. Uncertainty analysis shows that models with exhumation starting as early as 12 Ma or as late as 2 Ma can all explain the observed data.
Directory of Open Access Journals (Sweden)
Jing Liu
2017-11-01
In this study, an interval fuzzy-stochastic chance-constrained programming based energy-water nexus (IFSCP-WEN) model is developed for planning an electric power system (EPS). The IFSCP-WEN model can tackle uncertainties expressed as possibility and probability distributions, as well as interval values. Different credibility (i.e., γ) levels and probability (i.e., qi) levels are set to reflect relationships among water supply, electricity generation, system cost, and constraint-violation risk. Results reveal that different γ and qi levels can lead to changed system cost, imported electricity, electricity generation, and water supply. Results also disclose that the studied EPS would tend to transition from coal-dominated to clean-energy-dominated generation. Gas-fired units would be the main source of electricity at the end of the planning horizon, occupying [28.47, 30.34]% (where 28.47% and 30.34% represent the lower bound and the upper bound of the interval value, respectively) of the total electricity generation. Correspondingly, water allocated to gas-fired generation would reach the highest share, occupying [33.92, 34.72]% of total water supply. Surface water would be the main water source, accounting for more than [40.96, 43.44]% of the total water supply. The ratio of recycled water to total water supply would increase by about [11.37, 14.85]%. Results of the IFSCP-WEN model present its potential for sustainable EPS planning by co-optimizing energy and water resources.
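A hedged sketch of the interval-parameter idea behind results such as [28.47, 30.34]%: solve the optimization once at each bound of the interval coefficients to obtain an interval-valued objective. The two-source dispatch problem and all numbers below are hypothetical, not the study's EPS.

```python
# Sketch: interval-parameter linear program solved at both bounds.
# Hypothetical two-source dispatch (gas, coal): minimize cost subject
# to a demand constraint; unit costs are interval values [lo, hi].
from scipy.optimize import linprog

demand = 100.0             # GWh, hypothetical
cost_lo = [30.0, 25.0]     # $/MWh lower bounds (gas, coal)
cost_hi = [45.0, 35.0]     # $/MWh upper bounds

def solve(cost):
    # minimize cost @ x  s.t.  x_gas + x_coal >= demand, x >= 0
    res = linprog(c=cost, A_ub=[[-1.0, -1.0]], b_ub=[-demand],
                  bounds=[(0, None), (0, None)])
    assert res.success
    return res.fun

f_lo, f_hi = solve(cost_lo), solve(cost_hi)
print(f"system cost interval: [{f_lo:.1f}, {f_hi:.1f}]")
```

In the full IFSCP-WEN formulation the interval sub-models are coupled through shared decision variables rather than solved independently; the two-solve pattern above only illustrates where interval-valued outputs come from.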
McFadden, David G.; Vernon, Amanda; Santiago, Philip M.; Martinez-McFaline, Raul; Bhutkar, Arjun; Crowley, Denise M.; McMahon, Martin; Sadow, Peter M.; Jacks, Tyler
2014-01-01
Anaplastic thyroid carcinoma (ATC) has among the worst prognoses of any solid malignancy. The low incidence of the disease has in part precluded systematic clinical trials and tissue collection, and there has been little progress in developing effective therapies. v-raf murine sarcoma viral oncogene homolog B (BRAF) and tumor protein p53 (TP53) mutations co-occur in a high proportion of ATCs, particularly those associated with a precursor papillary thyroid carcinoma (PTC). To develop an adult-onset model of BRAF-mutant ATC, we generated a thyroid-specific CreER transgenic mouse. We used a Cre-regulated BrafV600E mouse and a conditional Trp53 allelic series to demonstrate that p53 constrains progression from PTC to ATC. Gene expression and immunohistochemical analyses of murine tumors identified the cardinal features of human ATC including loss of differentiation, local invasion, distant metastasis, and rapid lethality. We used small-animal ultrasound imaging to monitor autochthonous tumors and showed that treatment with the selective BRAF inhibitor PLX4720 improved survival but did not lead to tumor regression or suppress signaling through the MAPK pathway. The combination of PLX4720 and the MAPK/ERK kinase (MEK) inhibitor PD0325901 more completely suppressed MAPK pathway activation in mouse and human ATC cell lines and improved the structural response and survival of ATC-bearing animals. This model expands the limited repertoire of autochthonous models of clinically aggressive thyroid cancer, and these data suggest that small-molecule MAPK pathway inhibitors hold clinical promise in the treatment of advanced thyroid carcinoma. PMID:24711431
Ellis, J. H.; McBean, E. A.; Farquhar, G. J.
A Linear Programming model is presented for development of acid rain abatement strategies in eastern North America. For a system comprised of 235 large controllable point sources and 83 uncontrolled area sources, it determines the least-cost method of reducing SO2 emissions to satisfy maximum wet sulfur deposition limits at 20 sensitive receptor locations. In this paper, the purely deterministic model is extended to a probabilistic form by incorporating the effects of meteorologic variability on the long-range pollutant transport processes. These processes are represented by source-receptor-specific transfer coefficients. Experiments for quantifying the spatial variability of transfer coefficients showed their distributions to be approximately lognormal with logarithmic standard deviations consistently about unity. Three methods of incorporating second-moment random variable uncertainty into the deterministic LP framework are described: Two-Stage Linear Programming Under Uncertainty (LPUU), Chance-Constrained Programming (CCP) and Stochastic Linear Programming (SLP). A composite CCP-SLP model is developed which embodies the two-dimensional characteristics of transfer coefficient uncertainty. Two probabilistic formulations are described, involving complete colinearity and complete noncolinearity of the transfer coefficient covariance-correlation structure. Complete colinearity assumes complete dependence between transfer coefficients; complete noncolinearity assumes complete independence. The completely colinear and noncolinear formulations are considered extreme bounds in a meteorologic sense and yield abatement strategies of largely didactic value. Such strategies can be characterized as having excessive costs and undesirable deposition results in the completely colinear case, and absence of a clearly defined system risk level (other than expected-value) in the noncolinear formulation.
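A hedged sketch of how a chance constraint with a lognormal transfer coefficient reduces to a deterministic bound, for a single source-receptor pair (all numbers are hypothetical, not the 235-source system):

```python
# Sketch: deterministic equivalent of one chance constraint with a
# lognormal transfer coefficient t. Require P(t * e <= D) >= p for
# an emission rate e; numbers are illustrative only.
import math
from scipy.stats import norm

mu, sigma = math.log(0.5), 1.0   # ln-scale parameters of t (log-sd ~ unity)
D, p = 20.0, 0.95                # deposition limit and reliability level

# t * e <= D with probability p  <=>  e <= D / exp(mu + sigma * z_p)
z_p = norm.ppf(p)
e_max = D / math.exp(mu + sigma * z_p)
print(f"maximum allowable emission: {e_max:.2f}")
```

The resulting linear bound on e is what lets the chance constraint be embedded directly in an LP; the colinear/noncolinear formulations differ in how such bounds combine across receptors.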
Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang
2017-12-15
An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problems associated with various environmental and ecological constraints at a watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model, which couples an interval mathematical model, a chance-constrained programming model and a fuzzy linear programming model and can be used to deal with uncertainties expressed as intervals, probabilities and fuzzy sets. Therefore, the ICCF-LUA model can reflect the tradeoff between decision makers and land stakeholders, and the tradeoff between economic benefits and eco-environmental demands. The ICCF-LUA model has been applied to the land-use allocation of the Wujiang watershed, Guizhou Province, China. The results indicate that under highly land-suitable conditions, the optimized areas of cultivated land, forest land, grass land, construction land, water land, unused land and landfill in the Wujiang watershed will be [5015, 5648] hm², [7841, 7965] hm², [1980, 2056] hm², [914, 1423] hm², [70, 90] hm², [50, 70] hm² and [3.2, 4.3] hm², respectively; the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation under various complicated conditions involving uncertainties, risks, economic objectives and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
N. Ringa
2014-12-01
Many countries have eliminated foot and mouth disease (FMD), but outbreaks remain common in other countries. Rapid development of international trade in animals and animal products has increased the risk of disease introduction to FMD-free countries. Most mathematical models of FMD are tailored to settings that are normally disease-free, and few models have explored the impact of constrained control measures in a ‘near-endemic’ spatially distributed host population subject to frequent FMD re-introductions from nearby endemic wild populations, as characterizes many low-income, resource-limited countries. Here we construct a pair approximation model of FMD and investigate the impact of constraints on total vaccine supply for prophylactic and ring vaccination, and constraints on culling rates and cumulative culls. We incorporate natural immunity waning and vaccine waning, which are important factors for near-endemic populations. We find that, when vaccine supply is sufficiently limited, the optimal approach for minimizing cumulative infections combines rapid deployment of ring vaccination during outbreaks with a contrasting approach of careful rationing of prophylactic vaccination over the year, such that supplies last as long as possible (and with the bulk of vaccines dedicated toward prophylactic vaccination). Thus, for optimal long-term control of the disease by vaccination in near-endemic settings when vaccine supply is limited, it is best to spread out prophylactic vaccination as much as possible. Regardless of culling constraints, the optimal culling strategy is rapid identification of infected premises and their immediate contacts at the initial stages of an outbreak, and rapid culling of infected premises and farms deemed to be at high risk of infection (as opposed to culling only the infected farms). Optimal culling strategies are similar when social impact is the outcome of interest. We conclude that more FMD transmission models should
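A hedged sketch of the pair approximation idea: pairwise SIR equations with the standard (n−1)/n moment closure, integrated numerically. The degree, rates and population size are illustrative, and this sketch omits the paper's vaccination, waning and culling terms.

```python
# Minimal pairwise (pair approximation) SIR sketch with the standard
# (n-1)/n moment closure for triples; parameters are illustrative.
from scipy.integrate import solve_ivp

N, n = 10_000.0, 4.0       # population size, network degree
tau, gamma = 1.0, 0.5      # per-edge transmission rate, recovery rate
zeta = (n - 1.0) / n       # closure coefficient for triples

def rhs(t, y):
    S, I, SS, SI = y
    SSI = zeta * SS * SI / max(S, 1e-9)   # [SSI] ~ zeta [SS][SI]/[S]
    ISI = zeta * SI * SI / max(S, 1e-9)   # [ISI] ~ zeta [SI]^2 /[S]
    dS = -tau * SI
    dI = tau * SI - gamma * I
    dSS = -2.0 * tau * SSI
    dSI = tau * (SSI - ISI - SI) - gamma * SI
    return [dS, dI, dSS, dSI]

I0 = 10.0
# initial pair counts assume random mixing on a degree-n network
y0 = [N - I0, I0, n * (N - I0) ** 2 / N, n * (N - I0) * I0 / N]
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8, atol=1e-8)
S_end, I_end = sol.y[0, -1], sol.y[1, -1]
print(f"final susceptible fraction: {S_end / N:.3f}")
```

Tracking pairs such as [SI] rather than only node counts is what lets this class of model capture the local spatial correlations relevant to ring vaccination and contact culling.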
Anomalous gauge theories as constrained Hamiltonian systems
International Nuclear Information System (INIS)
Fujiwara, T.
1989-01-01
Anomalous gauge theories considered as constrained systems are investigated. The effects of chiral anomaly on the canonical structure are examined first for nonlinear σ-model and later for fermionic theory. The breakdown of the Gauss law constraints and the anomalous commutators among them are studied in a systematic way. An intrinsic mass term for gauge fields makes it possible to solve the Gauss law relations as second class constraints. Dirac brackets between the time components of gauge fields are shown to involve anomalous terms. Based upon the Ward-Takahashi identities for gauge symmetry, we investigate anomalous fermionic theory within the framework of path integral approach. (orig.)
Gray, William G; Miller, Cass T
2009-08-01
This work is the seventh in a series that introduces and employs the thermodynamically constrained averaging theory (TCAT) for modeling flow and transport in multiscale porous medium systems. This paper expands the previous analyses in the series by developing models at a scale where spatial variations within the system are not considered. Thus the time variation of variables averaged over the entire system is modeled in relation to fluxes at the boundary of the system. This implementation of TCAT makes use of conservation equations for mass, momentum, and energy as well as an entropy balance. Additionally, classical irreversible thermodynamics is assumed to hold at the microscale and is averaged to the megascale, or system scale. The fact that the local equilibrium assumption does not apply at the megascale points to the importance of obtaining closure relations that account for the large-scale manifestation of small-scale variations. Example applications built on this foundation are suggested to stimulate future work.
Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul
2010-12-13
Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and of words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect, we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data are consistent with the "effortful hypothesis", in which distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.
Zoeller, G.
2017-12-01
Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous and the problem of missing or misinterpreted events leads to additional problems. Taking these shortcomings into account, long-term recurrence intervals are usually unstable as long as no additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.
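A hedged sketch of the Brownian Passage Time recurrence model mentioned above: in scipy the BPT corresponds to the inverse Gaussian distribution. The mean interval and aperiodicity below are illustrative, not values from the study.

```python
# Sketch: Brownian Passage Time (inverse Gaussian) recurrence model.
# scipy's invgauss(m) with scale s has mean m*s and CV sqrt(m), so a
# mean interval M and aperiodicity alpha map to m = alpha**2, s = M/m.
from scipy.stats import invgauss

mean_ri = 150.0   # mean recurrence interval in years (hypothetical)
alpha = 0.7       # aperiodicity (coefficient of variation)

m = alpha ** 2
dist = invgauss(mu=m, scale=mean_ri / m)

# probability that the next event occurs within 50 years of the last one
p50 = dist.cdf(50.0)
print(f"P(T <= 50 yr) = {p50:.3f}")
```

Tying alpha to an independently estimated b-value, as the abstract describes, fixes the shape of this distribution so that sparse paleoearthquake data only need to constrain the mean.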
Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz
2017-07-01
This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha⁻¹ a⁻¹, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha⁻¹ a⁻¹ for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
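A hedged miniature of the Latin-hypercube-based screening described above, with an RMSE criterion used to retain behavioural parameter sets. The toy linear model and parameter bounds are assumptions standing in for LandscapeDNDC.

```python
# Sketch: Latin-hypercube parameter screening with an RMSE-based
# "behavioural" threshold (GLUE-like); the simulator is a toy stand-in.
import numpy as np
from scipy.stats import qmc

obs = np.array([2.0, 4.0, 6.0, 8.0])   # synthetic "observations"

def toy_model(a, b):
    x = np.array([1.0, 2.0, 3.0, 4.0])
    return a * x + b                    # stand-in for the real simulator

sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=500)
# scale the unit hypercube to parameter bounds a in [0, 4], b in [-2, 2]
params = qmc.scale(unit, l_bounds=[0.0, -2.0], u_bounds=[4.0, 2.0])

rmse = np.array([np.sqrt(np.mean((toy_model(a, b) - obs) ** 2))
                 for a, b in params])
behavioural = params[rmse < 0.5]        # retain acceptable runs only
print(f"{len(behavioural)} of {len(params)} runs are behavioural")
```

The retained (behavioural) parameter sets approximate the posterior parameter region; with a multicriteria evaluation, the RMSE filter is applied jointly across several observed variables rather than one.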
Lan, R.; Cohen, J. B.
2017-12-01
Biomass burning over the South, South East and East Asian Monsoon regions is a crucial contributor to the total local aerosol loading. Furthermore, the impact of the ITCZ and Monsoonal circulation patterns, coupled with complex topography, also has a prominent impact on the aerosol loading throughout much of the Northern Hemisphere. However, at the present time, biomass burning emissions are highly underestimated over this region, in part due to under-reported emissions in space and time, and in part due to an incomplete understanding of the physics and chemistry of the aerosols emitted in fires and formed downwind from them. Hence, a better understanding of the four-dimensional source distribution, plume rise, and in-situ processing, in particular in regions with significant quantities of urban air pollutants, is essential to advance our knowledge of this problem. This work uses a new modeling methodology based on the simultaneous constraints of measured AOD and some trace gases over the region. The results of the 4-D constrained emissions are further expanded upon using different fire plume height rise and in-situ processing assumptions. Comparisons between the results and additional ground-based and remotely sensed measurements, including AERONET, CALIOP, and NOAA and other ground networks, are included. The end results reveal a trio of insights into the nonlinear processes most important for understanding the impacts of biomass burning in this part of the world. Model-measurement comparisons are found to be consistent during the typical burning year of 2016. First, the model performs better under the new emissions representations than it does using any of the standard hotspot-based approaches currently employed by the community. Second, long range transport and mixing between the boundary layer and free troposphere contribute to the spatial-temporal variations. Third, we indicate some source regions that are new, either because of increased urbanization, or of
Constraining entropic cosmology
Energy Technology Data Exchange (ETDEWEB)
Koivisto, Tomi S. [Institute for Theoretical Physics and the Spinoza Institute, Utrecht University, Leuvenlaan 4, Postbus 80.195, 3508 TD Utrecht (Netherlands); Mota, David F. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zumalacárregui, Miguel, E-mail: t.s.koivisto@uu.nl, E-mail: d.f.mota@astro.uio.no, E-mail: miguelzuma@icc.ub.edu [Institute of Cosmos Sciences (ICC-IEEC), University of Barcelona, Marti i Franques 1, E-08028 Barcelona (Spain)
2011-02-01
It has been recently proposed that the interpretation of gravity as an emergent, entropic phenomenon might have nontrivial implications to cosmology. Here several such approaches are investigated and the underlying assumptions that must be made in order to constrain them by the BBN, SneIa, BAO and CMB data are clarified. Present models of inflation or dark energy are ruled out by the data. Constraints are derived on phenomenological parameterizations of modified Friedmann equations and some features of entropic scenarios regarding the growth of perturbations, the no-go theorem for entropic inflation and the possible violation of the Bekenstein bound for the entropy of the Universe are discussed and clarified.
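A hedged illustration of the kind of phenomenological parameterization of modified Friedmann equations referred to above (the generic entropic correction terms and coefficients below are illustrative of this class of model, not necessarily the exact parameterization constrained in the paper):

```latex
% Friedmann equation with phenomenological entropic correction terms;
% c_H = c_{\dot H} = 0 recovers the standard case
H^2 = \frac{8\pi G}{3}\,\rho + c_H\,H^2 + c_{\dot H}\,\dot{H}
```

Constraints from BBN, SneIa, BAO and CMB data then translate into allowed ranges for the dimensionless coefficients multiplying the correction terms.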
Gray, William G; Miller, Cass T
2009-05-01
This work is the fifth in a series of papers on the thermodynamically constrained averaging theory (TCAT) approach for modeling flow and transport phenomena in multiscale porous medium systems. The general TCAT framework and the mathematical foundation presented in previous works are used to develop models that describe species transport and single-fluid-phase flow through a porous medium system in varying physical regimes. Classical irreversible thermodynamics formulations for species in fluids, solids, and interfaces are developed. Two different approaches are presented, one that makes use of a momentum equation for each entity along with constitutive relations for species diffusion and dispersion, and a second approach that makes use of a momentum equation for each species in an entity. The alternative models are developed by relying upon different approaches to constrain an entropy inequality using mass, momentum, and energy conservation equations. The resultant constrained entropy inequality is simplified and used to guide the development of closed models. Specific instances of dilute and non-dilute systems are examined and compared to alternative formulation approaches.
Directory of Open Access Journals (Sweden)
Nicholas J. Sexton
2014-07-01
Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage) the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behaviour (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control ('executive') processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes.
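As a hedged, minimal illustration of how sequence stereotypy of the kind discussed above can be quantified: a generic adjacency ("counting") score, which is an assumption for illustration, not the ESPro model's own metric.

```python
# Sketch: fraction of adjacent response pairs that are +/-1 steps apart
# (modulo the response set size), i.e. "counting" intrusions in RNG data.
def counting_score(seq, n_alternatives=10):
    pairs = list(zip(seq, seq[1:]))
    count = sum(1 for a, b in pairs
                if (b - a) % n_alternatives in (1, n_alternatives - 1))
    return count / len(pairs)

stereotyped = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]   # pure counting
mixed = [3, 7, 1, 8, 2, 9, 5, 0, 6, 4]          # no adjacent steps
print(counting_score(stereotyped), counting_score(mixed))
```

For a truly uniform random sequence over ten alternatives the expected score is 0.2; values rising above that at faster generation rates would reflect the stereotypy effect described in the abstract.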
Rogozhina, I.; Hagedoorn, J. M.; Martinec, Z.; Fleming, K.; Thomas, M.
2012-04-01
In recent years, a number of studies have addressed the problem of constraining subglacial geothermal heat flow (SGHF) patterns within the context of thermodynamic ice-sheet modeling. This study reports on the potential of today's ice-sheet modeling methods and, more importantly, their limitations, with respect to reproducing the thermal states of the present-day large-scale ice sheets. So far, SGHF-related ice-sheet studies have suggested two alternative approaches for obtaining the present-day ice-sheet temperature distribution: (i) paleoclimatic simulations driven by past surface temperature reconstructions, and (ii) fixed-topography steady-state simulations driven by present-day climate conditions. Both approaches suffer from a number of shortcomings that are not easily amended. Paleoclimatic simulations account for past climate variations and produce a more realistic present-day ice temperature distribution. However, in some areas, our knowledge of past climate forcing is subject to large uncertainties that exert a significant influence on both the modeled basal temperatures and ice thicknesses, as demonstrated by our sensitivity case study applied to the Greenland Ice Sheet (GIS). In some regions of the GIS, for example southern Greenland, the poorly known climate forcing causes a significant deviation of the modeled ice thickness from the measured values (up to 200 meters) and makes it impossible to fit the measured basal temperature and gradient unless the climate history forcing is improved. Since the present-day ice thickness is a product of both climate history and SGHF forcing, uncertainties in either boundary condition integrated over the simulation time will lead to a misfit between the modeled and observed ice sheets. By contrast, the fixed-topography steady-state approach allows one to avoid the above-mentioned transient effects and to perfectly fit the observed present-day ice surface topography. However, the temperature distribution resulting from
Source Term Model for Fine Particle Resuspension from Indoor Surfaces
National Research Council Canada - National Science Library
Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W
2008-01-01
This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...
Hu, X.; Li, X.; Lu, L.
2017-12-01
Land use/cover change (LUCC) is an important subject in research on global environmental change and sustainable development, while spatial simulation of land use/cover change is one of the key topics of LUCC research and is also difficult due to the complexity of the system. The cellular automata (CA) model has an irreplaceable role in simulating land use/cover change processes due to its powerful spatial computing capability. However, the majority of current CA land use/cover models are binary-state models that cannot provide more general information about the overall spatial pattern of land use/cover change. Here, a multi-state logistic-regression-based Markov cellular automata (MLRMCA) model and a multi-state artificial-neural-network-based Markov cellular automata (MANNMCA) model were developed and used to simulate the complex land use/cover evolutionary process in an arid-region oasis city constrained by water resources and environmental policy change, the Zhangye city, during the period of 1990-2010. The results indicated that the MANNMCA model was superior to the MLRMCA model in simulation accuracy. This indicates that combining an artificial neural network with CA can more effectively capture the complex relationships between land use/cover change and a set of spatial variables. Although the MLRMCA model also has some advantages, the MANNMCA model was more appropriate for simulating complex land use/cover dynamics. The two proposed models are effective and reliable, and can reflect the spatial evolution of regional land use/cover changes. These results also have potential implications for the impact assessment of water resources, ecological restoration, and sustainable urban development in arid areas.
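A hedged, miniature sketch of a multi-state logistic-regression-based Markov CA update of the MLRMCA kind. The grid size, three land-use classes, single neighbourhood predictor and logistic coefficients below are all hypothetical.

```python
# Sketch: one update step of a multi-state logistic Markov CA. Each
# state's suitability is a logistic function of its 8-neighbourhood
# share; the next state is sampled from the normalized suitabilities.
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 20, 20, 3                  # grid size, number of land-use states
grid = rng.integers(0, K, size=(H, W))

def neighbour_fraction(grid, k):
    """Fraction of the 8-neighbourhood in state k (toroidal wrap)."""
    mask = (grid == k).astype(float)
    total = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                total += np.roll(np.roll(mask, di, 0), dj, 1)
    return total / 8.0

def step(grid, beta0, beta1):
    # logistic suitability per state k: sigmoid(b0_k + b1_k * f_k)
    scores = np.stack([1.0 / (1.0 + np.exp(-(beta0[k] + beta1[k] *
                       neighbour_fraction(grid, k)))) for k in range(K)])
    probs = scores / scores.sum(axis=0)
    # Markov CA: sample the next state cell by cell
    u = rng.random((H, W))
    cum = np.cumsum(probs, axis=0)
    return (u[None] < cum).argmax(axis=0)

new_grid = step(grid, beta0=[-1.0, -1.0, -1.0], beta1=[4.0, 4.0, 4.0])
print("states present:", np.unique(new_grid))
```

In a real MLRMCA the logistic regressions are fitted per class against observed transitions and many spatial drivers (slope, distance to roads, water constraints), not a single neighbourhood fraction; the MANNMCA variant replaces the logistic scores with a neural network's outputs.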
Cabell, Randolph H.; Gibbs, Gary P.
2000-01-01
make the controller adaptive. For example, a mathematical model of the plant could be periodically updated as the plant changes, and the feedback gains recomputed from the updated model. To be practical, this approach requires a simple plant model that can be updated quickly with reasonable computational requirements. A recent paper by the authors discussed one way to simplify a feedback controller, by reducing the number of actuators and sensors needed for good performance. The work was done on a tensioned aircraft-style panel excited on one side by TBL flow in a low speed wind tunnel. Actuation was provided by a piezoelectric (PZT) actuator mounted on the center of the panel. For sensing, the responses of four accelerometers, positioned to approximate the response of the first radiation mode of the panel, were summed and fed back through the controller. This single input-single output topology was found to have nearly the same noise reduction performance as a controller with fifteen accelerometers and three PZT patches. This paper extends the previous results by looking at how constrained layer damping (CLD) on a panel can be used to enhance the performance of the feedback controller thus providing a more robust and efficient hybrid active/passive system. The eventual goal is to use the CLD to reduce sound radiation at high frequencies, then implement a very simple, reduced order, low sample rate adaptive controller to attenuate sound radiation at low frequencies. Additionally this added damping smoothes phase transitions over the bandwidth which promotes robustness to natural frequency shifts. Experiments were conducted in a transmission loss facility on a clamped-clamped aluminum panel driven on one side by a loudspeaker. A generalized predictive control (GPC) algorithm, which is suited to online adaptation of its parameters, was used in single input-single output and multiple input-single output configurations. 
Because this was a preliminary look at the potential
Yen, Haw; White, Michael J.; Arnold, Jeffrey G.; Keitzer, S. Conor; Johnson, Mari-Vaughn V; Atwood, Jay D.; Daggupati, Prasad; Herbert, Matthew E.; Sowa, Scott P.; Ludsin, Stuart A.; Robertson, Dale M.; Srinivasan, Raghavan; Rewa, Charles A.
2016-01-01
Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relationships between land use and water, nutrient, and sediment dynamics. This manuscript evaluated the capacity of the current Soil and Water Assessment Tool (SWAT2012) to predict hydrological and water quality processes within WLEB at the finest resolution watershed boundary unit (NHDPlus) along with the current conditions and conservation scenarios. The process based SWAT model was capable of the fine-scale computation and complex routing used in this project, as indicated by measured data at five gaging stations. The level of detail required for fine-scale spatial simulation made the use of both hard and soft data necessary in model calibration, alongside other model adaptations. Limitations to the model's predictive capacity were due to a paucity of data in the region at the NHDPlus scale rather than due to SWAT functionality. Results of treatment scenarios demonstrate variable effects of structural practices and nutrient management on sediment and nutrient loss dynamics. Targeting treatment to acres with critical outstanding conservation needs provides the largest return on investment in terms of nutrient loss reduction per dollar spent, relative to treating acres with lower inherent nutrient loss vulnerabilities. Importantly, this research raises considerations about use of models to guide land management decisions at very fine spatial scales. Decision makers using these results should be aware of data limitations that hinder fine-scale model interpretation.
Nonlinear Kalman Filtering in Affine Term Structure Models
DEFF Research Database (Denmark)
Christoffersen, Peter; Dorion, Christian; Jacobs, Kris
When the relationship between security prices and state variables in dynamic term structure models is nonlinear, existing studies usually linearize this relationship because nonlinear filtering is computationally demanding. We conduct an extensive investigation of this linearization and analyze ... in fixed income pricing with nonlinear relationships between the state vector and the observations, such as the estimation of term structure models using coupon bonds and the estimation of quadratic term structure models.
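A hedged sketch of the nonlinear-filtering step at issue: one extended Kalman filter update in which the observation (a bond price) is nonlinear in the latent state. The exponential-affine toy pricing function and all parameters are assumptions for illustration, not any specific paper's model.

```python
# Sketch: one EKF step for a scalar latent short rate x with a price
# observation y = exp(-A - B*x) + noise; linearization is around the
# predicted state, which is exactly the step studies often replace
# with a coarser linearization.
import numpy as np

# linear Gaussian state transition: x' = a + b*x + w,  w ~ N(0, q)
a, b, q = 0.002, 0.95, 1e-4
# toy bond-price measurement: y = exp(-A - B*x) + v,  v ~ N(0, r)
A, B, r = 0.01, 2.0, 1e-6

def ekf_step(x, p, y):
    # predict
    x_pred = a + b * x
    p_pred = b * p * b + q
    # linearize the measurement function around the predicted state
    h = np.exp(-A - B * x_pred)      # predicted price
    H = -B * h                       # dh/dx at x_pred
    s = H * p_pred * H + r           # innovation variance
    k = p_pred * H / s               # Kalman gain
    x_new = x_pred + k * (y - h)
    p_new = (1.0 - k * H) * p_pred
    return x_new, p_new

x, p = 0.03, 1e-3
y_obs = np.exp(-A - B * 0.035)       # synthetic observation at rate 0.035
x, p = ekf_step(x, p, y_obs)
print(f"filtered short rate: {x:.4f}")
```

The unscented Kalman filter replaces the Jacobian H with sigma-point propagation through the same pricing function, at somewhat higher cost but without analytic derivatives.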
Resovsky, A.; Luyssaert, S.; Guenet, B.; Peylin, P.; Lansø, A. S.; Vuichard, N.; Messina, P.; Smith, B.; Ryder, J.; Naudts, K.; Chen, Y.; Otto, J.; McGrath, M.; Valade, A.
2017-12-01
Understanding coupling between carbon (C) and nitrogen (N) cycling in forest ecosystems is key to predicting global change. Numerous experimental studies have demonstrated the positive response of stand-level photosynthesis and net primary production (NPP) to atmospheric CO2 enrichment, while N availability has been shown to exert an important control on the timing and magnitude of such responses. However, several factors complicate efforts to precisely represent ecosystem-level C and N cycling in the current generation of land surface models (LSMs), including sparse in-situ data, uncertainty with regard to key state variables and disregard for the effects of natural and anthropogenic forest management. In this study, we incorporate empirical data from N-fertilization experiments at two long-term manipulation sites in Sweden to improve the representation of C and N interaction in the ORCHIDEE land surface model. Our version of the model represents the union of two existing ORCHIDEE branches: 1) ORCHIDEE-CN, which resolves processes related to terrestrial C and N cycling, and 2) ORCHIDEE-CAN, which integrates a multi-layer canopy structure and includes representation of forest management practices. Using this new model branch (referred to as ORCHIDEE-CN-CAN), we aim to replicate the growth patterns of managed forests both with and without N limitations. Our hope is that the results, in combination with measurements of various ecosystem parameters (such as soil N) will facilitate LSM optimization, inform future model development, and reduce structural uncertainty in global change predictions.
Symplectic quantization of constrained systems
Energy Technology Data Exchange (ETDEWEB)
Barcelos-Neto, J.; Wotzasek, C. (Inst. de Fisica, Univ. Federal do Rio de Janeiro, Caixa Postal 68528, 21945 Rio de Janeiro (BR))
1992-06-21
In this paper it is shown that the symplectic two-form, which defines the geometrical structure of a constrained theory in the Faddeev-Jackiw approach, may be brought into a non-degenerate form by an iterative implementation of the existing constraints. The resulting generalized brackets coincide with those obtained by the Dirac bracket approach if the constrained system under investigation presents only second-class constraints. For gauge theories, a symmetry-breaking term must be supplemented to bring the symplectic form into a non-singular configuration. In this case, the singular symplectic two-form directly provides the generators of the time-independent gauge transformations.
Kirschbaum, Miko U F; Rutledge, Susanna; Kuijper, Isoude A; Mudge, Paul L; Puche, Nicolas; Wall, Aaron M; Roach, Chris G; Schipper, Louis A; Campbell, David I
2015-04-15
We used two years of eddy covariance (EC) measurements collected over an intensively grazed dairy pasture to better understand the key drivers of changes in soil organic carbon stocks. Analysing grazing systems with EC measurements poses significant challenges as the respiration from grazing animals can result in large short-term CO2 fluxes. As paddocks are grazed only periodically, EC observations derive from a mosaic of paddocks with very different exchange rates. This violates the assumptions implicit in the use of EC methodology. To test whether these challenges could be overcome, and to develop a tool for wider scenario testing, we compared EC measurements with simulation runs with the detailed ecosystem model CenW 4.1. Simulations were run separately for 26 paddocks around the EC tower and coupled to a footprint analysis to estimate net fluxes at the EC tower. Overall, we obtained good agreement between modelled and measured fluxes, especially for the comparison of evapotranspiration rates, with model efficiency of 0.96 for weekly averaged values of the validation data. For net ecosystem productivity (NEP) comparisons, observations were omitted when cattle grazed the paddocks immediately around the tower. With those points omitted, model efficiencies for weekly averaged values of the validation data were 0.78, 0.67 and 0.54 for daytime, night-time and 24-hour NEP, respectively. While not included for model parameterisation, simulated gross primary production also agreed closely with values inferred from eddy covariance measurements (model efficiency of 0.84 for weekly averages). The study confirmed that CenW simulations could adequately model carbon and water exchange in grazed pastures. It highlighted the critical role of animal respiration for net CO2 fluxes, and showed that EC studies of grazed pastures need to consider the best approach of accounting for this important flux to avoid unbalanced accounting. Copyright © 2015. Published by Elsevier B.V.
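The "model efficiency" figures quoted above are presumably the Nash-Sutcliffe statistic commonly used for such model-observation comparisons (our assumption; the abstract does not name the formula). A minimal sketch:

```python
def model_efficiency(observed, modelled):
    """Nash-Sutcliffe model efficiency: 1 - SSE/SST. A value of 1 is a
    perfect fit; 0 means the model does no better than the observed mean."""
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - m) ** 2 for o, m in zip(observed, modelled))
    sst = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / sst

# Predicting the observed mean everywhere scores exactly 0:
baseline = model_efficiency([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])  # → 0.0
```

In this convention an efficiency of 0.96 for weekly evapotranspiration means the model explains 96% of the observed weekly variance.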
Ringa, N; Bauch, C T
2014-12-01
Many countries have eliminated foot and mouth disease (FMD), but outbreaks remain common in other countries. Rapid development of international trade in animals and animal products has increased the risk of disease introduction to FMD-free countries. Most mathematical models of FMD are tailored to settings that are normally disease-free, and few models have explored the impact of constrained control measures in a 'near-endemic' spatially distributed host population subject to frequent FMD re-introductions from nearby endemic wild populations, as characterizes many low-income, resource-limited countries. Here we construct a pair approximation model of FMD and investigate the impact of constraints on total vaccine supply for prophylactic and ring vaccination, and constraints on culling rates and cumulative culls. We incorporate natural immunity waning and vaccine waning, which are important factors for near-endemic populations. We find that, when vaccine supply is sufficiently limited, the optimal approach for minimizing cumulative infections combines rapid deployment of ring vaccination during outbreaks with a contrasting approach of careful rationing of prophylactic vaccination over the year, such that supplies last as long as possible (and with the bulk of vaccines dedicated toward prophylactic vaccination). Thus, for optimal long-term control of the disease by vaccination in near-endemic settings when vaccine supply is limited, it is best to spread out prophylactic vaccination as much as possible. Regardless of culling constraints, the optimal culling strategy is rapid identification of infected premises and their immediate contacts at the initial stages of an outbreak, and rapid culling of infected premises and farms deemed to be at high risk of infection (as opposed to culling only the infected farms). Optimal culling strategies are similar when social impact is the outcome of interest. We conclude that more FMD transmission models should be developed that are
Sofie Lansø, Anne; Resovsky, Alex; Guenet, Bertrand; Peylin, Philippe; Vuichard, Nicolas; Messina, Palmira; Smith, Benjamin; Ryder, James; Naudts, Kim; Chen, Yiying; Otto, Juliane; McGrath, Matthew; Valade, Aude; Luyssaert, Sebastiaan
2017-04-01
Understanding the coupling between carbon (C) and nitrogen (N) cycling in terrestrial ecosystems is key to predicting global change. While numerous experimental studies have demonstrated the positive response of stand-level photosynthesis and net primary production (NPP) to atmospheric CO2 enrichment, N availability has been shown to exert an important control on the timing and magnitude of such responses. Forest management is also a key driver of C storage in such ecosystems but interactions between forest management and the N cycle as a C storage driver are not well known. In this study, we use data from N-fertilization experiments at two long-term forest manipulation sites in Sweden to inform and improve the representation of C and N interaction in the ORCHIDEE land surface model. Our version of the model represents the union of two ORCHIDEE branches; 1) ORCHIDEE-CN, which resolves processes related to terrestrial C and N cycling, and 2) ORCHIDEE-CAN, which integrates a multi-layer canopy structure and includes representation of forest management practices. Using this new model branch, referred to as ORCHIDEE-CN-CAN, we simulate the growth patterns of managed forests both with and without N limitations. Combining our simulated results with measurements of various ecosystem parameters (such as soil N) will aid in ecosystem model development, reducing structural uncertainty and optimizing parameter settings in global change simulations.
Loeptien, Ulrike; Dietze, Heiner
2014-05-01
In order to constrain potential feedbacks in the climate system, simple pelagic biogeochemical models (BGCMs) are coupled to 3-dimensional ocean-atmosphere models. These so-called earth system models are frequently applied to calculate climate projections. All BGCMs rely on a set of rather uncertain parameters, generally among them the Michaelis-Menten (MM) constants used in the hyperbolic MM formulation (which specifies the limiting effect of light and nutrients on carbon assimilation by autotrophic phytoplankton). Model parameters are typically tuned in rather subjective trial-and-error exercises in which the parameters are changed manually until a "reasonable" similarity with observed standing stocks is achieved. In the present study, we explore with twin experiments (or synthetic "observations") the demands on observations that would allow for a more objective estimation of model parameters. These parameter retrieval experiments are based on "perfect" (synthetic) observations which we, step by step, distort to approach realistic conditions. Finally, we confirm our findings with real-world observations. In summary, we find that even modest noise (10%) inherent to observations may already hinder the parameter retrieval. In particular, the MM constants are hard to constrain. This is of concern since the MM parameters are key to the model's sensitivity to anticipated changes in external conditions.
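The hyperbolic MM formulation, and its weak sensitivity at saturating resource levels (one reason the half-saturation constants are hard to constrain from standing stocks), can be sketched as:

```python
def mm_limitation(resource, k_half):
    """Hyperbolic Michaelis-Menten limitation factor, in [0, 1):
    resource / (k_half + resource)."""
    return resource / (k_half + resource)

# At resource levels well above the half-saturation constant the factor
# saturates, so doubling k_half barely changes the model output; the
# observations then carry little information about k_half:
low_k = mm_limitation(10.0, 0.5)   # ≈ 0.952
high_k = mm_limitation(10.0, 1.0)  # ≈ 0.909
```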
Directory of Open Access Journals (Sweden)
S. B. Henry
2013-02-01
Full Text Available We present a detailed analysis of OH observations from the BEACHON (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen)-ROCS (Rocky Mountain Organic Carbon Study) 2010 field campaign at the Manitou Forest Observatory (MFO), which is a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated forest environment. A comprehensive suite of measurements was used to constrain primary production of OH via ozone photolysis, OH recycling from HO2, and OH chemical loss rates, in order to estimate the steady-state concentration of OH. In addition, the University of Washington Chemical Model (UWCM) was used to evaluate the performance of a near-explicit chemical mechanism. The diurnal cycle in OH from the steady-state calculations is in good agreement with measurements. A comparison between the photolytic production rates and the recycling rates from the HO2 + NO reaction shows that the recycling rates are ~20 times faster than the photolytic OH production rates from ozone. Thus, we find that direct measurement of the recycling rates and the OH loss rates can provide accurate predictions of OH concentrations. More importantly, we also conclude that a conventional OH recycling pathway (HO2 + NO) can explain the observed OH levels in this non-isoprene environment. This is in contrast to observations in isoprene-dominated regions, where investigators have observed significant underestimation of OH and have speculated that unknown sources of OH are responsible. The highly constrained UWCM calculation under-predicts observed HO2 by as much as a factor of 8. As HO2 maintains oxidation capacity by recycling to OH, UWCM underestimates observed OH by as much as a factor of 4. When the UWCM calculation is constrained by measured HO2, model-calculated OH is in better agreement with the observed OH levels. Conversely, constraining the model to observed OH only slightly reduces the model-measurement HO2 discrepancy, implying unknown HO2
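The steady-state estimate described above amounts to balancing OH production (primary photolytic source plus HO2 + NO recycling) against first-order OH loss. A minimal sketch; the rate constant and concentration values below are hypothetical placeholders, not campaign data:

```python
def oh_steady_state(p_primary, k_ho2_no, ho2, no, oh_reactivity):
    """Steady-state OH concentration: total production (primary source
    plus HO2 + NO recycling) divided by the total first-order OH loss
    rate, i.e. the OH reactivity in s^-1."""
    production = p_primary + k_ho2_no * ho2 * no  # molecules cm^-3 s^-1
    return production / oh_reactivity

# Illustrative (hypothetical) inputs: primary source 1e6 cm^-3 s^-1,
# k(HO2+NO) ~ 8e-12 cm^3 s^-1, [HO2] = 1e8, [NO] = 1e9, reactivity 10 s^-1:
oh = oh_steady_state(1e6, 8e-12, 1e8, 1e9, 10.0)  # ≈ 1.8e5 cm^-3
```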
Modelled long term trends of surface ozone over South Africa
CSIR Research Space (South Africa)
Naidoo, M
2011-10-01
Full Text Available timescale seeks to provide a spatially comprehensive view of trends while also creating a baseline for comparisons with future projections of air quality through the forcing of air quality models with modelled predicted long term meteorology. Previous...
Sharp spatially constrained inversion
DEFF Research Database (Denmark)
Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....
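The minimum gradient support regularizer used above penalizes the existence of model gradients rather than their magnitude, which is what produces blocky rather than smeared layer boundaries. A one-dimensional sketch (the discretization is ours):

```python
def mgs_stabilizer(model, beta):
    """Minimum gradient support functional for a 1D model: each term
    g^2/(g^2 + beta^2) approaches 1 wherever a gradient exists at all,
    so the penalty counts interfaces instead of smoothing them."""
    total = 0.0
    for a, b in zip(model[:-1], model[1:]):
        g2 = (b - a) ** 2
        total += g2 / (g2 + beta ** 2)
    return total

# A blocky two-layer model has a single interface, so for small beta the
# stabilizer is close to 1 regardless of the size of the jump:
penalty = mgs_stabilizer([0.0, 0.0, 1.0, 1.0], beta=1e-9)  # ≈ 1.0
```

By contrast, an Occam-type (smoothness) penalty grows with the square of the jump, which is why it smears sharp boundaries.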
A viable D-term hybrid inflation model
Kadota, Kenji; Kobayashi, Tatsuo; Sumita, Keigo
2017-11-01
We propose a new model of D-term hybrid inflation in the framework of supergravity. While our model, analogously to conventional D-term inflation, introduces the inflaton and a pair of scalar fields charged under a U(1) gauge symmetry, we adopt a logarithmic dependence on the inflaton field for the Kähler potential and an exponential one for the superpotential. This results in a characteristic one-loop scalar potential consisting of linear and exponential terms, which realizes small-field inflation dominated by the Fayet-Iliopoulos term. With reasonable values for the coupling coefficients and, in particular, with the U(1) gauge coupling constant comparable to that of the Standard Model, our D-term inflation model can solve the notorious problems of conventional D-term inflation, namely the CMB constraints on the spectral index and the generation of cosmic strings.
Tree Memory Networks for Modelling Long-term Temporal Dependencies
Fernando, Tharindu; Denman, Simon; McFadyen, Aaron; Sridharan, Sridha; Fookes, Clinton
2017-01-01
In the domain of sequence modelling, Recurrent Neural Networks (RNNs) have achieved impressive results in a variety of application areas, including visual question answering, part-of-speech tagging and machine translation. However, this success in modelling short-term dependencies has not transitioned to application areas such as trajectory prediction, which require capturing both short-term and long-term relationships. In this paper, we propose a Tree Memory Network...
Ritzinger, B. T.; Glen, J. M. G.; Athens, N. D.; Denton, K. M.; Bouligand, C.
2015-12-01
Regionally continuous Cenozoic rocks in the Basin and Range that predate the onset of major mid-Miocene extension provide valuable insight into the sequence of faulting and magnitude of extension. An exceptional example of this is the Caetano caldera, located in north-central Nevada, which formed during the eruption of the Caetano Tuff at the Eocene-Oligocene transition. The caldera and associated deposits, as well as conformable caldera-filling sedimentary and volcanic units, allow for the reconstruction of post-Oligocene extensional faulting. Extensive mapping and geochronologic, geochemical and paleomagnetic analyses have been conducted over the last decade to help further constrain the eruptive and extensional history of the Caetano caldera and associated deposits. Gravity and magnetic data, which highlight contrasts in density and magnetic properties (susceptibility and remanence), respectively, are useful for mapping and modeling structural and lithic discontinuities. By combining existing gravity and aeromagnetic data with newly collected high-resolution gravity data, we are performing detailed potential field modeling to better characterize the subsurface within and surrounding the caldera. Modeling is constrained by published geologic maps and cross sections and by new rock properties for these units determined from oriented drill core and hand samples collected from outcrops that span all of the major rock units in the study area. These models will enable us to better map the margins of the caldera and more accurately determine subsurface lithic boundaries and complex fault geometries, as well as aid in refining estimates of the magnitude of extension across the caldera. This work highlights the value of combining geologic and geophysical data to build an integrated structural model to help characterize the subsurface and better constrain the extensional tectonic history of this part of the Great Basin.
Auterives, C.; Lange, H.; Leblois, E.; Beldring, S.; Gottschalk, L.
2009-04-01
In high-latitude areas, landscapes with flat or moderate relief areas usually contain lakes and mires. The identification of flowpaths in such areas is a difficult issue. The increasing availability of high resolution topography from airborne Lidar measurements offers new opportunities for automatic or semi-automatic channel extraction from DEMs in small watersheds, substantially outperforming the hydrographic network in conventional digital maps. This work describes an approach to automatically extract the spatial structure of a drainage network and thereby produce a partition of the catchment into drainage sub-basin polygons from Lidar data. We demonstrate the procedure for the test case of the 4.8 km2 Langtjern watershed in southeast Norway. It represents a typical boreal low-productive landscape with a mosaic of forests, mires and lakes. Here, areal cover and local slope are intimately linked: lakes and ponds dominate in the flattest areas, low slope areas are occupied by peatbogs, and the steepest parts of the catchment are covered by forest. The results of the extraction, the hydrographic network, and the identification of bogs and lakes, are input to a distributed hydrological model (DEW model system, Beldring, 2008), constraining the model structure to a large extent. An explicit description of the drainage network and the physical landscape properties in the watershed is warranted, providing the capability to predict hydrological state variables and fluxes from atmospheric data. As a result, the model accurately represents the heterogeneities in space and time of the various hydrological processes. Reference: Beldring, S. 2008. Distributed element water balance model system. Norwegian Water Resources and Energy Directorate, Report no. 4/2008, 40 pp.
A long-term/short-term model for daily electricity prices with dynamic volatility
International Nuclear Information System (INIS)
In this paper we introduce a new stochastic long-term/short-term model for short-term electricity prices and apply it to four major European indices, namely the German, Dutch, UK and Nordic ones. We give evidence that all time series contain certain periodic (mostly annual) patterns and show how to use the wavelet transform, a tool of multiresolution analysis, for filtering purposes. The wavelet transform is also applied to separate the long-term trend from the short-term oscillation in the seasonally adjusted log-prices. In all time series we find evidence for dynamic volatility, which we incorporate by using a bivariate GARCH model with constant correlation. Eventually we fit various models from the existing literature to the data and come to the conclusion that our approach performs best. For the error distribution, the Normal Inverse Gaussian distribution shows the best fit. (author)
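The additive decomposition price = long-term trend + short-term oscillation can be illustrated as follows. The paper uses a wavelet multiresolution analysis for this split; the centred moving average below is only a crude stand-in with the same additive structure:

```python
def split_long_short(prices, window):
    """Additive decomposition of a seasonally adjusted log-price series:
    price = long-term trend + short-term oscillation. A centred moving
    average stands in for the wavelet multiresolution analysis; the
    window shrinks at the series edges."""
    half = window // 2
    trend = []
    for t in range(len(prices)):
        lo, hi = max(0, t - half), min(len(prices), t + half + 1)
        trend.append(sum(prices[lo:hi]) / (hi - lo))
    short = [p - tr for p, tr in zip(prices, trend)]
    return trend, short
```

In the paper's setup, a GARCH model would then be fitted to the short-term component while the trend is modelled separately.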
Term Structure Models with Parallel and Proportional Shifts
DEFF Research Database (Denmark)
Armerin, Frederik; Björk, Tomas; Astrup Jensen, Bjarne
We investigate the possibility of an arbitrage-free model for the term structure of interest rates where the yield curve only changes through a parallel shift. We consider HJM-type forward rate models driven by a multidimensional Wiener process as well as by a general marked point process. Within...... this general framework we show that there does indeed exist a large variety of nontrivial parallel shift term structure models, and we also describe these in detail. We also show that there exists no nontrivial flat term structure model. The same analysis is repeated for the similar case, where the yield curve...... only changes through proportional shifts. Key words: bond market, term structure of interest rates, flat term structures....
International Nuclear Information System (INIS)
Wels, Michael; Hornegger, Joachim; Zheng Yefeng; Comaniciu, Dorin; Huber, Martin
2011-01-01
We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average
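The unsupervised EM core of such a tissue-classification pipeline can be sketched as a plain two-Gaussian intensity mixture. This simplified sketch is ours; the full method layers the MRF prior, the INU field estimate, and the PBT-based unary potentials on top of this basic loop:

```python
import math

def em_two_class(intensities, iters=50):
    """Minimal EM for a two-component Gaussian intensity mixture.
    Returns per-class means, variances, and mixture weights."""
    mu = [min(intensities), max(intensities)]  # spread-out initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: per-voxel class responsibilities
        resp = []
        for x in intensities:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1] + 1e-300  # guard against underflow
            resp.append([p[0] / s, p[1] / s])
        # M-step: update class parameters from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, intensities)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, intensities)) / nk + 1e-6
            pi[k] = nk / len(intensities)
    return mu, var, pi
```

In the full method the responsibilities are additionally regularized by the pairwise and unary MRF clique potentials, and the INU field is re-estimated within the same EM loop.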
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class
A phenomenological memristor model for short-term/long-term memory
Energy Technology Data Exchange (ETDEWEB)
Chen, Ling, E-mail: 2006chenling2006@163.com [College of Computer Science, Chongqing University, Chongqing 400044 (China); Li, Chuandong, E-mail: licd@cqu.edu.cn [College of Computer Science, Chongqing University, Chongqing 400044 (China); Huang, Tingwen [Texas A and M University at Qatar, Doha, B.O. Box 23874 (Qatar); Ahmad, Hafiz Gulfam [College of Computer Science, Chongqing University, Chongqing 400044 (China); Chen, Yiran [Electrical and Computer Engineering, University of Pittsburgh, PA 15261 (United States)
2014-08-14
Memristor is considered to be a natural electrical synapse because of its distinct memory property and nanoscale size. In recent years, more and more similar behaviors have been observed between memristors and biological synapses, e.g., short-term memory (STM) and long-term memory (LTM). Traditional mathematical models are unable to capture these newly emerging behaviors. In this article, an updated phenomenological model based on the Hewlett-Packard (HP) Labs model is proposed to capture such behaviors. The new dynamical memristor model, with an improved ion diffusion term, can emulate synapse behavior with a forgetting effect and exhibit the transformation between STM and LTM. Further, this model can be used to build new types of neural networks with forgetting ability, like biological systems; this is verified by our experiment with a Hopfield neural network. - Highlights: • We take the Fick diffusion and the Soret diffusion into account in the ion drift theory. • We develop a new model based on the old HP model. • The new model can describe the forgetting effect and the spike-rate-dependent property of the memristor. • The new model can solve the boundary effect of all window functions discussed in [13]. • A new Hopfield neural network with forgetting ability is built with the new memristor model.
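The qualitative behavior of such a drift-plus-diffusion state equation can be sketched as follows. The specific form dw/dt = k·i(t) - w/τ and the parameter values are illustrative stand-ins, not the authors' fitted model:

```python
def simulate_state(current, dt, k=1.0, tau=5.0, w0=0.0):
    """Euler integration of dw/dt = k*i(t) - w/tau. The drift term k*i(t)
    writes the memory state; the diffusion-like decay term -w/tau erases
    it, giving an STM-style forgetting effect. k and tau are illustrative."""
    w = w0
    trace = []
    for i in current:
        w += dt * (k * i - w / tau)
        w = min(max(w, 0.0), 1.0)  # keep the normalized state in [0, 1]
        trace.append(w)
    return trace

# Stimulation drives the state up (toward LTM); rest lets it decay (STM):
rising = simulate_state([1.0] * 10, dt=0.1)
decaying = simulate_state([0.0] * 10, dt=0.1, w0=0.5)
```

Repeated stimulation pushes the state high enough that decay between pulses no longer erases it, which is the STM-to-LTM transformation the model aims to emulate.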
Energy Technology Data Exchange (ETDEWEB)
Verde, Licia; Jimenez, Raul [Institute of Cosmos Sciences, University of Barcelona, IEEC-UB, Martí Franquès, 1, E08028 Barcelona (Spain); Bellini, Emilio [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Pigozzo, Cassio [Instituto de Física, Universidade Federal da Bahia, Salvador, BA (Brazil); Heavens, Alan F., E-mail: liciaverde@icc.ub.edu, E-mail: emilio.bellini@physics.ox.ac.uk, E-mail: cpigozzo@ufba.br, E-mail: a.heavens@imperial.ac.uk, E-mail: raul.jimenez@icc.ub.edu [Imperial Centre for Inference and Cosmology (ICIC), Imperial College, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)
2017-04-01
We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter Ω_MR < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < N_eff < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics, in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is r_s = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to r_s = 150 ± 5 Mpc.
Directory of Open Access Journals (Sweden)
Wuttinan Nunkaew
2013-12-01
Full Text Available At present, methods for solving the manufacturing cell formation problem with the assignment of duplicated machines involve many steps. Firstly, part families and machine cells are determined. Then, the incidence matrix of the cell formation is reconsidered for machine duplications, to reduce the interaction between cells subject to a cost restriction. These procedures are laborious and complicated. Moreover, machine setup cost should be considered simultaneously in the decision making. In this paper, an effective lexicographic fuzzy multi-objective optimization model for manufacturing cell formation with a setup cost constraint on machine duplication is presented. Based on the perfect grouping concept, two crucial performance measures, called exceptional elements and void elements, are utilized in the proposed model. Lexicographic fuzzy goal programming is applied to solve this multi-objective model with the setup cost constraint, so the decision maker can easily solve the manufacturing cell formation problem and control the setup cost of machine duplication simultaneously.
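The two grouping measures named above can be computed directly from a 0/1 machine-by-part incidence matrix once cells and families are assigned; a small sketch (function and argument names are ours):

```python
def grouping_measures(incidence, machine_cell, part_family):
    """Count exceptional elements (operations falling outside their cell)
    and void elements (idle machine-part slots inside a cell) from a 0/1
    machine-by-part incidence matrix and cell/family assignments."""
    exceptional = voids = 0
    for m, row in enumerate(incidence):
        for p, op in enumerate(row):
            same_cell = machine_cell[m] == part_family[p]
            if op == 1 and not same_cell:
                exceptional += 1   # inter-cell movement
            elif op == 0 and same_cell:
                voids += 1         # unused capacity within the cell
    return exceptional, voids

# Two machines, two parts, one off-diagonal operation: machine 0 also
# processes part 1, which belongs to the other cell.
counts = grouping_measures([[1, 1], [0, 1]], [0, 1], [0, 1])  # → (1, 0)
```

Perfect grouping means both counts are zero; duplicating a machine into the other cell trades setup cost against the exceptional-element count, which is the trade-off the lexicographic model formalizes.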
Library Support for Resource Constrained Accelerators
DEFF Research Database (Denmark)
Brock-Nannestad, Laust; Karlsson, Sven
2014-01-01
Accelerators, and other resource constrained systems, are increasingly being used in computer systems. Accelerators provide power-efficient performance and often provide a shared memory model. However, it is a challenge to map feature-rich APIs, such as OpenMP, to resource constrained systems
Bayesian evaluation of inequality constrained hypotheses
Gu, X.; Mulder, J.; Deković, M.; Hoijtink, H.
2014-01-01
Bayesian evaluation of inequality constrained hypotheses enables researchers to investigate their expectations with respect to the structure among model parameters. This article proposes an approximate Bayes procedure that can be used for the selection of the best of a set of inequality constrained hypotheses.
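A common approximate Bayes procedure for inequality constrained hypotheses is the encompassing-prior approach: the Bayes factor of a constrained hypothesis against the unconstrained model is the ratio of posterior to prior mass inside the constrained region. Whether this matches the article's exact procedure is our assumption; a Monte Carlo sketch:

```python
def bayes_factor_inequality(prior_draws, posterior_draws, constraint):
    """Encompassing-prior estimate of the Bayes factor of an inequality
    constrained hypothesis against the unconstrained model: the ratio of
    posterior ("fit") to prior ("complexity") Monte Carlo mass inside
    the constrained region."""
    fit = sum(1 for d in posterior_draws if constraint(d)) / len(posterior_draws)
    complexity = sum(1 for d in prior_draws if constraint(d)) / len(prior_draws)
    return fit / complexity

# Toy draws for the hypothesis mu1 > mu2: half the prior draws satisfy
# it and all posterior draws do, so the Bayes factor is 2.
bf = bayes_factor_inequality([(0.0, 1.0), (1.0, 0.0)],
                             [(1.0, 0.0), (2.0, 0.0)],
                             lambda d: d[0] > d[1])  # → 2.0
```

In practice both sets of draws come from MCMC samplers for the unconstrained model and its prior.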
Schull, M. A.
2015-03-11
Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.
Toward Standardizing a Lexicon of Infectious Disease Modeling Terms.
Milwid, Rachael; Steriu, Andreea; Arino, Julien; Heffernan, Jane; Hyder, Ayaz; Schanzer, Dena; Gardner, Emma; Haworth-Brockman, Margaret; Isfeld-Kiely, Harpa; Langley, Joanne M; Moghadas, Seyed M
2016-01-01
Disease modeling is increasingly being used to evaluate the effect of health intervention strategies, particularly for infectious diseases. However, the utility and application of such models are hampered by the inconsistent use of infectious disease modeling terms between and within disciplines. We sought to standardize the lexicon of infectious disease modeling terms and develop a glossary of terms commonly used in describing models' assumptions, parameters, variables, and outcomes. We combined a comprehensive literature review of relevant terms with an online forum discussion in a virtual community of practice, mod4PH (Modeling for Public Health). Using a convergent discussion process and consensus amongst the members of mod4PH, a glossary of terms was developed as an online resource. We anticipate that the glossary will improve inter- and intradisciplinary communication and will result in a greater uptake and understanding of disease modeling outcomes in heath policy decision-making. We highlight the role of the mod4PH community of practice and the methodologies used in this endeavor to link theory, policy, and practice in the public health domain.
Model for expressing leaf photosynthesis in terms of weather variables
African Journals Online (AJOL)
A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...
Simple model for crop photosynthesis in terms of weather variables ...
African Journals Online (AJOL)
A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.
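The multiplicative efficiency-parameter structure described in these two abstracts can be sketched as a product of fractions in [0, 1] applied to a potential rate. The functional forms and constants below are illustrative assumptions, not the parameterizations from the papers.

```python
# Sketch: each weather element scales the potential photosynthetic rate
# by an efficiency fraction in [0, 1]. Forms and constants are illustrative.

def temp_efficiency(t_c, t_opt=25.0, width=12.0):
    """Bell-shaped temperature response, 1.0 at the optimum."""
    return max(0.0, 1.0 - ((t_c - t_opt) / width) ** 2)

def light_efficiency(par, half_sat=400.0):
    """Rectangular-hyperbola light response (PAR in umol m^-2 s^-1)."""
    return par / (par + half_sat)

def water_efficiency(rel_soil_water):
    """Linear soil-water limitation, clamped to [0, 1]."""
    return min(1.0, max(0.0, rel_soil_water))

P_MAX = 30.0  # potential rate, umol CO2 m^-2 s^-1 (illustrative)

def photosynthesis(t_c, par, rel_soil_water):
    return (P_MAX * temp_efficiency(t_c) * light_efficiency(par)
            * water_efficiency(rel_soil_water))
```

The product structure makes each limitation separable: any single efficiency of zero shuts photosynthesis down, and all must be near 1.0 for the rate to approach its potential.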
IDENTIFICATION OF SYSTEMS IN TERMS OF THE WIENER MODEL
The report briefly presents a nonlinear model originally proposed by the late Norbert Wiener for the characterization of general systems. Three procedures are then offered for the identification of any given system in terms of the Wiener model. Finally, this report presents the results of a digital
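One classical identification idea for a Wiener-type system (a linear filter followed by a static nonlinearity) can be sketched as follows: with Gaussian input, the input-output cross-correlation recovers the linear kernel up to a scale factor (Bussgang's theorem). The filter taps and nonlinearity below are illustrative assumptions, not the procedures from the report.

```python
import random

# Example Wiener system: FIR filter H followed by a static nonlinearity.
random.seed(1)
H = [1.0, 0.6, 0.3]  # "unknown" linear kernel to be identified

def wiener_system(u):
    y = []
    for t in range(len(u)):
        lin = sum(H[k] * u[t - k] for k in range(len(H)) if t - k >= 0)
        y.append(lin ** 3 + lin)  # static nonlinearity
    return y

u = [random.gauss(0, 1) for _ in range(20000)]  # Gaussian excitation
y = wiener_system(u)

def xcorr(lag):
    """Input-output cross-correlation estimate at the given lag."""
    return sum(y[t] * u[t - lag] for t in range(lag, len(u))) / (len(u) - lag)

est = [xcorr(k) for k in range(3)]
scale = est[0] / H[0]
kernel_shape = [e / scale for e in est]  # approximates H up to noise
```

The recovered `kernel_shape` matches the true taps to within sampling noise, despite the cubic distortion between the filter and the output.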
A Team Building Model for Software Engineering Courses Term Projects
Sahin, Yasar Guneri
2011-01-01
This paper proposes a new model for team building, which enables teachers to build coherent teams rapidly and fairly for the term projects of software engineering courses. Moreover, the model can also be used to build teams for any type of project, if the team member candidates are students, or if they are inexperienced on a certain subject. The…
A Polynomial Term Structure Model with Macroeconomic Variables
Directory of Open Access Journals (Sweden)
José Valentim Vicente
2007-06-01
Full Text Available Recently, a myriad of factor models including macroeconomic variables have been proposed to analyze the yield curve. We present an alternative factor model where term structure movements are captured by Legendre polynomials mimicking the statistical factor movements identified by Litterman and Scheinkman (1991). We estimate the model with Brazilian Foreign Exchange Coupon data, adopting a Kalman filter, under two versions: the first uses only latent factors and the second includes macroeconomic variables. We study its ability to predict out-of-sample term structure movements, when compared to a random walk. We also discuss results on the impulse response function of macroeconomic variables.
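The polynomial factor structure can be sketched directly: the yield at maturity τ is a linear combination of the first few Legendre polynomials evaluated on τ rescaled to [-1, 1], with the loadings playing the level/slope/curvature roles. The β values below are illustrative, not estimates from the Brazilian FX coupon data.

```python
# Legendre-polynomial yield curve sketch; beta are illustrative loadings.

def legendre(n, x):
    """P_n(x) by the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def yield_curve(tau, tau_max, beta):
    x = 2.0 * tau / tau_max - 1.0  # map maturity [0, tau_max] -> [-1, 1]
    return sum(b * legendre(i, x) for i, b in enumerate(beta))

beta = [0.12, 0.02, -0.01]  # level, slope, curvature (hypothetical)
curve = [yield_curve(t, 10.0, beta) for t in (1, 2, 5, 10)]
```

Because P_0, P_1, P_2 are constant, linear and quadratic in the rescaled maturity, shocks to each loading shift, tilt or bend the curve, mirroring the statistical factors of Litterman and Scheinkman.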
The Starobinsky model from superconformal D-term inflation
Energy Technology Data Exchange (ETDEWEB)
Buchmuller, W.; Domcke, V.; Kamada, K.
2013-06-15
We point out that in the large field regime, the recently proposed superconformal D-term inflation model coincides with the Starobinsky model. In this regime, the inflaton field dominates over the Planck mass in the gravitational kinetic term in the Jordan frame. Slow-roll inflation is realized in the large field regime for sufficiently large gauge couplings. The Starobinsky model generally emerges as an effective description of slow-roll inflation if a Jordan frame exists where, for large inflaton field values, the action is scale invariant and the ratio λ of the inflaton self-coupling and the nonminimal coupling to gravity is tiny. The interpretation of this effective coupling is different in different models. In superconformal D-term inflation it is determined by the scale of grand unification, λ ∝ (Λ_GUT/M_P)^4.
Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing
2014-01-01
Electron cryomicroscopy is becoming a major experimental technique for solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plate-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the detected secondary structures from the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates the dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.
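The core matching idea can be illustrated with a much-simplified dynamic program: assign helix "sticks" detected in the density map to helix segments on the sequence, preserving sequence order and minimizing total length mismatch. This is only an order-preserving alignment sketch; the actual DP-TOSS algorithm works on a topology graph with a constrained K-shortest path search and handles many more constraints.

```python
# Simplified order-preserving stick-to-sequence matching DP (illustrative).

def match_helices(stick_lengths, seq_lengths):
    n, m = len(stick_lengths), len(seq_lengths)
    INF = float("inf")
    # dp[i][j]: min cost of matching the first i sticks to the first j
    # sequence helices (sequence helices may be left undetected)
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == INF:
                continue
            if i < n and j < m:  # match stick i with sequence helix j
                cost = abs(stick_lengths[i] - seq_lengths[j])
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], dp[i][j] + cost)
            if j < m:            # leave sequence helix j undetected
                dp[i][j + 1] = min(dp[i][j + 1], dp[i][j])
    return dp[n][m]

# hypothetical stick and sequence-helix lengths (in residues or angstroms)
best = match_helices([10.2, 22.5, 7.9], [8.0, 10.0, 23.0, 8.1])
```

Even this stripped-down version shows why dynamic programming helps: the order constraint collapses an exponential assignment space into a quadratic table.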
Power-constrained supercomputing
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal into a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and a number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear programming (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates the limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a runtime system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound.
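The schedule optimization described above can be sketched in miniature by brute force: choose one operating point per computation phase so that average power stays under the bound while total runtime is minimized. The (watts, seconds) operating points below are made up for illustration; the dissertation solves the real problem with LP/ILP over DVFS states and OpenMP thread counts.

```python
from itertools import product

# Per phase: (power_W, time_s) operating points; faster points draw more power.
# Values are hypothetical.
PHASES = [
    [(60, 10.0), (80, 7.5), (100, 6.0)],
    [(60, 8.0), (80, 6.5), (100, 5.5)],
]

def best_schedule(power_cap_W):
    """Minimize total runtime subject to an average-power bound."""
    best = None
    for combo in product(*PHASES):
        total_t = sum(t for _, t in combo)
        avg_p = sum(p * t for p, t in combo) / total_t
        if avg_p <= power_cap_W and (best is None or total_t < best[0]):
            best = (total_t, combo)
    return best

time_80W, schedule_80W = best_schedule(80.0)
```

Enumeration is exponential in the number of phases, which is exactly why the dissertation's LP relaxation matters: it yields the optimization target without searching the combinatorial space.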
Constraining dark sectors with monojets and dijets
International Nuclear Information System (INIS)
Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai; McCullough, Matthew
2015-03-01
We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter, but whenever, precisely due to its sizeable interactions with the visible sector, the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.
International Nuclear Information System (INIS)
Han, Jing-Cheng; Huang, Guohe; Huang, Yuefei; Zhang, Hua; Li, Zhong; Chen, Qiuwen
2015-01-01
Lack of hydrologic process representation at the short time-scale would lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by overland flow generation, which is closely related to the rainfall characteristics at a sub-time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of the China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. Rainfall thresholds proved crucial for reflecting the scaling effect of rainfall intensity; the optimal level of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although only slight differences in overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and is expected to have potential for wide application. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A
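The chance-constrained idea can be sketched minimally: treat sub-daily rainfall intensity as a random variable (here exponential, an assumption) and generate infiltration-excess overland flow only when intensity exceeds the infiltration capacity with probability above the chosen significance level. The threshold and capacity values below mirror the paper's 10 mm / 0.05 settings but are otherwise illustrative, not the calibrated SLURP-TGR parameterization.

```python
import math

RAIN_THRESHOLD_MM = 10.0  # daily totals below this: no overland flow check
ALPHA = 0.05              # significance level for the chance constraint

def exceedance_prob(daily_mm, hours, capacity_mm_h):
    """P(intensity > capacity) for exponential intensity, mean daily/hours."""
    mean_intensity = daily_mm / hours
    return math.exp(-capacity_mm_h / mean_intensity)

def generates_overland_flow(daily_mm, capacity_mm_h, hours=24.0):
    """Chance-constrained trigger for infiltration-excess overland flow."""
    if daily_mm < RAIN_THRESHOLD_MM:
        return False
    return exceedance_prob(daily_mm, hours, capacity_mm_h) > ALPHA
```

The point of the construction is that two days with equal totals can behave differently: a concentrated burst has a high exceedance probability at a sub-daily step even when the daily mean intensity stays below the infiltration capacity.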
Models with oscillator terms in noncommutative quantum field theory
International Nuclear Information System (INIS)
Kronberger, E.
2010-01-01
The main focus of this Ph.D. thesis is on noncommutative models involving oscillator terms in the action. The first one historically is the successful Grosse-Wulkenhaar (G.W.) model, which has already been proven to be renormalizable to all orders of perturbation theory. Remarkably, it is furthermore capable of solving the Landau ghost problem. In a first step, we have generalized the G.W. model to gauge theories in a very straightforward way, where the action is BRS invariant and exhibits the good damping properties of the scalar theory by using the same propagator, the so-called Mehler kernel. To be able to handle some more involved one-loop graphs, we have programmed a powerful Mathematica package, which is capable of analytically computing Feynman graphs with many terms. The result of those investigations is that new terms originally not present in the action arise, which led us to the conclusion that we should rather start from a theory where those terms are already built in. Fortunately, there is an action containing this complete set of terms. It can be obtained by coupling a gauge field to the scalar field of the G.W. model, integrating out the latter, and thus 'inducing' a gauge theory. Hence the model is called Induced Gauge Theory. Despite the advantage that it is by construction completely gauge invariant, it also contains some unphysical terms linear in the gauge field. Advantageously, we could get rid of these terms using a special gauge dedicated to this purpose. Within this gauge we could again establish the Mehler kernel as the gauge field propagator. Furthermore, we were able to calculate the ghost propagator, which turned out to be very involved. Thus we were able to start with the first few loop computations, which show the expected behavior. The next step is to show renormalizability of the model, and some hints in this direction are also given. (author)
Evolutionary constrained optimization
Deb, Kalyanmoy
2015-01-01
This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining lots of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...
Two empirical models for short-term forecast of Kp
Luo, B.; Liu, S.; Gong, J.
2017-03-01
In this paper, two empirical models are developed for short-term forecast of the Kp index, taking advantage of solar wind-magnetosphere coupling functions proposed by the research community. Both models are based on the data for years 1995 to 2004. Model 1 mainly uses solar wind parameters as the inputs, while model 2 also utilizes the previously measured Kp value. Finally, model 1 predicts Kp with a linear correlation coefficient (r) of 0.91, a prediction efficiency (PE) of 0.81, and a root-mean-square (RMS) error of 0.59. Model 2 gives an r of 0.92, a PE of 0.84, and an RMS error of 0.57. The two models are validated through an out-of-sample test for years 2005 to 2013, which also yields high forecast accuracy. Unlike other models reported in the literature, we take the response time of the magnetosphere to the external solar wind at the Earth into account explicitly in the modeling. Statistically, the time delay in the models turns out to be about 30 min. By introducing this term, both the accuracy and lead time of the model forecast are improved. Through verification and validation, the models can be used in operational geomagnetic storm warnings with reliable performance.
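The structure of model 1 and its skill scores can be illustrated schematically: a linear fit of Kp against a coupling function lagged by the ~30 min magnetosphere response delay, evaluated with r, PE and RMS error. The coupling series here is synthetic and the fit is a single-predictor least squares, standing in for the real models built from 1995-2004 solar wind and Kp data.

```python
import math, random

random.seed(2)
LAG = 6  # 6 x 5-min samples = 30 min response delay (illustrative)
coupling = [random.random() for _ in range(500)]
# synthetic "observed" Kp responds to the coupling value LAG steps earlier
kp_obs = [2.0 + 4.0 * coupling[i - LAG] + random.gauss(0, 0.3)
          for i in range(LAG, 500)]

x = coupling[:len(kp_obs)]  # lagged predictor aligned with kp_obs
n = len(x)
mx, my = sum(x) / n, sum(kp_obs) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, kp_obs))
sxx = sum((a - mx) ** 2 for a in x)
slope = sxy / sxx
intercept = my - slope * mx
pred = [intercept + slope * a for a in x]

# forecast skill: linear correlation r, prediction efficiency PE, RMS error
syy = sum((b - my) ** 2 for b in kp_obs)
sse = sum((p - o) ** 2 for p, o in zip(pred, kp_obs))
r = sxy / math.sqrt(sxx * syy)
pe = 1.0 - sse / syy
rms = math.sqrt(sse / n)
```

Omitting the lag (regressing on the contemporaneous coupling value) would degrade all three scores on this synthetic series, which is the paper's motivation for modeling the delay explicitly.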
Modeling long-term dynamics of electricity markets
International Nuclear Information System (INIS)
Olsina, Fernando; Garces, Francisco; Haubrich, H.-J.
2006-01-01
In the last decade, many countries have restructured their electricity industries by introducing competition in their power generation sectors. Although some restructuring has been regarded as successful, the short experience accumulated with liberalized power markets does not allow making any founded assertion about their long-term behavior. Long-term prices and long-term supply reliability are now the center of interest. This concerns firms considering investments in generation capacity and regulatory authorities interested in assuring long-term supply adequacy and the stability of power markets. In order to gain significant insight into the long-term behavior of liberalized power markets, in this paper a simulation model based on system dynamics is proposed and the underlying mathematical formulations are extensively discussed. Unlike classical market models based on the assumption that market outcomes replicate the results of a centrally made optimization, the approach presented here focuses on replicating the system structure of power markets and the logic of relationships among system components in order to derive its dynamical response. The simulations suggest that there might be serious problems in adjusting early enough the generation capacity necessary to maintain stable reserve margins, and consequently stable long-term price levels. Because of feedback loops embedded in the structure of power markets and the existence of some time lags, the long-term market development might exhibit quite volatile behavior. By varying some exogenous inputs, a sensitivity analysis is carried out to assess the influence of these factors on the long-run market dynamics.
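The feedback-plus-lag mechanism blamed for the volatility can be demonstrated with a toy system-dynamics loop: investment responds to scarcity, but new capacity arrives only after a construction lag, so reserve margins overshoot and oscillate. All constants are illustrative, not calibrated to any real market or to the paper's model.

```python
# Toy capacity-investment loop with a construction delay (illustrative).
DEMAND = 100.0
CONSTRUCTION_LAG = 4  # years between investment decision and commissioning
capacity = 110.0
pipeline = [0.0] * CONSTRUCTION_LAG  # capacity under construction

margins = []
for year in range(40):
    margin = (capacity - DEMAND) / DEMAND
    margins.append(margin)
    price_signal = max(0.0, 0.15 - margin)  # scarcity -> investment incentive
    pipeline.append(40.0 * price_signal)    # decide new builds this year
    capacity += pipeline.pop(0) - 2.0       # commission old builds, retire 2/yr
```

Even with constant demand and deterministic rules, the margin series swings between tight and oversupplied states: the lag means today's investment decisions answer yesterday's prices, the core instability the simulations reveal.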
DEFF Research Database (Denmark)
Artuso, Matteo; Christiansen, Henrik Lehrmann
2014-01-01
Inter-cell interference (ICI) is considered as the most critical bottleneck to ubiquitous 4th generation cellular access in the mobile long term evolution (LTE). To address the problem, several solutions are under evaluation as part of LTE-Advanced (LTE-A), the most promising one being coordinated...
Short-Termed Integrated Forecasting System: 1993 Model documentation report
Energy Technology Data Exchange (ETDEWEB)
1993-05-01
The purpose of this report is to define the Short-Term Integrated Forecasting System (STIFS) and describe its basic properties. The Energy Information Administration (EIA) of the US Department of Energy (DOE) developed the STIFS model to generate short-term (up to 8 quarters), monthly forecasts of US supplies, demands, imports, exports, stocks, and prices of various forms of energy. The models that constitute STIFS generate forecasts for a wide range of possible scenarios, including the following ones done routinely on a quarterly basis: a base (mid) world oil price and medium economic growth; a low world oil price and high economic growth; and a high world oil price and low economic growth. This report is written for persons who want to know how short-term energy market forecasts are produced by EIA. The report is intended as a reference document for model analysts, users, and the public.
Directory of Open Access Journals (Sweden)
T. Houska
2017-07-01
Full Text Available This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha−1 a−1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha−1 a−1 for the arable land, grassland and forest sites, respectively. An innovative model–data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model–data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze–thaw cycling) or incomplete (e.g. respiration rates after harvest) model processes. This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
Directory of Open Access Journals (Sweden)
Mark C Vanderwel
Full Text Available Mechanistic modelling approaches that explicitly translate from individual-scale resource selection to the distribution and abundance of a larger population may be better suited to predicting responses to spatially heterogeneous habitat alteration than commonly-used regression models. We developed an individual-based model of home range establishment that, given a mapped distribution of local habitat values, estimates species abundance by simulating the number and position of viable home ranges that can be maintained across a spatially heterogeneous area. We estimated parameters for this model from data on red-backed vole (Myodes gapperi) abundances in 31 boreal forest sites in Ontario, Canada. The home range model had considerably more support from these data than both non-spatial regression models based on the same original habitat variables and a mean-abundance null model. It had nearly equivalent support to a non-spatial regression model that, like the home range model, scaled an aggregate measure of habitat value from local associations with habitat resources. The home range and habitat-value regression models gave similar predictions for vole abundance under simulations of light- and moderate-intensity partial forest harvesting, but the home range model predicted lower abundances than the regression model under high-intensity disturbance. Empirical regression-based approaches for predicting species abundance may overlook processes that affect habitat use by individuals, and often extrapolate poorly to novel habitat conditions. Mechanistic home range models that can be parameterized against abundance data from different habitats permit appropriate scaling from individual- to population-level habitat relationships, and can potentially provide better insights into responses to disturbance.
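The home-range mechanism can be sketched as a toy simulation: given a grid of local habitat values, individuals claim non-overlapping home ranges wherever enough aggregate habitat value is available, and predicted abundance is the number of viable ranges that fit. The grid values, range size and viability threshold are illustrative assumptions, not the fitted vole parameters.

```python
# Toy home-range establishment model on a habitat grid (illustrative).

def predict_abundance(habitat, range_size=2, threshold=3.0):
    """Count non-overlapping range_size x range_size home ranges whose
    summed habitat value meets the viability threshold (greedy scan)."""
    rows, cols = len(habitat), len(habitat[0])
    claimed = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows - range_size + 1):
        for c in range(cols - range_size + 1):
            cells = [(r + i, c + j) for i in range(range_size)
                     for j in range(range_size)]
            if any(claimed[i][j] for i, j in cells):
                continue
            if sum(habitat[i][j] for i, j in cells) >= threshold:
                for i, j in cells:
                    claimed[i][j] = True
                count += 1
    return count

good = [[1.0] * 4 for _ in range(4)]  # uniformly good habitat
poor = [[0.2] * 4 for _ in range(4)]
```

Unlike a regression on mean habitat value, this construction responds to spatial arrangement: degrading habitat in a pattern that leaves no contiguous viable block drops abundance to zero even when the average value is unchanged, which is why the two model classes diverge under high-intensity disturbance.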
Short-term forecasting model for aggregated regional hydropower generation
International Nuclear Information System (INIS)
Monteiro, Claudio; Ramirez-Rosado, Ignacio J.; Fernandez-Jimenez, L. Alfredo
2014-01-01
Highlights: • Original short-term forecasting model for the hourly hydropower generation. • The use of NWP forecasts allows horizons of several days. • New variable to represent the capacity level for generating hydroelectric energy. • The proposed model significantly outperforms the persistence model. - Abstract: This paper presents an original short-term forecasting model of the hourly electric power production for aggregated regional hydropower generation. The inputs of the model are previously recorded values of the aggregated hourly production of hydropower plants and hourly water precipitation forecasts using Numerical Weather Prediction tools, as well as other hourly data (load demand and wind generation). This model is composed of three modules: the first one gives the prediction of the “monthly” hourly power production of the hydropower plants; the second module gives the prediction of hourly power deviation values, which are added to that obtained by the first module to achieve the final forecast of the hourly hydropower generation; the third module allows a periodic adjustment of the prediction of the first module to improve its BIAS error. The model has been applied successfully to the real-life case study of the short-term forecasting of the aggregated hydropower generation in Spain and Portugal (Iberian Peninsula Power System), achieving satisfactory results for the next-day forecasts. The model can be valuable for agents involved in electricity markets and useful for power system operations
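The three-module composition described above can be sketched as a simple sum: a "monthly" hourly baseline, an hourly deviation correction driven by the precipitation forecasts, and a periodic BIAS adjustment. All numbers and functional forms below are placeholders for illustration, not the paper's fitted modules.

```python
# Schematic three-module aggregated hydropower forecast (illustrative).

def module1_baseline(month_profile, hour):
    """Monthly-scale hourly production profile (MW)."""
    return month_profile[hour % 24]

def module2_deviation(precip_forecast_mm, sensitivity=1.5):
    """Hourly deviation added to the baseline (MW per mm forecast)."""
    return sensitivity * precip_forecast_mm

def module3_bias(recent_errors):
    """Periodic BIAS adjustment from recent forecast errors (MW)."""
    return -sum(recent_errors) / len(recent_errors) if recent_errors else 0.0

def forecast(month_profile, hour, precip_mm, recent_errors):
    return (module1_baseline(month_profile, hour)
            + module2_deviation(precip_mm)
            + module3_bias(recent_errors))

profile = [500.0] * 24  # flat hypothetical monthly-hourly profile
f = forecast(profile, 10, 4.0, [20.0, 10.0])
```

Separating the slow baseline from the fast correction is what lets Numerical Weather Prediction inputs extend the horizon to several days without destabilizing the monthly structure.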
Argus, Donald F.; Peltier, W. Richard
2010-05-01
Using global positioning system, very long baseline interferometry, satellite laser ranging and Doppler Orbitography and Radiopositioning Integrated by Satellite observations, including the Canadian Base Network and Fennoscandian BIFROST array, we constrain, in models of postglacial rebound, the thickness of the ice sheets as a function of position and time and the viscosity of the mantle as a function of depth. We test model ICE-5G VM2 T90 Rot, which well fits many hundred Holocene relative sea level histories in North America, Europe and worldwide. ICE-5G is the deglaciation history having more ice in western Canada than ICE-4G; VM2 is the mantle viscosity profile having a mean upper mantle viscosity of 0.5 × 10²¹ Pa s and a mean uppermost-lower mantle viscosity of 1.6 × 10²¹ Pa s; T90 is an elastic lithosphere thickness of 90 km; and Rot designates that the model includes rotational feedback: Earth's response to the wander of the North Pole of Earth's spin axis towards Canada at a speed of ~1° Myr⁻¹. The vertical observations in North America show that, relative to ICE-5G, the Laurentide ice sheet at last glacial maximum (LGM) at ~26 ka was (1) much thinner in southern Manitoba, (2) thinner near Yellowknife (Northwest Territories), (3) thicker in eastern and southern Quebec and (4) thicker along the northern British Columbia-Alberta border, or that ice was unloaded from these areas later (thicker) or earlier (thinner) than in ICE-5G. The data indicate that the western Laurentide ice sheet was intermediate in mass between ICE-5G and ICE-4G. The vertical observations and GRACE gravity data together suggest that the western Laurentide ice sheet was nearly as massive as that in ICE-5G but distributed more broadly across northwestern Canada. VM2 poorly fits the horizontal observations in North America, predicting places along the margins of the Laurentide ice sheet to be moving laterally away from the ice centre at 2 mm yr⁻¹ in ICE-4G and 3 mm yr⁻¹ in ICE-5G, in
Towards The Long-Term Preservation of Building Information Models
DEFF Research Database (Denmark)
Beetz, Jacob; Dietze, Stefan; Berndt, René
2013-01-01
primarily been on textual and audio-visual media types. With the recent paradigm shift in architecture and construction from analog 2D plans and scale models to digital 3D information models of buildings, long-term preservation efforts must turn their attention to this new type of data. Currently, no existing approach is able to provide a secure and efficient long-term preservation solution covering the broad spectrum of 3D architectural data, while at the same time taking into account the demands of institutional collectors like architecture libraries and archives as well as those of the private...
Zhang, Y.; Amelug, F.; Albino, F.; Aoki, Y.
2015-12-01
Kirishima volcano group is located on the southern part of Kyushu, Japan. Prior to its 2011 magmatic eruption, several phreatic eruptions occurred in August 2008 and in March, April, May, June and July 2010. Using time series InSAR from 2007-2011 ALOS PALSAR data, we found a small deflating area around the Shinmoe-dake volcano summit and a relatively large inflating area to the northwest of Shinmoe-dake volcano before the 2011 Kirishima (Shinmoe-dake) eruption. Modeling constrained by both InSAR and dense regional GPS network data revealed deep and shallow deformation sources for Kirishima volcano. This double-source system helps us better understand the complex process of volcanic deformation.
Accurate discharge simulation is one of the most common objectives of hydrological modeling studies. However, a good simulation of discharge is not necessarily the result of a realistic simulation of hydrological processes within the catchment. To enhance the realism of model results, we propose an ...
Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relat...
Constrained vertebrate evolution by pleiotropic genes
DEFF Research Database (Denmark)
Hu, Haiyang; Uesaka, Masahiro; Guo, Song
2017-01-01
Despite morphological diversification of chordates over 550 million years of evolution, their shared basic anatomical pattern (or 'bodyplan') remains conserved by unknown mechanisms. The developmental hourglass model attributes this to phylum-wide conserved, constrained organogenesis stages that ...
Murine model of long-term obstructive jaundice.
Aoki, Hiroaki; Aoki, Masayo; Yang, Jing; Katsuta, Eriko; Mukhopadhyay, Partha; Ramanathan, Rajesh; Woelfel, Ingrid A; Wang, Xuan; Spiegel, Sarah; Zhou, Huiping; Takabe, Kazuaki
2016-11-01
With the recent emergence of conjugated bile acids as signaling molecules in cancer, a murine model of obstructive jaundice by cholestasis with long-term survival is in need. Here, we investigated the characteristics of three murine models of obstructive jaundice. C57BL/6J mice were used for total ligation of the common bile duct (tCL), partial common bile duct ligation (pCL), and ligation of the left and median hepatic bile ducts with gallbladder removal (LMHL) models. Survival was assessed by the Kaplan-Meier method. Fibrotic change was determined by Masson-Trichrome staining and collagen expression. Overall, 70% (7 of 10) of tCL mice died by day 7, whereas the majority, 67% (10 of 15), of pCL mice survived with loss of jaundice. A total of 19% (3 of 16) of LMHL mice died; however, jaundice continued beyond day 14, with survival of more than a month. Compensatory enlargement of the right lobe was observed in both the pCL and LMHL models. The pCL model demonstrated acute inflammation due to obstructive jaundice 3 d after ligation, but jaundice rapidly decreased by day 7. The LMHL group developed portal hypertension and severe fibrosis by day 14 in addition to prolonged jaundice. The standard tCL model is too unstable, with high mortality, for long-term studies. pCL may be an appropriate model for acute inflammation with obstructive jaundice, but long-term survivors are no longer jaundiced. The LMHL model was identified to be the most feasible model to study the effect of long-term obstructive jaundice. Copyright © 2016 Elsevier Inc. All rights reserved.
Modelling the short term herding behaviour of stock markets
International Nuclear Information System (INIS)
Shapira, Yoash; Berman, Yonatan; Ben-Jacob, Eshel
2014-01-01
Modelling the behaviour of stock markets has been of major interest in the past century. The market can be treated as a network of many investors reacting in accordance with their group behaviour, as manifested by the index and affected by the flow of external information into the system. Here we devise a model that encapsulates the behaviour of stock markets. The model consists of two terms, demonstrating quantitatively the effect of the individual tendency to follow the group and the effect of the individual reaction to the available information. Using the above factors we were able to explain several key features of the stock market: the high correlations between the individual stocks and the index; the Epps effect; and the highly fluctuating nature of the market, similar to real market behaviour. Furthermore, intricate long term phenomena are also described by this model, such as bursts of synchronized average correlation and the dominance of the index as demonstrated through partial correlation. (paper)
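The two-term structure can be sketched as a simple simulation: each stock's return mixes a herding term (following the lagged index) with an individual reaction to incoming information, and the resulting stock-index correlations are high. The weights, noise scales and update rule are illustrative assumptions, not the paper's specification.

```python
import random

random.seed(7)
N_STOCKS, STEPS = 20, 400
HERD = 0.7  # tendency to follow the group (illustrative weight)

returns = [[0.0] * N_STOCKS for _ in range(STEPS)]
index_ret = 0.0
for t in range(STEPS):
    info = random.gauss(0, 1)      # shared external information flow
    for s in range(N_STOCKS):
        own = random.gauss(0, 1)   # individual reaction / noise
        # term 1: follow the group (lagged index); term 2: react to information
        returns[t][s] = HERD * index_ret + (1 - HERD) * (info + own)
    index_ret = sum(returns[t]) / N_STOCKS

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

index_series = [sum(row) / N_STOCKS for row in returns]
avg_corr = sum(corr([row[s] for row in returns], index_series)
               for s in range(N_STOCKS)) / N_STOCKS
```

Even this crude version reproduces the first key feature: because the herding and information terms are shared, each stock is strongly correlated with the index despite its idiosyncratic noise.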
Selection of models to calculate the LLW source term
International Nuclear Information System (INIS)
Sullivan, T.M.
1991-10-01
Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab
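A minimal source-term sketch following the simplification strategy described above: treat the release rate as first-order wasteform leaching of the remaining, radioactively decaying inventory. The rates are illustrative; the models reviewed in the document also couple container degradation, fluid flow and transport, which are omitted here.

```python
import math

def release_rate(t_years, inventory0_Bq, leach_per_year, half_life_years):
    """Release rate (Bq/yr) at time t for combined leaching and decay."""
    decay = math.log(2) / half_life_years
    remaining = inventory0_Bq * math.exp(-(leach_per_year + decay) * t_years)
    return leach_per_year * remaining

def cumulative_release(inventory0_Bq, leach_per_year, half_life_years):
    """Total activity ever released (integral of the rate over all time)."""
    decay = math.log(2) / half_life_years
    return inventory0_Bq * leach_per_year / (leach_per_year + decay)
```

The closed form for the cumulative release makes the competition explicit: the fraction of the inventory that ever leaves the facility is the leach rate divided by the sum of the leach and decay constants, so short-lived nuclides largely decay in place.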
Constraining walking and custodial technicolor
DEFF Research Database (Denmark)
Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco
2008-01-01
We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level (custodial technicolor) and argue that these models...... cannot emerge from walking-type dynamics. We suggest that it is possible to have a very light spin-one axial (vector) boson. However, in walking dynamics the associated vector boson is heavy, while it is degenerate with the axial one in custodial technicolor. Publication date: 19 May...
Multivariate Term Structure Models with Level and Heteroskedasticity Effects
DEFF Research Database (Denmark)
Christiansen, Charlotte
2005-01-01
The paper introduces and estimates a multivariate level-GARCH model for the long rate and the term-structure spread, where the conditional volatility is proportional to the γth power of the variable itself (level effects) and the conditional covariance matrix evolves according to a multivariate GA...... and the level model. GARCH effects are more important than level effects. The results are robust to the maturity of the interest rates. Publication date: MAY...
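The combination of a level effect with GARCH dynamics can be illustrated with a minimal univariate simulation. The Euler discretization, mean-reverting drift, and all parameter values below are assumptions for the sketch, not the paper's multivariate specification or estimates:

```python
import numpy as np

def simulate_level_garch(n=1000, kappa=0.2, theta=0.05, gamma=0.5,
                         omega=1e-6, alpha=0.05, beta=0.9, dt=1.0 / 52, seed=0):
    """Euler simulation of a univariate level-GARCH interest-rate process:
    r[t+1] = r[t] + kappa*(theta - r[t])*dt + sqrt(h[t]*dt) * r[t]**gamma * eps
    h[t+1] = omega + alpha*h[t]*eps**2 + beta*h[t]   (GARCH(1,1) part)
    The level effect enters through r**gamma scaling the shock."""
    rng = np.random.default_rng(seed)
    r = np.empty(n); h = np.empty(n)
    r[0] = theta
    h[0] = omega / (1.0 - alpha - beta)   # unconditional GARCH variance
    for t in range(n - 1):
        eps = rng.standard_normal()
        r[t + 1] = max(r[t] + kappa * (theta - r[t]) * dt
                       + np.sqrt(h[t] * dt) * r[t] ** gamma * eps, 1e-8)
        h[t + 1] = omega + alpha * h[t] * eps ** 2 + beta * h[t]
    return r, h
```

Setting gamma = 0 recovers pure GARCH, while omega, alpha, beta → 0 with constant h recovers a pure level (CKLS-type) model, which is exactly the nesting the paper exploits when comparing GARCH and level effects.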
A Long-Term Mathematical Model for Mining Industries
Achdou , Yves; Giraud , Pierre-Noel; Lasry , Jean-Michel; Lions , Pierre-Louis
2016-01-01
International audience; A parsimonious long-term model is proposed for a mining industry. Knowing the dynamics of the global reserve, the strategy of each production unit consists of an optimal control problem with two controls: first, the flux invested into prospection and the building of new extraction facilities; second, the production rate. In turn, the dynamics of the global reserve depends on the individual strategies of the producers, so the model leads to an equilibrium, which is descr...
Török, Gabriel; Goluchová, Kateřina; Urbanec, Martin; Šrámková, Eva; Adámek, Karel; Urbancová, Gabriela; Pecháček, Tomáš; Bakala, Pavel; Stuchlík, Zdeněk; Horák, Jiří; Juryšek, Jakub
2016-12-01
Twin-peak quasi-periodic oscillations (QPOs) are observed in the X-ray power-density spectra of several accreting low-mass neutron star (NS) binaries. In our previous work we considered several QPO models and identified and explored the mass-angular-momentum relations implied by individual QPO models for the atoll source 4U 1636-53. In this paper we extend our study and confront QPO models with various NS equations of state (EoS). We start with simplified calculations assuming Kerr background geometry and then present results of detailed calculations considering the influence of the NS quadrupole moment (related to rotationally induced NS oblateness) assuming Hartle-Thorne spacetimes. We show that the application of a concrete EoS together with a particular QPO model yields a specific mass-angular-momentum relation. However, we demonstrate that the degeneracy in mass and angular momentum can be removed when the NS spin frequency inferred from the X-ray burst observations is considered. We inspect a large set of EoS and discuss their compatibility with the considered QPO models. We conclude that when the NS spin frequency in 4U 1636-53 is close to 580 Hz, we can exclude 51 of the 90 considered combinations of EoS and QPO models. We also discuss additional restrictions that may exclude even more combinations. Namely, 13 EoS are compatible with the observed twin-peak QPOs and the relativistic precession model. However, when considering the low-frequency QPOs and Lense-Thirring precession, only 5 EoS are compatible with the model.
Golder, K.; Burr, D. M.; Tran, L.
2017-12-01
Regional volcanic processes shaped many planetary surfaces in the Solar System, often through the emplacement of long, voluminous lava flows. Terrestrial examples of this type of lava flow have been used as analogues for extensive martian flows, including those within the circum-Cerberus outflow channels. This analogy is based on similarities in morphology, extent, and inferred eruptive style between terrestrial and martian flows, which raises the question of how these lava flows appear comparable in size and morphology on different planets. The parameters that influence the areal extent of silicate lavas during emplacement may be categorized as either inherent or external to the lava. The inherent parameters include the lava yield strength, density, composition, water content, crystallinity, exsolved gas content, pressure, and temperature. Each inherent parameter affects the overall viscosity of the lava, and for this work can be considered a subset of the viscosity parameter. External parameters include the effusion rate, total erupted volume, regional slope, and gravity. To investigate which parameter(s) control the development of long lava flows on Mars, we are applying a computational numerical model to reproduce the observed lava flow morphologies. Using a matrix of boundary conditions in the model enables us to investigate the possible range of emplacement conditions that can yield the observed morphologies. We have constructed the basic model framework in Model Builder within ArcMap, including all governing equations and parameters that we seek to test, and initial implementation and calibration have been performed. The base model is currently capable of generating a lava flow that propagates along a pathway governed by the local topography. At AGU, the results of model calibration using the Eldgjá and Laki lava flows in Iceland will be presented, along with the application of the model to lava flows within the Cerberus plains on Mars. We then
Directory of Open Access Journals (Sweden)
C. Kelleher
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology–soil–vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
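The hierarchical screening idea (regional, then hydrograph, then snow, then expert-knowledge constraints, each shrinking the behavioral set) can be sketched generically. All diagnostics and thresholds below are synthetic stand-ins, not DHSVM output or the study's actual acceptance criteria:

```python
import numpy as np

def hierarchical_screen(n_sets=10_000, seed=42):
    """Screen random 'parameter sets' through ordered constraint classes,
    mimicking a regional -> hydrograph -> SWE -> expert-knowledge hierarchy.
    Returns the surviving count after each constraint class."""
    rng = np.random.default_rng(seed)
    runoff_ratio = rng.uniform(0, 1, n_sets)   # regional signature (synthetic)
    nse = rng.uniform(-1, 1, n_sets)           # hydrograph fit (Nash-Sutcliffe)
    swe_err = rng.uniform(0, 2, n_sets)        # snow-water-equivalent error
    gw_ok = rng.random(n_sets) < 0.3           # groundwater pattern expectation
    constraints = [
        ("regional", (runoff_ratio > 0.2) & (runoff_ratio < 0.6)),
        ("hydrograph", nse > 0.5),
        ("snow", swe_err < 0.5),
        ("expert", gw_ok),
    ]
    behavioral = np.ones(n_sets, dtype=bool)
    counts = []
    for name, mask in constraints:
        behavioral &= mask                     # each class only removes sets
        counts.append((name, int(behavioral.sum())))
    return counts
```

Because each constraint class can only intersect the surviving set, the counts are non-increasing down the hierarchy, which is the behavior the study reports.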
Kim, Hojeong; Jones, Kelvin E.
2012-01-01
Our goal was to investigate how the propagation of alternating signals (i.e. AC), like action potentials, into the dendrites influenced nonlinear firing behaviour of motor neurons using a systematically reduced neuron model. A recently developed reduced modeling approach using only steady-current (i.e. DC) signaling was analytically expanded to retain features of the frequency-response analysis carried out in multicompartment anatomically reconstructed models. Bifurcation analysis of the extended model showed that the typically overlooked parameter of AC amplitude attenuation was positively correlated with the current threshold for the activation of a plateau potential in the dendrite. Within the multiparameter space map of the reduced model the region demonstrating “fully-bistable” firing was bounded by directional DC attenuation values that were negatively correlated to AC attenuation. Based on these results we conclude that analytically derived reduced models of dendritic trees should be fit on DC and AC signaling, as both are important biophysical parameters governing the nonlinear firing behaviour of motor neurons. PMID:22916290
Hanan, Erin J; Tague, Christina; Choate, Janet; Liu, Mingliang; Kolden, Crystal; Adam, Jennifer
2018-03-24
Disturbances such as wildfire, insect outbreaks, and forest clearing play an important role in regulating carbon, nitrogen, and hydrologic fluxes in terrestrial watersheds. Evaluating how watersheds respond to disturbance requires understanding mechanisms that interact over multiple spatial and temporal scales. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by uncertainties in the initial state of plant carbon and nitrogen stores. Watershed models typically use one of two methods to initialize these stores: spin-up to steady state, or remote sensing with allometric relationships. Spin-up involves running a model until vegetation reaches equilibrium based on climate; this approach assumes that vegetation across the watershed has reached maturity and is of uniform age, which fails to account for landscape heterogeneity and non-steady-state conditions. By contrast, remote sensing can provide data for initializing such conditions. However, methods for assimilating remote sensing into model simulations can also be problematic. They often rely on empirical allometric relationships between a single vegetation variable and modeled carbon and nitrogen stores. Because allometric relationships are species- and region-specific, they do not account for the effects of local resource limitation, which can influence carbon allocation (to leaves, stems, roots, etc.). To address this problem, we developed a new initialization approach using the catchment-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. It uses remote sensing to define spatially explicit targets for one, or several, vegetation state variables, such as leaf area index, across a watershed. The model then simulates the growth of carbon and nitrogen stores until the defined targets are met for all locations. We evaluated this approach in a mixed pine-dominated watershed in
Short-term integrated forecasting system : 1993 model documentation report
1993-12-01
The purpose of this report is to define the Short-Term Integrated Forecasting System (STIFS) and describe its basic properties. The Energy Information Administration (EIA) of the U.S. Energy Department (DOE) developed the STIFS model to generate shor...
Risk factors and prognostic models for perinatal asphyxia at term
Ensing, S.
2015-01-01
This thesis will focus on the risk factors and prognostic models for adverse perinatal outcome at term, with a special focus on perinatal asphyxia and obstetric interventions during labor to reduce adverse pregnancy outcomes. For the majority of the studies in this thesis we were allowed to use data
Viscous cosmological models with a variable cosmological term ...
African Journals Online (AJOL)
Einstein's field equations for a Friedmann-Lemaître-Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term Λ, described by the full Israel-Stewart theory, are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust-free model ...
Gharamti, M. E.
2014-03-01
Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainties because of possible incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward-model integrations. At similar computational cost, the dual approach becomes beneficial only after using large enough ensembles.
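The joint scheme can be sketched as a single stochastic EnKF analysis step on an augmented state-parameter vector. This is the generic textbook formulation, not the paper's reservoir implementation; the dimensions, observation operator, and noise level in the test are illustrative:

```python
import numpy as np

def enkf_update(ens, obs, H, obs_var, rng):
    """One stochastic EnKF analysis step on a joint [state; parameter]
    ensemble. ens: (n_dim, n_ens); H: (n_obs, n_dim) observation operator."""
    n_ens = ens.shape[1]
    Hx = H @ ens                                       # predicted observations
    A = ens - ens.mean(axis=1, keepdims=True)          # ensemble anomalies
    HA = Hx - Hx.mean(axis=1, keepdims=True)           # predicted-obs anomalies
    P_xy = A @ HA.T / (n_ens - 1)                      # cross covariance
    P_yy = HA @ HA.T / (n_ens - 1) + obs_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                     # Kalman gain
    D = obs[:, None] + rng.normal(0.0, obs_var ** 0.5, (len(obs), n_ens))
    return ens + K @ (D - Hx)                          # analysis ensemble
```

In the dual scheme, two such updates are applied per assimilation cycle (parameters first, then states, with an extra forecast in between), which is why it costs roughly twice as many forward-model integrations as the joint scheme.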
Energy Technology Data Exchange (ETDEWEB)
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
A model for Long-term Industrial Energy Forecasting (LIEF)
Energy Technology Data Exchange (ETDEWEB)
Ross, M. [Lawrence Berkeley Lab., CA (United States)]|[Michigan Univ., Ann Arbor, MI (United States). Dept. of Physics]|[Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Hwang, R. [Lawrence Berkeley Lab., CA (United States)
1992-02-01
The purpose of this report is to establish the content and structural validity of the Long-term Industrial Energy Forecasting (LIEF) model, and to provide estimates for the model's parameters. The model is intended to provide decision makers with a relatively simple, yet credible tool to forecast the impacts of policies which affect long-term energy demand in the manufacturing sector. Particular strengths of this model are its relative simplicity, which facilitates both ease of use and understanding of results, and the inclusion of relevant causal relationships which provide useful policy handles. The modeling approach of LIEF is intermediate between top-down econometric modeling and bottom-up technology models. It relies on the simple concept that trends in aggregate energy demand depend upon three factors: (1) trends in total production; (2) sectoral or structural shift, that is, changes in the mix of industrial output from energy-intensive to energy non-intensive sectors; and (3) changes in real energy intensity due to technical change and energy-price effects, as measured by the amount of energy used per unit of manufacturing output (KBtu per constant $ of output). The manufacturing sector is first disaggregated into subsectors according to their historic output growth rates, energy intensities and recycling opportunities. Exogenous, macroeconomic forecasts of individual subsector growth rates and energy prices can then be combined with endogenous forecasts of real energy intensity trends to yield forecasts of overall energy demand. 75 refs.
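The three-factor decomposition (production growth, structural mix, energy intensity) can be illustrated with a minimal sketch. The subsector names, growth rates, and intensity trends below are made up for the example, not LIEF's calibrated parameters:

```python
def forecast_demand(subsectors, years):
    """LIEF-style aggregate forecast: demand = sum over subsectors of
    output(t) * intensity(t), with exogenous output growth g and an
    intensity trend i (both fractional per year).
    Structural shift emerges when subsectors grow at different rates."""
    total = 0.0
    for s in subsectors:
        output = s["output"] * (1 + s["g"]) ** years
        intensity = s["intensity"] * (1 + s["i"]) ** years
        total += output * intensity
    return total

# Illustrative two-subsector economy (units arbitrary)
sectors = [
    {"output": 100.0, "g": 0.01, "intensity": 5.0, "i": -0.02},  # energy-intensive
    {"output": 200.0, "g": 0.03, "intensity": 1.0, "i": -0.01},  # non-intensive
]
```

With these numbers, aggregate demand starts at 700 and drifts downward even as total output grows, because the mix shifts toward the non-intensive subsector and both intensities decline.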
Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J
2015-12-01
Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Jun-Jie Wei
2015-01-01
We update gamma-ray burst (GRB) luminosity relations among certain spectral and light-curve features with 139 GRBs. The distance moduli of 82 GRBs at z > 1.4 can be calibrated with the sample at z ≤ 1.4 by using the cubic spline interpolation method on the Union2.1 Type Ia supernovae (SNe Ia) set. We investigate the joint constraints on the Cardassian expansion model and dark energy with the 580 Union2.1 SNe Ia sample (z < 1.4) and the 82 calibrated GRBs' data (1.4
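The calibration step (interpolating the low-z SNe Ia Hubble diagram with a cubic spline to read off distance moduli at GRB redshifts) can be sketched as follows. The hand-rolled natural spline stands in for whatever spline routine the authors used, and the distance-modulus values in the test are a synthetic low-z Hubble-diagram shape, not Union2.1 data:

```python
import numpy as np

def natural_cubic_spline(x, y, xq):
    """Natural cubic spline through (x, y) with x sorted ascending,
    evaluated at query points xq. Self-contained stand-in for a
    library spline routine."""
    n = len(x) - 1
    h = np.diff(x)
    # Solve the tridiagonal system for second derivatives M
    # (natural boundary conditions: M[0] = M[n] = 0)
    A = np.zeros((n + 1, n + 1)); rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)
    # Piecewise-cubic evaluation
    idx = np.clip(np.searchsorted(x, xq) - 1, 0, n - 1)
    t = xq - x[idx]
    b = (y[idx + 1] - y[idx]) / h[idx] - h[idx] * (2 * M[idx] + M[idx + 1]) / 6.0
    return (y[idx] + b * t + M[idx] / 2.0 * t ** 2
            + (M[idx + 1] - M[idx]) / (6.0 * h[idx]) * t ** 3)
```

In the calibration, the spline is fit to the SNe Ia (z, μ) pairs at z ≤ 1.4 and evaluated at the redshifts of the low-z GRBs; the resulting moduli anchor the GRB luminosity relations, which are then extrapolated to z > 1.4.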
Czech Academy of Sciences Publication Activity Database
Collett, S.; Štípská, P.; Kusbach, Vladimír; Schulmann, K.; Marciniak, G.
2017-01-01
Vol. 35, No. 3 (2017), pp. 253-280. ISSN 0263-4929. Institutional support: RVO:67985530. Keywords: eclogite; Bohemian Massif; thermodynamic modelling; micro-fabric analysis; subduction and exhumation dynamics. Subject RIV: DB - Geology; Mineralogy. OECD field: Geology. Impact factor: 3.594 (2016)
Baloković, M.; Brightman, M.; Harrison, F. A.; Comastri, A.; Ricci, C.; Buchner, J.; Gandhi, P.; Farrah, D.; Stern, D.
2018-02-01
The basic unified model of active galactic nuclei (AGNs) invokes an anisotropic obscuring structure, usually referred to as a torus, to explain AGN obscuration as an angle-dependent effect. We present a new grid of X-ray spectral templates based on radiative transfer calculations in neutral gas in an approximately toroidal geometry, appropriate for CCD-resolution X-ray spectra (FWHM ≥ 130 eV). Fitting the templates to broadband X-ray spectra of AGNs provides constraints on two important geometrical parameters of the gas distribution around the supermassive black hole: the average column density and the covering factor. Compared to the currently available spectral templates, our model is more flexible, and capable of providing constraints on the main torus parameters in a wider range of AGNs. We demonstrate the application of this model using hard X-ray spectra from NuSTAR (3–79 keV) for four AGNs covering a variety of classifications: 3C 390.3, NGC 2110, IC 5063, and NGC 7582. This small set of examples was chosen to illustrate the range of possible torus configurations, from disk-like to sphere-like geometries with column densities below, as well as above, the Compton-thick threshold. This diversity of torus properties challenges the simple assumption of a standard geometrically and optically thick toroidal structure commonly invoked in the basic form of the unified model of AGNs. Finding broad consistency between our constraints and those from infrared modeling, we discuss how the approach from the X-ray band complements similar measurements of AGN structures at other wavelengths.
A Parametric Factor Model of the Term Structure of Mortality
DEFF Research Database (Denmark)
Haldrup, Niels; Rosenskjold, Carsten Paysen T.
The prototypical Lee-Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper we propose a factor model for the term structure of mortality where multiple factors are designed to influence the age groups differently via...... parametric loading functions. We identify four different factors: a factor common for all age groups, factors for infant and adult mortality, and a factor for the "accident hump" that primarily affects mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions...... on the loading functions, the factors are not designed to be orthogonal but can be dependent and can possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson-Siegel term structure model. First, a two-step nonlinear least squares...
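The four-factor structure lends itself to a small numerical sketch. The functional forms of the loadings (constant level, exponential infant decay, linear senescent slope, Gaussian accident hump) and the factor values are assumptions for illustration; the paper identifies its loadings through parametric restrictions that are not reproduced here:

```python
import numpy as np

ages = np.arange(0, 100, dtype=float)

# Parametric loading functions, one per factor (illustrative shapes):
load_common = np.ones_like(ages)                        # level, all ages
load_infant = np.exp(-0.6 * ages)                       # infant mortality decay
load_adult  = ages / ages.max()                         # senescent slope
load_hump   = np.exp(-0.5 * ((ages - 22.0) / 6.0) ** 2) # accident hump, young adults

L = np.column_stack([load_common, load_infant, load_adult, load_hump])
k = np.array([-6.0, 3.5, 4.0, 0.8])  # hypothetical factor levels for one year

log_m = L @ k   # log mortality rate by age for that year
```

In the time-series version, each column of k becomes a factor process (possibly with unit roots and cointegration, as the abstract notes), while the loading matrix L stays fixed in age.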
Postglacial Rebound Model ICE-6G_C (VM5a) Constrained by Geodetic and Geologic Observations
Peltier, W. R.; Argus, D. F.; Drummond, R.
2014-12-01
We fit the revised global model of glacial isostatic adjustment ICE-6G_C (VM5a) to all available data, consisting of several hundred GPS uplift rates, a similar number of 14C-dated relative sea level histories, and 62 geologic estimates of changes in Antarctic ice thickness. The mantle viscosity profile, VM5a, is a simple multi-layer fit to the prior model VM2 of Peltier (1996, Science). However, the revised deglaciation history, ICE-6G_C (VM5a), differs significantly from previous models in the Toronto series. (1) In North America, GPS observations of vertical uplift of Earth's surface from the Canadian Base Network require the thickness of the Laurentide ice sheet at Last Glacial Maximum to be significantly revised. At Last Glacial Maximum the new model ICE-6G_C in this region is, relative to ICE-5G, roughly 50 percent thicker east of Hudson Bay (in the northern Quebec and Labrador region) and roughly 30 percent thinner west of Hudson Bay (in Manitoba, Saskatchewan, and the Northwest Territories). The net change in mass, however, is small. We find that rates of gravity change determined by GRACE, when corrected for the predictions of ICE-6G_C (VM5a), are significantly smaller than residuals determined on the basis of earlier models. (2) In Antarctica, we fit GPS uplift rates, geologic estimates of changes in ice thickness, and geologic constraints on the timing of ice loss. The resulting deglaciation history also differs significantly from prior models. The contribution of Antarctic ice loss to global sea level rise since Last Glacial Maximum in ICE-6G_C is 13.6 meters, less than in ICE-5G (17.5 m), but significantly larger than in both the W12A model of Whitehouse et al. [2012] (8 m) and the IJ05 R02 model of Ivins et al. [2013] (7.5 m). In ICE-6G_C rapid ice loss occurs in Antarctica from 11.5 to 8 thousand years ago, with a rapid onset at 11.5 ka, thereby contributing significantly to Meltwater Pulse 1B. In ICE-6G_C (VM5a), viscous uplift of Antarctica is increasing
White, M. A.; Baldocchi, D. D.; Schwartz, M. D.
2007-12-01
Shifts in the timing and distribution of spring phenological events are a central feature of global change research. Most evidence, especially for multi-decade records, indicates a shift towards earlier spring but with frequent differences in the magnitude and location of trends. Here, using two phenology models, one based on first bloom dates of clonal honeysuckle and lilac and one based on initiation of net carbon uptake at eddy covariance flux towers, we upscaled observations of spring arrival to the conterminous US at 1km resolution. The models shared similar and coherent spatial and temporal patterns at large regional scales but differed at smaller scales, likely attributable to: use of cloned versus extant species; chilling requirements; model complexity; and biome characteristics. Our results constrain climatically driven shifts in 1981 to 2003 spring arrival for the conterminous US to between -2.7 and 0.1 day/23 years. Estimated trend differences were minor in the biome of model development (deciduous broad leaf forest) but diverged strongly in woody evergreen and grassland areas. Based on comparisons with the normalized difference vegetation index (NDVI) and a limited independent ground dataset, predictions from both models were consistent with observations of satellite-based greenness and measured leaf expansion. First bloom trends, which were mostly statistically insignificant, were also consistent with NDVI trends while the net carbon uptake model predicted extensive trends towards earlier spring in the western US that were not observed in the NDVI data, showing the implication of model application outside the biome range of initial development.
Han, Jing-Cheng; Huang, Guohe; Huang, Yuefei; Zhang, Hua; Li, Zhong; Chen, Qiuwen
2015-08-15
Lack of hydrologic process representation at the short time-scale would lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by overland flow generation, which is closely related to the rainfall characteristics at a sub-time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of the China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, rainfall thresholds proved crucial for reflecting the scaling effect of rainfall intensity; the optimal significance level and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although only slight differences in overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and is expected to have potential for wide application. Copyright © 2015 Elsevier B.V. All rights reserved.
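The chance-constrained idea (trigger overland flow only when sub-daily intensity exceeds infiltration capacity with probability at least the significance level) can be sketched as follows. The exponential intensity distribution, the 6-hour effective storm duration, and the excess-rainfall formula are assumptions for this sketch, not SLURP-TGR's actual formulation:

```python
import math

def overland_flow(daily_rain_mm, infil_capacity_mm_h, alpha=0.05, mean_hours=6.0):
    """Chance-constrained overland-flow trigger (illustrative sketch).
    Sub-daily intensity is assumed exponentially distributed with mean
    daily_rain_mm / mean_hours; overland flow is generated only when
    P(intensity > infiltration capacity) >= significance level alpha."""
    if daily_rain_mm <= 0.0:
        return 0.0
    mean_intensity = daily_rain_mm / mean_hours               # mm/h
    p_exceed = math.exp(-infil_capacity_mm_h / mean_intensity)
    if p_exceed < alpha:
        return 0.0                                            # all rain infiltrates
    # expected rainfall excess over the infiltration capacity
    return mean_hours * mean_intensity * p_exceed
```

Raising alpha makes the constraint stricter (fewer days generate overland flow), which is the knob the study tunes jointly with the daily rainfall threshold.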
Berger, J. A.; Schmidt, M. E.; Izawa, M. R. M.; Gellert, R.; Ming, D. W.; Rampe, E. B.; VanBommel, S. J.; McAdam, A. C.
2016-01-01
The Mars rover Curiosity has encountered silica-enriched bedrock (as strata and as veins and associated halos of alteration) in the largely basaltic Murray Fm. of Mt. Sharp in Gale Crater. Alpha Particle X-ray Spectrometer (APXS) investigations of the Murray Fm. revealed decreasing Mg, Ca, Mn, Fe, and Al, and higher S, as silica increased (Fig. 1). A positive correlation between SiO2 and TiO2 (up to 74.4 and 1.7 wt%, respectively) suggests that these two insoluble elements were retained while acidic fluids leached more soluble elements. Other evidence also supports a silica-retaining, acidic alteration model for the Murray Fm., including low trace element abundances consistent with leaching, and the presence of opaline silica and jarosite determined by CheMin. Phosphate stability is a key component of this model because PO4(3-) is typically soluble in acidic water and is likely a mobile ion in diagenetic fluids (pH less than 5). However, the Murray rocks are not leached of P; they have variable P2O5 (Fig. 1), ranging from average Mars (0.9 wt%) up to the highest values in Gale Crater (2.5 wt%). Here we evaluate APXS measurements of Murray Fm. bedrock and veins with respect to phosphate stability in acidic fluids as a test of the acidic alteration model for the Lower Mt. Sharp rocks.
Samset, B. H.; Myhre, G.; Herber, A.; Kondo, Y.; Li, S.-M.; Moteki, N.; Koike, M.; Oshima, N.; Schwarz, J. P.; Balkanski, Y.; Bauer, S. E.; Bellouin, N.; Berntsen, T. K.; Bian, H.; Chin, M.; Diehl, T.; Easter, R. C.; Ghan, S. J.; Iversen, T.; Kirkevåg, A.; Lamarque, J.-F.; Lin, G.; Liu, X.; Penner, J. E.; Schulz, M.; Seland, Ø.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, K.; Zhang, K.
2014-11-01
Atmospheric black carbon (BC) absorbs solar radiation, and exacerbates global warming through exerting positive radiative forcing (RF). However, the contribution of BC to ongoing changes in global climate is under debate. Anthropogenic BC emissions, and the resulting distribution of BC concentration, are highly uncertain. In particular, long-range transport and processes affecting BC atmospheric lifetime are poorly understood. Here we discuss whether recent assessments may have overestimated present-day BC radiative forcing in remote regions. We compare vertical profiles of BC concentration from four recent aircraft measurement campaigns to simulations by 13 aerosol models participating in the AeroCom Phase II intercomparison. An atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in remote ocean regions, in line with other recent studies. Adjusting model results to measurements in remote regions, and at high altitudes, leads to a 25% reduction in AeroCom Phase II median direct BC forcing, from fossil fuel and biofuel burning, over the industrial era. The sensitivity of modelled forcing to BC vertical profile and lifetime highlights an urgent need for further flight campaigns, close to sources and in remote regions, to provide improved quantification of BC effects for use in climate policy.
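The role of atmospheric lifetime can be illustrated with a back-of-envelope survival fraction. This is our simplification, assuming first-order removal; the AeroCom models resolve far more detailed wet and dry deposition processes.

```python
# Why a short BC lifetime matters for remote regions: with first-order
# removal, the fraction of emitted BC surviving transport decays as
# exp(-t_transport / lifetime). The 10-day transport time is an
# illustrative number for a remote ocean profile.
import math

def surviving_fraction(transport_days, lifetime_days):
    return math.exp(-transport_days / lifetime_days)

f5 = surviving_fraction(10, 5)    # lifetime of 5 days: ~0.135 survives
f10 = surviving_fraction(10, 10)  # lifetime of 10 days: ~0.37 survives
```

Doubling the lifetime more than doubles the remote-region burden here, which is why constraining the lifetime to under 5 days pulls modelled remote concentrations, and hence the forcing estimate, downward.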
Samset, B. H.; Myhre, G.; Herber, A.; Kondo, Y.; Li, S.-M.; Moteki, N.; Koike, M.; Oshima, N.; Schwarz, J. P.; Balkanski, Y.; Bauer, S. E.; Bellouin, N.; Berntsen, T. K.; Bian, H.; Chin, M.; Diehl, T.; Easter, R. C.; Ghan, S. J.; Iversen, T.; Kirkevåg, A.; Lamarque, J.-F.; Lin, G.; Liu, X.; Penner, J. E.; Schulz, M.; Seland, Ø.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, K.; Zhang, K.
2014-08-01
Atmospheric black carbon (BC) absorbs solar radiation, and exacerbates global warming through exerting positive radiative forcing (RF). However, the contribution of BC to ongoing changes in global climate is under debate. Anthropogenic BC emissions, and the resulting distribution of BC concentration, are highly uncertain. In particular, long-range transport and processes affecting BC atmospheric lifetime are poorly understood. Here we discuss whether recent assessments may have overestimated present-day BC radiative forcing in remote regions. We compare vertical profiles of BC concentration from four recent aircraft measurement campaigns to simulations by 13 aerosol models participating in the AeroCom Phase II intercomparison. An atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in remote ocean regions, in line with other recent studies. Adjusting model results to measurements in remote regions, and at high altitudes, leads to a 25% reduction in AeroCom Phase II median direct BC forcing, from fossil fuel and biofuel burning, over the industrial era. The sensitivity of modeled forcing to BC vertical profile and lifetime highlights an urgent need for further flight campaigns, close to sources and in remote regions, to provide improved quantification of BC effects for use in climate policy.
Duda, Timothy F; Lin, Ying-Tsong; Reeder, D Benjamin
2011-09-01
A study of 400 Hz sound focusing and ducting effects in a packet of curved nonlinear internal waves in shallow water is presented. Sound propagation roughly along the crests of the waves is simulated with a three-dimensional parabolic equation computational code, and the results are compared to measured propagation along fixed 3 and 6 km source/receiver paths. The measurements were made on the shelf of the South China Sea northeast of Tung-Sha Island. Construction of the time-varying three-dimensional sound-speed fields used in the modeling simulations was guided by environmental data collected concurrently with the acoustic data. Computed three-dimensional propagation results compare well with field observations. The simulations allow identification of time-dependent sound forward scattering and ducting processes within the curved internal gravity waves. Strong acoustic intensity enhancement was observed during passage of high-amplitude nonlinear waves over the source/receiver paths, and is replicated in the model. The waves were typical of the region (35 m vertical displacement). Two types of ducting are found in the model, which occur asynchronously. One type is three-dimensional modal trapping in deep ducts within the wave crests (shallow thermocline zones). The second type is surface ducting within the wave troughs (deep thermocline zones). © 2011 Acoustical Society of America
Fontes, Fernando A. C. C.; Paiva, Luís T.
2016-10-01
We address optimal control problems for nonlinear systems with pathwise state constraints. These are challenging nonlinear problems for which the number of discretization points is a major factor in computational time. The location of these points also has a major impact on the accuracy of the solutions. We propose an algorithm that iteratively finds a time-grid adequate to satisfy a predefined error estimate on the obtained trajectories, guided by information on the adjoint multipliers. The results compare highly favorably against traditional equidistant-spaced time-grid methods, including those using discrete-time models. This way, continuous-time plant models can be used directly. The discretization procedure can be automated, and there is no need to select an adequate time step a priori. Even if the optimization procedure is forced to stop at an early stage, as might happen in real-time problems, we can still obtain a meaningful, if less accurate, solution. The extension of the procedure to a Model Predictive Control (MPC) context is also proposed. By defining a time-dependent accuracy threshold, we can generate solutions that are more accurate in the initial parts of the receding horizon, which are the most relevant for MPC.
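The iterative refinement idea can be sketched as follows. This is our reading of the abstract, not the authors' algorithm: a generic error estimator stands in for the adjoint-multiplier information, and the bisection rule, tolerance, and example estimator are illustrative choices.

```python
# Sketch of iterative time-grid refinement: start from a coarse grid,
# estimate a local error on each interval, and bisect intervals whose
# estimate exceeds a tolerance, until the predefined accuracy is met.
def refine_grid(t0, tf, local_error, tol, n0=8, max_iter=20):
    """local_error(a, b) -> error estimate for the interval [a, b]."""
    grid = [t0 + i * (tf - t0) / n0 for i in range(n0 + 1)]
    for _ in range(max_iter):
        new_grid, refined = [grid[0]], False
        for a, b in zip(grid, grid[1:]):
            if local_error(a, b) > tol:
                new_grid.append(0.5 * (a + b))   # bisect this interval
                refined = True
            new_grid.append(b)
        grid = new_grid
        if not refined:                           # accuracy target met
            break
    return grid

# Example: an error estimate that grows with interval length and is
# largest near t = 0, mimicking a trajectory with a fast initial
# transient -- the grid becomes fine early and stays coarse later.
grid = refine_grid(0.0, 1.0, lambda a, b: (b - a) / (1.0 + 10.0 * a), 0.02)
```

The resulting grid is dense near the transient and sparse elsewhere, which is the point of the approach: accuracy where it is needed, fewer points overall than an equidistant grid of the same worst-case accuracy.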
Energy Technology Data Exchange (ETDEWEB)
Samset, B. H.; Myhre, G.; Herber, Andreas; Kondo, Yutaka; Li, Shao-Meng; Moteki, N.; Koike, Makoto; Oshima, N.; Schwarz, Joshua P.; Balkanski, Y.; Bauer, S.; Bellouin, N.; Berntsen, T.; Bian, Huisheng; Chin, M.; Diehl, Thomas; Easter, Richard C.; Ghan, Steven J.; Iversen, T.; Kirkevag, A.; Lamarque, Jean-Francois; Lin, Guang; Liu, Xiaohong; Penner, Joyce E.; Schulz, M.; Seland, O.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, Kostas; Zhang, Kai
2014-11-27
Black carbon (BC) aerosols absorb solar radiation, and are generally held to exacerbate global warming through exerting a positive radiative forcing [1]. However, the total contribution of BC to the ongoing changes in global climate is presently under debate [2-8]. Both anthropogenic BC emissions and the resulting spatial and temporal distribution of BC concentration are highly uncertain [2,9]. In particular, long-range transport and processes affecting BC atmospheric lifetime are poorly understood, leading to large estimated uncertainty in BC concentration at high altitudes and far from emission sources [10]. These uncertainties limit our ability to quantify the historical, present, and future anthropogenic climate impact of BC. Here we compare vertical profiles of BC concentration from four recent aircraft measurement campaigns with 13 state-of-the-art aerosol models, and show that recent assessments may have overestimated present-day BC radiative forcing. Further, an atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in transport-dominated remote regions. Adjusting model results to measurements in remote regions, and at high altitudes, leads to a 25% reduction in the multi-model median direct BC forcing from fossil fuel and biofuel burning over the industrial era.
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J
2016-03-01
Information from various public and private data sources of extremely large sample size is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
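A stylized linear-model version of the linking constraint can be written down directly. This sketch is not the authors' semiparametric likelihood: it fits a full internal model by least squares subject to the implied reduced-model slope matching an external value, with simulated data and an invented external slope.

```python
# Fit the full internal model y ~ x1 + x2 subject to the constraint
# that the implied reduced-model slope for y ~ x1 equals a slope
# reported by an external big-data source. Solved via the KKT system
# for equality-constrained least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)        # correlated covariates
y = 1.0 * x1 + 0.8 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_ext = 1.4                            # external reduced-model slope

# Omitted-variable identity: reduced slope = b1 + b2 * cov(x1,x2)/var(x1)
gamma = np.cov(x1, x2)[0, 1] / np.var(x1)
A = np.array([[0.0, 1.0, gamma]])         # linear equality constraint

# KKT system: minimize ||Xb - y||^2 / 2 subject to A b = beta_ext
XtX, Xty = X.T @ X, X.T @ y
kkt = np.block([[XtX, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(kkt, np.concatenate([Xty, [beta_ext]]))
beta_constrained = sol[:3]                # [intercept, b1, b2]
```

The constrained estimate exactly reproduces the external reduced-model slope while using the internal data for everything else, which is the basic mechanism the general framework formalizes for nonlinear models and complex designs.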
Braibant, L.; Hutsemékers, D.; Sluse, D.; Goosmann, R.
2017-11-01
Recent studies have shown that line profile distortions are commonly observed in gravitationally lensed quasar spectra. Often attributed to microlensing differential magnification, line profile distortions can provide information on the geometry and kinematics of the broad emission line region (BLR) in quasars. We investigate the effect of gravitational microlensing on quasar broad emission line profiles and their underlying continuum, combining the emission from simple representative BLR models with generic microlensing magnification maps. Specifically, we considered Keplerian disk, polar, and equatorial wind BLR models of various sizes. The effect of microlensing has been quantified with four observables: μBLR, the total magnification of the broad emission line; μcont, the magnification of the underlying continuum; and the red/blue (RBI) and wings/core (WCI) indices, which characterize the line profile distortions. The simulations showed that distortions of line profiles, such as those recently observed in lensed quasars, can indeed be reproduced and attributed to the differential effect of microlensing on spatially separated regions of the BLR. While the magnification of the emission line μBLR sets an upper limit on the BLR size and, similarly, the magnification of the continuum μcont sets an upper limit on the size of the continuum source, the line profile distortions mainly depend on the BLR geometry and kinematics. We thus built (WCI,RBI) diagrams that can serve as diagnostic diagrams to discriminate between the various BLR models on the basis of quantitative measurements. It appears that a strong microlensing effect puts important constraints on the size of the BLR and on its distance to the high-magnification caustic. In that case, BLR models with different geometries and kinematics are more prone to produce distinctive line profile distortions for a limited number of caustic configurations, which facilitates their discrimination. When the microlensing
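Indices of this kind can be computed from a velocity-binned line profile. The definitions below are illustrative stand-ins, not necessarily those used in the paper: a normalized red-minus-blue flux asymmetry, and a normalized wings-minus-core contrast with an assumed core half-width.

```python
# Hypothetical red/blue (RBI) and wings/core (WCI) indices for a line
# profile sampled on a velocity grid (km/s). Exact definitions in the
# study may differ; these normalized differences illustrate the idea.
import math

def rbi(velocities, flux):
    """Red/blue index: normalized flux asymmetry about line center."""
    red = sum(f for v, f in zip(velocities, flux) if v > 0)
    blue = sum(f for v, f in zip(velocities, flux) if v < 0)
    return (red - blue) / (red + blue)

def wci(velocities, flux, core_halfwidth):
    """Wings/core index: normalized wings-vs-core flux contrast."""
    core = sum(f for v, f in zip(velocities, flux) if abs(v) <= core_halfwidth)
    wings = sum(f for v, f in zip(velocities, flux) if abs(v) > core_halfwidth)
    return (wings - core) / (wings + core)

# A symmetric, unmagnified Gaussian profile: RBI should vanish, and
# microlensing distortions would move the profile off (0, WCI_0).
v = [-3000.0 + 100.0 * i for i in range(61)]
profile = [math.exp(-(x / 1500.0) ** 2) for x in v]
```

Differential magnification of, say, the approaching side of a Keplerian disk would raise the blue flux and drive RBI negative, which is how the (WCI, RBI) plane separates geometries.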
Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.
2008-12-01
Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005 and provided by the BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that the information gained from measurements of one species constrains the sources of both compounds, due to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.
Directory of Open Access Journals (Sweden)
Tyler W. H. Backman
2018-01-01
Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation, and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide substantially lower flux bounds for fluxes into the core as compared with previous methods.
We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
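Step (1) can be illustrated on a toy network. The real implementation (limitfluxtocore, linked above) operates on genome-scale stoichiometric matrices; the two-metabolite network, flux names, and measured value below are invented for this sketch.

```python
# Toy version of the linear-programming step: find the smallest flux
# from peripheral metabolism into the core that is still compatible
# with a measured growth-associated flux, subject to steady-state mass
# balance on the core metabolites.
import numpy as np
from scipy.optimize import linprog

# Columns: [v_peripheral_in, v_core_internal, v_biomass]
S_core = np.array([
    [1.0, -1.0,  0.0],   # core metabolite A: fed by the periphery
    [0.0,  1.0, -1.0],   # core metabolite B: consumed by biomass
])
v_biomass_measured = 8.0           # observed growth-associated flux
c = np.array([1.0, 0.0, 0.0])      # minimize flux entering the core

res = linprog(
    c,
    A_eq=np.vstack([S_core, [0.0, 0.0, 1.0]]),   # balances + measurement
    b_eq=np.array([0.0, 0.0, v_biomass_measured]),
    bounds=[(0, None)] * 3,
)
# In this chain network, mass balance forces all three fluxes to equal
# the biomass demand, so the minimal peripheral influx is 8.
```

The LP objective generalizes to the sum of all peripheral-to-core fluxes; reactions whose minimal influx is negligible are the ones the bow tie approximation can safely omit.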
Chapman, Brian
2017-06-01
This paper seeks to develop a more thermodynamically sound pedagogy for students of biological transport than is currently available from either of the competing schools of linear non-equilibrium thermodynamics (LNET) or Michaelis-Menten kinetics (MMK). To this end, a minimal model of facilitated diffusion was constructed comprising four reversible steps: cis-substrate binding, cis → trans bound enzyme shuttling, trans-substrate dissociation and trans → cis free enzyme shuttling. All model parameters were subject to the second law constraint of the probability isotherm, which determined the unidirectional and net rates for each step and for the overall reaction through the law of mass action. Rapid equilibration scenarios require sensitive 'tuning' of the thermodynamic binding parameters to the equilibrium substrate concentration. All non-equilibrium scenarios show sigmoidal force-flux relations, with only a minority of cases having their quasi-linear portions close to equilibrium. Few cases fulfil the expectations of MMK relating reaction rates to enzyme saturation. This new approach illuminates and extends the concept of rate-limiting steps by focusing on the free energy dissipation associated with each reaction step and thereby deducing its respective relative chemical impedance. The crucial importance of an enzyme's being thermodynamically 'tuned' to its particular task, dependent on the cis- and trans-substrate concentrations with which it deals, is consistent with the occurrence of numerous isoforms for enzymes that transport a given substrate in physiologically different circumstances. This approach to kinetic modelling, being aligned with neither MMK nor LNET, is best described as intuitive non-equilibrium thermodynamics, and is recommended as a useful adjunct to the design and interpretation of experiments in biotransport.
Constraining models of initial state with v{sub 2} and v{sub 3} data from LHC and RHIC
Energy Technology Data Exchange (ETDEWEB)
Retinskaya, Ekaterina, E-mail: ekaterina.retinskaya@cea.fr [CEA, IPhT, Institut de physique théorique de Saclay, F-91191 Gif-sur-Yvette (France); Luzum, Matthew, E-mail: MWLuzum@lbl.gov [McGill University, 3600 University Street, Montreal QC H3A 2TS (Canada); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Ollitrault, Jean-Yves, E-mail: jean-yves.ollitrault@cea.fr [CNRS, URA2306, IPhT, Institut de physique théorique de Saclay, F-91191 Gif-sur-Yvette (France)
2014-06-15
We present a combined analysis of elliptic and triangular flow data from LHC and RHIC using viscous relativistic hydrodynamics. Elliptic flow v₂ in hydrodynamics is proportional to the participant eccentricity ε₂, and triangular flow is proportional to the participant triangularity ε₃; that is, vₙ = κₙεₙ, where κₙ is the linear response coefficient in harmonic n. Experimental data for v₂ and v₃ combined with hydrodynamic calculations of κₙ thus provide us with the rms values of the initial anisotropies ε₂ and ε₃. By varying free parameters in the hydro calculation (in particular the shear viscosity), we obtain an allowed band in the (rms ε₂, rms ε₃) plane. Comparison with Monte Carlo models of the initial state allows us to exclude several of these models. We illustrate that the effect of changing the granularity of the initial state is similar to changing the medium properties, making these effects difficult to disentangle.
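The inversion of the linear response relation is simple enough to sketch directly. All numbers below are invented placeholders, not the paper's measured values or computed response coefficients.

```python
# Toy inversion of v_n = kappa_n * eps_n: measured rms flow harmonics
# divided by hydrodynamic response coefficients give rms initial
# anisotropies. Varying kappa_n over its allowed range (e.g. from the
# uncertain shear viscosity) produces an allowed band; an initial-state
# model is excluded if its (eps2, eps3) falls outside that band.
def allowed_band(v2_rms, v3_rms, kappa2_range, kappa3_range):
    band2 = (v2_rms / kappa2_range[1], v2_rms / kappa2_range[0])
    band3 = (v3_rms / kappa3_range[1], v3_rms / kappa3_range[0])
    return band2, band3

def model_excluded(eps2, eps3, band2, band3):
    inside = band2[0] <= eps2 <= band2[1] and band3[0] <= eps3 <= band3[1]
    return not inside

band2, band3 = allowed_band(0.10, 0.03, (0.30, 0.40), (0.15, 0.25))
```

Because ε₂ and ε₃ respond differently to the granularity of the initial state, a model can land inside one band but outside the other, which is what makes the combined (rms ε₂, rms ε₃) constraint discriminating.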
Paul, Keryn I; England, Jacqueline R; Baker, Thomas G; Cunningham, Shaun C; Perring, Michael P; Polglase, Phil J; Wilson, Brian; Cavagnaro, Timothy R; Lewis, Tom; Read, Zoe; Madhavan, Dinesh B; Herrmann, Tim
2018-02-15
Reforestation of agricultural land with mixed-species environmental plantings of native trees and shrubs contributes to abatement of greenhouse gas emissions through sequestration of carbon, and to landscape remediation and biodiversity enhancement. Although accumulation of carbon in biomass is relatively well understood, less is known about associated changes in soil organic carbon (SOC) following different types of reforestation. Direct measurement of SOC may not be cost effective where rates of SOC sequestration are relatively small and/or highly spatially variable, thereby requiring intensive sampling. Hence, our objective was to develop a verified modelling approach for determining changes in SOC to facilitate the inclusion of SOC in the carbon accounts of reforestation projects. We measured carbon stocks of biomass, litter and SOC (0-30 cm) in 125 environmental plantings (often paired to adjacent agricultural sites), representing sites of varying productivity across the Australian continent. After constraining a carbon accounting model to observed measures of growth, allocation of biomass, and rates of litterfall and litter decomposition, the model was calibrated to maximise the efficiency of prediction of SOC and its fractions. Uncertainties in both measured and modelled results meant that efficiencies of prediction of SOC across the 125 contrasting plantings were only moderate, at 39-68%. Data-informed modelling nonetheless improved confidence in outputs from scenario analyses, confirming that: (i) reforestation on agricultural land highly depleted in SOC (i.e. previously under cropping) had the highest capacity to sequester SOC, particularly where rainfall was relatively high (>600 mm year⁻¹), and; (ii) decreased planting width and increased stand density and the proportion of eucalypts enhanced rates of SOC sequestration. These results improve confidence in predictions of SOC following environmental reforestation under varying conditions. The calibrated
PSA modeling of long-term accident sequences
International Nuclear Information System (INIS)
Georgescu, Gabriel; Corenwinder, Francois; Lanore, Jeanne-Marie
2014-01-01
In the context of the extension of PSA scope to include external hazards in France, both the operator (EDF) and IRSN are working to improve methods to better account in the PSA for accident sequences induced by initiators that affect a whole site containing several nuclear units (reactors, fuel pools, ...). These methodological improvements are an essential prerequisite for the development of external hazards PSA. It should be noted, however, that French PSA took long-term accident sequences into account even before Fukushima: many insights were therefore used, as complementary information, to enhance the safety level of the plants. IRSN has proposed an external events PSA development program. One of its first steps is the development of methods to model long-term accident sequences in the PSA, based on the experience gained. In the short term, IRSN intends to enhance the modeling of 'long term' accident sequences induced by the loss of the heat sink and/or the loss of external power supply. The experience gained by IRSN and EDF from several probabilistic studies treating long-term accident sequences shows that simply extending the mission time of the mitigation systems from 24 hours to longer times is not sufficient to realistically quantify the risk and to obtain a correct ranking of the risk contributions; treatment of recoveries is also necessary. IRSN intends to develop a generic study that can serve as a general methodology for the assessment of long-term accident sequences, mainly those generated by external hazards and their combinations. A first attempt at this generic study identified some aspects, whether hazards (or combinations of hazards) or initial and boundary conditions, that should be taken into account in further developments. (authors)
Constraining the climate and ocean pH of the early Earth with a geological carbon cycle model.
Krissansen-Totton, Joshua; Arney, Giada N; Catling, David C
2018-04-02
The early Earth's environment is controversial. Climatic estimates range from hot to glacial, and inferred marine pH spans strongly alkaline to acidic. Better understanding of early climate and ocean chemistry would improve our knowledge of the origin of life and its coevolution with the environment. Here, we use a geological carbon cycle model with ocean chemistry to calculate self-consistent histories of climate and ocean pH. Our carbon cycle model includes an empirically justified temperature and pH dependence of seafloor weathering, allowing the relative importance of continental and seafloor weathering to be evaluated. We find that the Archean climate was likely temperate (0-50 °C) due to the combined negative feedbacks of continental and seafloor weathering. Ocean pH evolves monotonically from [Formula: see text] (2σ) at 4.0 Ga to [Formula: see text] (2σ) at the Archean-Proterozoic boundary, and to [Formula: see text] (2σ) at the Proterozoic-Phanerozoic boundary. This evolution is driven by the secular decline of pCO2, which in turn is a consequence of increasing solar luminosity, but is moderated by carbonate alkalinity delivered from continental and seafloor weathering. Archean seafloor weathering may have been a comparable carbon sink to continental weathering, but is less dominant than previously assumed, and would not have induced global glaciation. We show how these conclusions are robust to a wide range of scenarios for continental growth, internal heat flow evolution and outgassing history, greenhouse gas abundances, and changes in the biotic enhancement of weathering. Copyright © 2018 the Author(s). Published by PNAS.
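The central feedback can be illustrated with a toy steady-state balance. The functional forms, constants, and fluxes below are invented for illustration and are far simpler than the paper's model (which also tracks ocean chemistry and pH).

```python
# Toy steady-state carbon cycle: volcanic outgassing is balanced by
# continental plus seafloor weathering, both of which strengthen with
# surface temperature, itself rising with pCO2. Under a fainter young
# Sun, the balance settles at higher pCO2 -- the weathering thermostat.
import math

def surface_temp(pco2, S=1.0):
    """Illustrative greenhouse: T rises logarithmically with pCO2 (bar)."""
    return 288.0 + 4.0 * math.log(pco2 / 280e-6) + 10.0 * (S - 1.0)

def weathering(pco2, S=1.0):
    T = surface_temp(pco2, S)
    cont = 1.0 * math.exp((T - 288.0) / 13.7)   # continental, strong T dep.
    sea = 0.5 * math.exp((T - 288.0) / 25.0)    # seafloor, weaker T dep.
    return cont + sea

def steady_pco2(outgassing, S=1.0, lo=1e-6, hi=1.0):
    """Bisect (in log space) for the pCO2 that balances outgassing."""
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if weathering(mid, S) < outgassing:
            lo = mid     # weathering too weak: pCO2 must be higher
        else:
            hi = mid
    return math.sqrt(lo * hi)

p_faint = steady_pco2(3.0, S=0.8)    # dimmer Archean Sun
p_modern = steady_pco2(3.0, S=1.0)   # present-day luminosity
```

Because both sinks respond to temperature, the steady state compensates for lower solar luminosity with higher pCO2, which is why the model can keep the Archean temperate without either sink forcing a glaciation.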
Maximal monotone model with delay term of convolution
Directory of Open Access Journals (Sweden)
Claude-Henri Lamarque
2005-01-01
Mechanical models are governed either by partial differential equations with boundary and initial conditions (e.g., in the framework of continuum mechanics), or by ordinary differential equations with initial conditions (e.g., after discretization via a Galerkin procedure, or directly from the model description). In order to study the dynamical behavior of mechanical systems with a finite number of degrees of freedom including nonsmooth terms (e.g., friction), we consider here problems governed by differential inclusions. To describe the effects of particular constitutive laws, we add a delay term. In contrast to previous papers, we introduce the delay via a Volterra kernel. We provide existence and uniqueness results by using an implicit Euler numerical scheme; convergence, with its order, is then established. A few numerical examples are given.
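A scalar instance of such a scheme can be sketched as follows. This is our illustration, not the paper's formulation: the sign graph stands in for a generic maximal monotone (friction-like) term, and the kernel, forcing, and quadrature rule are assumed choices.

```python
# Implicit Euler for the scalar differential inclusion
#   x'(t) + sign(x(t)) ∋ f(t) + ∫_0^t K(t - s) x(s) ds,
# where sign is the maximal monotone subdifferential of |x| and the
# integral is the Volterra delay term. The resolvent of h*sign is
# soft-thresholding, which makes the implicit step explicit to compute.
import math

def soft_threshold(y, h):
    """Resolvent (I + h*sign)^(-1) applied to y."""
    return math.copysign(max(abs(y) - h, 0.0), y)

def solve(x0, T, n, f, K):
    h = T / n
    xs = [x0]
    for k in range(n):
        t = (k + 1) * h
        # Volterra term approximated by a rectangle rule on past values
        delay = h * sum(K(t - j * h) * xs[j] for j in range(len(xs)))
        y = xs[-1] + h * (f(t) + delay)
        xs.append(soft_threshold(y, h))   # implicit (resolvent) step
    return xs

# Dry-friction-like decay: the state reaches zero in finite time and
# sticks there, despite the positive delay feedback.
xs = solve(1.0, 2.0, 200, f=lambda t: 0.0, K=lambda u: math.exp(-u))
```

The soft-threshold step is exactly the resolvent of the monotone graph, so no nonlinear solve is needed per step; this is the mechanism that makes implicit schemes for such inclusions both stable and cheap.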
Modelling the Long-term Periglacial Imprint on Mountain Landscapes
DEFF Research Database (Denmark)
Andersen, Jane Lund; Egholm, David Lundbek; Knudsen, Mads Faurschou
Studies of periglacial processes usually focus on small-scale, isolated phenomena, leaving less explored the question of how such processes shape vast areas of Earth’s surface. Here we use numerical surface process modelling to better understand how periglacial processes drive large-scale, long-term [...] For an array of mean annual air temperatures and sediment cover thicknesses, we integrate the heat equation through annual cycles and record the intervals when conditions are optimal for frost weathering and frost creep. Second, we incorporate these results into a landscape evolution model to explore the long-term [...] erosion, while erosion may have slowed on higher surfaces. The latter implies that areas considered inactive, or relict, today may simply be too cold, or too flat, to allow for efficient periglacial activity under current conditions. The periglacial origins of such surfaces possibly pre-date the advent
A Term Structure Model for Dividends and Interest Rates
Filipović, Damir; Willems, Sander
2018-01-01
Over the last decade, dividends have become a standalone asset class instead of a mere side product of an equity investment. We introduce a framework based on polynomial jump-diffusions to jointly price the term structures of dividends and interest rates. Prices for dividend futures, bonds, and the dividend paying stock are given in closed form. We present an efficient moment based approximation method for option pricing. In a calibration exercise we show that a parsimonious model specificati...
Taljegard, Maria; Brynolf, Selma; Grahn, Maria; Andersson, Karin; Johnson, Hannes
2014-11-04
The regionalized Global Energy Transition model has been modified to include a more detailed shipping sector in order to assess what marine fuels and propulsion technologies might be cost-effective by 2050 when achieving an atmospheric CO2 concentration of 400 or 500 ppm by the year 2100. The robustness of the results was examined in a Monte Carlo analysis, varying uncertain parameters and technology options, including the amount of primary energy resources, the availability of carbon capture and storage (CCS) technologies, and costs of different technologies and fuels. The four main findings are (i) it is cost-effective to start the phase out of fuel oil from the shipping sector in the next decade; (ii) natural gas-based fuels (liquefied natural gas and methanol) are the most probable substitutes during the study period; (iii) availability of CCS, the CO2 target, the liquefied natural gas tank cost and potential oil resources affect marine fuel choices significantly; and (iv) biofuels rarely play a major role in the shipping sector, due to limited supply and competition for bioenergy from other energy sectors.
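The Monte Carlo robustness check described above can be sketched in miniature. The fuels, cost means, and spreads below are placeholders invented for illustration, not values from the Global Energy Transition model.

```python
# Sketch of a Monte Carlo robustness analysis: uncertain cost
# parameters are drawn repeatedly, and for each draw the least-cost
# marine fuel option is recorded. The frequency with which each option
# "wins" indicates how robust the ranking is to parameter uncertainty.
import random

random.seed(1)
fuels = {                       # (mean cost, spread), arbitrary $/GJ units
    "fuel oil + CCS": (16.0, 4.0),
    "LNG": (12.0, 3.0),
    "methanol": (13.0, 3.0),
    "biofuel": (18.0, 6.0),
}

wins = {f: 0 for f in fuels}
for _ in range(10000):
    draw = {f: random.gauss(mean, spread) for f, (mean, spread) in fuels.items()}
    wins[min(draw, key=draw.get)] += 1   # cheapest option this draw

share = {f: w / 10000 for f, w in wins.items()}   # win frequencies
```

In the full study the varied parameters also include resource availability and CCS deployment, and the "winner" is chosen by a cost-minimizing energy system model rather than a direct cost comparison.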
Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2011-01-01
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10⁻²⁶ cm³/s at 5 GeV to about 5 × 10⁻²³ cm³/s at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (≈3 × 10⁻²⁶ cm³/s for a purely s-wave cross section), without assuming additional boost factors.
Constraining Dark Matter Models from a Combined Analysis of Milky Way Satellites with the Fermi Large Area Telescope
Energy Technology Data Exchange (ETDEWEB)
Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W.B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R.D.; Bloom, E.D.; Bonamente, E.; Borgland, A.W.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Burnett, T.H.; Buson, S.; et al.
2012-09-14
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data from the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 x 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (~3 x 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.
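The joint-likelihood stacking described in this abstract can be illustrated with a toy sketch: each "satellite" contributes a Poisson likelihood for its observed counts given background plus a signal proportional to the cross-section normalization, and the combined profile likelihood tightens the upper limit. This is a minimal illustration, not the Fermi-LAT analysis; the counts, backgrounds, and the `upper_limit` helper are all invented for the example.

```python
import math

def pois_logl(n, mu):
    # Poisson log-likelihood up to a term constant in mu (the -log(n!) piece)
    return n * math.log(mu) - mu

def upper_limit(counts, backgrounds, signals, delta=1.35):
    """One-sided ~95% CL upper limit on the signal normalization sigma:
    the smallest sigma where log L drops by delta (~2.71/2) from its maximum."""
    grid = [i * 0.01 for i in range(2001)]          # sigma scanned over [0, 20]
    def logl(sigma):
        return sum(pois_logl(n, b + sigma * s)
                   for n, b, s in zip(counts, backgrounds, signals))
    best = max(logl(s) for s in grid)
    for s in grid:
        if s > 0 and logl(s) < best - delta:
            return s
    return grid[-1]

# toy 'dwarfs': observed counts equal the expected background (no signal)
counts      = [5, 3, 8]
backgrounds = [5.0, 3.0, 8.0]
signals     = [1.0, 0.5, 2.0]    # expected signal counts per unit sigma

joint   = upper_limit(counts, backgrounds, signals)
singles = [upper_limit([n], [b], [s])
           for n, b, s in zip(counts, backgrounds, signals)]
```

Because each target's likelihood multiplies into the joint one, the combined limit is at least as tight as the best individual limit.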
Profile constraining mechanism for Te profile invariance in tokamaks
International Nuclear Information System (INIS)
Becker, G.
1993-03-01
A new model for the electron temperature profile resilience in the outer half of tokamak plasmas is proposed and investigated. It is shown that introducing a negative feedback term into the scaling of the electron heat diffusivity, χ_e(r) = χ_e0(r)(1 - (α a²/T_e0) d²T_e/dr²) with α ≳ 3, results in the observed T_e profile stiffness against variations of the density and heating profiles. In addition, the feedback by itself yields the measured approximately linear T_e(r) shape for r/a ≳ 0.5, in contrast to previous profile constraining models. (orig.). 6 figs
Short-Term Memory and Its Biophysical Model
Wang, Wei; Zhang, Kai; Tang, Xiao-wei
1996-12-01
The capacity of short-term memory has been studied using an integrate-and-fire neuronal network model. It is found that the storage of events depends on the manner in which the events are correlated, and that the capacity is dominated by the value of the after-depolarization potential. There is a monotonically increasing relationship between the value of the after-depolarization potential and the number of stored memories. The biophysical relevance of the network model is discussed, and different kinds of information processes are studied as well.
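The integrate-and-fire neuron underlying such network models can be sketched in a few lines. This is a generic leaky integrate-and-fire unit, not the paper's network (in particular it omits the after-depolarization potential central to the study); all parameter values are illustrative.

```python
def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0,
                 v_reset=0.0, t_max=100.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - v_rest) + i_ext.
    The membrane potential V is integrated with forward Euler; whenever V
    crosses the threshold v_th it is reset and one spike is counted.
    Returns the number of spikes emitted during t_max (ms)."""
    v, t, spikes = v_rest, 0.0, 0
    while t < t_max:
        v += dt / tau * (-(v - v_rest) + i_ext)
        if v >= v_th:
            v = v_reset
            spikes += 1
        t += dt
    return spikes
```

A constant input below threshold (here i_ext < 1.0) produces no spikes, and the firing rate grows with the input current.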
A Simple Hybrid Model for Short-Term Load Forecasting
Directory of Open Access Journals (Sweden)
Suseelatha Annamareddi
2013-01-01
Full Text Available The paper proposes a simple hybrid model to forecast the electrical load data based on the wavelet transform technique and double exponential smoothing. The historical noisy load series data is decomposed into deterministic and fluctuation components using suitable wavelet coefficient thresholds and wavelet reconstruction method. The variation characteristics of the resulting series are analyzed to arrive at reasonable thresholds that yield good denoising results. The constitutive series are then forecasted using appropriate exponential adaptive smoothing models. A case study performed on California energy market data demonstrates that the proposed method can offer high forecasting precision for very short-term forecasts, considering a time horizon of two weeks.
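The double-exponential-smoothing stage of the hybrid model above can be sketched as follows. The wavelet decomposition step is omitted, and the load series and smoothing constants are made up for illustration.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's double exponential smoothing: maintain a level and a trend
    component, then extrapolate `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# a perfectly linear toy load series: the method should track it exactly
loads = [100.0, 102.0, 104.0, 106.0, 108.0, 110.0]
next_load = holt_forecast(loads)   # expected near 112
```

For a noisy series the smoothing constants trade responsiveness against stability; the paper fits them per component after denoising.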
Medium Term Hydroelectric Production Planning - A Multistage Stochastic Optimization Model
Directory of Open Access Journals (Sweden)
BITA ANALUI
2014-06-01
Full Text Available Multistage stochastic programming is a key technology for making decisions over time in an uncertain environment. One of the promising areas in which this technology is implementable is medium-term planning of electricity production and trading, where decision makers are typically faced with uncertain parameters (such as future demands and market prices) that can be described by stochastic processes in discrete time. We apply this methodology to hydrosystem operation, assuming random electricity prices and random inflows to the reservoir system. After describing the multistage stochastic model, a simple case study is presented. In particular, we use the model for pricing an electricity delivery contract in the framework of indifference pricing.
Kochanek, C.
2005-07-01
We will use deep ACS imaging of the giant (15 arcsec) four-image z_s=1.734 lensed quasar SDSS 1004+4112, and its z_l=0.68 lensing galaxy cluster, to identify many additional multiply-imaged background galaxies. Combining the existing single-orbit ACS I-band image with ground-based data, we have definitively identified two multiply imaged galaxies with estimated redshifts of 2.6 and 4.3, about 15 probable images of background galaxies, and a point source in the core of the central cD galaxy, which is likely to be the faint, fifth image of the quasar. The new data will provide accurate photometric redshifts, confirm that the candidate fifth image has the same spectral energy distribution as the other quasar images, allow secure identification of additional multiply-lensed galaxies for improving the mass model, and permit identification of faint cluster members. Due to the high lens redshift and the broad redshift distribution of the lensed background sources, we should be able to use the source-redshift scaling of the Einstein radius, which depends on d_ls/d_os, to derive a direct, geometric estimate of Omega_Lambda. The deeper images will also allow a weak lensing analysis to extend the mass distribution to larger radii. Unlike any other cluster lens, the time delay between the lensed quasar images (already measured for the A-B images, and measurable for the others over the next few years) breaks the so-called kappa-degeneracies that complicate weak-lensing analyses.
Lilith: a tool for constraining new physics from Higgs measurements
Bernon, Jérémy; Dumont, Béranger
2015-09-01
The properties of the observed Higgs boson with mass around 125 GeV can be affected in a variety of ways by new physics beyond the Standard Model (SM). The wealth of experimental results, targeting the different combinations for the production and decay of a Higgs boson, makes it a non-trivial task to assess the compatibility of a non-SM-like Higgs boson with all available results. In this paper we present Lilith, a new public tool for constraining new physics from signal strength measurements performed at the LHC and the Tevatron. Lilith is a Python library that can also be used in C and C++/ROOT programs. The Higgs likelihood is based on experimental results stored in an easily extensible XML database, and is evaluated from the user input, given in XML format in terms of reduced couplings or signal strengths. The results of Lilith can be used to constrain a wide class of new physics scenarios.
Bell, Shaun W.; Hansell, Richard A.; Chow, Judith C.; Tsay, Si-Chee; Hsu, N. Christina; Lin, Neng-Huei; Wang, Sheng-Hsiang; Ji, Qiang; Li, Can; Watson, John G.; Khlystov, Andrey
2013-10-01
During the spring of 2010, NASA Goddard's COMMIT ground-based mobile laboratory was stationed on Dongsha Island off the southwest coast of Taiwan, in preparation for the upcoming 2012 7-SEAS field campaign. The measurement period offered a unique opportunity for conducting detailed investigations of the optical properties of aerosols associated with different air mass regimes, including background maritime and those contaminated by anthropogenic air pollution and mineral dust. In what appears to be a first for this region, a shortwave optical closure experiment (λ = 550 nm) for both scattering and absorption was attempted over a 12-day period during which aerosols exhibited the most change. Constraints to the optical model included combined SMPS and APS number concentration data for a continuum of fine- and coarse-mode particle sizes up to PM2.5. We also take advantage of an IMPROVE chemical sampler to help constrain aerosol composition and mass partitioning of key elemental species, including sea-salt, particulate organic matter, soil, non-sea-salt sulfate, nitrate, and elemental carbon. Achieving full optical closure is hampered by limitations in accounting for the role of water vapor in the system, uncertainties in the instruments, and the need for further knowledge of the source apportionment of the model's major chemical components. Nonetheless, our results demonstrate that the observed aerosol scattering and absorption for these diverse air masses are reasonably captured by the model, where peak aerosol events and transitions between key aerosol types are evident. Signatures of heavily polluted aerosol, composed mostly of ammonium and non-sea-salt sulfate mixed with some dust, with transitions to background sea-salt conditions, are apparent in the absorption data, which is particularly reassuring owing to the large variability in the imaginary component of the refractive indices. Consistency between the measured and modeled optical parameters serves as an
Bell, Shaun W.; Hansell, Richard A.; Chow, Judith C.; Tsay, Si-Chee; Wang, Sheng-Hsiang; Ji, Qiang; Li, Can; Watson, John G.; Khlystov, Andrey
2012-01-01
During the spring of 2010, NASA Goddard's COMMIT ground-based mobile laboratory was stationed on Dongsha Island off the southwest coast of Taiwan, in preparation for the upcoming 2012 7-SEAS field campaign. The measurement period offered a unique opportunity for conducting detailed investigations of the optical properties of aerosols associated with different air mass regimes, including background maritime and those contaminated by anthropogenic air pollution and mineral dust. In what appears to be a first for this region, a shortwave optical closure experiment for both scattering and absorption was attempted over a 12-day period during which aerosols exhibited the most change. Constraints to the optical model included combined SMPS and APS number concentration data for a continuum of fine- and coarse-mode particle sizes up to PM2.5. We also take advantage of an IMPROVE chemical sampler to help constrain aerosol composition and mass partitioning of key elemental species, including sea-salt, particulate organic matter, soil, non-sea-salt sulphate, nitrate, and elemental carbon. Our results demonstrate that the observed aerosol scattering and absorption for these diverse air masses are reasonably captured by the model, where peak aerosol events and transitions between key aerosol types are evident. Signatures of heavily polluted aerosol, composed mostly of ammonium and non-sea-salt sulphate mixed with some dust, with transitions to background sea-salt conditions, are apparent in the absorption data, which is particularly reassuring owing to the large variability in the imaginary component of the refractive indices. Extinctive features at significantly smaller time scales than the one-day sample period of IMPROVE are more difficult to reproduce, as this requires further knowledge concerning the source apportionment of major chemical components in the model. Consistency between the measured and modeled optical parameters serves as an important link for advancing remote
Model for low temperature oxidation during long term interim storage
International Nuclear Information System (INIS)
Desgranges, Clara; Bertrand, Nathalie; Gauvain, Danielle; Terlain, Anne; Poquillon, Dominique; Monceau, Daniel
2004-01-01
For high-level nuclear waste containers in long-term interim storage, dry oxidation will be the first and the main degradation mode during about one century. The metal lost by dry oxidation over such a long period must be evaluated with good reliability. To achieve this goal, modelling of the oxide scale growth is necessary, and this is the aim of the dry oxidation studies performed within the framework of the COCON program. An advanced model, based on the description of the elementary mechanisms involved in scale growth at low temperatures, such as partial interfacial control of the oxidation kinetics and/or grain boundary diffusion, is being developed in order to increase the reliability of the long-term extrapolations deduced from basic models built on short-time experiments. Since only few experimental data on dry oxidation are available in the temperature range of interest, experiments have also been performed to evaluate the relevant input parameters for the models, such as the grain size of the oxide scale, considering iron as a simplified material. (authors)
Constrained minimization in C++ environment
International Nuclear Information System (INIS)
Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.
1998-01-01
Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of the possible approaches was used: the widely known program FUMILI was reimplemented in C++. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data from the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany).
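The quadratic-penalty idea described above, enforcing an equality g(x) = 0 by adding μ·g(x)² to the objective, can be sketched in a few lines. This is a generic illustration, not the FUMILI code; the toy problem and the step-size/penalty values are invented.

```python
def penalized_minimize(grad_f, constraints, x0, mu=100.0, lr=1e-3, iters=20000):
    """Minimize f subject to equality constraints g_i(x) = 0 by plain
    gradient descent on the penalized objective f(x) + mu * sum_i g_i(x)^2.
    `constraints` is a list of (g_i, grad_g_i) pairs."""
    x = list(x0)
    for _ in range(iters):
        g = grad_f(x)
        for gi, grad_gi in constraints:
            gv, gg = gi(x), grad_gi(x)
            # gradient of mu * g_i(x)^2 is 2 * mu * g_i(x) * grad g_i(x)
            g = [gj + 2 * mu * gv * ggj for gj, ggj in zip(g, gg)]
        x = [xj - lr * gj for xj, gj in zip(x, g)]
    return x

# toy problem: minimize x^2 + y^2 subject to x + y = 1 (exact solution 1/2, 1/2)
grad_f = lambda x: [2 * x[0], 2 * x[1]]
cons = [(lambda x: x[0] + x[1] - 1.0, lambda x: [1.0, 1.0])]
sol = penalized_minimize(grad_f, cons, [0.0, 0.0])
```

With a finite penalty μ the constraint is satisfied only approximately (here to within ~1/(2μ)); the penalty must grow, or an augmented Lagrangian be used, for exact feasibility.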
Bacour, C.; Maignan, F.; Porcar-Castell, A.; MacBean, N.; Goulas, Y.; Flexas, J.; Guanter, L.; Joiner, J.; Peylin, P.
2016-12-01
A new era for improving our knowledge of the terrestrial carbon cycle at the global scale has begun with recent studies on the relationships between remotely sensed Sun-Induced Fluorescence (SIF) and plant photosynthetic activity (GPP), and the availability of such satellite-derived products now "routinely" produced from GOSAT, GOME-2, or OCO-2 observations. Assimilating SIF data into terrestrial ecosystem models (TEMs) represents a novel opportunity to reduce the uncertainty of their predictions with respect to carbon-climate feedbacks, in particular the uncertainties resulting from inaccurate parameter values. A prerequisite is a correct representation in TEMs of the several drivers of plant fluorescence from the leaf to the canopy scale, in particular the competing processes of photochemistry and non-photochemical quenching (NPQ). In this study, we present the first results of a global-scale assimilation of GOME-2 SIF products within a new version of the ORCHIDEE land surface model including a physical module of plant fluorescence. At the leaf level, the regulation of the fluorescence yield is simulated both by the photosynthesis module of ORCHIDEE, to calculate the photochemical yield, and by a parametric model to estimate NPQ. The latter has been calibrated on leaf fluorescence measurements performed for boreal coniferous and Mediterranean vegetation species. A parametric representation of the SCOPE radiative transfer model is used to model the plant fluorescence fluxes for PSI and PSII and the scaling up to the canopy level. The ORCHIDEE-FluOR model is first evaluated with respect to in situ measurements of plant fluorescence flux and photochemical yield for Scots pine and wheat. The potential of SIF data to constrain the modelled GPP is evaluated by assimilating one year of GOME-2 SIF products within ORCHIDEE-FluOR. We investigate in particular the changes in the spatial patterns of GPP following the optimization of the photosynthesis and phenology parameters.
Model for low temperature oxidation during long term interim storage
International Nuclear Information System (INIS)
Desgranges, C.; Abbas, A.; Terlain, A.
2003-01-01
Low-alloyed steels or carbon steels are considered as candidate materials for the fabrication of some nuclear waste package containers for long term interim storage. The containers are required to remain retrievable for centuries. One factor limiting their performance on this time scale is corrosion. The estimation of the metal thickness lost by dry oxidation over such long periods requires the construction of reliable models from short-time experimental data. In a first step, models based on simplified oxidation theories have been derived from experimental data on iron and a low-alloy steel oxidation. Their extrapolation to long oxidation periods confirms that the expected damage due to dry oxidation could be small. In order to improve the reliability of these predictions advanced models taking into account the elementary processes involved in the whole oxidation mechanism, are under development. (authors)
Monte Carlo Euler approximations of HJM term structure financial models
Björk, Tomas
2012-11-22
We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
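A minimal Monte Carlo-Euler scheme of the kind analyzed in the paper can be sketched on a scalar geometric Brownian motion rather than the infinite-dimensional HJM dynamics; the drift, volatility, and discretization parameters below are illustrative only.

```python
import math, random

def euler_mean(mu, sigma, x0, t_end, n_steps, n_paths, seed=1):
    """Monte Carlo-Euler (Euler-Maruyama) estimate of E[X_T] for
    dX = mu*X dt + sigma*X dW, whose exact mean is x0 * exp(mu*T)."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            # one Euler step: drift plus a N(0, dt) Brownian increment
            x += mu * x * dt + sigma * x * rng.gauss(0.0, sqrt_dt)
        total += x
    return total / n_paths

est   = euler_mean(0.05, 0.2, 1.0, 1.0, 50, 10000)
exact = math.exp(0.05)
```

The total error splits, as in the paper, into a weak discretization error of order O(Δt) and a statistical error of order O(1/√n_paths), which the dual-based estimates separate explicitly.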
Energy Technology Data Exchange (ETDEWEB)
Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))
2011-07-15
When constraining surface emissions of air pollutants using inverse modelling, one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. An intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, detailed information can change greatly according to the method used, ranging from smooth, isotropic and short-range modifications to less smooth, non-isotropic and long-range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but for the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution.
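The Gaussian variant of the inversion discussed above amounts to Tikhonov-regularized least squares: corrections to the prior emissions x_b are found by minimizing ||Hx - y||² + λ||x - x_b||². A toy two-source sketch follows; the transport matrix H, observations y, and λ are all invented for illustration.

```python
def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ x = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return [(e * d - b * f) / det, (a * f - e * c) / det]

def tikhonov_inversion(H, y, xb, lam):
    """Minimize ||H x - y||^2 + lam * ||x - xb||^2 for two unknown sources.
    Normal equations: (H^T H + lam I) x = H^T y + lam xb."""
    m = len(H)
    hth = [[sum(H[k][i] * H[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    hty = [sum(H[k][i] * y[k] for k in range(m)) for i in range(2)]
    return solve2(hth[0][0] + lam, hth[0][1],
                  hth[1][0],       hth[1][1] + lam,
                  hty[0] + lam * xb[0], hty[1] + lam * xb[1])

# two sources, three observations; noise-free data from true emissions [2, 1]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 1.0, 3.0]
x_hat = tikhonov_inversion(H, y, [0.0, 0.0], lam=1e-6)
```

In the colocalization setting, the abstract's weighting-factor remedy corresponds to making λ spatially varying rather than a single scalar.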
Hudoyo, Luhur Partomo; Andriyana, Yudhie; Handoko, Budhi
2017-03-01
Quantile regression describes the conditional distribution of a response variable at various desired quantile levels. Each quantile characterizes a certain point (center or tail) of the conditional distribution. This analysis is very useful for asymmetric conditional distributions, e.g. those with heavy tails, truncation, or the presence of outliers. One nonparametric approach to estimating the conditional quantile function is Constrained B-Splines (COBS). COBS is a smoothing technique that accommodates the addition of constraints such as monotonicity, convexity and periodicity. In this study, we recast the conditional quantile minimization objective in COBS as a linear programming problem. A linear programming problem is the problem of minimizing or maximizing a linear function subject to linear constraints; the constraints may be equalities or inequalities. This research discusses the relationship between education (mean years of schooling) and economic (household expenditure) levels in Central Sulawesi Province in 2014, for which household-level data provide systematic evidence of a positive relationship, so monotonicity (increasing) constraints are used in the COBS quantile regression model.
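The check-loss (pinball) objective underlying quantile regression can be seen in miniature: the τ-quantile of a sample is the minimizer of the summed check loss, and for a finite sample the minimum is attained at a data point. COBS extends this with B-spline bases and shape constraints solved as an LP; only the loss itself is sketched here, with invented data.

```python
def check_loss(u, tau):
    """Pinball (check) loss: tau * u for positive residuals,
    (tau - 1) * u for negative ones."""
    return tau * u if u >= 0 else (tau - 1) * u

def sample_quantile(data, tau):
    """The tau-quantile minimizes sum_i check_loss(y_i - q, tau) over q;
    it suffices to search over the data points themselves."""
    return min(data, key=lambda q: sum(check_loss(y - q, tau) for y in data))

data = [1.0, 2.0, 3.0, 4.0, 100.0]   # asymmetric sample with an outlier
```

With τ = 0.5 this recovers the median (3.0), which, unlike the mean, is untouched by the outlier; larger τ moves the estimate into the upper tail.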
Classification Constrained Dimensionality Reduction
Raich, Raviv; Costa, Jose A.; Damelin, Steven B.; Hero III, Alfred O.
2008-01-01
Dimensionality reduction is a topic of recent interest. In this paper, we present the classification constrained dimensionality reduction (CCDR) algorithm to account for label information. The algorithm can account for multiple classes as well as the semi-supervised setting. We present out-of-sample expressions for both labeled and unlabeled data. For unlabeled data, we introduce a method of embedding a new point as preprocessing to a classifier. For labeled data, we introduce a method tha...
Constrained analysis of topologically massive gravity
Energy Technology Data Exchange (ETDEWEB)
Barcelos-Neto, J. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Inst. de Fisica; Dargam, T.G. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Inst. de Fisica
1995-08-01
We quantize Einstein gravity in the formalism of weak gravitational fields by using the constrained Hamiltonian method. Special emphasis is given to the 2+1 spacetime dimensional case, where a (topological) Chern-Simons term is added to the Lagrangian. (orig.)
Multi-Complementary Model for Long-Term Tracking.
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-02-09
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. With the OTB-13 benchmark datasets, our algorithm is improved by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to another classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm could still achieve good tracking speed.
Source term identification in atmospheric modelling via sparse optimization
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the
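The sparse, nonnegative recovery described above can be sketched with projected iterative soft-thresholding (ISTA) for min ½||Ax - b||² + λ Σ x_j subject to x ≥ 0. The toy "transport" matrix below is the identity, for which the answer has the closed form max(0, b_j - λ); the matrix, observations, and λ are invented for illustration.

```python
def ista_nonneg(A, b, lam, step=0.1, iters=2000):
    """Sparse nonnegative recovery: minimize 0.5*||A x - b||^2 + lam*sum(x)
    with x >= 0, by gradient steps followed by projection onto x >= 0
    (for nonnegative x the l1 penalty is just lam * sum(x))."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b and gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - step * (g[j] + lam)) for j in range(n)]
    return x

# identity 'transport' matrix: the solution is max(0, b_j - lam) componentwise
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
b = [1.0, 0.0, 0.5]
x = ista_nonneg(A, b, lam=0.2)
```

The l1 term drives small components exactly to zero, which is the sparsity the abstract argues is natural for both release locations and release time profiles.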
Modelling of long term nitrogen retention in surface waters
Halbfaß, S.; Gebel, M.; Bürger, S.
2010-12-01
In order to derive measures to reduce nutrient loadings into waters in Saxony, we calculated nitrogen inputs with the model STOFFBILANZ on the regional scale. To do so, we have to compare our modelling results to measured loadings at the river basin outlets, taking long-term nutrient retention in surface waters into account. The most important mechanism of nitrogen retention is denitrification in the contact zone of water and sediment, which is controlled by hydraulic and micro-biological processes. The retention capacity is derived on the basis of the nutrient spiralling concept, using water residence time (hydraulic aspect) and time-specific N-uptake by microorganisms (biological aspect). Short-term processes of mobilization and immobilization are neglected, because they are of minor importance for the derivation of measures on the regional scale.
Przybycin, Anna M.; Scheck-Wenderoth, Magdalena; Schneider, Michael
2014-05-01
The North Alpine Foreland Basin is situated at the northern front of the European Alps and extends over parts of France, Switzerland, Germany and Austria. It has formed as a wedge-shaped depression since the Tertiary in consequence of the Euro-Adriatic continental collision and the Alpine orogeny. The basin is filled with clastic sediments, the Molasse, originating from erosional processes of the Alps, and is underlain by Mesozoic sedimentary successions and a Paleozoic crystalline crust. For our study we have focused on the German part of the basin. To investigate the deep structure, the isostatic state and the load distribution of this region, we have constructed a 3D structural model of the basin and the Alpine area using available depth and thickness maps, regional-scale 3D structural models, as well as seismic and well data for the sedimentary part. The crust (from the top Paleozoic down to the Moho (Grad et al. 2008)) has been considered as two-parted, with a lighter upper crust and a denser lower crust; the partition has been calculated following the isostatic equilibrium approach of Pratt (1855). By implementing a seismic Lithosphere-Asthenosphere Boundary (LAB) (Tesauro 2009), the crustal-scale model has been extended to the lithospheric scale. The layer geometry and the assigned bulk densities of this starting model have been constrained by means of 3D gravity modelling (BGI, 2012). Afterwards, the 3D load distribution has been calculated using a 3D finite element method. Our results show that the North Alpine Foreland Basin is not isostatically balanced and that the configuration of the crystalline crust strongly controls the gravity field in this area. Furthermore, our results show that the basin area is influenced by varying lateral load differences down to a depth of more than 150 km, which allows a first-order statement of the compensating horizontal stress required to prevent gravitational collapse of the system. BGI (2012). The International
A Long-Term Mathematical Model for Mining Industries
International Nuclear Information System (INIS)
Achdou, Yves; Giraud, Pierre-Noel; Lasry, Jean-Michel; Lions, Pierre-Louis
2016-01-01
A parsimonious long-term model is proposed for a mining industry. Knowing the dynamics of the global reserve, the strategy of each production unit consists of an optimal control problem with two controls: first, the flux invested into prospection and the building of new extraction facilities; second, the production rate. In turn, the dynamics of the global reserve depends on the individual strategies of the producers, so the model leads to an equilibrium, which is described by low-dimensional systems of partial differential equations. The dimensionality depends on the number of technologies that a mining producer can choose. In some cases, the systems may be reduced to a Hamilton–Jacobi equation which is degenerate at the boundary and whose right-hand side may blow up at the boundary. A mathematical analysis is supplied. Then numerical simulations for models with one or two technologies are described. In particular, a numerical calibration of the model in order to fit the historical data is carried out.
A new Expert Finding model based on Term Correlation Matrix
Directory of Open Access Journals (Sweden)
Ehsan Pornour
2015-09-01
Full Text Available Due to the enormous volume of unstructured information available on the Web and inside organizations, finding an answer to a knowledge need in a short time is difficult. For this reason, beside search engines, which do not consider users' individual characteristics, recommender systems were created, which use a user's previous activities and other individual characteristics to help users find needed knowledge. Recommender system usage is increasing every day. Expert finder systems, by introducing expert people instead of recommending information, also give users the facility to ask their questions of experts. Interacting with experts not only transfers information, but also, through the sharing of experience and insight, transfers knowledge. In this paper we use university professors' academic resumes as expert profiles and propose a new expert finding model that recommends experts for a user's query. We use a Term Correlation Matrix, the Vector Space Model and the PageRank algorithm, and propose a new hybrid model which outperforms conventional methods. This model can be used on the Web and in organizations and universities where a resume dataset of experts is available.
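The vector-space component of such an expert finder can be sketched with TF-IDF weighting and cosine similarity; the Term Correlation Matrix and PageRank stages of the paper's hybrid model are omitted, and the toy "resumes" are invented.

```python
import math

def tfidf_scores(docs, query):
    """Rank 'expert' documents (e.g. resume texts) against a query using
    bag-of-words TF-IDF vectors and cosine similarity."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    vocab = set(w for d in tokenized for w in d)
    # smoothed inverse document frequency per vocabulary word
    idf = {w: math.log(n / sum(1 for d in tokenized if w in d)) + 1.0
           for w in vocab}
    def vec(tokens):
        v = {}
        for w in tokens:
            if w in idf:                      # ignore out-of-vocabulary words
                v[w] = v.get(w, 0.0) + idf[w]
        return v
    def cos(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(t * t for t in a.values()))
        nb = math.sqrt(sum(t * t for t in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    q = vec(query.lower().split())
    return [cos(vec(d), q) for d in tokenized]

experts = ["machine learning neural networks",
           "organic chemistry synthesis",
           "deep learning computer vision"]
scores = tfidf_scores(experts, "neural networks learning")
```

The first resume shares three query terms and ranks highest; the chemistry resume shares none and scores zero. The paper's correlation matrix would additionally reward semantically related, non-identical terms.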
A Long-Term Mathematical Model for Mining Industries
Energy Technology Data Exchange (ETDEWEB)
Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr [Univ. Paris Diderot, Sorbonne Paris Cité, Laboratoire Jacques-Louis Lions, UMR 7598, UPMC, CNRS (France); Giraud, Pierre-Noel [CERNA, Mines ParisTech (France); Lasry, Jean-Michel [Univ. Paris Dauphine (France); Lions, Pierre-Louis [Collège de France (France)
2016-12-15
A parsimonious long-term model is proposed for a mining industry. Knowing the dynamics of the global reserve, the strategy of each production unit consists of an optimal control problem with two controls: first, the flux invested in prospection and the building of new extraction facilities; second, the production rate. In turn, the dynamics of the global reserve depends on the individual strategies of the producers, so the model leads to an equilibrium, which is described by low-dimensional systems of partial differential equations. The dimensionality depends on the number of technologies that a mining producer can choose. In some cases, the systems may be reduced to a Hamilton–Jacobi equation which is degenerate at the boundary and whose right-hand side may blow up at the boundary. A mathematical analysis is supplied. Numerical simulations for models with one or two technologies are then described. In particular, a numerical calibration of the model to fit the historical data is carried out.
Tang, Shuaiqi
Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection is relatively sensitive to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study of 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analyses/reanalyses, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observations specified, we compare SCM results with observations and find that the models have large biases in cloud properties which could not be fully explained by the uncertainty from the large-scale forcing
Kalashnikova, O.; Xu, F.; Ge, C.; Wang, J.; Garay, M. J.; Diner, D. J.
2014-12-01
Exposure to ambient particulate matter (PM) has been consistently linked to cardiovascular and respiratory health effects. Although PM is currently monitored by a network of surface stations, these are too sparsely distributed to provide the level of spatial detail needed to link different aerosol species to given health effects, and expansion to denser coverage is impractical and cost prohibitive. We present a methodology for combining Chemical Transport Model (CTM) aerosol type information and multiangular spectropolarimetric data to establish the signature of specific aerosol types in top-of-atmosphere measurements and relate it to speciated surface PM2.5 loadings. In particular, we employ the WRF-Chem model run at the University of Nebraska and remote sensing data from the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) to explore the feasibility of this approach. We demonstrate that the CTM does well in predicting the types of aerosols present at a given location and time; however, large uncertainties currently exist in CTM estimates of the concentrations of the various aerosol species (e.g., black carbon, sulfate, dust), leading to large uncertainties in model-derived speciated PM2.5. In order to constrain CTM aerosol surface concentrations we use AirMSPI UV-VIS-NIR observations of intensity, and blue, red, and NIR observations of the Q and U Stokes parameters. We select specific scenes observed by AirMSPI and use WRF-Chem to generate an initial distribution of aerosol composition. The relevant optical properties for each aerosol species are used to calculate aerosol light scattering information. This is then used in a vector (polarized) 1-D radiative transfer model to determine at-instrument Stokes parameters for the specific AirMSPI viewing geometries. As a first step, a match is sought between the CTM-predicted radiances and the AirMSPI observations. Then, the total aerosol optical depth and fractions of various aerosol species are modified
Salameh, Dalia; Favez, Olivier; Golly, Benjamin; Besombes, Jean Luc; Alleman, Laurent; Albinet, Alexandre; Jaffrezo, Jean Luc
2017-04-01
Particulate matter (PM) is one of the most studied atmospheric pollutants in urban areas due to its adverse effects on human health (Pope et al., 2009). Intrinsic properties of PM (e.g. chemical composition and morphology) are directly linked to its origins. A harmonized and comprehensive apportionment study of PM sources in urban environments is therefore essential to connect source contributions with PM concentration levels and then develop effective PM abatement strategies. Multivariate receptor models such as Positive Matrix Factorization (PMF) are very useful and have been used worldwide for PM source apportionment (Viana et al., 2008). PMF uses a weighted least-squares fit and quantitatively determines source fingerprints (factors) and their contributions to the total PM mass. However, in many cases it can be difficult to separate two factors that co-vary due to similar seasonal variations, making the physical sense of the extracted factors unclear. To address such source collinearities, additional specific constraints are incorporated into the model (i.e., constrained PMF) based on the user's external knowledge, allowing better apportionment results. In this work, within the framework of the SOURCES project, a harmonized source apportionment approach has been implemented and applied for the determination of PM sources at a large number of sites (up to 20) of different typologies (e.g. urban background, industrial, traffic, rural and/or alpine sites) distributed all over France and previously investigated in annual or multiannual studies (2012-2016). A constrained PMF approach (using the US-EPA PMF5.0 software) was applied to the comprehensive PM offline chemical datasets (i.e. carbonaceous fraction, major ions, metals/trace elements, specific organic markers) in a harmonized way for all the investigated sites. Different types of specific chemical constraints from well-characterized sources were defined based on external knowledge and were
Kfoury, Adib; Ledoux, Frédéric; Roche, Cloé; Delmaire, Gilles; Roussel, Gilles; Courcot, Dominique
2016-02-01
The constrained weighted non-negative matrix factorization (CW-NMF) hybrid receptor model was applied to study the influence of steelmaking activities on PM2.5 (particulate matter with equivalent aerodynamic diameter less than 2.5 μm) composition in Dunkerque, Northern France. Semi-diurnal PM2.5 samples were collected using a high-volume sampler in winter 2010 and spring 2011 and were analyzed for trace metals, water-soluble ions, and total carbon using inductively coupled plasma-atomic emission spectrometry (ICP-AES), ICP-mass spectrometry (ICP-MS), ion chromatography and a micro elemental carbon analyzer. The elemental composition shows that NO3(-), SO4(2-), NH4(+) and total carbon are the main PM2.5 constituents. Trace metal data were interpreted using concentration roses, and the influences of both an integrated steelworks and an electric steel plant were evidenced. The distinction between the two sources is made possible by the use of Zn/Fe and Zn/Mn diagnostic ratios. Moreover, Rb/Cr, Pb/Cr and Cu/Cd combination ratios are proposed to distinguish the ISW-sintering stack from the ISW-fugitive emissions. The a priori knowledge of the influencing sources was introduced into the CW-NMF to guide the calculation. Eleven source profiles with various contributions were identified: 8 are characteristic of a coastal urban background site and 3 are related to the steelmaking activities. Among them, secondary nitrate, secondary sulfate and combustion profiles give the highest contributions and account for 93% of the PM2.5 concentration. The steelwork facilities contribute about 2% of the total PM2.5 concentration and appear to be the main source of Cr, Cu, Fe, Mn and Zn. Copyright © 2015. Published by Elsevier B.V.
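The factorization underlying CW-NMF can be sketched with the unconstrained core algorithm: Lee-Seung multiplicative updates that approximate a non-negative data matrix V (samples × species) as the product W H of two non-negative factors (contributions × profiles). The uncertainty weighting and the a priori profile constraints that distinguish CW-NMF are omitted here, and the data matrix is invented for illustration.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~ W H.
    Updates keep W and H non-negative and monotonically reduce the
    squared reconstruction error."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WH = matmul(W, H)
        Wt = transpose(W)
        num = matmul(Wt, V)
        den = matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        WH = matmul(W, H)
        Ht = transpose(H)
        num = matmul(V, Ht)
        den = matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

# Two overlapping "sources" mixed into four samples (hypothetical data).
V = [[1.0, 0.2], [0.8, 0.3], [0.1, 1.0], [0.2, 0.9]]
W, H = nmf(V, k=2)
approx = matmul(W, H)
err = sum((V[i][j] - approx[i][j]) ** 2 for i in range(4) for j in range(2))
print(round(err, 4))
```

In the full CW-NMF, each matrix entry is additionally weighted by its measurement uncertainty and some profile entries are fixed or bounded from source knowledge, which steers the factors toward physically meaningful profiles.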
Dasgupta, R.; Ding, S.
2017-12-01
Primitive OIBs can be used to constrain the inventory and heterogeneity of volatile elements such as sulfur (S) in the mantle, with a potentially large range of mantle potential temperature, TP, of 1400-1650 °C and different source lithologies. Yet no systematic study exists that fully models the extraction of sulfur from a sulfide-bearing heterogeneous mantle relevant for oceanic intraplate magmatism. We modeled the evolution of S and Cu during mantle decompression melting by combining experimental partial melt compositions as a function of P-T with models of sulfur content at sulfide saturation (SCSS) [1, 2]. Calculations at TP = 1450-1650 °C to explain the S and Cu inventory of near-primary ocean island basalts (OIBs) suggest that partial melts relevant to OIBs have higher SCSS than primitive MORBs because of the positive effect of temperature on SCSS. Hence, for a given sulfide content in the mantle, hotter mantle consumes sulfide more efficiently. Calculation of SCSS along melting adiabats at TP = 1450-1550 °C with variable initial S content of peridotite indicates that sulfide-undersaturated primitive Icelandic basalts with 800-950 ppm S and 80-110 ppm Cu can be generated by 10-25 wt.% melting of peridotite containing 100-250 ppm S. However, the S and Cu budgets of OIBs that are thought to represent low-degree melts can only be satisfied if equilibration with a sulfide melt with ≥25-30 wt.% Ni is applicable. Alternatively, if the Ni content of equilibrium sulfide in peridotitic mantle is ≤20-25 wt.%, mixing of low-degree partial melts of MORB-eclogite and metapelite with peridotite partial melt is necessary to reconcile the measured S and Cu contents in the low-F OIBs (Galapagos, Lau Basin, Loihi and Samoa) for TP of 1450-1650 °C. In this latter case, 25-150 ppm S in the peridotite mantle can be exhausted by 1-9 wt.% partial melting. The bulk S content of the heterogeneous mantle source of these islands is high because of the presence of subducted eclogite
Modeling of long-term energy system of Japan
International Nuclear Information System (INIS)
Gotoh, Yoshitaka; Sato, Osamu; Tadokoro, Yoshihiro
1999-07-01
In order to analyze the future potential for reducing carbon dioxide emissions, the long-term energy system of Japan was modeled following the framework of the MARKAL model, and a database of energy technology characteristics was developed. First, a reference energy system was built by incorporating all important energy sources and technologies that will be available until the year 2050. This system consists of 25 primary energy sources, 33 technologies for electric power generation and/or low-temperature heat production, 97 technologies for energy transformation, storage, and distribution, and 170 end-use technologies. Second, the database was developed for the characteristics of the individual technologies in the system. The characteristic data consist of input and output of energy carriers, efficiency, availability, lifetime, investment cost, operation and maintenance cost, CO2 emission coefficient, and others. Since a large number of technologies are included in the system, this report focuses on the modeling of the supply side and includes the database of energy technologies other than those for end-use purposes. (author)
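The MARKAL-style cost minimization can be caricatured as follows: choose activity levels for a handful of technologies to meet demand at minimum cost under an emission ceiling. The real MARKAL framework solves a large linear program over many periods and technologies; this sketch simply enumerates integer mixes over hypothetical cost and emission coefficients.

```python
from itertools import product

# Hypothetical technology data in the spirit of a MARKAL database:
# (name, cost per unit output, CO2 per unit output)
techs = [("coal", 1.0, 1.0), ("gas", 1.5, 0.5), ("nuclear", 2.5, 0.0)]

demand = 10     # required output (arbitrary units)
co2_cap = 4.0   # emission ceiling (arbitrary units)

# Enumerate integer activity levels 0..demand for each technology and
# keep the cheapest mix that meets demand under the CO2 cap.
best = None
for mix in product(range(demand + 1), repeat=len(techs)):
    if sum(mix) != demand:
        continue
    co2 = sum(x * t[2] for x, t in zip(mix, techs))
    if co2 > co2_cap:
        continue
    cost = sum(x * t[1] for x, t in zip(mix, techs))
    if best is None or cost < best[0]:
        best = (cost, mix, co2)

cost, mix, co2 = best
print(dict(zip([t[0] for t in techs], mix)), cost, co2)
```

With these invented coefficients, the emission cap forces the cheapest technology (coal) entirely out of the optimal mix, which is the kind of trade-off the full model explores at scale.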
The IEA Model of Short-term Energy Security
Energy Technology Data Exchange (ETDEWEB)
NONE
2011-07-01
Ensuring energy security has been at the centre of the IEA mission since its inception, following the oil crises of the early 1970s. While the security of oil supplies remains important, contemporary energy security policies must address all energy sources and cover a comprehensive range of natural, economic and political risks that affect energy sources, infrastructures and services. In response to this challenge, the IEA is currently developing a Model Of Short-term Energy Security (MOSES) to evaluate the energy security risks and resilience capacities of its member countries. The current version of MOSES covers short-term security of supply for primary energy sources and secondary fuels among IEA countries. It also lays the foundation for analysis of vulnerabilities of electricity and end-use energy sectors. MOSES contains a novel approach to analysing energy security, which can be used to identify energy security priorities, as a starting point for national energy security assessments and to track the evolution of a country's energy security profile. By grouping together countries with similar 'energy security profiles', MOSES depicts the energy security landscape of IEA countries. By extending the MOSES methodology to electricity security and energy services in the future, the IEA aims to develop a comprehensive policy-relevant perspective on global energy security. This Working Paper is intended for readers who wish to explore the MOSES methodology in depth; there is also a brochure which provides an overview of the analysis and results.
[Model of short-term psychological intervention in psychosomatic gynaecology].
Zutter, A-M; Bianchi-Demicheli, F
2004-02-01
This article proposes a rapid psychological intervention model for psychosomatic gynaecology. The work draws on the method developed by Dr H. Davanloo (Intensive Short Term Dynamic Psychotherapy). It consists, first, of identifying and clarifying the defence mechanisms and, second, of exercising pressure on them. This pressure causes an increase in anxiety, an intensification of the defence mechanisms and the development of an intrapsychic crisis that induces emotions and painful feelings linked to past traumata. This activation of the unconscious can trigger somatic symptoms (pain, unconscious movements, tics, muscular tensions) that highlight the link between the physical and psychic aspects. This work allows rapid access to the painful emotions that turn into symptoms. It indicates the zones and levels for therapeutic intervention. It allows psychic reality to be translated in a simple, fast and efficient way. It brings heightened awareness and comprehension for both the therapist and the patient.
Modelling the long-term vertical dynamics of salt marshes
Zoccarato, Claudia; Teatini, Pietro
2017-04-01
Salt marshes are vulnerable environments hosting complex interactions between physical and biological processes with a strong influence on the dynamics of the marsh evolution. The estimation and prediction of the elevation of a salt-marsh platform is crucial to forecast the marsh growth or regression under different scenarios considering, for example, the potential climate changes. The long-term vertical dynamics of a salt marsh is predicted with the aid of an original finite-element (FE) numerical model accounting for the marsh accretion and compaction and for the variation rates of the relative sea level rise, i.e., land subsidence of the marsh basement and eustatic rise of the sea level. The accretion term considers the vertical sedimentation of organic and inorganic material over the marsh surface, whereas the compaction reflects the progressive consolidation of the porous medium under the increasing load of the overlying younger deposits. The modelling approach is based on a 2D groundwater flow simulator, which provides the pressure evolution within a compacting/accreting vertical cross-section of the marsh assuming that the groundwater flow obeys the relative Darcy's law, coupled to a 1D vertical geomechanical module following Terzaghi's principle of effective intergranular stress. Soil porosity, permeability, and compressibility may vary with the effective intergranular stress according to empirically based relationships. The model also takes into account the geometric non-linearity arising from the consideration of large solid grain movements by using a Lagrangian approach with an adaptive FE mesh. The element geometry changes in time to follow the deposit consolidation and the element number increases in time to follow the sedimentation of new material. The numerical model is tested on different realistic configurations considering the influence of (i) the spatial distribution of the sedimentation rate in relation to the distance from the marsh margin, (ii
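The 1D geomechanical module based on Terzaghi's theory can be sketched, under strong simplifications (constant consolidation coefficient, no accretion, normalized pore pressure), as an explicit finite-difference solution of the consolidation equation du/dt = c_v d²u/dz², drained at the surface and undrained at the base. All parameter values are illustrative assumptions, not the paper's.

```python
# Explicit finite-difference solution of 1D Terzaghi consolidation:
# excess pore pressure u dissipates upward after a new load is applied.
cv = 1e-7                 # consolidation coefficient (m^2/s), assumed
dz = 0.05                 # grid spacing (m) over a 1 m column
dt = 0.4 * dz * dz / cv   # stable explicit step (must be <= 0.5 dz^2/cv)
n = 21                    # number of nodes

u = [1.0] * n             # normalized initial excess pore pressure
u[0] = 0.0                # drained surface

for _ in range(2000):
    new = u[:]
    for i in range(1, n - 1):
        new[i] = u[i] + cv * dt / (dz * dz) * (u[i + 1] - 2 * u[i] + u[i - 1])
    new[-1] = new[-2]     # impermeable base: no-flow boundary
    new[0] = 0.0
    u = new

# Average degree of consolidation (fraction of pressure dissipated).
U = 1.0 - sum(u) / n
print(round(U, 3))
```

In the paper's model this step is coupled to the 2D groundwater flow and to accretion: each new sediment layer adds load and new elements, and compressibility and permeability evolve with effective stress rather than staying constant as here.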
Synthesizing long-term sea level rise projections - the MAGICC sea level model v2.0
Nauels, Alexander; Meinshausen, Malte; Mengel, Matthias; Lorbacher, Katja; Wigley, Tom M. L.
2017-06-01
Sea level rise (SLR) is one of the major impacts of global warming; it will threaten coastal populations, infrastructure, and ecosystems around the globe in coming centuries. Well-constrained sea level projections are needed to estimate future losses from SLR and benefits of climate protection and adaptation. Process-based models that are designed to resolve the underlying physics of individual sea level drivers form the basis for state-of-the-art sea level projections. However, associated computational costs allow for only a small number of simulations based on selected scenarios that often vary for different sea level components. This approach does not sufficiently support sea level impact science and climate policy analysis, which require a sea level projection methodology that is flexible with regard to the climate scenario yet comprehensive and bound by the physical constraints provided by process-based models. To fill this gap, we present a sea level model that emulates global-mean long-term process-based model projections for all major sea level components. Thermal expansion estimates are calculated with the hemispheric upwelling-diffusion ocean component of the simple carbon-cycle climate model MAGICC, which has been updated and calibrated against CMIP5 ocean temperature profiles and thermal expansion data. Global glacier contributions are estimated based on a parameterization constrained by transient and equilibrium process-based projections. Sea level contribution estimates for Greenland and Antarctic ice sheets are derived from surface mass balance and solid ice discharge parameterizations reproducing current output from ice-sheet models. The land water storage component replicates recent hydrological modeling results. For 2100, we project 0.35 to 0.56 m (66 % range) total SLR based on the RCP2.6 scenario, 0.45 to 0.67 m for RCP4.5, 0.46 to 0.71 m for RCP6.0, and 0.65 to 0.97 m for RCP8.5. These projections lie within the range of the latest IPCC SLR
DEFF Research Database (Denmark)
Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos
2008-01-01
We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q. This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different from the conventional distance joins and closest-pairs problems. We are not aware of efficient processing algorithms for RCJ in the literature. A brute-force solution requires computational cost quadratic in the input size and does not scale well for large datasets. In view of this, we develop efficient...
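The RCJ definition can be made concrete with a brute-force sketch: the smallest circle enclosing two points is the one with segment pq as diameter, so a pair qualifies when no other point lies inside that circle. The paper's contribution is efficient algorithms; this naive version only spells out the result set, at cubic cost.

```python
import math

def rcj(P, Q):
    """Brute-force ring-constrained join: keep pairs (p, q) whose
    smallest enclosing circle (diameter pq, centred at the midpoint)
    contains no other point of P or Q."""
    result = []
    for p in P:
        for q in Q:
            cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
            r = math.dist(p, q) / 2
            if all(o in (p, q) or math.dist(o, (cx, cy)) > r
                   for o in P + Q):
                result.append((p, q))
    return result

# Tiny example: the pair ((0,0), (9,0)) is rejected because (4,0)
# falls inside its enclosing circle.
P = [(0, 0), (4, 0)]
Q = [(1, 0), (9, 0)]
print(rcj(P, Q))
```

Note the constraint is not a distance threshold: a very distant pair still qualifies as long as the circle between them is empty, which is why RCJ differs from distance joins and closest-pair queries.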
Dynamic Hybrid Model for Short-Term Electricity Price Forecasting
Directory of Open Access Journals (Sweden)
Marin Cerjan
2014-05-01
Full Text Available Accurate forecasting tools are essential in the operation of electric power systems, especially in deregulated electricity markets. Electricity price forecasting is necessary for all market participants to optimize their portfolios. In this paper we propose a hybrid method for short-term hourly electricity price forecasting. The paper combines statistical techniques for pre-processing of data with a multi-layer perceptron (MLP) neural network for electricity price forecasting and price spike detection. Based on statistical analysis, days are arranged into several categories. Similar days are identified by the correlation significance of the historical data. Factors impacting electricity price forecasting, including historical price, load and wind production factors, are discussed. A price spike index (CWI) is defined for spike detection and forecasting. Using the proposed approach we created several forecasting models of varying complexity. The method is validated using European Energy Exchange (EEX) electricity price data records. Finally, results are discussed with respect to price volatility, with emphasis on price forecasting accuracy.
Modelling substorm chorus events in terms of dispersive azimuthal drift
Directory of Open Access Journals (Sweden)
A. B. Collier
2004-12-01
Full Text Available The Substorm Chorus Event (SCE is a radio phenomenon observed on the ground after the onset of the substorm expansion phase. It consists of a band of VLF chorus with rising upper and lower cutoff frequencies. These emissions are thought to result from Doppler-shifted cyclotron resonance between whistler mode waves and energetic electrons which drift into a ground station's field of view from an injection site around midnight. The increasing frequency of the emission envelope has been attributed to the combined effects of energy dispersion due to gradient and curvature drifts, and the modification of resonance conditions and variation of the half-gyrofrequency cutoff resulting from the radial component of the ExB drift.
A model is presented which accounts for the observed features of the SCE in terms of the growth rate of whistler mode waves due to anisotropy in the electron distribution. This model provides an explanation for the increasing frequency of the SCE lower cutoff, as well as reproducing the general frequency-time signature of the event. In addition, the results place some restrictions on the injected particle source distribution which might lead to a SCE.
Key words. Space plasma physics (Wave-particle interaction) – Magnetospheric physics (Plasma waves and instabilities; Storms and substorms)
Constraints on the affinity term for modeling long-term glass dissolution rates
International Nuclear Information System (INIS)
Bourcier, W.L.; Carroll, S.A.; Phillips, B.L.
1993-11-01
Predictions of long-term glass dissolution rates are highly dependent on the form of the affinity term in the rate expression. Analysis of the quantitative effect of saturation state on the glass dissolution rate for CSG glass (a simple analog of SRL-165 glass) shows that a simple (1 - Q/K) affinity term does not match experimental results. Our data at 100 °C are better fit by an affinity term of the form (1 - (Q/K)^(1/σ)), where σ = 10
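The practical difference between the two affinity terms is easy to see numerically: with σ = 10, the exponent 1/σ pushes (Q/K)^(1/σ) toward 1, so the fitted form suppresses the dissolution rate far more strongly than the linear form as the solution approaches saturation (Q/K → 1). A minimal comparison:

```python
# Fractional dissolution rate (relative to the far-from-equilibrium
# rate) under the two affinity terms discussed above.
sigma = 10.0

def linear_affinity(q_over_k):
    """Simple (1 - Q/K) term."""
    return 1.0 - q_over_k

def fitted_affinity(q_over_k, s=sigma):
    """(1 - (Q/K)**(1/sigma)) term fitted to the 100 C data."""
    return 1.0 - q_over_k ** (1.0 / s)

for x in (0.1, 0.5, 0.9, 0.99):
    print(x, round(linear_affinity(x), 4), round(fitted_affinity(x), 4))
```

For example, at Q/K = 0.9 the linear term still predicts 10% of the forward rate, whereas the fitted term predicts about 1%, which is why the choice of affinity term matters so much for long-term rate extrapolations.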
A modelling study of long term green roof retention performance.
Stovin, Virginia; Poë, Simon; Berretta, Christian
2013-12-15
This paper outlines the development of a conceptual hydrological flux model for the long term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance. Copyright © 2013 Elsevier Ltd. All rights reserved.
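The conceptual flux model can be sketched as a daily moisture bucket: rainfall fills the substrate store, excess spills as runoff, and evapotranspiration, scaled by moisture content in the spirit of the paper's ET function, restores retention capacity between events. All parameter values and the rainfall series below are illustrative assumptions, not the paper's calibrated ones.

```python
# Minimal daily moisture-flux bucket model for a green roof substrate.
capacity = 20.0   # moisture-holding capacity of the substrate (mm)
et_max = 2.0      # maximum evapotranspiration rate (mm/day)

def step(storage, rainfall):
    """Advance one day: fill storage with rain, spill excess as runoff,
    then restore capacity via moisture-dependent evapotranspiration."""
    storage += rainfall
    runoff = max(0.0, storage - capacity)
    storage = min(storage, capacity)
    et = et_max * storage / capacity   # ET scaled by moisture content
    storage = max(0.0, storage - et)
    return storage, runoff

rain = [0, 30, 0, 0, 5, 0, 0, 25, 0, 0]   # mm/day, hypothetical series
storage, total_rain, total_runoff = 10.0, sum(rain), 0.0
for r in rain:
    storage, runoff = step(storage, r)
    total_runoff += runoff

retention = 1.0 - total_runoff / total_rain
print(round(retention, 2))
```

Even this toy version reproduces the qualitative behaviour reported above: small events are retained completely, while large events exceeding the available capacity generate runoff, so per-event retention falls for high return period events.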
Delgado, F.; Kubanek, J.; Anderson, K. R.; Lundgren, P.; Pritchard, M. E.
2017-12-01
The 2011-2012 eruption of Cordón Caulle volcano in Chile is the best scientifically observed rhyodacitic eruption and is thus a key place to understand the dynamics of these rare but powerful explosive rhyodacitic eruptions. Because the volatile phase controls both the temporal evolution of the eruption and the eruptive style, either explosive or effusive, it is important to constrain the physical parameters that drive these eruptions. The eruption began explosively and after two weeks evolved into hybrid explosive - lava flow effusion, whose volume-time evolution we constrain with a series of TanDEM-X digital elevation models. Our data show the intrusion of a large-volume laccolith or cryptodome during the first 2.5 months of the eruption and lava flow effusion only afterwards, with a total volume of 1.4 km3. InSAR data from the ENVISAT and TerraSAR-X missions show more than 2 m of subsidence during the effusive eruption phase, produced by deflation of a finite spheroidal source at a depth of 5 km. In order to constrain the total magma H2O content, crystal cargo, and reservoir pressure drop, we numerically solve the coupled equations of a pressurized magma reservoir and magma conduit flow with time-dependent density, volatile exsolution and viscosity, which we use to invert the InSAR and topographic data time series. We compare the best-fit model parameters with independent estimates of magma viscosity and total gas content measured from lava samples. Preliminary modeling shows that although it is not possible to model both the InSAR and the topographic data during the onset of the laccolith emplacement, it is possible to constrain the magma H2O and crystal contents to 4 wt% and 30%, which agree well with published literature values.
International Nuclear Information System (INIS)
Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J.
2010-01-01
Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image-guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local registration framework based on deformable mesh models, as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm for the 3D CBCT images. The global registration was based on the mutual information method, and the local registration minimized the Euclidean distance of the corresponding nodal points from the global transformation of the deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied to six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. Distance-based and volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to
Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.
2017-06-01
In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ˜ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ˜ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10-11 to 4. 0 × 10-11 cm3 molec-1 s-1. The average enthalpy of vaporization of secondary organic aerosol
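The VBS partitioning at the heart of such box models can be sketched as follows: each volatility bin with saturation concentration C* partitions to the particle phase with fraction 1/(1 + C*/C_OA), and since C_OA itself is the sum of the condensed masses, it is found by fixed-point iteration. Bin masses and saturation concentrations below are invented for illustration.

```python
# Volatility basis set (VBS) partitioning sketch.
cstar = [0.1, 1.0, 10.0, 100.0]   # saturation concentrations (ug/m3)
mass = [2.0, 3.0, 4.0, 5.0]       # total (gas + particle) mass per bin

def oa_concentration(cstar, mass, iters=100):
    """Solve C_OA = sum_i m_i / (1 + C*_i / C_OA) by fixed-point
    iteration, starting from an arbitrary positive guess."""
    coa = 1.0
    for _ in range(iters):
        coa = sum(m / (1.0 + c / coa) for m, c in zip(mass, cstar))
    return coa

coa = oa_concentration(cstar, mass)
fractions = [1.0 / (1.0 + c / coa) for c in cstar]
print(round(coa, 2), [round(f, 2) for f in fractions])
```

Low-volatility bins (C* well below C_OA) condense almost entirely, while high-volatility bins stay mostly in the gas phase; temperature enters the full model by shifting each C* through the Clausius-Clapeyron relation with the enthalpy of vaporization, which is what the chamber experiments at 263 and 288 K constrain.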
Bagging constrained equity premium predictors
DEFF Research Database (Denmark)
Hillebrand, Eric; Lee, Tae-Hwy; Medeiros, Marcelo
2014-01-01
The literature on excess return prediction has considered a wide array of estimation schemes, among them unrestricted and restricted regression coefficients. We consider bootstrap aggregation (bagging) to smooth parameter restrictions. Two types of restrictions are considered: positivity of the regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. Monte Carlo simulations show that forecast gains can be achieved in realistic sample sizes for the stock return problem. In an empirical application using the data set of Campbell and Thompson (2008), we show that we can improve the forecast performance further by smoothing the restriction through bagging.
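The two ingredients, a hard positivity restriction on the regression coefficient and bagging to smooth it, can be sketched as follows. This is an illustration of the idea only, not the paper's estimator or its asymptotics.

```python
import numpy as np

rng = np.random.default_rng(0)

def restricted_forecast(y, x, x_new):
    """OLS forecast with the positivity restriction beta >= 0 imposed."""
    X = np.column_stack([np.ones_like(x), x])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    b = max(b, 0.0)  # hard truncation of the slope coefficient
    return a + b * x_new

def bagged_forecast(y, x, x_new, B=200):
    """Smooth the hard truncation by averaging it over bootstrap resamples."""
    n = len(y)
    preds = []
    for _ in range(B):
        idx = rng.integers(0, n, n)  # resample observations with replacement
        preds.append(restricted_forecast(y[idx], x[idx], x_new))
    return float(np.mean(preds))
```

Because the truncation is a kink in the estimator, averaging over resamples replaces the kink with a smooth function of the data, which is the mechanism behind the variance reduction discussed in the abstract.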
Considering extraction constraints in long-term oil price modelling
International Nuclear Information System (INIS)
Rehrl, Tobias; Friedrich, Rainer; Voss, Alfred
2005-01-01
Apart from divergence about the remaining global oil resources, the peak oil discussion can be reduced to a dispute about the time rate at which these resources can be supplied. On the one hand, it is problematic to project oil supply trends without taking both prices and supply costs explicitly into account. On the other hand, supply cost estimates are themselves heavily dependent on the underlying extraction rates and are actually only valid within a certain business-as-usual extraction rate scenario (which is itself what has to be determined). In fact, even after enhanced recovery technologies have been applied, the rate at which an oil field can be exploited is quite restricted. Above a certain level, an additional increase in the extraction rate can only be achieved at high cost, risks losses in the overall recoverable amount of the oil reservoir, and causes much higher marginal costs. This inflexibility in extraction can in principle be overcome by access to new oil fields. This indicates why the discovery trend may roughly form the long-term oil production curve, at least for price-taking suppliers. The long-term oil discovery trend itself can be described as a logistic process with the two opposed effects of learning and depletion. This leads to the well-known Hubbert curve. Several attempts have been made to incorporate economic variables econometrically into the Hubbert model. With this work we follow a somewhat inverse approach and integrate Hubbert curves into our Long-term Oil Price and EXtraction model LOPEX. In LOPEX we assume that non-OPEC oil production, as long as the oil can be profitably discovered and extracted, is restricted to follow self-regulative discovery trends described by Hubbert curves. Non-OPEC production in LOPEX therefore consists of those Hubbert cycles that are profitable, depending on supply cost and price. Endogenous and exogenous technical progress are additionally integrated in different ways. LOPEX determines extraction and price
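The Hubbert curve mentioned above is the derivative of a logistic cumulative-discovery curve: production rises, peaks when half the ultimately recoverable resource (URR) is produced, and declines symmetrically. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def hubbert_production(t, urr, b, t_peak):
    """Annual production of one Hubbert cycle.

    Derivative of the logistic cumulative curve Q(t) = urr / (1 + exp(-b (t - t_peak))):
    P(t) = urr * b * e / (1 + e)^2 with e = exp(-b (t - t_peak))."""
    e = np.exp(-b * (t - t_peak))
    return urr * b * e / (1.0 + e) ** 2

# illustrative cycle: 2000 units URR, growth parameter 0.05/yr, peak in 2005
years = np.arange(1900, 2101)
p = hubbert_production(years, urr=2000.0, b=0.05, t_peak=2005.0)
```

Production peaks at t_peak with rate urr * b / 4, and the area under the curve approaches the URR, which is exactly the "constraint on temporal availability" the abstract describes.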
Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective
Energy Technology Data Exchange (ETDEWEB)
Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)
2017-11-01
Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.
Long-term durum wheat monoculture: modelling and future projection
Directory of Open Access Journals (Sweden)
Ettore Bernardoni
2012-03-01
Full Text Available The potential effects of future climate change on grain production of a winter durum wheat cropping system were investigated. Based on future climate change projections, derived from a statistical downscaling process applied to the HadCM3 general circulation model and referred to two IPCC scenarios (A2 and B1), the response of yield and aboveground biomass (AGB) and the variation in total organic carbon (TOC) were explored. The software used in this work is a hybrid dynamic simulation model able to simulate, under different pedoclimatic conditions, the processes involved in the cropping system, such as crop growth and development and the water and nitrogen balance. It implements different approaches in order to ensure accurate simulation of the main processes related to the soil-crop-atmosphere continuum. The model was calibrated using soil data, crop yield, AGB and phenology coming from a long-term experiment located in the Apulia region. The calibration was performed using data collected in the period 1978-1990; validation was carried out on the 1991-2009 data. Phenology simulation was sufficiently accurate, showing some limitation only in predicting physiological maturity. Yields and AGBs were predicted with acceptable accuracy during both calibration and validation. CRM was always close to its optimum value, EF scored a positive value in every case, and the r2 values were good, although in some cases values lower than 0.6 were calculated. The slope of the linear regression equation between measured and simulated values was always close to 1, indicating an overall good performance of the model. Both future climate scenarios led to a general increase in yields but a slight decrease in AGB values. Data showed variations in the total production and yield among the different periods due to the climate variation. TOC evolution suggests that the combination of temperature and precipitation is the main factor affecting TOC variation under future scenarios.
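The agreement statistics quoted above can be computed from paired observed and simulated values; a sketch of the standard definitions (coefficient of residual mass CRM, Nash-Sutcliffe modelling efficiency EF, and r2), not the authors' code:

```python
import numpy as np

def model_stats(obs, sim):
    """CRM, EF (Nash-Sutcliffe), and r^2 for observed vs. simulated values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    crm = (obs.sum() - sim.sum()) / obs.sum()                     # optimum: 0
    ef = 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()  # optimum: 1
    r = np.corrcoef(obs, sim)[0, 1]
    return crm, ef, r ** 2
```

A perfect simulation gives CRM = 0, EF = 1 and r2 = 1; EF turns negative when the model predicts worse than the observed mean, which is why a positive EF is reported as a pass.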
Creating a Long-Term Diabetic Rabbit Model
Directory of Open Access Journals (Sweden)
Jianpu Wang
2010-01-01
Full Text Available The aim of this study was to create a long-term rabbit model of diabetes mellitus for medical studies of up to one year or longer and to evaluate the effects of chronic hyperglycemia on damage to major organs. A single dose of alloxan monohydrate (100 mg/kg) was given intravenously to 20 young New Zealand White rabbits. Another 12 age-matched normal rabbits were used as controls. Hyperglycemia developed within 48 hours after treatment with alloxan. Insulin was given daily after diabetes developed. All animals gained some body weight, but the gain was much less than that of the age-matched nondiabetic rabbits. Hyperlipidemia and higher blood urea nitrogen and creatinine were found in the diabetic animals. Histologically, the pancreas showed marked beta cell damage. The kidneys showed significantly thickened afferent glomerular arterioles with narrowed lumens along with glomerular atrophy. Lipid accumulation in the cytoplasm of hepatocytes appeared as vacuoles. Full-thickness skin wound healing was delayed. In summary, with careful management, alloxan-induced diabetic rabbits can be maintained for one year or longer in reasonably good health for diabetic studies.
Marine and Coastal Morphology: medium term and long-term area modelling
DEFF Research Database (Denmark)
Kristensen, Sten Esbjørn
This thesis documents the development and application of a modelling concept developed in collaboration between DTU and DHI. The modelling concept is used in morphological modelling in coastal areas where the governing sediment transport processes are due to wave action. [...] evolution model and apply them to problems concerning coastal protection strategies (both hard and soft measures). The applied coastal protection strategies involve the morphological impact of detached shore-parallel segmented breakwaters and shore-normal impermeable groynes in groyne fields, and morphological [...]
Ala-aho, Pertti; Tetzlaff, Doerthe; McNamara, James P.; Laudon, Hjalmar; Soulsby, Chris
2017-10-01
Tracer-aided hydrological models are increasingly used to reveal fundamentals of runoff generation processes and water travel times in catchments. Modelling studies integrating stable water isotopes as tracers are mostly based in temperate and warm climates, leaving catchments with strong snow influences underrepresented in the literature. Such catchments are challenging, as the isotopic tracer signals in water entering the catchments as snowmelt are typically distorted from incoming precipitation due to fractionation processes in seasonal snowpack. We used the Spatially distributed Tracer-Aided Rainfall-Runoff (STARR) model to simulate fluxes, storage, and mixing of water and tracers, as well as estimating water ages in three long-term experimental catchments with varying degrees of snow influence and contrasting landscape characteristics. In the context of northern catchments the sites have exceptionally long and rich data sets of hydrometric data and - most importantly - stable water isotopes for both rain and snow conditions. To adapt the STARR model for sites with strong snow influence, we used a novel parsimonious calculation scheme that takes into account the isotopic fractionation through snow sublimation and snowmelt. The modified STARR setup simulated the streamflows, isotope ratios, and snow pack dynamics quite well in all three catchments. From this, our simulations indicated contrasting median water ages and water age distributions between catchments brought about mainly by differences in topography and soil characteristics. However, the variable degree of snow influence in catchments also had a major influence on the stream hydrograph, storage dynamics, and water age distributions, which was captured by the model. Our study suggested that snow sublimation fractionation processes can be important to include in tracer-aided modelling for catchments with seasonal snowpack, while the influence of fractionation during snowmelt could not be unequivocally
Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective
Energy Technology Data Exchange (ETDEWEB)
Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States); Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States); Young, David [Electric Power Research Inst., Palo Alto, CA (United States); Marcy, Cara [Energy Information Administration, Washington, DC (United States); Namovicz, Chris [Energy Information Administration, Washington, DC (United States); Edelman, Risa [Environmental Protection Agency, Washington, DC (United States); Meroney, Bill [Environmental Protection Agency; Sims, Ryan [Environmental Protection Agency; Stenhouse, Jeb [Environmental Protection Agency; Donohoo-Vallett, Paul [U.S. Department of Energy
2017-11-03
Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops of VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.
Coding for Two Dimensional Constrained Fields
DEFF Research Database (Denmark)
Laursen, Torben Vaarbye
2006-01-01
[...] a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that, based on a set of conditional probabilities, can help in choosing the large number of parameters of the model in order to obtain a stationary model. Explicit results are given for the No Isolated Bits constraint. Finally, we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower bounds are very tight for the run-length limited fields. Explicit bounds are given for the diamond constrained field as well.
Robust stability in constrained predictive control through the Youla parameterisations
DEFF Research Database (Denmark)
Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad
2011-01-01
In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in the face of norm-bounded uncertainties. Under special conditions, guarantees are also given for hard input constraints. In more detail, we parameterise the MPC predictions in terms of the primary Youla parameter and use this parameter as the on-line optimisation variable. The uncertainty is parameterised in terms of the dual Youla parameter. Stability can then be guaranteed through small gain arguments on the loop consisting of the primary and dual Youla parameters. This is included in the MPC optimisation as a constraint on the induced gain of the optimisation variable. We illustrate the method with a numerical simulation example.
Modelling long-term oil price and extraction with a Hubbert approach: The LOPEX model
International Nuclear Information System (INIS)
Rehrl, Tobias; Friedrich, Rainer
2006-01-01
The LOPEX (Long-term Oil Price and EXtraction) model generates long-term scenarios about future world oil supply and corresponding price paths up to the year 2100. In order to determine oil production in non-OPEC countries, the model uses Hubbert curves. Hubbert curves reflect the logistic nature of the discovery process and the associated constraint on the temporal availability of oil. Extraction paths and the world oil price path are both derived endogenously from OPEC's intertemporally optimal cartel behaviour. Thereby OPEC is faced with both the price-dependent production of the non-OPEC competitive fringe and price-dependent world oil demand. World oil demand is modelled with a constant price elasticity function and refers to a scenario from ACROPOLIS-POLES. LOPEX results indicate a significantly higher oil price from around 2020 onwards compared to the reference scenario, and that a stagnating market share of at most 50% is optimal for OPEC.
Chance Constrained Optimization for Targeted Internet Advertising
Deza, Antoine; Huang, Kai; Metel, Michael R.
2014-01-01
We introduce a chance constrained optimization model for the fulfillment of guaranteed display Internet advertising campaigns. The proposed formulation for the allocation of display inventory takes into account the uncertainty of the supply of Internet viewers. We discuss and present theoretical and computational features of the model via Monte Carlo sampling and convex approximations. Theoretical upper and lower bounds are presented along with a numerical substantiation.
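The Monte Carlo sampling idea can be illustrated with the simplest form of such a chance constraint: the largest number of impressions that can be guaranteed with probability at least 1 - eps is the eps-quantile of the sampled viewer supply. The Poisson supply model below is hypothetical, chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_guarantee(supply_samples, eps=0.05):
    """Largest guaranteed delivery g with P(supply >= g) >= 1 - eps,
    estimated as the empirical eps-quantile of Monte Carlo supply samples."""
    return float(np.quantile(np.asarray(supply_samples), eps))

# hypothetical viewer-supply model for one inventory slice
samples = rng.poisson(10000, size=20000)
g = max_guarantee(samples, eps=0.05)
```

The gap between the mean supply and g is the price of the guarantee: the tighter the confidence level, the less inventory the publisher can promise in advance.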
Security-constrained unit commitment with flexible operating modes
Lu, Bo
The electricity industry throughout the world, which has long been dominated by vertically integrated utilities, is facing enormous challenges. To enhance the competition in electricity industry, vertically integrated utilities are evolving into a distributed and competitive industry in which market forces drive the price of electricity and possibly reduce the net cost of supplying electrical loads through increased competition. To excel in the competition, generation companies (GENCOs) will acquire additional generating units with flexible operating capability which allows a timely response to the continuous changes in power system conditions. This dissertation considers the short-term scheduling of generating units with flexible modes of operation in security-constrained unit commitment (SCUC). Among the units considered in this study are combined cycle units, fuel switching/blending units, photovoltaic/battery system, pumped-storage units, and cascaded hydro units. The proposed security-constrained unit commitment solution will include a detailed model of transmission system which could impact the short-term scheduling of units with flexible operation modes.
Human models of migraine - short-term pain for long-term gain
DEFF Research Database (Denmark)
Ashina, Messoud; Hansen, Jakob Møller; Á Dunga, Bára Oladóttir
2017-01-01
Migraine is a complex disorder characterized by recurrent episodes of headache, and is one of the most prevalent and disabling neurological disorders. A key feature of migraine is that various factors can trigger an attack, and this phenomenon provides a unique opportunity to investigate disease mechanisms by experimentally inducing migraine attacks. In this Review, we summarize the existing experimental models of migraine in humans, including those that exploit nitric oxide, histamine, neuropeptide and prostaglandin signalling. We describe the development and use of these models in the discovery of molecular pathways that are responsible for initiation of migraine attacks. Combining experimental human models with advanced imaging techniques might help to identify biomarkers of migraine, and in the ongoing search for new and better migraine treatments, human models will have a key role in the discovery of future targets for more-specific and more-effective mechanism-based antimigraine drugs.
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
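A toy one-dimensional version of the two-stage idea, first maximize the ordinal fit (Kendall's tau) and then choose the best least-squares fit among the tau-optimal candidates, might look like the sketch below. It is only an illustration of the principle, not the OCLO algorithm of the paper.

```python
import numpy as np
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's tau-a between two sequences of equal length."""
    s = 0
    for i, j in combinations(range(len(a)), 2):
        s += np.sign((a[i] - a[j]) * (b[i] - b[j]))
    return s / (len(a) * (len(a) - 1) / 2)

def oclo_1d(x, y, grid):
    """Among candidate slopes, keep those maximizing the ordinal fit of the
    predictions to y, then pick the least-squares best slope in that set."""
    best, best_tau = [], -2.0
    for b in grid:
        t = kendall_tau(b * x, y)
        if t > best_tau + 1e-12:
            best, best_tau = [b], t
        elif abs(t - best_tau) <= 1e-12:
            best.append(b)
    # least-squares refinement over the tau-optimal set (intercept via mean)
    sse = [(b, ((y - (b * x + (y - b * x).mean())) ** 2).sum()) for b in best]
    return min(sse, key=lambda p: p[1])[0]
```

With monotone noise-free data every positive slope is tau-optimal, so the metric stage alone decides; with outliers present, the ordinal stage is what shields the fit, which is the effect the simulations in the abstract quantify.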
A long term model of circulation. [human body
White, R. J.
1974-01-01
A quantitative approach to modeling human physiological function, with a view toward ultimate application to long duration space flight experiments, was undertaken. Data was obtained on the effect of weightlessness on certain aspects of human physiological function during 1-3 month periods. Modifications in the Guyton model are reviewed. Design considerations for bilateral interface models are discussed. Construction of a functioning whole body model was studied, as well as the testing of the model versus available data.
Divers, Marion T; Elliott, Emily M; Bain, Daniel J
2013-02-19
Leaking sewer infrastructure contributes nonpoint nitrogen pollution to groundwater and surface water in urban watersheds. However, these inputs are poorly quantified in watershed budgets, potentially underestimating pollutant loadings. In this study, we used inverse methods to constrain dissolved inorganic nitrogen (DIN) inputs from sewage to Nine Mile Run (NMR), an urban watershed (1570 ha) in Pittsburgh, Pennsylvania (USA) characterized by extensive impervious surface cover (38%). Water samples were collected biweekly over two years and intensive sampling was conducted during one summer storm. A nitrogen budget for the NMR watershed was constructed, ultimately inverted, and sewage DIN inputs constrained using Monte Carlo simulation. Results reveal substantial DIN contributions from sewage ranging from 6 to 14 kg ha-1 yr-1. When conservative estimates of DIN from sewage are included in input calculations, DIN retention in NMR is comparable to high rates observed in other suburban/urban nutrient budgets (84%). These results suggest a pervasive influence of leaking sewers during baseflow conditions and indicate that sewage-sourced DIN is not limited to sewer overflow events. Further, they highlight the importance of sewage inputs to DIN budgets in urban streams, particularly as sewer systems age across the U.S.
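The inverse budget calculation can be sketched as a Monte Carlo exercise: write the mass balance, sample priors for the better-known terms, and solve for the sewage term in every draw. All priors and the export value below are invented for illustration; they are not the Nine Mile Run numbers.

```python
import numpy as np

rng = np.random.default_rng(2)

def constrain_sewage_din(export_obs, n=100_000):
    """Invert a simple watershed DIN budget by Monte Carlo.

    Mass balance assumed: export = (1 - retention) * (atmospheric + fertilizer + sewage),
    all fluxes in kg N ha-1 yr-1. Returns the 2.5/50/97.5 percentiles of the
    sewage input over the admissible draws."""
    atm = rng.uniform(5.0, 9.0, n)       # illustrative prior on deposition
    fert = rng.uniform(1.0, 4.0, n)      # illustrative prior on fertilizer
    ret = rng.uniform(0.75, 0.90, n)     # illustrative prior on DIN retention
    sewage = export_obs / (1.0 - ret) - atm - fert
    sewage = sewage[sewage > 0]          # keep physically admissible draws only
    return np.percentile(sewage, [2.5, 50, 97.5])

lo, med, hi = constrain_sewage_din(export_obs=3.0)
```

The spread of the resulting percentiles is the honest statement of how well the observations pin down the leakage, analogous to the 6-14 kg ha-1 yr-1 range reported above.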
Essays on financial econometrics : modeling the term structure of interest rates
Bouwman, Kees Evert
2008-01-01
This dissertation bundles five studies in financial econometrics that are related to the theme of modeling the term structure of interest rates. The main contribution of this dissertation is a new arbitrage-free term structure model that is applied in an empirical analysis of the US term structure.
D-term contributions and CEDM constraints in E6 × SU(2)F × U(1)A SUSY GUT model
Shigekami, Yoshihiro
2017-11-01
We focus on an E6 × SU(2)F × U(1)A supersymmetric (SUSY) grand unified theory (GUT) model. In this model, realistic Yukawa hierarchies and mixings are realized by introducing all allowed interactions with 𝓞(1) coefficients. Moreover, we can take the stop mass to be smaller than the other sfermion masses. This type of spectrum, called a natural-SUSY-type sfermion mass spectrum, can suppress the SUSY contributions to flavor changing neutral currents (FCNCs) and stabilize the weak scale at the same time. However, a light stop predicts a large up-quark CEDM, and the stop contributions are not decoupled. Since there is the Kobayashi-Maskawa phase, the stop contributions to the up-quark CEDM are severely constrained even if all SUSY breaking parameters and the Higgsino mass parameter μ are real. In this model, real up-type Yukawa couplings are realized at the GUT scale because of spontaneous CP violation. Therefore the CEDM bounds are satisfied, although the up-type Yukawa couplings become complex at the SUSY scale through renormalization group equation effects. We calculated the CEDMs and found that the EDM constraints can be satisfied even if the stop mass is 𝓞(1) TeV. In addition, we investigate the size of the D-terms in this model. Since these D-term contributions are flavor dependent, the degeneracy of the sfermion mass spectrum is destroyed and the size of the D-term is strongly constrained by FCNCs when the SUSY breaking scale is the weak scale. However, the SUSY breaking scale is larger than 1 TeV in order to obtain the 125 GeV Higgs mass, and therefore a sizable D-term contribution is allowed. Furthermore, we obtained a non-trivial prediction for the difference of squared sfermion masses.
Constraining holographic technicolor
Energy Technology Data Exchange (ETDEWEB)
Levkov, D.G., E-mail: levkov@ms2.inr.ac.ru [Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary Prospect 7a, Moscow 117312 (Russian Federation); Rubakov, V.A. [Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary Prospect 7a, Moscow 117312 (Russian Federation); Physics Department, Moscow State University, Vorobjevy Gory, Moscow 119991 (Russian Federation); Troitsky, S.V.; Zenkevich, Y.A. [Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary Prospect 7a, Moscow 117312 (Russian Federation)
2012-09-19
We obtain a new bound on the value of the Peskin-Takeuchi S parameter in a wide class of bottom-up holographic models for technicolor. Namely, we show that a weakly coupled holographic description in these models implies S ≫ 0.2. Our bound is in conflict with the results of electroweak precision measurements, so it strongly disfavors the models we consider.
Human models of migraine - short-term pain for long-term gain.
Ashina, Messoud; Hansen, Jakob Møller; Á Dunga, Bára Oladóttir; Olesen, Jes
2017-12-01
Migraine is a complex disorder characterized by recurrent episodes of headache, and is one of the most prevalent and disabling neurological disorders. A key feature of migraine is that various factors can trigger an attack, and this phenomenon provides a unique opportunity to investigate disease mechanisms by experimentally inducing migraine attacks. In this Review, we summarize the existing experimental models of migraine in humans, including those that exploit nitric oxide, histamine, neuropeptide and prostaglandin signalling. We describe the development and use of these models in the discovery of molecular pathways that are responsible for initiation of migraine attacks. Combining experimental human models with advanced imaging techniques might help to identify biomarkers of migraine, and in the ongoing search for new and better migraine treatments, human models will have a key role in the discovery of future targets for more-specific and more-effective mechanism-based antimigraine drugs.
Modelling and planning urban mobility on long term by age-cohort model
Directory of Open Access Journals (Sweden)
Krakutovski Zoran
2017-01-01
Full Text Available Modelling and planning urban mobility over the long term is a very complex challenge. The principal sources for the analysis of urban mobility are surveys carried out at particular points in time, usually every ten years. If at least two surveys carried out in different periods are available, it is possible to construct pseudo-longitudinal data using demographic variables such as age and generation. Temporal changes in the population's daily urban mobility behaviour can then be assessed using these pseudo-longitudinal data. The decomposition of temporal effects into an effect of age and an effect of generation (cohort) makes it possible to draw the sample profile over the life cycle and to estimate its temporal deformations. This is the origin of the "age-cohort" model for forecasting urban mobility over the long term. The analysis and the investigated data come from three urban mobility surveys of the Lille urban area in France.
An autonomous vehicle: Constrained test and evaluation
Griswold, Norman C.
1991-11-01
The objective of the research is to develop an autonomous vehicle which utilizes stereo camera sensors (using ambient light) to follow complex paths at speeds up to 35 mph with consideration of moving vehicles within the path. The task is intended to demonstrate the contribution to safety of a vehicle under automatic control. All of the long-term scenarios investigating future reduction in congestion involve an automatic system taking control, or partial control, of the vehicle. A vehicle which includes a collision avoidance system is a prerequisite to an automatic control system. The report outlines the results of a constrained test of a vision-controlled vehicle. In order to demonstrate its ability to perform on the current street system, the vehicle was constrained to recognize, approach, and stop at an ordinary roadside stop sign.
Doubly Constrained Robust Blind Beamforming Algorithm
Directory of Open Access Journals (Sweden)
Xin Song
2013-01-01
Full Text Available We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, which is based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields higher signal capture performance, achieves greater array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.
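For orientation, the baseline (unconstrained) LSCMA iteration that such robust variants build on can be sketched as below: alternately project the array output onto the unit modulus and re-solve the least-squares weight problem. The doubly constrained algorithm of the paper adds the Bayesian and worst-case machinery on top; the array scenario here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def lscma(X, w0, iters=50):
    """Baseline least-squares CMA on snapshot matrix X (antennas x snapshots)."""
    w = w0.astype(complex)
    for _ in range(iters):
        y = w.conj().T @ X               # array output, one value per snapshot
        r = y / np.abs(y)                # project output onto the unit modulus
        # least-squares: find w with w^H X ~ r, i.e. solve X^H w = conj(r)
        w = np.linalg.lstsq(X.conj().T, r.conj(), rcond=None)[0]
    return w

# hypothetical scenario: one constant-modulus source on a 4-element array
n_ant, n_snap = 4, 500
a = np.exp(1j * np.pi * 0.3 * np.arange(n_ant))   # steering vector
s = np.exp(1j * 2 * np.pi * rng.random(n_snap))   # unit-modulus source signal
X = np.outer(a, s) + 0.05 * (rng.standard_normal((n_ant, n_snap))
                             + 1j * rng.standard_normal((n_ant, n_snap)))
w = lscma(X, rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant))
```

After convergence the output modulus is nearly constant; the robustness problem the paper addresses arises when the presumed steering vector used for constraints differs from the true one.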
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
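The curve-fitting refinement step described above can be sketched in a few lines. Assuming only that the coarse scan has already bracketed a conjunction, a parabola fitted through three equally spaced samples of the squared distance gives the time of closest approach; the 1D "tracks" below are toy stand-ins for a real orbital propagator.

```python
def time_of_closest_approach(sq_dist, t1, h):
    """Fit a parabola through squared distances at t1-h, t1, t1+h and
    return the vertex time (the local-minimum refinement step)."""
    d0, d1, d2 = sq_dist(t1 - h), sq_dist(t1), sq_dist(t1 + h)
    denom = d0 - 2.0 * d1 + d2
    if denom == 0.0:                 # degenerate: curvature is flat
        return t1
    return t1 + h * (d0 - d2) / (2.0 * denom)

# Toy conjunction: two objects on crossing 1D tracks; the true
# minimum separation occurs at t = 1.
x1 = lambda t: t
x2 = lambda t: 2.0 - t
sq_dist = lambda t: (x1(t) - x2(t)) ** 2

# A coarse simulation-time scan supplies the bracketing sample (t1, h);
# the parabola fit then recovers the exact closest-approach time.
tca = time_of_closest_approach(sq_dist, t1=0.8, h=0.4)
```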
Miller, M. M.; Shirzaei, M.
2015-12-01
Poroelastic theory suggests that long-term aquifer deformation is linearly proportional to changes in pore pressure. Land subsidence is the surface expression of deformation occurring at depth that is observed with dense, detailed, and high precision interferometric SAR data. In earlier work, Miller & Shirzaei [2015] identified zones of subsidence and uplift across the Phoenix valley caused by pumping and artificial recharge operations. We combined ascending and descending Envisat InSAR datasets to estimate vertical and horizontal displacement time series from 2003-2010. Next, wavelet decomposition was used to extract and compare the elastic components of vertical deformation and hydraulic head data to estimate aquifer storage coefficients. In the following, we present the results from elastic aquifer modeling using a 3D array of triangular dislocations, extending from depths of 0.5 to 3.5 km. We employ a time-dependent modeling scheme to invert the InSAR displacement time series, solving for the spatiotemporal distribution of the aquifer-aquitard compaction. Such models are used to calculate strain and stress fields and forecast the location of extensional cracks and earth fissures, useful for urban planning and management. Later, applying the framework suggested by Burbey [1999], the optimum compaction model is used to estimate the 3D distribution of hydraulic conductivities as a function of time. These estimates are verified using in-situ and laboratory observations and provide unique evidence to investigate the stress-dependence of the hydraulic conductivity and its variations due to pumping, recharge, and injection. The estimates will also be used in groundwater flow models, enhancing water management in the valley and elsewhere. References Burbey, T. J. (1999), Effects of horizontal strain in estimating specific storage and compaction in confined and leaky aquifer systems, Hydrogeology Journal, 7(6), 521-532, doi:10.1007/s100400050225. Miller, M. M., and M
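The storage-coefficient estimation step can be sketched as a regression of the elastic (seasonal) component of vertical displacement on hydraulic head; under poroelastic theory the slope approximates the skeletal storage coefficient. The numbers below are synthetic, not Phoenix valley data.

```python
def skeletal_storage(head, disp):
    """Least-squares slope of elastic vertical displacement vs hydraulic
    head; under poroelastic theory this slope approximates the skeletal
    storage coefficient (a sketch of the estimation step only)."""
    n = len(head)
    mh, md = sum(head) / n, sum(disp) / n
    num = sum((h - mh) * (d - md) for h, d in zip(head, disp))
    den = sum((h - mh) ** 2 for h in head)
    return num / den

heads = [10.0, 12.0, 11.0, 14.0]        # seasonal head (m), synthetic
disps = [0.020, 0.024, 0.022, 0.028]    # elastic uplift (m), synthetic
S_k = skeletal_storage(heads, disps)
```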
Multiplicative noise removal using variable splitting and constrained optimization.
Bioucas-Dias, José M; Figueiredo, Mário A T
2010-07-01
Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulties with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems where the objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance.
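Step (1) above — the log transform that converts the multiplicative model into an additive one — can be sketched as follows. A simple moving average stands in here for the paper's variable-splitting / augmented-Lagrangian solver, so this shows the transform only, not MIDAL itself.

```python
import math

def denoise_multiplicative(y, window=3):
    """Sketch of the log-transform step: a multiplicative model
    y = x * n becomes additive in the log domain, log y = log x + log n,
    where a standard denoiser applies.  The moving average below is an
    illustrative stand-in for the augmented-Lagrangian TV solver."""
    z = [math.log(v) for v in y]                  # additive-noise domain
    half = window // 2
    sm = []
    for i in range(len(z)):
        lo, hi = max(0, i - half), min(len(z), i + half + 1)
        sm.append(sum(z[lo:hi]) / (hi - lo))      # smooth in log domain
    return [math.exp(v) for v in sm]              # back to intensities

clean = denoise_multiplicative([2.0, 2.0, 2.0, 2.0, 2.0])
```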
Lightweight cryptography for constrained devices
DEFF Research Database (Denmark)
Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco
2014-01-01
Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.
New Exact Penalty Functions for Nonlinear Constrained Optimization Problems
Directory of Open Access Journals (Sweden)
Bingzhuang Liu
2014-01-01
Full Text Available For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
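The meaning of "exact" here can be illustrated with the classical l1 penalty (the paper's functions are smoother variants that additionally augment the problem with a weight-controlling variable; the toy problem and rho value below are illustrative).

```python
def l1_exact_penalty(f, g, rho):
    """Classical l1 exact penalty for min f(x) s.t. g(x) <= 0: for rho
    larger than the constraint's Lagrange multiplier, unconstrained
    minimizers of the penalty function are exactly the constrained
    minimizers of the original problem."""
    return lambda x: f(x) + rho * max(0.0, g(x))

f = lambda x: x * x        # objective
g = lambda x: 1.0 - x      # constraint x >= 1, written as g(x) <= 0
p = l1_exact_penalty(f, g, rho=5.0)

# Brute-force grid search: the penalty minimizer lands on x* = 1,
# the constrained minimizer, with no limiting process needed.
grid = [i / 1000.0 for i in range(-2000, 2001)]
x_star = min(grid, key=p)
```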
A Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success
Luong, Ming; Stevens, Jeff
2015-01-01
The Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success, a theoretical stages-of-growth model, explains long-term success in IT outsourcing relationships. Research showed the IT outsourcing relationship life cycle consists of four distinct, sequential stages: contract, transition, support, and partnership. The model was…
Testing Affine Term Structure Models in Case of Transaction Costs
Driessen, J.J.A.G.; Melenberg, B.; Nijman, T.E.
1999-01-01
In this paper we empirically analyze the impact of transaction costs on the performance of affine interest rate models. We test the implied (no arbitrage) Euler restrictions, and we calculate the specification error bound of Hansen and Jagannathan to measure the extent to which a model is
Long-Term Calculations with Large Air Pollution Models
DEFF Research Database (Denmark)
Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.
1999-01-01
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998
Dynamic Hybrid Model for Short-Term Electricity Price Forecasting
Marin Cerjan; Marin Matijaš; Marko Delimar
2014-01-01
Accurate forecasting tools are essential in the operation of electric power systems, especially in deregulated electricity markets. Electricity price forecasting is necessary for all market participants to optimize their portfolios. In this paper we propose a hybrid method approach for short-term hourly electricity price forecasting. The paper combines statistical techniques for pre-processing of data and a multi-layer (MLP) neural network for forecasting electricity price and price spike det...
A fuzzy inference model for short-term load forecasting
International Nuclear Information System (INIS)
Mamlook, Rustum; Badran, Omar; Abdulhadi, Emad
2009-01-01
This paper is concerned with short-term load forecasting (STLF) in power system operations. It provides load prediction for generation scheduling and unit commitment decisions, and therefore precise load forecasting plays an important role in reducing the generation cost and the spinning reserve capacity. Short-term electricity demand forecasting (i.e., the prediction of hourly loads (demand)) is one of the most important tools by which an electric utility/company plans and dispatches the loading of generating units in order to meet system demand. The accuracy of the dispatching system, which is derived from the accuracy of the forecasting algorithm used, will determine the economics of the operation of the power system. Inaccuracy or large error in the forecast simply means that load matching is not optimized and consequently the generation and transmission systems are not being operated in an efficient manner. In the present study, a methodology is introduced to decrease the forecast error and the processing time by using a fuzzy logic controller on an hourly basis. It predicts the effect of different conditional parameters (i.e., weather, time, historical data, and random disturbances) on load forecasting in terms of fuzzy sets during the generation process. These parameters are chosen with respect to their priority and importance. The forecasted values obtained by the fuzzy method were compared with the conventionally forecasted ones. The results showed that the fuzzy implementation of STLF has higher accuracy and better outcomes
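The fuzzy-inference idea can be sketched with a toy one-input rule base. The memberships, consequents, and temperature-to-load rules below are illustrative assumptions, not the paper's fitted system; defuzzification is by weighted average (Sugeno-style).

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def forecast_load(temp):
    """Toy fuzzy STLF rule base (hypothetical rules and consequents):
    cold weather -> high heating load, mild -> low load, hot -> high
    cooling load.  Output is the membership-weighted average."""
    rules = [(tri(temp, -10, 0, 15), 900.0),   # cold -> high load (MW)
             (tri(temp, 5, 18, 28), 600.0),    # mild -> low load
             (tri(temp, 22, 35, 45), 1000.0)]  # hot  -> high load
    den = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / den if den else None
```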
Thin-shell wormholes constrained by cosmological observations
Wang, Deng; Meng, Xin-He
2017-01-01
We investigate the thin-shell wormholes constrained by cosmological observations for the first time in the literature. Without loss of generality, we study the thin-shell wormholes in the $\omega$CDM model and analyze their stability under perturbations preserving the symmetry. Firstly, we constrain the $\omega$CDM model using a combination of Union 2.1 SNe Ia data, the latest $H(z)$ data and CMB data. Secondly, we use the constrained dark energy equation of state (EoS) $\omega$ which lies in $[-...
Linearly constrained minimax optimization
DEFF Research Database (Denmark)
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems are solved in the minimax sense subject to the linear constraints. This ensures a feasible-point algorithm. Further, we introduce local bounds on the solutions of the linear subproblems, the bounds being adjusted automatically, depending on the quality of the linear approximations. It is proved that the algorithm will always converge to the set of stationary points of the problem, a stationary point being defined in terms of the generalized gradients of the minimax objective function. It is further proved that, under mild regularity conditions, the algorithm is identical to a quadratically convergent...
Constrained Minimization Algorithms
Lantéri, H.; Theys, C.; Richard, C.
2013-03-01
In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically we deal with transformations described by a linear model linking the unknown signal to an unnoisy version of the data. The measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of the inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the model (here linear) of such data. Section 4 deals with likelihood maximization and with its links with divergence minimization. The physical constraints on the solution are indicated and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the inferior bound of the solution is introduced at first; the positivity constraint is a particular case of such a constraint. We show how to strictly obtain the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6 we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
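The multiplicative form of such algorithms can be sketched on nonnegative least squares. Writing the negative gradient as a difference of two nonnegative terms, U - V, gives an update that preserves positivity automatically; this is a minimal illustration of the split-gradient idea, not the paper's full SGM with flux constraint.

```python
def sgm_nonneg_ls(A, y, iters=200):
    """Split-gradient sketch for min ||Ax - y||^2 with x >= 0 and
    nonnegative A, y.  Splitting -grad as U - V with U = A^T y >= 0 and
    V = A^T A x >= 0 yields the multiplicative update x <- x * U / V,
    which keeps every component strictly positive."""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    U = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        V = [sum(A[i][j] * Ax[i] for i in range(m)) for j in range(n)]
        x = [x[j] * U[j] / V[j] for j in range(n)]
    return x

# noise-free toy system: the update recovers y exactly
x_hat = sgm_nonneg_ls(A=[[1.0, 0.0], [0.0, 1.0]], y=[2.0, 3.0])
```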
Slope constrained Topology Optimization
DEFF Research Database (Denmark)
Petersson, J.; Sigmund, Ole
1998-01-01
The problem of minimum compliance topology optimization of an elastic continuum is considered. A general continuous density-energy relation is assumed, including variable thickness sheet models and artificial power laws. To ensure existence of solutions, the design set is restricted by enforcing...
Quantity Constrained General Equilibrium
Babenko, R.; Talman, A.J.J.
2006-01-01
In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values.In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets.In
Modelling Tradescantia fluminensis to assess long term survival
Directory of Open Access Journals (Sweden)
Alex James
2015-06-01
Full Text Available We present a simple Poisson process model for the growth of Tradescantia fluminensis, an invasive plant species that inhibits the regeneration of native forest remnants in New Zealand. The model was parameterised with data derived from field experiments in New Zealand and then verified with independent data. The model gave good predictions, which showed that its underlying assumptions are sound. However, this simple model had less predictive power for outputs based on variance, suggesting that some assumptions were lacking. Therefore, we extended the model to include higher variability between plants, thereby improving its predictions. This high variance model suggests that control measures that promote node death at the base of the plant or restrict the main stem growth rate will be more effective than those that reduce the number of branching events. The extended model forms a good basis for assessing the efficacy of various forms of control of this weed, including the recently-released leaf-feeding tradescantia leaf beetle (Neolema ogloblini).
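A minimal Poisson-process growth simulation of this kind might look as follows. The event rates are hypothetical, not the fitted field values, and the two event types (node addition, branching) are a simplification of the full model.

```python
import random

def simulate_growth(rate_node, rate_branch, t_end, seed=0):
    """Poisson-process sketch of plant growth: each growing tip adds
    nodes at rate_node and branches at rate_branch, with exponential
    waiting times between events (rates per tip per day, illustrative)."""
    rng = random.Random(seed)
    t, tips, nodes = 0.0, 1, 1
    while True:
        total = tips * (rate_node + rate_branch)
        t += rng.expovariate(total)        # waiting time to next event
        if t > t_end:
            return tips, nodes
        nodes += 1                         # every event adds a node
        if rng.random() < rate_branch / (rate_node + rate_branch):
            tips += 1                      # branching adds a growing tip

tips, nodes = simulate_growth(rate_node=1.0, rate_branch=0.1, t_end=20.0)
```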
Long-term consequences of a short-term hypergravity load in a snail model
Martynova, Marina G.; Shabelnikov, Sergej V.; Bystrova, Olga A.
2015-07-01
Here we focused on the dynamic processes in the snail at different times after a short-term hypergravity load (STHL) by monitoring the state of the neuroendocrine and immune systems, the nucleic acid synthesis levels in the atrial cells, and the behaviour of the atrial granular cells (GCs). We observed that, immediately after centrifugation (14 g for 15 min), the concentrations of dopamine and noradrenaline in the snail haemolymph (measured by high-performance liquid chromatography) and the number of circulating haemocytes and their proliferative activity (estimated by direct cell counting and [3H]thymidine incorporation, respectively) increased significantly, whereas the concentration of adrenaline decreased. Twenty-four hours after STHL, the levels of catecholamines and haemocytes returned to their control values. In the atrial epicardial and endothelial cells, a notable drop of transcription activity (evaluated by [3H]uridine autoradiography) from the baseline in the immediate post-STHL period was followed by a gradual increase reaching a maximum at day 5 and a subsequent decrease to the control value by day 10. In endothelial cells, DNA-synthesizing activity (evaluated by [3H]thymidine autoradiography), equal to zero before and just after STHL, increased significantly at day 5 and decreased by day 10. The atrial GCs underwent total degranulation. The small ungranulated cells formed as a result exhibited DNA synthesis. Afterwards, most probably, the GCs divided and regranulated. One month after STHL the GC population had been restored. Overall, STHL triggered an immediate reaction of the neuroendocrine and immune systems and initiated long-lasting processes at the cellular level, which included alterations in the activity of nucleic acid synthesis in the epicardial and endothelial cells and remodelling of the GC population in the atrium.
Financing institutional long-term care for the elderly in China: a policy evaluation of new models.
Yang, Wei; Jingwei He, Alex; Fang, Lijie; Mossialos, Elias
2016-12-01
A rapid ageing population coupled with changes in family structure has brought about profound implications to social policy in China. Although the past decade has seen a steady increase in public funding to long-term care (LTC), the narrow financing base and vast population have created significant unmet demand, calling for reforms in financing. This paper focuses on the financing of institutional LTC care by examining new models that have emerged from local policy experiments against two policy goals: equity and efficiency. Three emerging models are explored: Social Health Insurance (SHI) in Shanghai, LTC Nursing Insurance (LTCNI) in Qingdao and a means-tested model in Nanjing. A focused systematic narrative review of academic and grey literature is conducted to identify and assess these models, supplemented with qualitative interviews with government officials from relevant departments, care home staff and service users. This paper argues that, although SHI appears to be a convenient solution to fund LTC, this model has led to systematic bias in affordable access among participants of different insurance schemes, and has created a powerful incentive for the over-provision of unnecessary services. The means-tested method has been remarkably constrained by narrow eligibility and insufficiency of funding resources. The LTCNI model is by far the most desirable policy option among the three studied here, but the narrow definition of eligibility has substantively excluded a large proportion of elders in need from access to care, which needs to be addressed in future reforms. This paper proposes three lines of LTC financing reforms for policy-makers: (1) the establishment of a prepaid financing mechanism pooled specifically for LTC costs; (2) the incorporation of more stringent eligibility rules and needs assessment; and (3) reforming the dominant fee-for-service methods in paying LTC service providers. © The Author 2016. Published by Oxford University Press in
Constrained information maximization by free energy minimization
Kamimura, Ryotaro
2011-10-01
In this paper we introduce free energy-based methods to constrain mutual information maximization, developed to realize competitive learning. The new method is introduced to simplify the computational procedures of mutual information, to improve the fidelity of representation and to stabilize learning. First, the free energy is effective in simplifying the computation procedures of mutual information because we need not directly compute mutual information, which requires heavy computation, but deal only with partition functions. With partition functions, computational complexity is significantly reduced. Second, fidelity to input patterns can be improved because training errors between input patterns and connection weights are implicitly incorporated. This means that mutual information is maximized under the constraint of the errors between input patterns and connection weights. Finally, learning can be stabilized in our approach. One of the problems of the free energy approach is that learning processes should be carefully controlled to maintain stability. The present paper shows that the conventional computational techniques in the field of self-organizing maps are really effective in controlling the processes. In particular, minimum information production learning can be used further to stabilize learning by decreasing the information obtained at each learning step as much as possible. Thus, we can expect that our new method can increase mutual information between competitive units and input patterns without decreasing errors between input patterns and connection weights, with stabilized learning processes. We applied the free energy-based models to the well-known Iris problem and a student survey, and succeeded in improving the performance in terms of classification rates.
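The partition-function step can be sketched as a Boltzmann distribution over competitive units: each unit's firing probability comes from normalizing exponentiated (negative) distances, so mutual information never has to be computed directly. The beta value and distances below are illustrative.

```python
import math

def firing_probs(distances, beta):
    """Boltzmann firing probabilities for competitive units.  The
    normalizer Z is the partition function; beta controls the sharpness
    of competition (larger beta -> more winner-take-all)."""
    Z = sum(math.exp(-beta * d) for d in distances)   # partition function
    return [math.exp(-beta * d) / Z for d in distances]

# three competitive units at increasing distance from an input pattern
p = firing_probs([0.1, 0.5, 0.9], beta=2.0)
```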
Solutions of several coupled discrete models in terms of Lamé ...
Indian Academy of Sciences (India)
The models discussed are: coupled Salerno model, coupled Ablowitz–Ladik model, coupled φ4 model, and coupled φ6 model. In all these cases we show that the coefficients of the Lamé polynomials are such that the Lamé polynomials can be re-expressed in terms of Chebyshev polynomials of the relevant Jacobi elliptic ...
Location constrained resource interconnection
International Nuclear Information System (INIS)
Hawkins, D.
2008-01-01
This presentation discussed issues related to wind integration from the perspective of the California Independent System Operator (ISO). Issues related to transmission, reliability, and forecasting were reviewed. Renewable energy sources currently used by the ISO were listed, and details of a new transmission financing plan designed to address the location constraints of renewable energy sources and provide for new transmission infrastructure was presented. The financing mechanism will be financed by participating transmission owners through revenue requirements. New transmission interconnections will include network facilities and generator tie-lines. Tariff revisions have also been implemented to recover the costs of new facilities and generators. The new transmission project will permit wholesale transmission access to areas where there are significant energy resources that are not transportable. A rate impact cap of 15 per cent will be imposed on transmission owners to mitigate short-term costs to ratepayers. The presentation also outlined energy resource area designation plans, renewable energy forecasts, and new wind technologies. Ramping issues were also discussed. It was concluded that the ISO expects to ensure that 20 per cent of its energy will be derived from renewable energy sources. tabs., figs
Error analysis of short term wind power prediction models
International Nuclear Information System (INIS)
De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco
2011-01-01
The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be preserved because of the high cost of storage and electricity production must follow market demand. Short- to long-range wind forecasting over different lengths/periods of time is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast power production of a wind farm with three wind turbines, using real load data and comparing different time prediction periods. This comparative analysis takes into account, for the first time, various forecasting methods and time horizons together with a deep performance analysis focused upon the normalised mean error and its statistical distribution, in order to identify forecasting methods whose errors fall within a narrower distribution and which are therefore less likely to make prediction errors. (author)
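The normalized error metric at the heart of such comparisons can be sketched as follows. Normalizing by installed capacity is one common convention (an assumption here), and the persistence baseline and synthetic power values are illustrative.

```python
def nmae(pred, obs, capacity):
    """Normalised mean absolute error for wind power forecasts,
    normalised by installed capacity (one common convention)."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / (len(obs) * capacity)

def persistence(obs):
    """Baseline forecast: the next value equals the last observed one."""
    return obs[:-1]

power = [2.0, 2.5, 3.0, 2.0]          # kW, synthetic turbine output
err = nmae(persistence(power), power[1:], capacity=10.0)
```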
Risk Modeling Approaches in Terms of Volatility Banking Transactions
Directory of Open Access Journals (Sweden)
Angelica Cucşa (Stratulat)
2016-01-01
Full Text Available The inseparability of risk and banking activity has been demonstrated ever since banking systems emerged, and the topic remains equally important today and for the future development of the banking sector. Banking sector development takes place under constraints imposed by the nature and number of existing risks and those that may arise, which serve to limit banking activity. We develop approaches to analysing risk through mathematical models, including a model for 10 actively traded picks on the Romanian capital market that tests investor reaction in controlled and uncontrolled conditions of risk aggregated with harmonised factors.
A generalized one-factor term structure model and pricing of interest rate derivative securities
Jiang, George J.
1997-01-01
The purpose of this paper is to propose a nonparametric interest rate term structure model and investigate its implications on term structure dynamics and prices of interest rate derivative securities. The nonparametric spot interest rate process is estimated from the observed short-term interest
A new model integrating short- and long-term aging of copper added to soils.
Directory of Open Access Journals (Sweden)
Saiqi Zeng
Full Text Available Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process in copper (Cu) added to soils and the factors that affect this process. Then we developed a semi-mechanistic model to predict the lability of Cu during the aging process, with the diffusion process described by the complementary error function. In previous studies, two semi-mechanistic models to separately predict short-term and long-term aging of Cu added to soils were developed with individual descriptions of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t1/2), and in the long-term model, the diffusion process was linearly related to the natural logarithm of incubation time (lnt). Both models could predict the short-term or long-term aging processes separately, but neither could predict both in a single model. By analyzing and combining the two models, we found that the short- and long-term behaviors of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was incorporated in this model as well. The model can predict the aging process continuously based on four factors: soil pH, incubation time, soil organic matter content, and temperature.
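An illustrative functional form for the combined model might look as follows. The pool fractions and rate constant are hypothetical, not the paper's fitted parameters; the point is only that 1 - erfc(k√t) grows like √t at short times (recovering the short-term model) and saturates smoothly at long times.

```python
import math

def labile_fraction(t, f_fast=0.3, f_diff=0.6, k=0.15):
    """Sketch of a combined aging model: an instantaneous fast-reaction
    drop (f_fast) plus a diffusion term whose time dependence is the
    complementary error function.  For small k*sqrt(t),
    1 - erfc(k*sqrt(t)) ~ 2*k*sqrt(t)/sqrt(pi), i.e. sqrt(t) behavior."""
    return 1.0 - f_fast - f_diff * (1.0 - math.erfc(k * math.sqrt(t)))
```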
EDM - A model for optimising the short-term power operation of a complex hydroelectric network
International Nuclear Information System (INIS)
Tremblay, M.; Guillaud, C.
1996-01-01
In order to optimize the short-term power operation of a complex hydroelectric network, a new model called EDM was added to PROSPER, a water management analysis system developed by SNC-Lavalin. PROSPER is now divided into three parts: an optimization model (DDDP), a simulation model (ESOLIN), and an economic dispatch model (EDM) for the short-term operation. The operation of the KSEB hydroelectric system (located in southern India) with PROSPER was described. The long-term analysis with monthly time steps is assisted by the DDDP, and the daily analysis with hourly or half-hourly time steps is performed with the EDM model. 3 figs
Term structure of sovereign spreads: a contingent claim model
Directory of Open Access Journals (Sweden)
Katia Rocha
2007-12-01
Full Text Available This paper proposes a simple structural model to estimate the term structure and the implied default probability of a selected group of emerging countries, which account for 54% of the JPMorgan EMBIG index on average for the period 2000-2005. The real exchange rate dynamic, modeled as a pure diffusion process, is assumed to trigger default. The calibrated model generates sovereign spread curves consistent with market data. The results suggest that the market is systematically overpricing spreads for Brazil by 100 basis points, whereas for Mexico, Russia and Turkey the model is able to reproduce the market behavior.
Evaluation of nourishment schemes based on long-term morphological modeling
DEFF Research Database (Denmark)
Grunnet, Nicholas; Kristensen, Sten Esbjørn; Drønen, Nils
2012-01-01
A recently developed long-term morphological modeling concept is applied to evaluate the impact of nourishment schemes. The concept combines detailed two-dimensional morphological models and simple one-line models for the coastline evolution and is particularly well suited for long-term simulations ... site. This study strongly indicates that the hybrid model may be used as an engineering tool to predict shoreline response following the implementation of a nourishment project.
Synthesizing long-term sea level rise projections – the MAGICC sea level model v2.0
Directory of Open Access Journals (Sweden)
A. Nauels
2017-06-01
Full Text Available Sea level rise (SLR) is one of the major impacts of global warming; it will threaten coastal populations, infrastructure, and ecosystems around the globe in coming centuries. Well-constrained sea level projections are needed to estimate future losses from SLR and benefits of climate protection and adaptation. Process-based models that are designed to resolve the underlying physics of individual sea level drivers form the basis for state-of-the-art sea level projections. However, associated computational costs allow for only a small number of simulations based on selected scenarios that often vary for different sea level components. This approach does not sufficiently support sea level impact science and climate policy analysis, which require a sea level projection methodology that is flexible with regard to the climate scenario yet comprehensive and bound by the physical constraints provided by process-based models. To fill this gap, we present a sea level model that emulates global-mean long-term process-based model projections for all major sea level components. Thermal expansion estimates are calculated with the hemispheric upwelling-diffusion ocean component of the simple carbon-cycle climate model MAGICC, which has been updated and calibrated against CMIP5 ocean temperature profiles and thermal expansion data. Global glacier contributions are estimated based on a parameterization constrained by transient and equilibrium process-based projections. Sea level contribution estimates for Greenland and Antarctic ice sheets are derived from surface mass balance and solid ice discharge parameterizations reproducing current output from ice-sheet models. The land water storage component replicates recent hydrological modeling results. For 2100, we project 0.35 to 0.56 m (66 % range) total SLR based on the RCP2.6 scenario, 0.45 to 0.67 m for RCP4.5, 0.46 to 0.71 m for RCP6.0, and 0.65 to 0.97 m for RCP8.5. These projections lie within the
Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.
2015-12-01
We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are functions of two soil carbon (C) pools (i.e. recently-fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters, including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parameterize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe strong agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1
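The Markov Chain Monte Carlo parameter estimation mentioned above can be sketched with a plain (non-adaptive) Metropolis-Hastings sampler. The one-parameter Gaussian likelihood below is a deliberately simplified stand-in for the PEPRMT respiration and photosynthesis likelihoods; the paper's adaptive variant additionally tunes the proposal step during sampling.

```python
import math
import random

def log_posterior(theta, data, sigma=1.0):
    # Flat prior; Gaussian likelihood with known sigma (toy stand-in
    # for a model-data misfit term over flux observations).
    return -0.5 * sum((d - theta) ** 2 for d in data) / sigma ** 2

def metropolis_hastings(data, n_iter=5000, step=0.5, seed=42):
    """Random-walk Metropolis-Hastings over a single scalar parameter."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_posterior(theta, data)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)      # symmetric proposal
        lp_prop = log_posterior(prop, data)
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

After discarding a burn-in, the chain approximates the posterior over the parameter, giving both a point estimate and its uncertainty.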
Losada, David E.; Barreiro, Alvaro
2003-01-01
Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
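One common way to combine term similarity with inverse document frequency in a retrieval score, shown here as a generic sketch rather than the specific logical model of the paper, weights each query term by its IDF and by its best similarity to any document term:

```python
import math

def idf(term, docs):
    """Smoothed inverse document frequency over a corpus of term lists."""
    df = sum(term in d for d in docs)
    return math.log((len(docs) + 1) / (df + 1))

def score(query, doc, docs, sim):
    """Each query term contributes its IDF weight, scaled by its best
    similarity to any document term (1.0 for an exact match).

    `sim` maps (query_term, doc_term) pairs to similarities in [0, 1];
    it is a hypothetical lookup table for illustration.
    """
    total = 0.0
    for q in query:
        best = max((sim.get((q, t), 1.0 if q == t else 0.0) for t in doc),
                   default=0.0)
        total += idf(q, docs) * best
    return total
```

Under this scheme a document sharing no exact terms with the query can still match through the similarity table, while rare (high-IDF) terms dominate the ranking.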
Is oil consumption constrained by industrial structure? Evidence from China
Jia, Y. Q.; Duan, H. M.
2017-08-01
This paper examines whether oil consumption is constrained by output value, applying a cointegration test and an ECM to the primary, secondary, and tertiary sectors in China during 1985-2013. The empirical results indicate that oil consumption in China is constrained by the industrial structure both in the short run and in the long run. Regardless of the time horizon considered, the oil consumption constraint is lowest for the primary sector and highest for the tertiary sector. This is because the long-term formation of the industrial structure and the technological level of each sector underlie the existence of a long-run equilibrium and short-run fluctuations between output value and oil consumption, with the latter being constrained by adjustments in industrial structure. In order to decrease the constraining effect of output value on oil consumption, the government should take measures to improve the oil utilization rate, reduce the intensity of oil consumption, and secure the supply of oil.
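The cointegration-plus-ECM methodology follows the Engle-Granger two-step logic: first estimate the long-run relation, then regress short-run changes on the lagged deviation from that relation. The sketch below illustrates the mechanics on synthetic data; a real analysis would use unit-root pretests and proper critical values (e.g. via a statistics package), not this bare-bones OLS.

```python
def ols(x, y):
    """Simple-regression OLS: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta

def ecm_adjustment_speed(y, x):
    """Engle-Granger two-step sketch.

    Step 1: long-run relation y_t = a + b*x_t; residuals measure the
    deviation from equilibrium. Step 2: regress the change in y on the
    lagged residual; a negative slope is the error-correction (speed of
    adjustment) term expected for cointegrated series.
    """
    a, b = ols(x, y)
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    _, gamma = ols(resid[:-1], dy)
    return gamma
```

In the paper's setting, y and x would be sectoral oil consumption and output value; the sign and size of the adjustment coefficient quantify how strongly the long-run structure constrains short-run consumption.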
CHALLENGES IN SOURCE TERM MODELING OF DECONTAMINATION AND DECOMMISSIONING WASTES.
Energy Technology Data Exchange (ETDEWEB)
SULLIVAN, T.M.
2006-08-01
Development of real-time predictive modeling to identify the dispersion and/or source(s) of airborne weapons of mass destruction, including chemical, biological, radiological, and nuclear material, in urban environments is needed to improve response to potential releases of these materials by either terrorist or accidental means. These models will also prove useful in defining airborne pollution dispersion in urban environments for pollution management/abatement programs. Predicting gas flow in an urban setting on a scale of less than a few kilometers is a complicated and challenging task due to the irregular flow paths that occur along streets and alleys and around buildings of different sizes and shapes, i.e., ''urban canyons''. In addition, air exchange between the outside and buildings and subway areas further complicates the situation. Transport models that are used to predict dispersion of WMD/CBRN materials or to back track the source of the release require high-density data and need defensible parameterizations of urban processes. Errors in the data or in any of the parameter inputs or assumptions will lead to misidentification of the airborne spread or source release location(s). The need for these models to provide output in real time if they are to be useful for emergency response provides another challenge. To improve the ability of New York City's (NYC's) emergency management teams and first response personnel to protect the public during releases of hazardous materials, the New York City Urban Dispersion Program (UDP) has been initiated. This is a four-year research program being conducted from 2004 through 2007. This paper will discuss ground level and subway perfluorocarbon tracer (PFT) release studies conducted in New York City. The studies released multiple tracers to study ground level and vertical transport of contaminants. This paper will discuss the results from these tests and how these results can be used
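As a point of reference for what a dispersion model computes, the textbook Gaussian plume formula gives the steady-state concentration downwind of a continuous point source. As the text stresses, real urban-canyon flow requires far richer parameterizations than this open-terrain idealization; the sketch is only to make the quantity being modeled concrete.

```python
import math

def gaussian_plume(x, y, z, q, u, sigma_y, sigma_z, h=0.0):
    """Steady-state concentration (g/m^3) at receptor (x, y, z).

    q: emission rate (g/s), u: mean wind speed (m/s), h: release height (m),
    sigma_y/sigma_z: lateral/vertical dispersion coefficients (m) at the
    downwind distance x (here passed in directly rather than computed
    from a stability class). Includes ground reflection via the image term.
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

Tracer studies like the PFT releases described above supply the ground-truth concentration fields against which such parameterizations (or their urban replacements) are evaluated.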
Shimi, Andria; Scerif, Gaia
2017-12-01
Over the past decades there has been a surge of research aiming to shed light on the nature of capacity limits to visual short-term memory (VSTM). However, an integrative account of this evidence is currently missing. We argue that investigating parameters constraining VSTM in childhood suggests a novel integrative model of VSTM maintenance, and that this in turn informs mechanisms of VSTM maintenance in adulthood. Over 3 experiments with 7-year-olds and young adults (total N=206), we provide evidence for multiple cognitive processes interacting to constrain VSTM performance. While age-related increases in storage capacity are indisputable, we replicate the finding that attentional processes control what information will be encoded and maintained in VSTM in the face of increased competition. Therefore, a central process in the current model is attentional refreshment, a mechanism thought to reactivate and strengthen the signal of visual representations. Critically, here we also show that attentional influences on VSTM are further constrained by additional factors, traditionally studied to the exclusion of each other, such as memory load and temporal decay. We propose that these processes work synergistically to capture the adult end-state, whereas their less refined efficiency and modulations in childhood account for the smaller VSTM capacity that 7-year-olds demonstrate compared to older individuals. We conclude that going beyond the investigation of single cognitive mechanisms, to their interactions, holds the promise of understanding both developing and fully developed maintenance in VSTM. Copyright © 2017 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Pavluchenko, Sergey A. [Universidade Federal do Maranhao (UFMA), Programa de Pos-Graduacao em Fisica, Sao Luis, Maranhao (Brazil)
2017-08-15
In this paper we perform a systematic study of spatially flat [(3+D)+1]-dimensional Einstein-Gauss-Bonnet cosmological models with Λ-term. We consider models that topologically are the product of two flat isotropic subspaces with different scale factors. One of these subspaces is three-dimensional and represents our space and the other is D-dimensional and represents extra dimensions. We consider no ansatz for the scale factors, which makes our results quite general. With both Einstein-Hilbert and Gauss-Bonnet contributions in play, the D = 3 and general D ≥ 4 cases have slightly different dynamics due to the different structure of the equations of motion. We analytically study the equations of motion in both cases and describe all possible regimes, with special interest in the realistic regimes. Our analysis suggests that the only realistic regime is the transition from the high-energy (Gauss-Bonnet) Kasner regime, which is the standard cosmological singularity in that case, to the anisotropic exponential regime with expanding three and contracting extra dimensions. Availability of this regime allows us to put a constraint on the value of the Gauss-Bonnet coupling α and the Λ-term: this regime appears in two regions on the (α, Λ) plane: α < 0, Λ > 0, αΛ ≤ -3/2 and α > 0, αΛ ≤ (3D² - 7D + 6)/(4D(D-1)), including the entire Λ < 0 region. The obtained bounds are confronted with the restrictions on α and Λ from other considerations, like causality, the entropy-to-viscosity ratio in AdS/CFT and others. Joint analysis constrains (α, Λ) even further: α > 0, D ≥ 2 with (3D² - 7D + 6)/(4D(D-1)) ≥ αΛ ≥ -(D+2)(D+3)(D² + 5D + 12)/(8(D² + 3D + 6)²). (orig.)
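The quoted bounds on the product αΛ can be evaluated numerically for any number of extra dimensions D, which makes the allowed window concrete:

```python
def alpha_lambda_upper(D):
    """Upper limit on αΛ for α > 0: (3D² - 7D + 6) / (4D(D-1))."""
    return (3 * D ** 2 - 7 * D + 6) / (4 * D * (D - 1))

def alpha_lambda_lower(D):
    """Lower limit from the joint analysis:
    -(D+2)(D+3)(D² + 5D + 12) / (8(D² + 3D + 6)²)."""
    num = (D + 2) * (D + 3) * (D ** 2 + 5 * D + 12)
    den = 8 * (D ** 2 + 3 * D + 6) ** 2
    return -num / den
```

For D = 4, for example, the window is roughly -0.218 ≤ αΛ ≤ 0.542, so the joint constraints still leave a finite band of admissible couplings around αΛ = 0.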
Uncertainty Assessment in Long Term Urban Drainage Modelling
DEFF Research Database (Denmark)
Thorndahl, Søren
on the rainfall inputs. In order to handle the uncertainties, three different stochastic approaches are investigated using a case catchment in the town of Frejlev: (1) a reliability approach in which a parameterization of the rainfall input is conducted in order to generate synthetic rainfall events and find...... return periods, and even within the return periods specified in the design criteria. If urban drainage models are based on standard parameters and hence not calibrated, the uncertainties are even larger. The greatest uncertainties are shown to be the rainfall input and the assessment of the contributing...
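The reliability approach outlined above — parameterize the rainfall input, draw synthetic events, and estimate return periods — can be illustrated in miniature. The exponential event-depth fit and the annual event rate below are hypothetical placeholders, not the Frejlev calibration.

```python
import random

def synthetic_events(n, mean_depth=8.0, mean_duration=2.0, seed=1):
    """Draw n synthetic rainfall events as (depth mm, duration h) pairs
    from exponential fits; parameter values are illustrative only."""
    rng = random.Random(seed)
    return [(rng.expovariate(1 / mean_depth),
             rng.expovariate(1 / mean_duration))
            for _ in range(n)]

def return_period(events, depth_threshold, events_per_year=20):
    """Return period (years) of events exceeding a depth threshold,
    given an assumed average number of events per year."""
    exceed = sum(d > depth_threshold for d, _ in events)
    rate = exceed / len(events) * events_per_year  # exceedances per year
    return float('inf') if rate == 0 else 1.0 / rate
```

In the full reliability analysis, each synthetic event would be routed through the calibrated drainage model, and the return period of surcharge or flooding, rather than of raw rainfall depth, would be the reported statistic.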