WorldWideScience

Sample records for model termed constrained

  1. A Constrained Standard Model: Effects of Fayet-Iliopoulos Terms

    International Nuclear Information System (INIS)

    Barbieri, Riccardo; Hall, Lawrence J.; Nomura, Yasunori

    2001-01-01

    In (1), the one Higgs doublet standard model was obtained by an orbifold projection of a 5D supersymmetric theory in an essentially unique way, resulting in a prediction for the Higgs mass m_H = 127 ± 8 GeV and for the compactification scale 1/R = 370 ± 70 GeV. The dominant one-loop contribution to the Higgs potential was found to be finite, while the above uncertainties arose from quadratically divergent brane Z factors and from other higher-loop contributions. In (3), a quadratically divergent Fayet-Iliopoulos term was found at one loop in this theory. We show that the resulting uncertainties in the predictions for the Higgs boson mass and the compactification scale are small, about 25% of the uncertainties quoted above, and hence do not affect the original predictions. However, a tree-level brane Fayet-Iliopoulos term could, if large enough, modify these predictions, especially for 1/R.

  2. Constrained CP^n models

    International Nuclear Information System (INIS)

    Latorre, J.I.; Luetken, C.A.

    1988-11-01

    We construct a large new class of two-dimensional sigma models with Kähler target spaces which are algebraic manifolds realized as complete intersections in weighted CP^n spaces. They are N=2 superconformally symmetric and particular choices of constraints give Calabi-Yau target spaces which are nontrivial string vacua. (orig.)

  3. Parametrization consequences of constraining soil organic matter models by total carbon and radiocarbon using long-term field data

    Science.gov (United States)

    Menichetti, Lorenzo; Kätterer, Thomas; Leifeld, Jens

    2016-05-01

    Soil organic carbon (SOC) dynamics result from different interacting processes and controls on spatial scales from sub-aggregate to pedon to the whole ecosystem. These complex dynamics are translated into models as abundant degrees of freedom. This high number of not directly measurable variables, combined with the very limited data at our disposal, results in equifinality and parameter uncertainty. Carbon radioisotope measurements are a proxy for SOC age both at annual to decadal (bomb peak based) and centennial to millennial timescales (radio decay based), and thus can be used in addition to total organic C for constraining SOC models. By considering this additional information, uncertainties in model structure and parameters may be reduced. To test this hypothesis we studied SOC dynamics and their defining kinetic parameters in the Zürich Organic Fertilization Experiment (ZOFE), a > 60-year-old controlled cropland experiment in Switzerland, by utilizing SOC and SO14C time series. To represent different processes we applied five model structures, all stemming from a simple mother model (Introductory Carbon Balance Model - ICBM): (I) two decomposing pools, (II) an inert pool added, (III) three decomposing pools, (IV) two decomposing pools with a substrate control feedback on decomposition, (V) as IV but with an inert pool added. These structures were extended to explicitly represent total SOC and 14C pools. The use of different model structures allowed us to explore model structural uncertainty and the impact of 14C on kinetic parameters. We considered parameter uncertainty by calibrating in a formal Bayesian framework. By varying the relative importance of total SOC and SO14C data in the calibration, we could quantify the effect of the information from these two data streams on estimated model parameters. The weighting of the two data streams was crucial for determining model outcomes, and we suggest including it in future modeling efforts whenever SO14C
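
    The weighting mechanism described above can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the authors' code): a two-pool ICBM-type model calibrated by Metropolis sampling, with a weight w trading off two synthetic data streams, total SOC and a radiocarbon-derived proxy caricatured here as the old-pool turnover time. All parameter values and observations are invented.

```python
# Minimal sketch (not the paper's code): weighting two data streams in a
# Bayesian (Metropolis) calibration of a two-pool ICBM-type carbon model.
import numpy as np

rng = np.random.default_rng(0)

def icbm_step(Y, O, kY, kO, i=0.2, h=0.13, dt=1.0):
    """One annual step of the two-pool ICBM model (young pool Y, old pool O)."""
    return Y + dt * (i - kY * Y), O + dt * (h * kY * Y - kO * O)

def simulate(theta, years=60):
    kY, kO = theta
    Y, O, soc = 0.3, 4.0, []             # illustrative initial stocks
    for _ in range(years):
        Y, O = icbm_step(Y, O, kY, kO)
        soc.append(Y + O)
    return np.array(soc)

# Two synthetic data streams: total SOC, and a radiocarbon-derived proxy
# (caricatured here as the old-pool turnover time 1/kO).
obs_soc = simulate((0.8, 0.006)) + rng.normal(0, 0.1, 60)
obs_tau, sd_soc, sd_tau = 1 / 0.006, 0.1, 20.0

def log_lik(theta, w):
    """w in [0, 1] sets the relative weight of the two data streams."""
    ll_soc = -0.5 * np.sum(((obs_soc - simulate(theta)) / sd_soc) ** 2)
    ll_tau = -0.5 * ((obs_tau - 1 / theta[1]) / sd_tau) ** 2
    return w * ll_soc + (1 - w) * ll_tau

def metropolis(w, n=5000, step=(0.05, 0.0005)):
    theta = np.array([0.5, 0.01])
    ll, chain = log_lik(theta, w), []
    for _ in range(n):
        prop = theta + rng.normal(0, step)
        if (prop > 0).all():
            ll_p = log_lik(prop, w)
            if np.log(rng.uniform()) < ll_p - ll:
                theta, ll = prop, ll_p
        chain.append(theta.copy())
    return np.array(chain)

for w in (0.1, 0.5, 0.9):                # vary the stream weighting
    print(w, metropolis(w)[2500:].mean(axis=0))
```

    Varying w shifts the posterior, most visibly for the slow-pool rate kO, which is the qualitative effect of the stream weighting that the study quantifies.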

  4. Use of natural analog and modeling studies to constrain the effects of magmatic activity on long-term geologic repositories

    International Nuclear Information System (INIS)

    Valentine, G.A.; Rosenberg, N.D.; Crowe, B.M.; Perry, F.V.

    1995-01-01

    Examples of the application of natural-analog studies to the estimation of the consequences of a volcanic eruption penetrating a radioactive waste repository are given, including the criteria for analog selection and new data from ongoing studies. Examples of early modeling results focusing on the spatial and temporal scale of subsurface processes are also provided. All of these examples are taken from studies of the potential Yucca Mountain repository, Nevada, but similar approaches could be applied in other areas. In addition, studies of subsurface processes initiated by magmatic events serve as useful analogs for repository thermal loading studies.

  5. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the
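
    For context, a standard form of an implicit (port-)Hamiltonian system with constraints is shown below; this is generic textbook notation, not necessarily the authors' exact formulation.

```latex
% Implicit Hamiltonian system with constraints (generic form).
% J(x): skew-symmetric structure matrix, H: Hamiltonian (total energy),
% g(x): constraint directions, lambda: Lagrange multipliers.
\dot{x} = J(x)\,\frac{\partial H}{\partial x}(x) + g(x)\,\lambda ,
\qquad
0 = g(x)^{\top}\,\frac{\partial H}{\partial x}(x) ,
\qquad
J(x) = -J(x)^{\top} .
```

    Along solutions, dH/dt = (∂H/∂x)ᵀ J (∂H/∂x) + (∂H/∂x)ᵀ g λ = 0, so the constrained dynamics still conserve energy.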

  6. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field as well as the FE calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  7. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
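
    The shadow-test idea can be sketched as follows. This is a simplified, hypothetical illustration in which a greedy assembly stands in for the 0-1 LP solver normally used at each step, and the 2PL item pool, content quotas and ability update are all invented.

```python
# Sketch of shadow-test constrained adaptive testing (greedy stand-in for
# the 0-1 LP assembly). Item pool, constraints and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
pool = [{"a": rng.uniform(0.8, 2.0), "b": rng.normal(),
         "content": rng.choice(["algebra", "geometry"])} for _ in range(200)]

def info(i, theta):
    """Fisher information of 2PL item i at ability theta."""
    p = 1.0 / (1.0 + np.exp(-pool[i]["a"] * (theta - pool[i]["b"])))
    return pool[i]["a"] ** 2 * p * (1 - p)

def shadow_test(theta, given, length=20, quota=(10, 10)):
    """Greedily assemble a maximum-information full test that contains all
    items already administered and respects the content quotas."""
    need = {"algebra": quota[0], "geometry": quota[1]}
    for i in given:
        need[pool[i]["content"]] -= 1
    test = list(given)
    for i in sorted(set(range(len(pool))) - set(given),
                    key=lambda i: -info(i, theta)):
        if len(test) == length:
            break
        if need[pool[i]["content"]] > 0:
            test.append(i)
            need[pool[i]["content"]] -= 1
    return test

# At each step: administer the most informative free item of the shadow
# test, re-estimate ability (faked here by a perturbation), and repeat.
theta, given = 0.0, []
for step in range(20):
    st = shadow_test(theta, given)
    free = [i for i in st if i not in given]
    given.append(max(free, key=lambda i: info(i, theta)))
    theta += rng.normal(0, 0.2)          # placeholder for an EAP/ML update
print(sorted(given))
```

    Because every administered item comes from a feasible full test, the completed test automatically satisfies all content constraints.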

  8. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  9. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    tribpo

    J. Astrophys. Astr. (2000) 21, 299-302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam & V. Krishan, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. E-mail: mangalam@iiap.ernet.in, vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...

  10. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disk around constant Ricci curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap, when the beams re-unite. We obtain the exact time gap called Sagnac delay in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
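
    Schematically, the result has the structure of the flat-space Sagnac delay dressed by small correction terms; the sketch below only indicates the form (the exact coefficients are derived in the paper).

```latex
% Schematic structure of the Sagnac delay for a circular path of radius R
% traversed at angular velocity Omega (indicative only; the exact
% coefficients are derived in the paper).
\Delta t \simeq \frac{4\pi R^{2}\Omega}{c^{2}}
\left[\, 1
  + \mathcal{O}\!\left(\frac{GM}{c^{2}R}\right)
  + \mathcal{O}\!\left(\frac{a\Omega}{c}\right)
  + \mathcal{O}\!\left(R_{0}R^{2}\right) \right] ,
```

    with M and a the mass and spin of the Earth and R_0 the constant Ricci curvature; bounding the correction terms by the residual measurement uncertainty yields the allowed intervals for R_0.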

  11. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg, Emil-Hilb-Weg 22, 97074 Würzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg, Emil-Hilb-Weg 22, 97074 Würzburg (Germany); Staub, Florian [Theory Division, CERN, 1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  12. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  13. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  14. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Full Text Available. Constrained Model-based Reinforcement Learning. Benjamin van Niekerk (School of Computer Science, University of the Witwatersrand, South Africa), Andreas Damianou (Amazon.com, Cambridge, UK), Benjamin Rosman (Council for Scientific and Industrial Research, and School...). MULTIPLE SHOOTING: Using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured nonlinear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...
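
    The multiple-shooting fragment quoted above can be made concrete with a small sketch: a 1-D double integrator steered to the origin, where the continuity ("defect") conditions between the N segments become equality constraints of an NLP. The dynamics, horizon and cost are invented, and scipy's generic SLSQP stands in for the structured NLP solvers used in practice.

```python
# Minimal direct multiple-shooting sketch (illustrative, not the paper's
# code): steer a 1-D double integrator to the origin over N subintervals,
# enforcing state continuity between segments as equality constraints.
import numpy as np
from scipy.optimize import minimize

N, T = 10, 2.0
dt = T / N
x0 = np.array([1.0, 0.0])                # initial position and velocity

def integrate(x, u):
    """One segment under piecewise-constant control u (exact here)."""
    return np.array([x[0] + x[1] * dt + 0.5 * u * dt ** 2, x[1] + u * dt])

def unpack(z):
    xs = z[: 2 * N].reshape(N, 2)        # segment start states x_1..x_N
    us = z[2 * N :]                      # controls u_0..u_{N-1}
    return xs, us

def objective(z):
    _, us = unpack(z)
    return dt * np.sum(us ** 2)          # control effort

def defects(z):
    """Matching conditions between segments, plus terminal condition."""
    xs, us = unpack(z)
    starts = np.vstack([x0, xs[:-1]])
    gaps = [integrate(starts[k], us[k]) - xs[k] for k in range(N)]
    return np.concatenate(gaps + [xs[-1]])   # last state pinned to origin

res = minimize(objective, np.zeros(3 * N),
               constraints={"type": "eq", "fun": defects}, method="SLSQP")
xs, us = unpack(res.x)
print("terminal state:", xs[-1], "cost:", res.fun)
```

    The segment start states enter as extra decision variables, which is exactly what makes the resulting NLP sparse and structured in the full method.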

  15. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays g̃ → qq̄χ and g̃ → gχ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e⁺e⁻ colliders in the search for supersymmetry can be directly compared. (orig.)

  16. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

    The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Model Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.

  17. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
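
    A minimal sketch of simulating one such reflected process, assuming a 1-D drift-diffusion with mirror reflection at a barrier at x = 0 in an Euler-Maruyama scheme; the paper's setting is 2-D with general barriers and adds latent-path inference, which this does not attempt.

```python
# Euler-Maruyama simulation of a reflected SDE: dX = mu dt + sigma dW,
# constrained to the half-line x >= 0 by mirror reflection (illustrative
# parameters; a stand-in for movement bounded by a shoreline at x = 0).
import numpy as np

rng = np.random.default_rng(2)

def simulate_reflected(x0=1.0, mu=-0.5, sigma=1.0, dt=0.01, n=10_000):
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        step = mu * dt + sigma * np.sqrt(dt) * rng.normal()
        x[k] = abs(x[k - 1] + step)      # mirror reflection at the barrier
    return x

path = simulate_reflected()
print("fraction of time within 0.1 of the barrier:", np.mean(path < 0.1))
```

    The negative drift pushes the process against the barrier, so the path accumulates time near x = 0 instead of crossing it, which is the qualitative behaviour a reflected SDE encodes.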

  18. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-offs in new product development. It starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It is then followed by a review of different simulation optimization approaches to address constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  19. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  20. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted by cKP_{K+1,M}) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. © 1997 American Institute of Physics.

  1. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  2. Constraining composite Higgs models using LHC data

    Science.gov (United States)

    Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar

    2018-03-01

    We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in the 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5_L-5_R representation, to what we call here the `extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the `extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, as compared to f ≥ 1 TeV for the minimal 5_L-5_R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong sector coset SO(6)/SO(5) for fermion embeddings up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.

  3. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  4. Constraining statistical-model parameters using fusion and spallation reactions

    Directory of Open Access Journals (Sweden)

    Charity Robert J.

    2011-10-01

    The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy). The present work focuses on fission and intermediate-mass-fragment emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

  5. Fast optimization of statistical potentials for structurally constrained phylogenetic models

    Directory of Open Access Journals (Sweden)

    Rodrigue Nicolas

    2009-09-01

    Background: Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC) models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimations by the use of computationally heavy Markov Chain Monte Carlo sampling algorithms. Results: Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We assess that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters, and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time for the optimization procedure). Conclusion: Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.

  6. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)
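
    The defining "physics constraint" is that the quadratic interaction terms conserve energy. For a three-mode triad this reduces to the interaction coefficients summing to zero, as the hypothetical sketch below verifies numerically (illustrative numbers only, not a model from the paper).

```python
# Sketch of the energy-conserving constraint on quadratic regression terms:
# triad coefficients c are restricted so that x . B(x, x) = 0 identically.
import numpy as np

c = np.array([1.0, -0.7, -0.3])          # constrained: c.sum() == 0

def quad(x):
    """Triad interaction; energy-neutral because the coefficients sum to zero."""
    return c * np.array([x[1] * x[2], x[0] * x[2], x[0] * x[1]])

x = np.array([0.4, -1.2, 0.8])
print("energy tendency from quadratic terms:", x @ quad(x))   # ~0 by design

# In a constrained MLR fit, the linear/damping and noise parameters would be
# estimated by regression while c stays inside this energy-conserving set,
# which is what rules out the finite-time blow up of unconstrained models.
```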

  7. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information contents contained in flux- and biometric-based data to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only or biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also

  8. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

    The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model, denoted ShiftCP, has proven useful for the modeling of latency changes in trial based neuroimaging data [17]. In order to facilitate component interpretation we presently extend the shiftCP model such that the extracted components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the shiftCP model...

  9. Modeling constrained sintering of bi-layered tubular structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei

    2015-01-01

    Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe the densification ... and thermo-mechanical analysis. Results from the analytical model are found to agree well with finite element simulations as well as measurements from the sintering experiment.

  10. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B-L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
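
    For reference, the King relation these searches exploit can be written schematically as follows (common notation, not necessarily the paper's): the modified (mass-normalized) isotope shifts of two transitions are linearly related, and a new boson coupling adds a term that violates the linearity.

```latex
% King relation with a new-physics term (schematic).
% m\delta\nu: modified isotope shift of a transition for an isotope pair,
% K, F: electronic constants; alpha_NP: new-physics coupling;
% X, m\gamma: electronic and nuclear factors of the new interaction.
m\delta\nu_{i} = K_{ij} + F_{ij}\, m\delta\nu_{j}
  + \alpha_{\mathrm{NP}}\, X_{ij}\, m\gamma ,
```

    where the last term is the King-linearity-violating contribution that the anticipated spectroscopic precision would bound.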

  11. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal- dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian, and alternating methods as special cases, where our rates apply.

  12. Toward Cognitively Constrained Models of Language Processing: A Review

    Directory of Open Access Journals (Sweden)

    Margreet Vogelzang

    2017-09-01

    Full Text Available Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. This way, it generates new predictions that can be tested in experiments, thus generating new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005, Reitter et al. (2011, and Van Rij et al. (2010, all implemented in the cognitive architecture Adaptive Control of Thought—Rational (Anderson et al., 2004. These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we

  13. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.

  14. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy and the dispersive long wave (DLW) hierarchy as well as the TB hierarchy are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. In particular, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and t_n-constrained flows whose adjoint representations and Lax pairs are given. (general)

  15. Constraining viscous dark energy models with the latest cosmological data

    Science.gov (United States)

    Wang, Deng; Yan, Yang-Jie; Meng, Xin-He

    2017-10-01

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they just deviate very slightly from the standard cosmological model and can alleviate effectively the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and the scale invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models as the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios.

  16. Constraining viscous dark energy models with the latest cosmological data

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Yan, Yang-Jie; Meng, Xin-He [Nankai University, Department of Physics, Tianjin (China)

    2017-10-15

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they just deviate very slightly from the standard cosmological model and can alleviate effectively the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and the scale invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models as the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios. (orig.)

  17. A constrained Rasch model of trace redintegration in serial recall.

    Science.gov (United States)

    Roodenrys, Steven; Miller, Leonie M

    2008-04-01

    The notion that verbal short-term memory tasks, such as serial recall, make use of information in long-term as well as in short-term memory is instantiated in many models of these tasks. Such models incorporate a process in which degraded traces retrieved from a short-term store are reconstructed, or redintegrated (Schweickert, 1993), through the use of information in long-term memory. This article presents a conceptual and mathematical model of this process based on a class of item-response theory models. It is demonstrated that this model provides a better fit to three sets of data than does the multinomial processing tree model of redintegration (Schweickert, 1993) and that a number of conceptual accounts of serial recall can be related to the parameters of the model.

  18. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular the unrealistic representation of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, showed to be useful to

  19. An inexact fuzzy-chance-constrained air quality management model.

    Science.gov (United States)

    Xu, Ye; Huang, Guohe; Qin, Xiaosheng

    2010-07-01

    Regional air pollution is a major concern for almost every country because it not only directly relates to economic development, but also poses significant threats to environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated through integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and could deal with uncertainties expressed as not only possibilistic distributions but also discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables could be satisfied at different confidence levels such that various solutions with different risk and cost considerations could be obtained. The developed model was applied to a hypothetical case of regional air quality management. Six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty were taken into consideration. The results demonstrated that IFAMM could help decision-makers generate cost-effective air quality management patterns, gain in-depth insights into effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme could achieve lower total abatement cost than a nontrading one.
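
    The mechanics of a chance constraint are easiest to see in the classical probabilistic case; the paper's constraints are fuzzy rather than probabilistic, so the sketch below is only an analogue. It uses an invented two-technology abatement problem in which raising the confidence level tightens the deterministic equivalent and raises the abatement cost.

```python
# Classical chance-constrained analogue (illustrative): require an emission
# limit to hold with confidence alpha when the limit b is N(mu, sigma):
#   Pr(a.x <= b) >= alpha  <=>  a.x <= mu - sigma * Phi^{-1}(alpha).
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

cost = np.array([3.0, 5.0])              # abatement cost of two technologies
a = np.array([-2.0, -1.5])               # emission reduction per unit (negated)
mu, sigma = -10.0, 1.5                   # limit 2*x1 + 1.5*x2 >= 10, uncertain

for alpha in (0.6, 0.9, 0.99):
    b = mu - sigma * norm.ppf(alpha)     # tighter limit at higher confidence
    res = linprog(cost, A_ub=[a], b_ub=[b], bounds=[(0, None)] * 2)
    print(f"alpha={alpha}: cost={res.fun:.2f}, x={res.x.round(2)}")
```

    The same confidence-level trade-off between system economy and reliability appears in the fuzzy setting, with the chance operator playing the role of the Gaussian quantile here.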

  20. Dynamic term structure models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Meldrum, Andrew

    This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...

  1. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
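
    The quantity being maximized can be illustrated with the classic transfer-matrix entropy calculation for the hard-square constraint; this is a standard computation included only for orientation, not the paper's PRF optimization.

```python
# Transfer-matrix estimate of the entropy of the 2-D hard-square constraint
# (no two horizontally or vertically adjacent 1s) on a strip of given width.
import numpy as np

def hard_square_entropy(width=12):
    # Valid rows: bitmasks with no two adjacent 1s.
    rows = [r for r in range(1 << width) if r & (r >> 1) == 0]
    # Two rows may be stacked iff they share no 1 in the same column.
    T = np.array([[float((a & b) == 0) for b in rows] for a in rows])
    lam = np.max(np.linalg.eigvalsh(T))  # T is symmetric 0/1
    return np.log2(lam) / width          # bits/symbol for an infinite strip

print(hard_square_entropy())  # ~0.59; converges to 0.5878... as width grows
```

    The strip value decreases toward the known 2-D hard-square entropy of about 0.5878 bits/symbol as the width increases, which gives a feel for the constrained capacities that the PRF entropies in the paper are compared against.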

  2. Gluon field strength correlation functions within a constrained instanton model

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Esaibegyan, S.V.; Maximov, A.E.; Mikhailov, S.V.

    2000-01-01

    We suggest a constrained instanton (CI) solution in the physical QCD vacuum which is described by large-scale vacuum field fluctuations. This solution decays exponentially at large distances. It is stable only if the interaction of the instanton with the background vacuum field is small and additional constraints are introduced. The CI solution is explicitly constructed in the ansatz form, and the two-point vacuum correlator of the gluon field strengths is calculated in the framework of the effective instanton vacuum model. At small distances the results are qualitatively similar to the single instanton case; in particular, the D_1 invariant structure is small, which is in agreement with the lattice calculations. (orig.)

  3. Neuroticism and conscientiousness respectively constrain and facilitate short-term plasticity within the working memory neural network.

    Science.gov (United States)

    Dima, Danai; Friston, Karl J; Stephan, Klaas E; Frangou, Sophia

    2015-10-01

    Individual differences in cognitive efficiency, particularly in relation to working memory (WM), have been associated both with personality dimensions that reflect enduring regularities in brain configuration, and with short-term neural plasticity, that reflects task-related changes in brain connectivity. To elucidate the relationship of these two divergent mechanisms, we tested the hypothesis that personality dimensions, which reflect enduring aspects of brain configuration, inform about the neurobiological framework within which short-term, task-related plasticity, as measured by effective connectivity, can be facilitated or constrained. As WM consistently engages the dorsolateral prefrontal (DLPFC), parietal (PAR), and anterior cingulate cortex (ACC), we specified a WM network model with bidirectional, ipsilateral, and contralateral connections between these regions from a functional magnetic resonance imaging dataset obtained from 40 healthy adults while performing the 3-back WM task. Task-related effective connectivity changes within this network were estimated using Dynamic Causal Modelling. Personality was evaluated along the major dimensions of Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. Only two dimensions were relevant to task-dependent effective connectivity. Neuroticism and Conscientiousness respectively constrained and facilitated neuroplastic responses within the WM network. These results suggest individual differences in cognitive efficiency arise from the interplay between enduring and short-term plasticity in brain configuration. © 2015 Wiley Periodicals, Inc.

  4. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Hospital outpatient departments operate by selling fixed period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full-time and part-time doctors and optimally allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department). In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of the hired full-time and part-time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the decision of the appointment scheduling to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. Then we use a chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single level programming, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  5. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Up scaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.

  6. Criticisms and defences of the balance-of-payments constrained growth model: some old, some new

    Directory of Open Access Journals (Sweden)

    John S.L. McCombie

    2011-12-01

    This paper assesses various critiques that have been levelled over the years against Thirlwall’s Law and the balance-of-payments constrained growth model. It starts by assessing the criticisms that the law is largely capturing an identity; that the law of one price renders the model incoherent; and that statistical testing using cross-country data rejects the hypothesis that the actual and the balance-of-payments equilibrium growth rates are the same. It goes on to consider the argument that calculations of the “constant-market-shares” income elasticities of demand for exports demonstrate that the UK (and by implication other advanced countries could not have been balance-of-payments constrained in the early postwar period. Next Krugman’s interpretation of the law (or what he terms the “45-degree rule”, which is at variance with the usual demand-oriented explanation, is examined. The paper next assesses attempts to reconcile the demand and supply side of the model and examines whether or not the balance-of-payments constrained growth model is subject to the fallacy of composition. It concludes that none of these criticisms invalidate the model, which remains a powerful explanation of why growth rates differ.
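
    For reference, the law under discussion in its standard form: with export growth x, world income growth z, and income elasticities of demand for exports and imports ε and π, the balance-of-payments constrained growth rate is

```latex
% Thirlwall's law (standard statement).
y_{B} = \frac{x}{\pi} = \frac{\varepsilon\, z}{\pi} .
```

    Much of the debate the paper reviews turns on whether this expression is an identity or a behavioural relation, and on how ε and π should be estimated.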

  7. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    Science.gov (United States)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

  8. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
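
    One way to picture such a generalization is Viterbi decoding over a state augmented with constraint bookkeeping. Below is a minimal, hypothetical sketch for a two-state HMM with the invented global constraint that state 1 must be visited at least once; the paper's constraint-solving machinery is more general than this hand-rolled flag.

```python
# Viterbi decoding under a global path constraint ("visit state 1 at least
# once"), by augmenting the DP state with a constraint-satisfaction flag.
# Model numbers are invented for illustration.
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities
B = np.array([[0.7, 0.3], [0.4, 0.6]])   # emission probabilities, 2 symbols
pi0 = np.array([0.6, 0.4])
obs = [0, 1, 1, 0, 1]

def viterbi_constrained(obs):
    n = len(obs)
    # delta[t, s, f]: best log-prob of a path ending in state s with flag f
    delta = np.full((n, 2, 2), -np.inf)
    psi = np.zeros((n, 2, 2, 2), dtype=int)  # backpointers (state, flag)
    for s in range(2):
        f = 1 if s == 1 else 0
        delta[0, s, f] = np.log(pi0[s]) + np.log(B[s, obs[0]])
    for t in range(1, n):
        for s in range(2):
            for f_prev in range(2):
                f = 1 if (s == 1 or f_prev) else 0
                for s_prev in range(2):
                    cand = (delta[t - 1, s_prev, f_prev]
                            + np.log(A[s_prev, s]) + np.log(B[s, obs[t]]))
                    if cand > delta[t, s, f]:
                        delta[t, s, f] = cand
                        psi[t, s, f] = (s_prev, f_prev)
    # Enforce the constraint: only accept end states with flag == 1.
    s = max(range(2), key=lambda s: delta[n - 1, s, 1])
    f, path = 1, [s]
    for t in range(n - 1, 0, -1):
        s, f = psi[t, s, f]
        path.append(s)
    return path[::-1]

print(viterbi_constrained(obs))
```

    Tracking the flag in the DP state keeps the decoding exact; richer constraints enlarge the augmented state space, which is where a constraint solver becomes useful.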

  9. Dark matter in a constrained E6 inspired SUSY model

    International Nuclear Information System (INIS)

    Athron, P.; Harries, D.; Nevzorov, R.; Williams, A.G.

    2016-01-01

    We investigate dark matter in a constrained E6 inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of E6 leads to an additional U(1)_N symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time we also show that there is a huge volume of parameter space, with a predominantly Higgsino dark matter candidate that explains all the relic abundance, that will be discoverable with XENON1T. Finally we demonstrate that for the E6 inspired model the exotic leptoquarks could still be light and within range of future LHC searches.

  10. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent from mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  11. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
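
    The stability issue can be illustrated with a toy one-dimensional analogue (not the CMSSM solver): near a fixed point where the self-consistency map has slope of magnitude greater than one, fixed point iteration fails, while solving the same condition as a root-finding problem, which is the essence of the shooting method, still succeeds. The map g below is purely hypothetical.

```python
# Fixed point iteration vs. root finding on a toy self-consistency map.
import numpy as np
from scipy.optimize import brentq

def g(x):
    # hypothetical map x = g(x); at the fixed point x* ~ 0.706 the slope
    # satisfies |g'(x*)| > 1, so fixed point iteration is unstable there
    return 3.4 * x * (1.0 - x)

x = 0.7
for _ in range(100):
    x = g(x)                                  # settles into a 2-cycle instead

x_star = brentq(lambda y: g(y) - y, 0.1, 0.99)   # root finding still converges
print(x, x_star)   # the iterate misses the solution that brentq locates
```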

  12. Future sea level rise constrained by observations and long-term commitment

    Science.gov (United States)

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-01-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28–56 cm, 37–77 cm, and 57–131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The “constrained extrapolation” approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections. PMID:26903648
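
    The combination step can be sketched as a simple Monte Carlo sum: sample each calibrated contribution and report percentiles of the total. The per-contribution distributions below are invented placeholders, not the paper's calibrated values.

```python
# Combining independent sea level contributions into an uncertainty range.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# per-contribution 2100 projections in metres (all numbers illustrative)
thermal    = rng.normal(0.20, 0.05, n)
glaciers   = rng.normal(0.12, 0.03, n)
greenland  = rng.normal(0.10, 0.05, n)
antarctica = rng.normal(0.08, 0.07, n)
total = thermal + glaciers + greenland + antarctica
lo, hi = np.percentile(total, [5, 95])
print(f"total sea level rise, 5-95%: {lo:.2f}-{hi:.2f} m")
```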

  14. A HARDCORE model for constraining an exoplanet's core size

    Science.gov (United States)

    Suissa, Gabrielle; Chen, Jingjing; Kipping, David

    2018-05-01

    The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences of the interior structure are impeded by a fundamental degeneracy that exists between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fraction, CRFmin and CRFmax. These hold true even for planets with light volatile envelopes, but require the planet to be fully differentiated and that layers denser than iron are forbidden. With both bounds in hand, a marginal CRF can also be inferred by sampling in-between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints hit around a per cent, providing a clear target for observers.
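
    The flavour of the core-radius-fraction inference can be conveyed with a bare two-layer sketch. The densities below are assumed, uncompressed illustrative constants, and the paper's actual boundary conditions account for far more structure.

```python
# Two-layer core radius fraction: rho_bulk = f**3*rho_core + (1-f**3)*rho_mantle.
def core_radius_fraction(rho_bulk, rho_core=8300.0, rho_mantle=4100.0):
    """Solve the two-layer density balance for the core radius fraction f.
    Densities (kg/m^3) are illustrative, not the paper's interior model."""
    f_cubed = (rho_bulk - rho_mantle) / (rho_core - rho_mantle)
    return min(max(f_cubed, 0.0), 1.0) ** (1.0 / 3.0)

print(core_radius_fraction(5510.0))   # ~0.70 for an Earth-like bulk density
```

    With uncompressed densities the toy value overshoots Earth's true CRF of 0.55, which is exactly the kind of bias the paper's boundary-condition treatment is designed to handle.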

  15. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product...

  16. Modeling Oil Exploration and Production: Resource-Constrained and Agent-Based Approaches

    International Nuclear Information System (INIS)

    Jakobsson, Kristofer

    2010-05-01

    Energy is essential to the functioning of society, and oil is the single largest commercial energy source. Some analysts have concluded that the peak in oil production is soon about to happen on the global scale, while others disagree. Such incompatible views can persist because the issue of 'peak oil' cuts through the established scientific disciplines. The question is: what characterizes the modeling approaches that are available today, and how can they be further developed to improve a trans-disciplinary understanding of oil depletion? The objective of this thesis is to present long-term scenarios of oil production (Paper I) using a resource-constrained model; and an agent-based model of the oil exploration process (Paper II). It is also an objective to assess the strengths, limitations, and future development potentials of resource-constrained modeling, analytical economic modeling, and agent-based modeling. Resource-constrained models are only suitable when the time frame is measured in decades, but they can give a rough indication of which production scenarios are reasonable given the size of the resource. However, the models are comprehensible, transparent and the only feasible long-term forecasting tools at present. It is certainly possible to distinguish between reasonable scenarios, based on historically observed parameter values, and unreasonable scenarios with parameter values obtained through flawed analogy. The economic subfield of optimal depletion theory is founded on the notion of rational economic agents, and there is a causal relation between decisions made at the micro-level and the macro-result. In terms of future improvements, however, the analytical form considerably restricts the versatility of the approach. Agent-based modeling makes it feasible to combine economically motivated agents with a physical environment. An example relating to oil exploration is given in Paper II, where it is shown that the exploratory activities of individual
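
    A minimal resource-constrained (Hubbert-style) production sketch, in which cumulative production follows a logistic curve limited by an assumed ultimately recoverable resource (URR), illustrates the class of model used for the long-term scenarios; all parameter values below are purely illustrative.

```python
# Logistic (Hubbert-style) production constrained by the resource size.
import numpy as np

URR = 2.2e12     # barrels, hypothetical ultimately recoverable resource
k = 0.05         # logistic growth rate per year (illustrative)
t_peak = 2010.0  # assumed peak year

t = np.arange(1900, 2101)
Q = URR / (1.0 + np.exp(-k * (t - t_peak)))   # cumulative production
q = np.gradient(Q, t)                         # annual production rate
print(f"peak rate ~ {q.max():.3e} barrels/yr in {t[q.argmax()]}")
```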

  17. Attentional control constrains visual short-term memory: Insights from developmental and individual differences

    Science.gov (United States)

    Astle, D.E.; Nobre, A.C.; Scerif, G.

    2014-01-01

    The mechanisms by which attentional control biases mnemonic representations have attracted much interest but remain poorly understood. As attention and memory develop gradually over childhood and variably across individuals, assessing how participants of different ages and ability attend to mnemonic contents can elucidate their interplay. In Experiment 1, 7-, 10-year-olds and adults were asked to report whether a probe item had been part of a previously presented four-item array. The initial array could either be uncued, preceded (“pre-cued”) or followed (“retro-cued”) by a spatial cue orienting attention to one of the potential item locations. Performance across groups was significantly improved by both cue types and individual differences in children’s retrospective attentional control predicted their visual short-term and working memory span, whereas their basic ability to remember in the absence of cues did not. Experiment 2 imposed a variable delay between the array and the subsequent orienting cue. Cueing benefits were greater in adults compared to 10-year-olds, but they persisted even when cues followed the array by nearly 3 seconds, suggesting that orienting operated on durable short-term representations for both age groups. The findings indicate that there are substantial developmental and individual differences in the ability to control attention to memory and that in turn these differences constrain visual short-term memory capacity. PMID:20680889

  18. Toward cognitively constrained models of language processing : A review

    NARCIS (Netherlands)

    Vogelzang, Margreet; Mills, Anne C.; Reitter, David; van Rij, Jacolien; Hendriks, Petra; van Rijn, Hedderik

    2017-01-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained

  19. Inexact Multistage Stochastic Chance Constrained Programming Model for Water Resources Management under Uncertainties

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2017-01-01

    Full Text Available In order to formulate water allocation schemes under uncertainties in water resources management systems, an inexact multistage stochastic chance constrained programming (IMSCCP) model is proposed. The model integrates stochastic chance constrained programming, multistage stochastic programming, and inexact stochastic programming within a general optimization framework to handle the uncertainties occurring in both constraints and the objective. These uncertainties are expressed as probability distributions, intervals with multiply distributed stochastic boundaries, dynamic features of the long-term water allocation plans, and so on. Compared with existing inexact multistage stochastic programming, the IMSCCP can be used to assess more system risks and handle more complicated uncertainties in water resources management systems. The IMSCCP model is applied to a hypothetical case study of water resources management. In order to construct an approximate solution for the model, a hybrid algorithm, which incorporates stochastic simulation, back propagation neural network, and genetic algorithm, is proposed. The results show that the optimal value represents the maximal net system benefit achieved with a given confidence level under chance constraints, and the solutions provide optimal water allocation schemes to multiple users over a multiperiod planning horizon.

  20. Integrating EarthScope Data to Constrain the Long-Term Effects of Tectonism on Continental Lithosphere

    Science.gov (United States)

    Porter, R. C.; van der Lee, S.

    2017-12-01

    One of the most significant products of the EarthScope experiment has been the development of new seismic tomography models that take advantage of the consistent station design, regular 70-km station spacing, and wide aperture of the EarthScope Transportable Array (TA) network. These models have led to the discovery and interpretation of additional compositional, thermal, and density anomalies throughout the continental US, especially within tectonically stable regions. The goal of this work is to use data from the EarthScope experiment to better elucidate the temporal relationship between tectonic activity and seismic velocities. To accomplish this, we compile several upper-mantle seismic velocity models from the Incorporated Research Institutions for Seismology (IRIS) Earth Model Collaboration (EMC) and compare these to a tectonic age model we compiled using geochemical ages from the Interdisciplinary Earth Data Alliance: EarthChem Database. Results from this work confirm quantitatively that the time elapsed since the most recent tectonic event is a dominant influence on seismic velocities within the upper mantle across North America. To further understand this relationship, we apply mineral-physics models for peridotite to estimate upper-mantle temperatures for the continental US from tomographically imaged shear velocities. This work shows that the relationship between the estimated temperatures and the time elapsed since the most recent tectonic event is broadly consistent with plate cooling models, yet shows intriguing scatter. Ultimately, this work constrains the long-term thermal evolution of continental mantle lithosphere.

  1. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    International Nuclear Information System (INIS)

    Chen Ximing; Wang Bin; Pan Nana; Gong Yungui

    2011-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.

  2. CP properties of symmetry-constrained two-Higgs-doublet models

    CERN Document Server

    Ferreira, P M; Nachtmann, O; Silva, Joao P

    2010-01-01

    The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.

  3. Adaptively Constrained Stochastic Model Predictive Control for the Optimal Dispatch of Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaogang Guo

    2018-01-01

    Full Text Available In this paper, an adaptively constrained stochastic model predictive control (MPC) scheme is proposed to achieve less-conservative coordination between energy storage units and uncertain renewable energy sources (RESs) in a microgrid (MG). Besides the economic objective of MG operation, the limits on state-of-charge (SOC) and discharging/charging power of the energy storage unit are formulated as chance constraints when accommodating uncertainties of RESs, considering that mild violations of these constraints are allowed during long-term operation, and a closed-loop online update strategy is performed to adaptively tighten or relax the constraints according to the actual deviation of the violation probability from the desired level, as well as the current change rate of this deviation. Numerical studies show that the proposed adaptively constrained stochastic MPC for MG optimal operation is much less conservative compared with scenario-optimization-based robust MPC, and also presents better convergence to the desired constraint violation level than other online update strategies.
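
    The closed-loop update idea can be sketched as follows: monitor the empirical violation frequency and tighten or relax a constraint back-off accordingly. The plant proxy, gains and thresholds below are invented, and the paper's actual update law is richer.

```python
# Adaptive tightening of a chance constraint from observed violations.
import numpy as np

rng = np.random.default_rng(3)
target, backoff, gain = 0.05, 0.0, 0.5   # desired violation prob., margin
violations = []
for k in range(1, 2001):
    soc = 0.9 + 0.05 * rng.standard_normal() - backoff  # proxy SOC outcome
    violations.append(soc > 0.95)                       # SOC limit exceeded?
    p_hat = np.mean(violations)
    backoff += gain / k * (p_hat - target)   # tighten if violating too often
print(f"empirical violation rate: {np.mean(violations):.3f}")
```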

  4. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols. Some sites also implement additional protocols. There are currently 7 TEAM Sites with plans to grow the network to 15 by June 30, 2009 and 50 TEAM Sites by the end of 2010. At each TEAM Site, data is gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data is organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System - it consumes and executes spatio-temporal queries, and analytical functions that are performed on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types including types for observation objects (e.g. bird, butterfly and trees), sampling unit, person, role, protocol, site and the relationship of these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read operations, insert operations and update operations. Following are some typical operations: The operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block
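
    A sketch of the record structure and the get()-style query described above, using hypothetical Python types (the actual TEAM system is a spatio-temporal database, not in-memory lists):

```python
# Minimal stand-in for the TEAM observation record and its get() operation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    site: str
    protocol: str
    sampling_unit: str
    timestamp: datetime
    attributes: dict          # e.g. {"species": "...", "count": 3}

def get(records, site, protocol, start, end, sampling_unit=None):
    """Return all records for a site/protocol within [start, end], optionally
    restricted to one sampling unit (mirrors the get(...) operation above)."""
    return [r for r in records
            if r.site == site and r.protocol == protocol
            and start <= r.timestamp <= end
            and (sampling_unit is None or r.sampling_unit == sampling_unit)]
```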

  5. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)

    2012-10-15

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models the rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single layer finite element is adopted to model a three layer active constrained layer damped rotating plate. Unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  7. Top ten models constrained by b → sγ

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, J.L. [Stanford Univ., CA (United States)

    1994-12-01

    The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.

  8. High estimates of supply constrained emissions scenarios for long-term climate risk assessment

    International Nuclear Information System (INIS)

    Ward, James D.; Mohr, Steve H.; Myers, Baden R.; Nel, Willem P.

    2012-01-01

    The simulated effects of anthropogenic global warming have become important in many fields and most models agree that significant impacts are becoming unavoidable in the face of slow action. Improvements to model accuracy rely primarily on the refinement of parameter sensitivities and on plausible future carbon emissions trajectories. Carbon emissions are the leading cause of global warming, yet current considerations of future emissions do not consider structural limits to fossil fuel supply, invoking a wide range of uncertainty. Moreover, outdated assumptions regarding the future abundance of fossil energy could contribute to misleading projections of both economic growth and climate change vulnerability. Here we present an easily replicable mathematical model that considers fundamental supply-side constraints and demonstrate its use in a stochastic analysis to produce a theoretical upper limit to future emissions. The results show a significant reduction in prior uncertainty around projected long term emissions, and even assuming high estimates of all fossil fuel resources and high growth of unconventional production, cumulative emissions tend to align to the current medium emissions scenarios in the second half of this century. This significant finding provides much-needed guidance on developing relevant emissions scenarios for long term climate change impact studies. - Highlights: ► GHG emissions from conventional and unconventional fossil fuels modelled nationally. ► Assuming worst-case: large resource, high growth, rapid uptake of unconventional. ► Long-term cumulative emissions align well with the SRES medium emissions scenario. ► High emissions are unlikely to be sustained through the second half of this century. ► Model designed to be easily extended to test other scenarios e.g. energy shortages.

  9. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

    Full Text Available Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.

  10. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Full Text Available Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
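
    The core construction, an ODE giving each subpopulation's mean response and a mixture giving the population likelihood, can be sketched as follows; the ODE, parameters and noise model are invented placeholders, not the NGF/Erk1/2 pathway model.

```python
# ODE-constrained mixture: two subpopulations share the ODE, differ in rate.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

def trajectory(k, t):
    # subpopulation mean response: dx/dt = k*(1 - x), x(0) = 0 (toy ODE)
    sol = solve_ivp(lambda tt, x: k * (1.0 - x), (t[0], t[-1]), [0.0], t_eval=t)
    return sol.y[0]

t = np.linspace(0.0, 10.0, 20)
k1, k2, w, sigma = 0.3, 1.5, 0.4, 0.05      # hypothetical parameters
mu1, mu2 = trajectory(k1, t), trajectory(k2, t)

def neg_log_likelihood(data):
    # each cell follows subpopulation 1 with probability w, otherwise 2
    L1 = norm.pdf(data, mu1, sigma).prod(axis=1)
    L2 = norm.pdf(data, mu2, sigma).prod(axis=1)
    return -np.log(w * L1 + (1.0 - w) * L2).sum()

rng = np.random.default_rng(2)
cells = np.where(rng.random((100, 1)) < w, mu1, mu2)    # mix the two means
print(neg_log_likelihood(cells + sigma * rng.standard_normal((100, 20))))
```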

  12. Constraining new physics with collider measurements of Standard Model signatures

    Energy Technology Data Exchange (ETDEWEB)

    Butterworth, Jonathan M. [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom); Grellscheid, David [IPPP, Department of Physics, Durham University,Durham, DH1 3LE (United Kingdom); Krämer, Michael; Sarrazin, Björn [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, 52056 Aachen (Germany); Yallup, David [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom)

    2017-03-14

    A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented. The method, ‘Constraints On New Theories Using Rivet’ (CONTUR), exploits the fact that particle-level differential measurements made in fiducial regions of phase-space have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. The CONTUR approach should be seen as complementary to the discovery potential of direct searches, being designed to eliminate inconsistent BSM proposals in a context where many (but perhaps not all) measurements are consistent with the Standard Model. We demonstrate, using a competitive simplified dark matter model, the power of this approach. The CONTUR method is highly scalable to other models and future measurements.

  13. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    Full Text Available In this study, land surface related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150 000 km²) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte-Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation) which is easily moisture stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of
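
    The Monte-Carlo conditioning step can be sketched as sampling a land-cover-specific parameter, running the model, and keeping the "behavioural" sets whose simulated dry-season evaporation matches the satellite estimate; the toy model, threshold and values below are all hypothetical.

```python
# GLUE-style behavioural selection against a satellite evaporation estimate.
import numpy as np

rng = np.random.default_rng(4)

def dry_season_evaporation(su_max):
    # stand-in for the hydrological model's dry-season evaporation (mm/day)
    return 2.0 * (1.0 - np.exp(-100.0 / su_max))

e_satellite = 1.7                           # satellite estimate (invented)
samples = rng.uniform(50.0, 1000.0, 5000)   # unsaturated-zone size parameter
behavioural = samples[np.abs(dry_season_evaporation(samples) - e_satellite) < 0.1]
print(len(behavioural), behavioural.mean())
```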

  14. Inference with constrained hidden Markov models in PRISM

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints and present constraint solving techniques for efficient inference. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.

  15. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme 'Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the 'error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the 'error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
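
    For orientation, an SPPT-style perturbation scales the net parametrised tendency by (1 + r), with r a smooth random pattern. The sketch below uses an AR(1) process in time only, with invented amplitude and memory; the operational scheme also correlates r in space and combines several scales.

```python
# SPPT-style multiplicative perturbation of a parametrised tendency.
import numpy as np

rng = np.random.default_rng(5)
phi, sigma = 0.95, 0.3    # AR(1) memory and perturbation std (illustrative)
r = 0.0
for step in range(100):
    r = phi * r + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal()
    tendency = -0.1       # stand-in for the net parametrised tendency (K/s)
    perturbed = (1.0 + np.clip(r, -1.0, 1.0)) * tendency   # SPPT-style scaling
print(r, perturbed)
```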

  16. Toyotarity. Term, model, range

    Directory of Open Access Journals (Sweden)

    Stanisław Borkowski

    2013-04-01

    Full Text Available The terms Toyotarity and BOST are presented in this chapter. The BOST method allows relations to be defined between material resources and human resources, and between human resources and human resources (TOYOTARITY). This term was also coined by the Author (and is legally protected). The methodology is the outcome of 12 years of work.

  17. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  18. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  19. Modeling Power-Constrained Optimal Backlight Dimming for Color Displays

    DEFF Research Database (Denmark)

    Burini, Nino; Nadernejad, Ehsan; Korhonen, Jari

    2013-01-01

    In this paper, we present a framework for modeling color liquid crystal displays (LCDs) having local light-emitting diode (LED) backlight with dimming capability. The proposed framework includes critical aspects like leakage, clipping, light diffusion and human perception of luminance and allows...

  20. A marked correlation function for constraining modified gravity models

    Science.gov (United States)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
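
    A brute-force sketch of a marked correlation estimator, weighting each pair by the product of its marks and normalising by the squared mean mark; the positions and marks below are synthetic, and a survey analysis would use an optimised pair counter.

```python
# Marked correlation M(r): <m_i m_j> over pairs at separation r / mean(m)^2.
import numpy as np

def marked_correlation(pos, marks, r_edges):
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)            # unique pairs only
    d, mm = d[iu], (marks[:, None] * marks[None, :])[iu]
    M = np.empty(len(r_edges) - 1)
    for i in range(len(M)):
        sel = (d >= r_edges[i]) & (d < r_edges[i + 1])
        M[i] = mm[sel].mean() / marks.mean() ** 2 if sel.any() else np.nan
    return M    # M(r) = 1 when marks are uncorrelated with clustering

rng = np.random.default_rng(6)
pos = rng.uniform(0.0, 100.0, size=(500, 3))
marks = rng.lognormal(size=500)     # e.g. a function of local density
print(marked_correlation(pos, marks, np.array([0.0, 5.0, 10.0, 20.0])))
```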

  2. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  3. Risk reserve constrained economic dispatch model with wind power penetration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, W.; Sun, H.; Peng, Y. [Department of Electrical and Electronics Engineering, Dalian University of Technology, Dalian, 116024 (China)

    2010-12-15

    This paper develops a modified economic dispatch (ED) optimization model with wind power penetration. Due to the uncertain nature of wind speed, both overestimation and underestimation of the available wind power are compensated using the up and down spinning reserves. In order to determine both of these two reserve demands, the risk-based up and down spinning reserve constraints are presented considering not only the uncertainty of available wind power, but also the load forecast error and generator outage rates. The predictor-corrector primal-dual interior point method is utilized to solve the proposed ED model. Simulation results of a system with ten conventional generators and one wind farm demonstrate the effectiveness of the proposed method. (authors)

  4. Constraining quantum collapse inflationary models with CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Benetti, Micol; Alcaniz, Jailson S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro, RJ (Brazil); Landau, Susana J., E-mail: micolbenetti@on.br, E-mail: slandau@df.uba.ar, E-mail: alcaniz@on.br [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires and IFIBA, CONICET, Ciudad Universitaria, PabI, Buenos Aires 1428 (Argentina)

    2016-12-01

    The hypothesis of the self-induced collapse of the inflaton wave function was proposed as responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which led to a perfect scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows one to distinguish departures from the standard approach due to the inclusion of the collapse hypothesis. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence as the minimal standard cosmological model ΛCDM, while the other scenario is weakly disfavoured with respect to the standard cosmology.

  5. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper expresses the specification and application of constrained surface deformation, and reports an experimental performance comparison of three prevailing similarity assessment algorithms on the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design, measuring the difference level between the deformed new design and the initial sample model and indicating whether that difference is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U system moment based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method gains the best performance in the comparison. Based on this result, we propose a novel approach integrating surface constraints and shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.
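
    One simple variant of the shape histogram descriptor bins vertex distances from the centroid and compares normalised histograms; the sketch below is an assumed minimal version, not the exact descriptor evaluated in the paper.

```python
# Centroid-distance shape histogram and an L1-based similarity score.
import numpy as np

def shape_histogram(vertices, bins=32):
    d = np.linalg.norm(vertices - vertices.mean(axis=0), axis=1)
    h, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def similarity(v1, v2):
    h1, h2 = shape_histogram(v1), shape_histogram(v2)
    return 1.0 - 0.5 * np.abs(h1 - h2).sum()   # 1 means identical histograms

rng = np.random.default_rng(7)
model = rng.standard_normal((1000, 3))                    # sample mesh vertices
deformed = model + 0.02 * rng.standard_normal((1000, 3))  # small deformation
print(similarity(model, deformed))
```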

  6. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    Science.gov (United States)

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.

  7. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  8. Improved Modeling Approaches for Constrained Sintering of Bi-Layered Porous Structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Esposito, Vincenzo

    2012-01-01

    Shape instabilities during constrained sintering experiments of bi-layer porous and dense cerium gadolinium oxide (CGO) structures have been analyzed. An analytical and a numerical model based on the continuum theory of sintering have been implemented to describe the evolution of bow and densificat...

  9. On meeting capital requirements with a chance-constrained optimization model.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
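
    Under a normality assumption, the standard deterministic counterpart of a chance constraint P(r'x ≥ b) ≥ 0.95 is μ'x − z·√(x'Σx) ≥ b, with z the 95% normal quantile. The sketch below illustrates this generic reformulation with invented numbers; it is not the paper's CreditMetrics-based construction.

```python
# Deterministic convex counterpart of a 95% chance constraint (toy numbers).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

mu = np.array([0.08, 0.05, 0.02])            # hypothetical mean returns
Sigma = np.diag([0.0025, 0.0004, 0.0001])    # hypothetical covariance
z, b = norm.ppf(0.95), 0.01                  # 95% level, required floor

cons = [
    {"type": "eq",   "fun": lambda x: x.sum() - 1.0},          # fully invested
    {"type": "ineq", "fun": lambda x: mu @ x - z * np.sqrt(x @ Sigma @ x) - b},
]
res = minimize(lambda x: -(mu @ x), x0=np.ones(3) / 3.0,
               bounds=[(0.0, 1.0)] * 3, constraints=cons)
print(res.x, mu @ res.x)
```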

  10. Constrained prose recall and the assessment of long-term forgetting: the case of ageing and the Crimes Test.

    Science.gov (United States)

    Baddeley, Alan; Rawlings, Bruce; Hayes, Amie

    2014-01-01

    It has become increasingly clear that some patients with apparently normal memory may subsequently show accelerated long-term forgetting (ALF), with dramatic loss when retested. We describe a constrained prose recall task that attempts to lay the foundations for a test suitable for detecting ALF sensitively and economically. Instead of the usual narrative structure of prose recall tests, it employs a matrix structure involving four episodes, each describing a minor crime, with each crime involving the binding into a coherent episode of a specified range of features, involving the victim, the crime, the criminal and the location, allowing a total of 80 different probed recall questions to be generated. These are used to create four equivalent 20-item tests, three of which are used in the study. After a single verbal presentation, young and elderly participants were tested on three occasions, immediately, and by telephone after a delay of 6 weeks, and at one of a varied range of intermediate points. The groups were approximately matched on immediate test; both showed systematic forgetting which was particularly marked in the elderly. We suggest that constrained prose recall has considerable potential for the study of long-term forgetting.

  11. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  12. Constrained model predictive control for load-following operation of APR reactors

    International Nuclear Information System (INIS)

    Kim, Jae Hwan; Lee, Sim Won; Kim, Ju Hyun; Na, Man Gyun; Yu, Keuk Jong; Kim, Han Gon

    2012-01-01

    The load-following operation of the APR+ reactor is needed to control the power effectively using the control rods and to minimize reliance on boric acid for reactivity control, for flexibility of plant operation. Usually, the imbalance in the axial flux distribution that occurs during load-following operation is caused by xenon-induced oscillation. Xenon has a very high absorption cross-section, and its impact on the reactor is delayed by the iodine precursor. Power maneuvering using automatic load-following operation has advantages in terms of safety and economic operation of the reactor, so the controller has to be designed efficiently. Therefore, an advanced control method that meets conditions such as automatic control, flexibility, safety, and convenience is necessary for load-following operation of the APR+ reactor. In this paper, the constrained model predictive control (MPC) method is applied to design the APR+ reactor's automatic load-following controller for integrated control of the thermal power level and the axial shape index (ASI). Some controllers use only the current tracking command, but MPC considers future commands in addition to the current tracking command, so it can achieve better tracking performance. Furthermore, MPC is used in many industrial process control systems. The basic concept of MPC is to solve an optimization problem over a finite future time interval at the present time and to implement the first optimal control input as the current control input. The KISPAC-1D code, which models the APR+ nuclear power plants, is interfaced to the proposed controller to verify the tracking performance of the reactor power level and ASI. It is shown that the proposed controller exhibits very fast tracking responses.
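
    The receding-horizon concept described above can be sketched generically: at each step, optimize the input sequence over a finite horizon using a model, apply only the first input, then repeat. The scalar plant, horizon and weights below are toys, not the KISPAC-1D reactor model.

```python
# Generic receding-horizon (MPC) loop with an input-magnitude constraint.
import numpy as np
from scipy.optimize import minimize

a, b = 0.95, 0.1          # toy plant: x+ = a*x + b*u
H, u_max = 10, 1.0        # horizon length and input bound

def mpc_step(x0, ref):
    def cost(u):
        x, J = x0, 0.0
        for uk in u:                       # roll the model over the horizon
            x = a * x + b * uk
            J += (x - ref) ** 2 + 0.01 * uk ** 2
        return J
    u = minimize(cost, np.zeros(H), bounds=[(-u_max, u_max)] * H).x
    return u[0]                            # apply only the first input

x = 0.0
for k in range(50):
    x = a * x + b * mpc_step(x, ref=0.8)   # closed loop tracks the setpoint
print(x)
```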

  13. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    Science.gov (United States)

    SUN, D.; TONG, L.

    2002-05-01

    A detailed model for beams with a partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfectly bonded region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonded area, it is assumed that no peel and shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviors of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the higher order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently results in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results reveal that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.

  14. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    International Nuclear Information System (INIS)

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5–4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model (namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction) were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.

  15. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p',p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p'/p$ and the level $l$.

  16. Effects of long-term contracts on firms exercising market power in transmission constrained electricity markets

    International Nuclear Information System (INIS)

    Nam, Young Woo; Yoon, Yong Tae; Park, Jong-Keun; Hur, Don; Kim, Sung-Soo

    2006-01-01

    Electricity markets with only a few large firms are often vulnerable to less competitive behavior than desired. The presence of transmission constraints further restricts the competition among firms and provides more opportunities for firms to exercise market power. While it is generally acknowledged that long-term contracts provide good measures for mitigating market power in the spot market (thus reducing undesired price spikes), it is much less clear how effective these contracts are when the market is severely limited by transmission constraints. In this paper, an analytical approach through finding a Nash equilibrium is presented to investigate the effects of long-term contracts on firms exercising market power in a bid-based pool with transmission constraints. Surprisingly, the analysis in this paper shows that the presence of long-term contracts may result in reduced expected social welfare. A straightforward consequence of the analysis presented in this paper should be helpful for the regulators in Korea in reconsidering whether to offer vesting contracts to generating companies in the near future. (author)

  17. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from an instantaneous to a time-integrated treatment of ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, the Hydrogen Epoch of Reionization Array and the Square Kilometre Array, in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and the ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.

  18. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    International Nuclear Information System (INIS)

    Neuhoff, Karsten; Barquin, Julian; Vazquez, Miguel; Boots, Maroeska; Rijkers, Fieke A.M.; Ehrenmann, Andreas; Hobbs, Benjamin F.

    2005-01-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)
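    The closing remark about a simple two-node model suggests how little machinery is needed to reproduce Cournot behavior qualitatively. The sketch below computes a symmetric Cournot-Nash equilibrium by best-response iteration; the linear demand and cost figures are illustrative assumptions, not values from the northwest Europe data set, and no network constraints are included:

        # Toy Cournot duopoly: inverse demand P(Q) = a - b*Q, marginal cost c.
        # Firm i's best response to rival output q_j maximizes
        # (P(q_i + q_j) - c) * q_i, giving q_i = (a - c - b*q_j) / (2b).
        a, b, c = 100.0, 1.0, 20.0

        def best_response(q_rival):
            return max(0.0, (a - c - b * q_rival) / (2.0 * b))

        q1 = q2 = 0.0
        for _ in range(100):           # iterate to the Nash equilibrium
            q1, q2 = best_response(q2), best_response(q1)

        price = a - b * (q1 + q2)
        print(q1, q2, price)           # analytic equilibrium: q_i = (a-c)/(3b)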

  19. Network-constrained Cournot models of liberalized electricity markets. The devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Department of Applied Economics, Sidgwick Ave., University of Cambridge, CB3 9DE (United Kingdom); Barquin, Julian; Vazquez, Miguel [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, c/Santa Cruz de Marcenado 26-28015 Madrid (Spain); Boots, Maroeska G. [Energy Research Centre of the Netherlands ECN, Badhuisweg 3, 1031 CM Amsterdam (Netherlands); Ehrenmann, Andreas [Judge Institute of Management, University of Cambridge, Trumpington Street, CB2 1AG (United Kingdom); Hobbs, Benjamin F. [Department of Geography and Environmental Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States); Rijkers, Fieke A.M. [Contributed while at ECN, now at Nederlandse Mededingingsautoriteit (NMa), Dte, Postbus 16326, 2500 BH Den Haag (Netherlands)

    2005-05-15

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model.

  20. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Cambridge Univ., Dept. of Applied Economics, Cambridge (United Kingdom); Barquin, Julian; Vazquez, Miguel [Universidad Pontificia Comillas, Inst. de Investigacion Tecnologica, Madrid (Spain); Boots, Maroeska; Rijkers, Fieke A.M. [Energy Research Centre of the Netherlands ECN, Amsterdam (Netherlands); Ehrenmann, Andreas [Cambridge Univ., Judge Inst. of Management, Cambridge (United Kingdom); Hobbs, Benjamin F. [Johns Hopkins Univ., Dept. of Geography and Environmental Engineering, Baltimore, MD (United States)

    2005-05-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  1. Constraining Marsh Carbon Budgets Using Long-Term C Burial and Contemporary Atmospheric CO2 Fluxes

    Science.gov (United States)

    Forbrich, I.; Giblin, A. E.; Hopkinson, C. S.

    2018-03-01

    Salt marshes are sinks for atmospheric carbon dioxide that respond to environmental changes related to sea level rise and climate. Here we assess how climatic variations affect marsh-atmosphere exchange of carbon dioxide in the short term and compare it to long-term burial rates based on radiometric dating. Five years of atmospheric measurements show a strong interannual variation in atmospheric carbon exchange, varying from -104 to -233 g C m-2 a-1 with a mean of -179 ± 32 g C m-2 a-1. Variation in these annual sums was best explained by differences in rainfall early in the growing season. In the two years with below-average rainfall in June, both net uptake and the Normalized Difference Vegetation Index were lower than in the other three years. Measurements in 2016 and 2017 suggest that the mechanism behind this variability may be rainfall decreasing soil salinity, which has been shown to strongly control productivity. The net ecosystem carbon balance, determined as the burial rate from four sediment cores using radiometric dating, was lower than the net uptake measured by eddy covariance (mean: 110 ± 13 g C m-2 a-1). The difference between these estimates was significant and may arise because the atmospheric measurements do not capture lateral carbon fluxes due to tidal exchange. Overall, the difference was smaller than values reported in the literature for lateral fluxes, and it highlights the importance of investigating lateral C fluxes in future studies.

  2. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    Science.gov (United States)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are now widely used to collect location information; they are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time when it passes a node of the network, but they do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it includes four steps: spatial filtering, spatial refinement, temporal filtering and probability calculation. Finally, experiments are performed on simulated data to study the performance of the index. The precision and recall of the result set are defined, and how the query arguments affect them is also discussed.

  3. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  4. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun; Belabbes, Abderrezak; Manchon, Aurelien

    2017-01-01

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first principles methods. The first one uses the constrained moment method to build a spin spiral in real space, while the second method uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI as a function of the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, that determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is more adapted to model DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  5. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun

    2017-10-29

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first principles methods. The first one uses the constrained moment method to build a spin spiral in real space, while the second method uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI as a function of the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, that determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is more adapted to model DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  6. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature over the entire operating range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes, such as the continuous stirred tank reactor (CSTR), is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The size of the reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are calculated by the multistep predictive approach. Actual control variables are computed by solving the finite-horizon constrained optimal control problem in a receding-horizon manner. A simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.
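    As a minimal illustration of the predict/update machinery referred to above, the sketch below implements a generic textbook Kalman filter step in Python; the system matrices and noise covariances are placeholder assumptions, not the paper's Hammerstein-Wiener design:

        import numpy as np

        A = np.array([[0.95, 0.1], [0.0, 0.9]])  # hypothetical linear subsystem
        C = np.array([[1.0, 0.0]])
        Q = 1e-3 * np.eye(2)                     # process noise covariance
        R = np.array([[1e-2]])                   # measurement noise covariance

        def kalman_step(x, P, y):
            x_pred = A @ x                       # predict
            P_pred = A @ P @ A.T + Q
            S = C @ P_pred @ C.T + R             # innovation covariance
            K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ (y - C @ x_pred)    # update with measurement y
            P_new = (np.eye(2) - K @ C) @ P_pred
            return x_new, P_new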

  7. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J

    2018-01-01

    PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES....... RESULTS: When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels...

  8. Kovacs effect and fluctuation-dissipation relations in 1D kinetically constrained models

    International Nuclear Information System (INIS)

    Buhot, Arnaud

    2003-01-01

    Strong and fragile glass relaxation behaviours are obtained by simply changing the constraints of the kinetically constrained Ising chain from symmetric to purely asymmetric. We study the out-of-equilibrium dynamics of these two models, focusing on the Kovacs effect and the fluctuation-dissipation (FD) relations. The Kovacs or memory effect, commonly observed in structural glasses, is present for both constraints but is enhanced with the asymmetric ones. Most surprisingly, the related FD relations satisfy the FD theorem in both cases. This result strongly differs from the simple quenching procedure, where the asymmetric model presents strong deviations from the FD theorem.

  9. A cost-constrained model of strategic service quality emphasis in nursing homes.

    Science.gov (United States)

    Davis, M A; Provan, K G

    1996-02-01

    This study employed structural equation modeling to test the relationship between three aspects of the environmental context of nursing homes (Medicaid dependence, ownership status, and market demand) and two basic strategic orientations: low cost and differentiation based on service quality emphasis. Hypotheses were proposed and tested against data collected from a sample of nursing homes operating in a single state. Because of the overwhelming importance of cost control in the nursing home industry, a cost-constrained strategy perspective was supported. Specifically, while the three contextual variables had no direct effect on service quality emphasis, the entire model was supported when cost control orientation was introduced as a mediating variable.

  10. A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System

    Directory of Open Access Journals (Sweden)

    Yanzhe Hu

    2017-03-01

    Full Text Available As a type of renewable energy, wind energy is integrated into the power system at ever higher penetration levels. It is challenging for power system operators (PSOs) to cope with the uncertainty and variation of wind power and its forecasts. A chance-constrained economic dispatch (ED) model for the wind-thermal-energy storage system (WTESS) is developed in this paper. An optimization model with the wind power and the energy storage system (ESS) is first established, considering both the economic benefits of the system and the reduction of wind curtailment. The original wind power generation is processed by the ESS to obtain the final wind power output generation (FWPG). A Gaussian mixture model (GMM) distribution is adopted to characterize the probability and cumulative distribution functions with an analytical expression. Then, a chance-constrained ED model integrating the wind-energy storage system (W-ESS) is developed by considering both the overestimation and underestimation costs of the system, and it is solved by the sequential linear programming method. Numerical simulations using wind power data from four wind farms are performed on the developed ED model with the IEEE 30-bus system. It is verified that the developed ED model effectively integrates the uncertain and variable wind power. The GMM distribution accurately fits the actual distribution of the final wind power output, and the ESS helps to effectively decrease the operation costs.
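    The mechanics of a chance constraint are easiest to see in the single-Gaussian textbook case; the sketch below is that simplification (the paper instead fits a Gaussian mixture to the final wind power output, and the numbers here are placeholders). A probabilistic requirement becomes a deterministic quantile condition:

        import numpy as np
        from scipy.stats import norm

        # Require P(shortfall <= reserve) >= 1 - eps, shortfall ~ N(mu, sigma^2).
        mu, sigma, eps = 10.0, 4.0, 0.05

        # Deterministic equivalent: reserve >= mu + sigma * z_{1-eps}.
        reserve_needed = norm.ppf(1.0 - eps, loc=mu, scale=sigma)
        print(reserve_needed)          # about 16.6 for these numbers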

  11. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    Science.gov (United States)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still singled out as the largest source of uncertainty in the latest generation of climate models. To date there is no consensus among the models on whether tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models predict either deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SSTs. We use a simulator approach to compare observations and simulations and focus on the low-layered clouds. Vertically, the clouds deepen for warmer SSTs, namely through a decrease of the cloud fraction in the lowest levels and an increase around the top of the boundary layer. This feature coincides with an increase of the high-level cloud fraction (z > 6.5 km). Although the models' spread is large, the multi-model mean captures the observed variations, but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand, and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other hand. Finally, we use CALIPSO observations to constrain the model by determining (i) what set of parameters allows the observed relationships to be reproduced and (ii) what the consequences are for the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental parameters.

  12. The global economic long-term potential of modern biomass in a climate-constrained world

    Science.gov (United States)

    Klein, David; Humpenöder, Florian; Bauer, Nico; Dietrich, Jan Philipp; Popp, Alexander; Bodirsky, Benjamin Leon; Bonsch, Markus; Lotze-Campen, Hermann

    2014-07-01

    Low-stabilization scenarios consistent with the 2 °C target project large-scale deployment of purpose-grown lignocellulosic biomass. In case a GHG price regime integrates emissions from energy conversion and from land-use/land-use change, the strong demand for bioenergy and the pricing of terrestrial emissions are likely to coincide. We explore the global potential of purpose-grown lignocellulosic biomass and ask how the supply prices of biomass depend on prices for greenhouse gas (GHG) emissions from the land-use sector. Using the spatially explicit global land-use optimization model MAgPIE, we construct bioenergy supply curves for ten world regions and a global aggregate in two scenarios, with and without a GHG tax. We find that the implementation of GHG taxes is crucial for the slope of the supply function and the GHG emissions from the land-use sector. Global supply prices start at $5 GJ-1 and increase almost linearly, doubling at 150 EJ (in 2055 and 2095). The GHG tax increases bioenergy prices by $5 GJ-1 in 2055 and by $10 GJ-1 in 2095, since it effectively stops deforestation and thus excludes large amounts of high-productivity land. Prices additionally increase due to costs for N2O emissions from fertilizer use. The GHG tax decreases global land-use change emissions by one-third. However, the carbon emissions due to bioenergy production increase by more than 50% from conversion of land that is not under emission control. Average yields required to produce 240 EJ in 2095 are roughly 600 GJ ha-1 yr-1 with and without tax.

  13. The global economic long-term potential of modern biomass in a climate-constrained world

    International Nuclear Information System (INIS)

    Klein, David; Humpenöder, Florian; Bauer, Nico; Dietrich, Jan Philipp; Popp, Alexander; Leon Bodirsky, Benjamin; Bonsch, Markus; Lotze-Campen, Hermann

    2014-01-01

    Low-stabilization scenarios consistent with the 2 °C target project large-scale deployment of purpose-grown lignocellulosic biomass. In case a GHG price regime integrates emissions from energy conversion and from land-use/land-use change, the strong demand for bioenergy and the pricing of terrestrial emissions are likely to coincide. We explore the global potential of purpose-grown lignocellulosic biomass and ask the question how the supply prices of biomass depend on prices for greenhouse gas (GHG) emissions from the land-use sector. Using the spatially explicit global land-use optimization model MAgPIE, we construct bioenergy supply curves for ten world regions and a global aggregate in two scenarios, with and without a GHG tax. We find that the implementation of GHG taxes is crucial for the slope of the supply function and the GHG emissions from the land-use sector. Global supply prices start at $5 GJ-1 and increase almost linearly, doubling at 150 EJ (in 2055 and 2095). The GHG tax increases bioenergy prices by $5 GJ-1 in 2055 and by $10 GJ-1 in 2095, since it effectively stops deforestation and thus excludes large amounts of high-productivity land. Prices additionally increase due to costs for N2O emissions from fertilizer use. The GHG tax decreases global land-use change emissions by one-third. However, the carbon emissions due to bioenergy production increase by more than 50% from conversion of land that is not under emission control. Average yields required to produce 240 EJ in 2095 are roughly 600 GJ ha-1 yr-1 with and without tax. (letter)

  14. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph-theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs, and the observed number of driver neurons. The distance-constrained model suggests optimal long-distance synaptic connections as a key feature specifying control of the network.
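    The count of driver neurons mentioned above is conventionally obtained from a maximum matching of the directed network: the minimum number of driver nodes equals the number of unmatched nodes, with at least one driver always required. The sketch below applies this standard structural-controllability recipe with networkx; the four-node example graph is a placeholder, not the C. elegans connectome:

        import networkx as nx

        edges = [("n1", "n2"), ("n2", "n3"), ("n3", "n1"), ("n2", "n4")]
        nodes = {u for e in edges for u in e}

        # Bipartite representation: an out-copy and an in-copy of every node.
        B = nx.Graph()
        B.add_nodes_from((u, "out") for u in nodes)
        B.add_nodes_from((v, "in") for v in nodes)
        B.add_edges_from(((u, "out"), (v, "in")) for u, v in edges)

        top = [(u, "out") for u in nodes]
        matching = nx.bipartite.maximum_matching(B, top_nodes=top)
        matched = len(matching) // 2   # the dict stores both directions

        # Nodes whose in-copy is unmatched must be driven directly.
        n_drivers = max(len(nodes) - matched, 1)
        print(n_drivers)               # 1 for this toy graph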

  15. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10^6 m^3/yr. The sources consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano-tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that the northward-trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  16. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    Science.gov (United States)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or a constant fraction of it is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (a risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems, depending on several positive parameters, is given, and new results are presented. Additional results concern the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, the passages to the limit with respect to the parameters, through which we proceed from the original problems to the degenerate ones, are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of a surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  17. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) scheme is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory is contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear-system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and by applications to a cart-damper-spring system and a one-link robot manipulator.

  18. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action-oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  19. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    Full Text Available This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a relevant triggering rule, which allows a considerable reduction of the computational load. Then, robust feasibility and closed-loop stability are proved, and it is shown that every subsystem state is driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.

  20. Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.

    Science.gov (United States)

    Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro

    2018-02-13

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general, and access control models in particular, are feasible. In this paper, we present a feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, supported by an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.

  1. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    Full Text Available We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions for the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit on the strength of the symmetron coupling to gravity (log β^2 < −0.9) when this is the only free parameter, and unable to constrain the model when the symmetry-breaking scale factor a_SSB is also free to vary.

  2. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    Science.gov (United States)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

    Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, and its representation in global-scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH, and modelled gross primary production (GPP) is compared to a compilation of independent observations on the stand scale. Compared to the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. The photosynthetic capacity of tropical trees derived in this study is substantially lower than the standard estimates currently used in terrestrial biosphere models. This causes a decrease in modelled GPP while significantly increasing modelled tropical vegetation surface temperatures, by up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.

  3. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    Full Text Available An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  4. A supply function model for representing the strategic bidding of the producers in constrained electricity markets

    International Nuclear Information System (INIS)

    Bompard, Ettore; Napoli, Roberto; Lu, Wene; Jiang, Xiuchen

    2010-01-01

    The modeling of the bidding behaviour of producers is a key point in the modeling and simulation of competitive electricity markets. In this paper, the linear supply function model is applied to find the supply function equilibrium (SFE) analytically. A new and efficient approach is also proposed to find SFEs for network-constrained electricity markets by finding the best slope of the supply function while varying the intercept; the method can be applied to large systems. The proposed approach is applied to the IEEE 118-bus test system, and a comparison between bidding the slope and bidding the intercept is also presented with reference to the test system. (author)

  5. Chance-constrained programming models for capital budgeting with NPV as fuzzy parameters

    Science.gov (United States)

    Huang, Xiaoxia

    2007-01-01

    In an uncertain economic environment, experts' knowledge about the outlays and cash inflows of available projects involves much vagueness rather than randomness. Investment outlays and annual net cash flows of a project are usually predicted by using experts' knowledge, and fuzzy variables can overcome the difficulties in predicting these parameters. In this paper, the capital budgeting problem with fuzzy investment outlays and fuzzy annual net cash flows is studied based on the credibility measure. The net present value (NPV) method is employed, and two fuzzy chance-constrained programming models for the capital budgeting problem are provided. A fuzzy simulation-based genetic algorithm is provided for solving the proposed models. Two numerical examples are also presented to illustrate the modelling idea and the effectiveness of the proposed algorithm.
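    For reference, the deterministic kernel of the NPV criterion that the fuzzy models build on is a one-liner; the cash flows and discount rate below are illustrative numbers, not the paper's examples:

        # Discount each year's net cash flow back to the present.
        def npv(outlay, cash_flows, rate):
            return -outlay + sum(cf / (1.0 + rate) ** t
                                 for t, cf in enumerate(cash_flows, start=1))

        print(npv(outlay=100.0, cash_flows=[30.0, 40.0, 50.0], rate=0.1))
        # about -2.10: the project fails the NPV > 0 acceptance test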

  6. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. Integer programming and constraint logic programming treat constraints in different ways and implement different methods; they were combined to exploit the strengths of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in a supply chain with multimodal transportation. The second is the two-echelon variant of the well-known capacitated vehicle routing problem.

  7. The Balance-of-Payments-Constrained Growth Model and the Limits to Export-Led Growth

    Directory of Open Access Journals (Sweden)

    Robert A. Blecker

    2000-12-01

    Full Text Available This paper discusses how A. P. Thirlwall's model of balance-of-payments-constrained growth can be adapted to analyze the idea of a "fallacy of composition" in the export-led growth strategy of many developing countries. The Deaton-Muellbauer model of the Almost Ideal Demand System (AIDS) is used to represent the adding-up constraints on individual countries' exports when they are all trying to export competing products to the same foreign markets (i.e. newly industrializing countries exporting similar types of manufactured goods to the OECD countries). The relevance of the model to the recent financial crises in developing countries and policy alternatives for redirecting development strategies are also discussed.

  8. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues, and the imperfectness of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
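    As a minimal illustration of non-negativity in linear model-based inversion (a generic non-negative least-squares solve, not the authors' conjugate-gradient algorithm), SciPy's nnls minimizes ||Ax - b|| subject to x >= 0; the random forward model below is a placeholder for a real optoacoustic one:

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        A = rng.random((50, 10))           # hypothetical forward model
        x_true = np.abs(rng.random(10))    # absorption is non-negative
        b = A @ x_true + 0.01 * rng.standard_normal(50)

        x_nn, residual = nnls(A, b)        # enforces x >= 0 in the inversion
        print(np.round(x_nn, 3))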

  9. Structural model of the Northern Latium volcanic area constrained by MT, gravity and aeromagnetic data

    Directory of Open Access Journals (Sweden)

    P. Gasparini

    1997-06-01

    Full Text Available The results of about 120 magnetotelluric soundings carried out in the Vulsini, Vico and Sabatini volcanic areas were modeled, along with Bouguer and aeromagnetic anomalies, to reconstruct a model of the structure of the shallow crust (less than 5 km depth). The interpretations were constrained by the information gathered from the deep boreholes drilled for geothermal exploration. MT and aeromagnetic anomalies allow the depth to the top of the sedimentary basement and the thickness of the volcanic layer to be inferred. Gravity anomalies are strongly affected by the variations in morphology of the top of the sedimentary basement, consisting of a Tertiary flysch, and of the interface with the underlying Mesozoic carbonates. Gravity data have also been used to extrapolate the thickness of the neogenic unit indicated by some boreholes. There is no evidence for other important density and susceptibility heterogeneities or deeper sources of magnetic and/or gravity anomalies anywhere in the surveyed area.

  10. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    Science.gov (United States)

    Dossett, Jason; Hu, Bin; Parkinson, David

    2014-03-01

    In order to explain cosmic acceleration without invoking "dark" physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher-derivative theory. We use data from the WiggleZ Dark Energy Survey to probe the formation of structure on large scales, which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution for this cross-over scale. We find that the addition of the WiggleZ power spectrum improves the constraints on B0 by an order of magnitude, providing the tightest bound on log10(B0) to date.

  11. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed to study its implementation in two Spanish hospitals.

  12. Constraining the dark energy models with H (z ) data: An approach independent of H0

    Science.gov (United States)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H (z ) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H (z ) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w (z ) is not constrained by the current expansion data. Combining the H (z ) and Type Ia supernova (SNIa) data, we find that the joint H (z )/SNIa statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H (z ) analysis alone. Moreover, the w0-w1 parameter space provided by the H (z )/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H (z ) and SNIa probes correctly reveals the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H (z ) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows a significant improvement of the constraints when the sample is increased to 100 H (z ) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.
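    One common way to make the chi-square estimator independent of the Hubble constant is to write H_model(z) = H0 E(z) and minimize over H0 analytically; the sketch below uses this standard trick (the paper's exact construction may differ) with placeholder data triplets (z, H, sigma):

        import numpy as np

        z = np.array([0.17, 0.40, 0.90, 1.30])
        H_obs = np.array([83.0, 95.0, 117.0, 168.0])
        sig = np.array([8.0, 17.0, 23.0, 17.0])

        def E_lcdm(z, Om):
            # Flat LCDM dimensionless expansion rate E(z) = H(z)/H0.
            return np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

        def chi2_marg(Om):
            E = E_lcdm(z, Om)
            A = np.sum(H_obs ** 2 / sig ** 2)
            B = np.sum(H_obs * E / sig ** 2)
            C = np.sum(E ** 2 / sig ** 2)
            return A - B ** 2 / C      # chi2 minimized over H0 (at H0 = B/C)

        best = min((chi2_marg(Om), Om) for Om in np.linspace(0.05, 0.6, 111))
        print(best)                    # (chi2_min, best-fit Omega_m)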

  13. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    Full Text Available We present a nonparametric shape-constrained algorithm for the segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of the circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation represents a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets show that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  14. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems.

  15. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery has abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thus extracting the spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of a pixel's posterior classification probability can be obtained using loopy belief propagation. The experiments show that the proposed algorithm can effectively maintain the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.

  16. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems.

  17. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation…
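
    The alternation the paper describes can be sketched on a generic box-constrained QP: the u-update solves an unconstrained quadratic problem (standing in for the Riccati-solved extended LQCP), the v-update projects onto the input limits, and a scaled dual variable couples the two. Problem data below are hypothetical.

        import numpy as np

        def admm_box_qp(H, g, lo, hi, rho=1.0, n_iter=200):
            # min 0.5*u'Hu + g'u  subject to  lo <= u <= hi, with the splitting u = v
            n = len(g)
            v, w = np.zeros(n), np.zeros(n)          # w is the scaled dual variable
            M = H + rho * np.eye(n)                  # fixed; factorize once in practice
            for _ in range(n_iter):
                u = np.linalg.solve(M, rho * (v - w) - g)   # equality-constrained QP step
                v = np.clip(u + w, lo, hi)                  # projection onto input limits
                w += u - v                                  # dual update
            return v

        H = np.array([[4.0, 1.0], [1.0, 2.0]])
        g = np.array([-8.0, -6.0])
        print(admm_box_qp(H, g, lo=np.zeros(2), hi=np.ones(2)))   # both limits bind here

    In the MPC setting the u-update inherits the Riccati structure of the extended LQCP, which is what keeps the per-iteration cost linear in the horizon length.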

  18. Short-term and long-term earthquake occurrence models for Italy: ETES, ERS and LTST

    Directory of Open Access Journals (Sweden)

    Maura Murru

    2010-11-01

    This study describes three earthquake occurrence models as applied to the whole Italian territory, to assess the occurrence probabilities of future (M ≥ 5.0) earthquakes: two short-term (24 hour) models, and one long-term (5 and 10 years). The first model for short-term forecasts is a purely stochastic epidemic-type earthquake sequence (ETES) model. The second short-term model is an epidemic rate-state (ERS) forecast based on a model that is physically constrained by applying the Dieterich rate-state constitutive law to earthquake clustering. The third forecast is based on a long-term stress transfer (LTST) model that considers the perturbations of earthquake probability for interacting faults by static Coulomb stress changes. These models have been submitted to the Collaboratory for the Study of Earthquake Predictability (CSEP) for forecast testing for Italy (ETH Zurich), and they were locked down to test their validity on real data in a future setting starting from August 1, 2009.

  19. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    International Nuclear Information System (INIS)

    Harlim, John; Mahdi, Adam; Majda, Andrew J.

    2014-01-01

    A central issue in contemporary science is the development of nonlinear data-driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics-constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet, with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
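
    The augmented-state idea can be sketched on a scalar toy model with cubic damping (the kind of nonlinearity that keeps statistical solutions bounded): the unknown coefficient is appended to the state and estimated jointly by a perturbed-observation ensemble Kalman filter. All settings below are hypothetical.

        import numpy as np
        rng = np.random.default_rng(0)

        dt, a_true, q, r = 0.05, 1.0, 0.05, 0.1
        x, obs = 1.0, []
        for _ in range(400):                          # simulate truth and noisy obs
            x += dt * (a_true * x - x**3) + np.sqrt(dt) * q * rng.standard_normal()
            obs.append(x + r * rng.standard_normal())

        Ne = 100                                      # ensemble over (state, coefficient)
        ens = np.stack([rng.normal(0.5, 1.0, Ne), rng.normal(0.0, 1.0, Ne)])
        for y in obs:
            ens[0] += dt * (ens[1] * ens[0] - ens[0]**3) \
                      + np.sqrt(dt) * q * rng.standard_normal(Ne)   # forecast
            anom = ens - ens.mean(axis=1, keepdims=True)
            gain = (anom * anom[0]).mean(axis=1) / (anom[0].var() + r**2)
            innov = y + r * rng.standard_normal(Ne) - ens[0]        # perturbed obs
            ens += gain[:, None] * innov                            # joint update
        print("estimated coefficient:", ens[1].mean(), "+/-", ens[1].std())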

  20. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, M. [Lockheed Martin Solar and Astrophysics Lab, Palo Alto, CA 94304 (United States); Manchester, W. B.; Van der Holst, B.; Sokolov, I.; Tóth, G.; Gombosi, T. I. [Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Mullinix, R. E.; Taktakishvili, A.; Chulaki, A., E-mail: jinmeng@lmsal.com, E-mail: chipm@umich.edu, E-mail: richard.e.mullinix@nasa.gov, E-mail: Aleksandre.Taktakishvili-1@nasa.gov [Community Coordinated Modeling Center, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2017-01-10

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimal manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using the Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters from a handful of observational quantities so that the modeled CMEs propagate with the desired CME speeds near the Sun. A test result with CMEs launched from different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  1. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. In particular

  2. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    Science.gov (United States)

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion brings more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and the error component, following a multivariate normal distribution, has a covariance structured into three parts, representing the correlation among routes, the transfer variance of routes, and the unobserved variance, respectively. Given the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in hierarchical Bayes form and a Metropolis–Hastings sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model shows good forecasting performance for calculating route choice probabilities and good application performance for predicting transfer flow volumes. PMID:28591188
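
    The estimation machinery can be illustrated in miniature (a plain binary probit with a Metropolis–Hastings sampler, not the full hierarchical CMNP with its structured covariance); data, prior and proposal scale are synthetic.

        import numpy as np
        from scipy.stats import norm
        rng = np.random.default_rng(1)

        X = rng.normal(size=(500, 2))                 # e.g. standardized time, transfers
        beta_true = np.array([-1.0, -0.5])
        y = (X @ beta_true + rng.standard_normal(500) > 0).astype(float)

        def log_post(beta):                           # probit likelihood + N(0, 10) prior
            p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
            return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) - beta @ beta / 20.0

        beta, lp, samples = np.zeros(2), -np.inf, []
        for it in range(5000):
            prop = beta + 0.1 * rng.standard_normal(2)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                beta, lp = prop, lp_prop
            if it >= 1000:
                samples.append(beta.copy())
        print("posterior mean:", np.mean(samples, axis=0))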

  3. Constraining climate sensitivity and continental versus seafloor weathering using an inverse geological carbon cycle model.

    Science.gov (United States)

    Krissansen-Totton, Joshua; Catling, David C

    2017-05-22

    The relative influences of tectonics, continental weathering and seafloor weathering in controlling the geological carbon cycle are unknown. Here we develop a new carbon cycle model that explicitly captures the kinetics of seafloor weathering to investigate carbon fluxes and the evolution of atmospheric CO2 and ocean pH since 100 Myr ago. We compare model outputs to proxy data, and rigorously constrain model parameters using Bayesian inverse methods. Assuming our forward model is an accurate representation of the carbon cycle, to fit proxies the temperature dependence of continental weathering must be weaker than commonly assumed. We find that 15-31 °C (1σ) surface warming is required to double the continental weathering flux, versus 3-10 °C in previous work. In addition, continental weatherability has increased 1.7-3.3 times since 100 Myr ago, demanding explanation by uplift and sea-level changes. The average Earth system climate sensitivity is  K (1σ) per CO2 doubling, which is notably higher than fast-feedback estimates. These conclusions are robust to assumptions about outgassing, modern fluxes and seafloor weathering kinetics.

  4. Modeling and Simulation of the Gonghe geothermal field (Qinghai, China) Constrained by Geophysical Data

    Science.gov (United States)

    Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.

    2017-12-01

    The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types, and it has become a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth to and the nature of the heat source, and to image the channel of heat flow. This work focuses on the origin of the geothermal field, using a numerical simulation method constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) profile across this area, we obtain the deep resistivity distribution. Using gravity anomaly inversion constrained by the resistivity profile, the density of the basins and the underlying rocks can be calculated. Combined with measured rock thermal conductivities, a 2D geothermal conceptual model of the Gonghe area is constructed. Then, an unstructured finite element method is used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated with temperature data from an observation well, and a good match was achieved between the measured values and the model's predicted values. Finally, the geothermal gradient and heat flow distribution of the model are calculated. According to the results of the geophysical exploration, there is a low-resistivity, low-density region (d5) below the geothermal field. We interpret this anomaly as generated by tectonic motion, with the tectonic movement creating an upstream channel for mantle-derived heat, so that the anomalous basement heat flow values are higher than in other regions. The model values simulated using that boundary condition match the measured values well. The simulated heat flow values show that the mantle-derived heat flow migrates through the boundary of the low-resistivity, low-density anomaly to the Gonghe geothermal field, with only a small fraction

  5. Spinal 5-HT7 Receptors and Protein Kinase A Constrain Intermittent Hypoxia-Induced Phrenic Long-term Facilitation

    Science.gov (United States)

    Hoffman, M.S.; Mitchell, G.S.

    2013-01-01

    Phrenic long-term facilitation (pLTF) is a form of serotonin-dependent respiratory plasticity induced by acute intermittent hypoxia (AIH). pLTF requires spinal Gq protein-coupled serotonin-2 receptor (5-HT2) activation, new synthesis of brain-derived neurotrophic factor (BDNF) and activation of its high-affinity receptor, TrkB. Intrathecal injections of selective agonists for Gs protein-coupled receptors (adenosine 2A and serotonin-7; 5-HT7) also induce long-lasting phrenic motor facilitation via TrkB “trans-activation.” Since serotonin release near phrenic motor neurons may activate multiple serotonin receptor subtypes, we tested the hypothesis that 5-HT7 receptor activation contributes to AIH-induced pLTF. A selective 5-HT7 receptor antagonist (SB-269970, 5mM, 12μl) was administered intrathecally at C4 to anesthetized, vagotomized and ventilated rats prior to AIH (3, 5-min episodes, 11% O2). Contrary to predictions, pLTF was greater in SB-269970 treated versus control rats (80±11% vs 45±6% 60 min post-AIH; p<0.05). Hypoglossal LTF was unaffected by spinal 5-HT7 receptor inhibition, suggesting that drug effects were localized to the spinal cord. Since 5-HT7 receptors are coupled to protein kinase A (PKA), we tested the hypothesis that PKA inhibits AIH-induced pLTF. Similar to 5-HT7 receptor inhibition, spinal PKA inhibition (KT-5720, 100μM, 15μl) enhanced pLTF (99±15% 60 min post-AIH; p<0.05). Conversely, PKA activation (8-br-cAMP, 100μM, 15μl) blunted pLTF versus control rats (16±5% vs 45±6% 60 min post-AIH; p<0.05). These findings suggest a novel mechanism whereby spinal Gs protein-coupled 5-HT7 receptors constrain AIH-induced pLTF via PKA activity. PMID:23850591

  6. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward a better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with their large number of model parameters, the application of these spatial datasets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is optimized first using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, Support Vector Regression (SVR); Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from the remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivity was initially underestimated in boreal and temperate forest and overestimated in tropical forests, but the parameter optimization scheme successfully reduced these biases. Our analysis

  7. Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial

    Directory of Open Access Journals (Sweden)

    A. Robinson

    2011-04-01

    Using a new approach to force an ice sheet model, we performed an ensemble of simulations of the Greenland Ice Sheet evolution during the last two glacial cycles, with emphasis on the Eemian Interglacial. This ensemble was generated by perturbing four key parameters in the coupled regional climate–ice sheet model and by introducing additional uncertainty in the prescribed "background" climate change. The sensitivity of the surface melt model to climate change was determined to be the dominant driver of ice sheet instability, as reflected by simulated ice sheet loss during the Eemian Interglacial period. To eliminate unrealistic parameter combinations, constraints from present-day and paleo information were applied. The constraints include (i) the diagnosed present-day surface mass balance partition between surface melting and ice discharge at the margin, (ii) the modeled present-day elevation at GRIP, and (iii) the modeled elevation reduction at GRIP during the Eemian. Using these three constraints, a total of 360 simulations with 90 different model realizations were filtered down to 46 simulations and 20 model realizations considered valid. The paleo constraint eliminated the more sensitive melt parameter values, in agreement with the surface mass balance partition assumption. The constrained simulations resulted in a range of Eemian ice loss of 0.4–4.4 m sea level equivalent, with a more likely range of about 3.7–4.4 m sea level equivalent if the GRIP δ18O isotope record can be considered an accurate proxy for the precipitation-weighted annual mean temperatures.
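
    The constraint-based filtering lends itself to a compact sketch: generate ensemble diagnostics and retain only the realizations satisfying all three constraints. The distributions and thresholds below are made-up placeholders, not the paper's values.

        import numpy as np
        rng = np.random.default_rng(2)

        n = 360                                            # ensemble members
        melt_frac = rng.uniform(0.2, 0.8, n)               # SMB melt/discharge partition
        grip_elev_err = rng.normal(0.0, 150.0, n)          # present-day GRIP misfit (m)
        eemian_lowering = rng.uniform(0.0, 1000.0, n)      # GRIP lowering in Eemian (m)

        valid = ((np.abs(melt_frac - 0.5) < 0.1) &         # constraint (i)
                 (np.abs(grip_elev_err) < 100.0) &         # constraint (ii)
                 (eemian_lowering > 200.0) &               # constraint (iii)
                 (eemian_lowering < 600.0))
        print(f"{valid.sum()} of {n} simulations considered valid")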

  8. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as the uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, resolving it above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised to determine a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
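
    A stripped-down version of this experiment can be written without Fatiando a Terra by approximating the buried body as a point mass, for which the vertical gradient has the closed form Gzz = G m (3 dz^2 - r^2) / r^5; density contrast and depth are drawn from preset (hypothetical) distributions and the response spread at a 2500 m distant borehole is accumulated.

        import numpy as np
        rng = np.random.default_rng(3)
        G = 6.674e-11                                     # m^3 kg^-1 s^-2

        def gzz_point(m, src, obs):
            d = obs - src                                 # observation minus source, (n, 3)
            r = np.linalg.norm(d, axis=-1)
            return G * m * (3 * d[..., 2]**2 - r**2) / r**5 * 1e9   # s^-2 -> Eotvos

        n = 100_000
        rho = rng.normal(300.0, 50.0, n)                  # density contrast, kg/m^3
        depth = rng.normal(1500.0, 100.0, n)              # burial depth, m
        volume = 200.0**3                                 # fixed body volume, m^3
        src = np.stack([np.zeros(n), np.zeros(n), depth], axis=1)
        obs = np.array([2500.0, 0.0, 0.0])                # borehole offset in x, at datum
        vals = gzz_point(rho * volume, src, obs)
        print(f"Gzz = {vals.mean():.4f} +/- {vals.std():.4f} E")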

  9. Internet gaming disorder: Inadequate diagnostic criteria wrapped in a constraining conceptual model.

    Science.gov (United States)

    Starcevic, Vladan

    2017-06-01

    Background and aims The paper "Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field" by Kuss, Griffiths, and Pontes (in press) critically examines the DSM-5 diagnostic criteria for Internet gaming disorder (IGD) and addresses the issue of whether IGD should be reconceptualized as gaming disorder, regardless of whether video games are played online or offline. This commentary provides additional critical perspectives on the concept of IGD. Methods The focus of this commentary is on the addiction model on which the concept of IGD is based, the nature of the DSM-5 criteria for IGD, and the inclusion of withdrawal symptoms and tolerance as the diagnostic criteria for IGD. Results The addiction framework on which the DSM-5 concept of IGD is based is not without problems and represents only one of multiple theoretical approaches to problematic gaming. The polythetic, non-hierarchical DSM-5 diagnostic criteria for IGD make the concept of IGD unacceptably heterogeneous. There is no support for maintaining withdrawal symptoms and tolerance as the diagnostic criteria for IGD without their substantial revision. Conclusions The addiction model of IGD is constraining and does not contribute to a better understanding of the various patterns of problematic gaming. The corresponding diagnostic criteria need a thorough overhaul, which should be based on a model of problematic gaming that can accommodate its disparate aspects.

  10. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    Science.gov (United States)

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Chance-constrained/stochastic linear programming model for acid rain abatement. I. Complete colinearity and noncolinearity

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J H; McBean, E A; Farquhar, G J

    1985-01-01

    A Linear Programming model is presented for development of acid rain abatement strategies in eastern North America. For a system comprised of 235 large controllable point sources and 83 uncontrolled area sources, it determines the least-cost method of reducing SO2 emissions to satisfy maximum wet sulfur deposition limits at 20 sensitive receptor locations. In this paper, the purely deterministic model is extended to a probabilistic form by incorporating the effects of meteorologic variability on the long-range pollutant transport processes. These processes are represented by source-receptor-specific transfer coefficients. Experiments for quantifying the spatial variability of transfer coefficients showed their distributions to be approximately lognormal with logarithmic standard deviations consistently about unity. Three methods of incorporating second-moment random variable uncertainty into the deterministic LP framework are described: Two-Stage Programming Under Uncertainty, Chance-Constrained Programming and Stochastic Linear Programming. A composite CCP-SLP model is developed which embodies the two-dimensional characteristics of transfer coefficient uncertainty. Two probabilistic formulations are described involving complete colinearity and complete noncolinearity for the transfer coefficient covariance-correlation structure. The completely colinear and noncolinear formulations are considered extreme bounds in a meteorologic sense and yield abatement strategies of largely didactic value. Such strategies can be characterized as having excessive costs and undesirable deposition results in the completely colinear case and absence of a clearly defined system risk level (other than expected-value) in the noncolinear formulation.
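
    In the completely colinear case a single lognormal factor scales every transfer coefficient, so the chance constraint has a simple deterministic equivalent: tighten each deposition limit by the corresponding quantile of that shared factor. A small invented instance (3 sources, 2 receptors):

        import numpy as np
        from scipy.optimize import linprog
        from scipy.stats import norm

        t = np.array([[0.4, 0.1, 0.2],                 # median transfer coefficients
                      [0.1, 0.3, 0.2]])
        e0 = np.array([100.0, 80.0, 120.0])            # current SO2 emissions
        cost = np.array([1.0, 2.0, 1.5])               # cost per unit reduction
        dep_limit = np.array([40.0, 35.0])             # wet deposition limits
        sigma, alpha = 1.0, 0.95                       # log-sd near unity, reliability

        # P(exp(sigma*Z) * (t @ e) <= limit) >= alpha  tightens each limit by the
        # alpha-quantile of the shared lognormal factor
        tight = dep_limit * np.exp(-sigma * norm.ppf(alpha))

        # variables are reductions r with 0 <= r <= e0; deposition: t @ (e0 - r) <= tight
        res = linprog(cost, A_ub=-t, b_ub=tight - t @ e0, bounds=list(zip(np.zeros(3), e0)))
        print(res.x if res.success else "infeasible at this reliability level")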

  12. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    and valley slopes within the catchment are used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.

  13. Commitment Versus Persuasion in the Three-Party Constrained Voter Model

    Science.gov (United States)

    Mobilia, Mauro

    2013-04-01

    In the framework of the three-party constrained voter model, where voters of two radical parties (A and B) interact with "centrists" (C and Cζ), we study the competition between a persuasive majority and a committed minority. In this model, A's and B's are incompatible voters that can convince centrists or be swayed by them. Here, radical voters are more persuasive than centrists, whose sub-population comprises susceptible agents C and a fraction ζ of centrist zealots Cζ. Whereas C's may adopt the opinions A and B with respective rates 1+δA and 1+δB (with δA ≥ δB > 0), Cζ's are committed individuals that always remain centrists. Furthermore, A and B voters can become (susceptible) centrists C with a rate 1. The resulting competition between commitment and persuasion is studied in the mean field limit and for a finite population on a complete graph. At the mean field level, there is a continuous transition from a coexistence phase when ζ < Δc to a phase in which centrism prevails when ζ ≥ Δc. In a finite population, consensus is reached much more slowly: when ζ < Δc, persuasive voters and centrists coexist when δA > δB, whereas all species coexist when δA = δB. When ζ ≥ Δc and the initial density of centrists is low, one finds τ ∼ ln N (when N ≫ 1). Our analytical findings are corroborated by stochastic simulations.
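
    A stochastic simulation of the kind used for corroboration can be sketched compactly; the update below discretizes the rates 1 + δA, 1 + δB and 1 into acceptance probabilities (a simplification) and uses arbitrary initial densities.

        import numpy as np
        rng = np.random.default_rng(4)

        def simulate(N=1000, zeta=0.05, dA=0.3, dB=0.1, steps=200_000):
            # states: 0 = A, 1 = B, 2 = susceptible centrist C, 3 = centrist zealot
            s = rng.choice([0, 1, 2], size=N, p=[0.4, 0.4, 0.2])
            s[rng.choice(N, int(zeta * N), replace=False)] = 3
            for _ in range(steps):
                i, j = rng.integers(N, size=2)
                if s[i] == 2:                                   # centrist meets a radical
                    if s[j] == 0 and rng.random() < (1 + dA) / 2:
                        s[i] = 0
                    elif s[j] == 1 and rng.random() < (1 + dB) / 2:
                        s[i] = 1
                elif s[i] < 2 and s[j] >= 2 and rng.random() < 0.5:
                    s[i] = 2                                    # radical swayed to centrism
            return np.bincount(s, minlength=4) / N              # zealots never change

        print(simulate())                                       # densities of A, B, C, C_zeta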

  14. Constrained structural dynamic model verification using free vehicle suspension testing methods

    Science.gov (United States)

    Blair, Mark A.; Vadlamudi, Nagarjuna

    1988-01-01

    Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or, in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads such as the 12-ton Hubble Space Telescope. The development of equations in the paper shows that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced whose minima are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.

  15. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    International Nuclear Information System (INIS)

    Dossett, Jason; Parkinson, David; Hu, Bin

    2014-01-01

    In order to explain cosmic acceleration without invoking "dark" physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy survey to probe the formation of structure on large scales, which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution for this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0, improving them by an order of magnitude and giving log10(B0) < −4.07 at the 95% confidence limit. Finally, we test whether adding the lensing amplitude A_Lens and the sum of the neutrino masses Σmν is able to reconcile current tensions in these parameters, but find f(R) gravity an inadequate explanation.

  16. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    Science.gov (United States)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m of eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With the increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regionally calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be concentrated more toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.

  17. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    Science.gov (United States)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology, which explicitly accounts for the sub-continuum anisotropy of the sea ice cover, has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling community (e.g. the CPOM stand-alone model, the RASM high-resolution regional ice-ocean model, and the Met Office fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics, with a substantial change in the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high-resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM; see Wilchinsky, 2010) have shown that individual floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing and thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use, for the first time in the context of sea ice research, a mathematical metric, the tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery, and to the DEM ice floe aggregates. By comparing these independent measurements of sea ice anisotropy, as well as its temporal evolution, against the EAP model we are able to constrain the

  18. Constraining soil C cycling with strategic, adaptive action for data and model reporting

    Science.gov (United States)

    Harden, J. W.; Swanston, C.; Hugelius, G.

    2015-12-01

    Regional to global carbon assessments include a variety of models, data sets, and conceptual structures, including strategies for representing the role and capacity of soils to sequester, release, and store carbon. Traditionally, many soil carbon data sets emerged from agricultural missions focused on mapping and classifying soils to enhance and protect production of food and fiber. More recently, soil carbon assessments have allowed for more strategic measurement to address the functional and spatially explicit role that soils play in land-atmosphere carbon exchange. While soil data sets are increasingly inter-comparable and increasingly sampled to accommodate global assessments, soils remain poorly constrained or understood with regard to their role in spatio-temporal variations in carbon exchange. A more deliberate approach to rapid improvement in our understanding involves a community-based activity that embraces both a nimble data repository and a dynamic structure for prioritization. Data input and output can be transparent and retrievable as data-derived products, while also being subjected to rigorous queries for merging and harmonization into a searchable, comprehensive, transparent database. Meanwhile, adaptive action groups can prioritize data and modeling needs that emerge through workshops, meta-data analyses or model testing. Our continual renewal of priorities should address soil processes, mechanisms, and feedbacks that significantly influence global C budgets and/or significantly impact the needs and services of regional soil resources that are impacted by C management. In order to refine the International Soil Carbon Network, we welcome suggestions for such groups to be led on topics such as, but not limited to, manipulation experiments, extreme climate events, post-disaster C management, past climate-soil interactions, and water-soil-carbon linkages. We also welcome ideas for a business model that can foster and promote idea and data sharing.

  19. A Kinematic Model of Slow Slip Constrained by Tremor-Derived Slip Histories in Cascadia

    Science.gov (United States)

    Schmidt, D. A.; Houston, H.

    2016-12-01

    We explore new ways to constrain the kinematic slip distributions of large slow slip events using constraints from tremor. Our goal is to prescribe one or more slip pulses that propagate across the fault and scale appropriately to satisfy the observations. Recent work (Houston, 2015) inferred a crude representative stress time history at an average point using the tidal stress history, the static stress drop, and the timing of the evolution of the tidal sensitivity of tremor over several days of slip. To convert a stress time history into a slip time history, we use simulations to explore the stressing history of a small locked patch due to an approaching rupture front. We assume that the locked patch releases strain through a series of tremor bursts whose activity rate is related to the stressing history. To test whether the functional form of a slip pulse is reasonable, we assume a hypothetical slip time history (an Ohnaka pulse), timed with the occurrence of tremor, to create a rupture front that propagates along the fault. The duration of the rupture front for a fault patch is constrained by the observed tremor catalog for the 2010 ETS event. The slip amplitude is scaled appropriately to match the observed surface displacements from GPS. Through a forward simulation, we evaluate the ability of the tremor-derived slip history to accurately predict the pattern of surface displacements observed by GPS. We find that the temporal progression of surface displacements is well modeled by a 2-4 day slip pulse, suggesting that some of the longer duration of slip typically found in time-dependent GPS inversions is biased by the temporal smoothing. However, at some locations on the fault, the tremor lingers beyond the passage of the slip pulse. A small percentage (5-10%) of the tremor appears to be activated ahead of the approaching slip pulse, and tremor asperities experience a driving stress on the order of 10 kPa/day. Tremor amplitude, rather than just tremor counts, is needed

  20. Supporting the search for the CEP location with nonlocal PNJL models constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Contrera, Gustavo A. [IFLP, UNLP, CONICET, Facultad de Ciencias Exactas, La Plata (Argentina); Gravitation, Astrophysics and Cosmology Group, FCAyG, UNLP, La Plata (Argentina); CONICET, Buenos Aires (Argentina); Grunfeld, A.G. [CONICET, Buenos Aires (Argentina); Comision Nacional de Energia Atomica, Departamento de Fisica, Buenos Aires (Argentina); Blaschke, David [University of Wroclaw, Institute of Theoretical Physics, Wroclaw (Poland); Joint Institute for Nuclear Research, Moscow Region (Russian Federation); National Research Nuclear University (MEPhI), Moscow (Russian Federation)

    2016-08-15

    We investigate the possible location of the critical endpoint in the QCD phase diagram based on nonlocal covariant PNJL models including a vector interaction channel. The form factors of the covariant interaction are constrained by lattice QCD data for the quark propagator. The comparison of our results for the pressure, including the pion contribution, and the scaled pressure shift ΔP/T^4 vs. T/T_c with lattice QCD results shows better agreement when Lorentzian form factors for the nonlocal interactions and the wave function renormalization are considered. The strength of the vector coupling is used as a free parameter which influences results at finite baryochemical potential. It is used to adjust the slope of the pseudocritical temperature of the chiral phase transition at low baryochemical potential and the scaled pressure shift accessible in lattice QCD simulations. Our study, albeit presently performed at the mean-field level, supports the very existence of a critical point and favors its location within a region that is accessible in experiments at the NICA accelerator complex. (orig.)

  1. CA-Markov Analysis of Constrained Coastal Urban Growth Modeling: Hua Hin Seaside City, Thailand

    Directory of Open Access Journals (Sweden)

    Rajendra Shrestha

    2013-04-01

    Thailand, a developing country in Southeast Asia, is experiencing rapid development, particularly urban growth in response to the expansion of the tourism industry. Hua Hin city provides an excellent example of an area where urbanization has flourished due to tourism. This study focuses on how the dynamic horizontal expansion of the seaside city of Hua Hin is constrained by the coast, making sustainable management and planning for this popular tourist destination, its local inhabitants, its visitors, and its sites an issue. The study examines the association between land use type and land use change by integrating Geo-Information technology, a statistical model, and CA-Markov analysis for sustainable land use planning. The study finds that land use types and land use changes from 1999 to 2008 reflect increased mobility, a trend closely tied to urban horizontal expansion. The sequence of land use change has run from forest to agriculture, from agriculture to grassland, and then to bare land and built-up areas. Coastal urban growth has, for a decade, been expanding horizontally from the downtown center along the beach to the western area around the golf course, the southern area along the beach, the southwestern grassland area, and then the northern area near the airport.
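
    The Markov half of a CA-Markov analysis reduces to a matrix-vector product once the transition matrix has been estimated from the two land use maps; the probabilities and area shares below are invented for illustration.

        import numpy as np

        classes = ["forest", "agriculture", "grassland", "bare/built-up"]
        P = np.array([[0.70, 0.20, 0.07, 0.03],        # hypothetical 1999->2008
                      [0.02, 0.65, 0.23, 0.10],        # transition probabilities
                      [0.01, 0.05, 0.60, 0.34],        # (rows sum to 1)
                      [0.00, 0.01, 0.04, 0.95]])
        area_2008 = np.array([0.35, 0.30, 0.20, 0.15]) # hypothetical area shares

        area_2017 = area_2008 @ P                      # project one more epoch forward
        print(dict(zip(classes, np.round(area_2017, 3))))
        # the CA step (not shown) would then allocate these projected quantities
        # spatially, weighting cells by neighbourhood composition and suitability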

  2. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model.

    Science.gov (United States)

    Daout, S; Barbot, S; Peltzer, G; Doin, M-P; Liu, Z; Jolivet, R

    2016-11-16

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below the metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, with 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology to estimate the tectonic loading and seismic potential of buried fault networks.

  3. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    Science.gov (United States)

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for finite element modeling of RFA with constant power. Because the electric conductivity of tissue changes with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to equal the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential, augmented with the constraint of constant power, were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work gives the finite element modeling of constant-power RFA a firm mathematical basis and opens a pathway for achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
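
    The essence of the constant-power constraint shows up already in one dimension, where it reduces to re-solving the voltage at each step so that V^2/R matches the prescribed power while the temperature-dependent conductivity evolves; geometry, material law and heating step below are hypothetical.

        import numpy as np

        L, n, P_target = 0.05, 200, 2.0e4              # m, nodes, W per unit area
        x = np.linspace(0.0, L, n)
        dx = x[1] - x[0]
        T = np.full(n, 37.0)                           # tissue temperature, degC

        def sigma(T):
            return 0.2 * (1.0 + 0.015 * (T - 37.0))    # S/m, placeholder law

        for step in range(100):
            s = sigma(T)
            R = np.sum(0.5 * (1.0 / s[:-1] + 1.0 / s[1:]) * dx)  # trapezoid resistance
            V = np.sqrt(P_target * R)                  # enforce P = V*I = V**2 / R
            J = V / R                                  # 1D current density is uniform
            T += 0.5 * (J**2 / s) / 4.0e6              # crude heating, rho*c ~ 4e6 J/m^3/K
        print(f"V = {V:.1f} V, R = {R:.3f} Ohm m^2, Tmax = {T.max():.1f} C")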

  4. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
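
    The re-weighting step can be sketched in a few lines: each member of an age-model ensemble implies a different alignment of the time-uncertain proxy, and its posterior weight is proportional to the likelihood of the proxy given neighbouring well-dated information. Signals, offsets and error scale below are synthetic.

        import numpy as np
        rng = np.random.default_rng(5)

        t = np.linspace(0.0, 6.0, 200)
        proxy = np.sin(t - 0.3) + 0.2 * rng.standard_normal(200)   # true offset 0.3

        shifts = rng.normal(0.0, 0.5, 50)              # ensemble of candidate age offsets
        log_like = np.array([-0.5 * np.sum((proxy - np.sin(t - s))**2 / 0.2**2)
                             for s in shifts])

        w = np.exp(log_like - log_like.max())
        w /= w.sum()                                   # posterior age-model weights
        print(f"effective number of age models: {1.0 / np.sum(w**2):.1f} of 50")
        print(f"weighted offset estimate: {np.sum(w * shifts):.2f}")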

  5. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    The map of the complete Bouguer anomaly of Costa Rica shows an elongated NW-SE trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central, which is built of geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly and characterization of the fluid pathways and reservoirs of arc magmatism, a constrained 3D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology and previously published seismic tomography and P-wave velocity models, which stem from wide-angle seismic refraction, as well as results from methods of direct interpretation of the gravity field obtained for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.

  6. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including the ability to bear heavy loads and to provide redundant force measurement information. These advantages make the sensor valuable for important applications in the field of aerospace (e.g., space docking tests). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor, considering the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. A prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)

  7. Globally COnstrained Local Function Approximation via Hierarchical Modelling, a Framework for System Modelling under Partial Information

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman

    2000-01-01

    Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation. This paper presents a new approach for system modelling under partial (global) information (the so-called grey-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations, global constraints are estimated simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under a prior linear time invariant structure, where local regression fails as a result of high dimensionality.

  8. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real-time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve performance superior to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.

  9. Constraining groundwater flow model with geochemistry in the FUA and Cabril sites. Use in the ENRESA 2000 PA exercise

    International Nuclear Information System (INIS)

    Samper, J.; Carrera, J.; Bajos, C.; Astudillo, J.; Santiago, J.L.

    1999-01-01

    Hydrogeochemical activities have been a key factor in the verification and constraining of the groundwater flow models developed for the safety assessment of the FUA uranium mill tailings restoration and the Cabril L/ILW disposal facility. The lessons learned at both sites will be applied to the groundwater transport modelling in the current PA exercise (ENRESA 2000). The groundwater flow model for the Cabril site, which represents a low-permeability fractured medium, was developed using the TRANSIN code series developed by UPC-ENRESA. The hydrogeochemical data obtained from systematic yearly sampling and analysis campaigns were successfully applied to distinguish between local and regional flow and between young and old groundwater. The salinity content, mainly the chloride content, was the most critical hydrogeochemical data for constraining the groundwater flow model. (author)

  10. Estimation of p,p'-DDT degradation in soil by modeling and constraining hydrological and biogeochemical controls.

    Science.gov (United States)

    Sanka, Ondrej; Kalina, Jiri; Lin, Yan; Deutscher, Jan; Futter, Martyn; Butterfield, Dan; Melymuk, Lisa; Brabec, Karel; Nizzetto, Luca

    2018-08-01

    Despite not being used for decades in most countries, DDT remains ubiquitous in soils due to its persistence and intense past usage, and because of this it is still a pollutant of high global concern. Assessing the long-term dissipation of DDT from this reservoir is fundamental to understanding future environmental and human exposure. Despite a large research effort, key properties controlling its fate in soil (in particular, the degradation half-life τ_soil) are far from being fully quantified. This paper describes a case study in a large central European catchment where hundreds of measurements of p,p'-DDT concentrations in air, soil, river water and sediment are available for the last two decades. The goal was to deliver an integrated estimate of τ_soil by constraining a state-of-the-art hydrobiogeochemical multimedia fate model of the catchment against the full body of empirical data available for this area. The INCA-Contaminants model was used for this scope. Good predictive performance against an (external) dataset of water and sediment concentrations was achieved with partitioning properties taken from the literature and τ_soil estimates obtained by forcing the model against empirical historical data of p,p'-DDT in the catchment's multiple compartments. This approach allowed estimation of p,p'-DDT degradation in soil after taking adequate consideration of losses due to runoff and volatilization. Estimated τ_soil ranged over 3000-3800 days. Degradation was the most important loss process, accounting on a yearly basis for more than 90% of the total dissipation. The total dissipation flux from the catchment soils was one order of magnitude higher than the total current atmospheric input estimated from atmospheric concentrations, suggesting that the bulk of p,p'-DDT currently being remobilized or lost is essentially that accumulated over two decades ago. Copyright © 2018 Elsevier Ltd. All rights reserved.
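
    Reading τ_soil as a first-order half-life makes the implied persistence easy to work out; a small check using the paper's reported 3000-3800 day range:

        import numpy as np

        for tau_days in (3000.0, 3800.0):              # degradation half-life
            k = np.log(2.0) / tau_days                 # first-order rate constant, 1/day
            for years in (10, 20, 50):
                frac = np.exp(-k * years * 365.25)
                print(f"tau = {tau_days:.0f} d: {frac:.0%} remains after {years} y")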

  11. Systematic Constraint Selection Strategy for Rate-Controlled Constrained-Equilibrium Modeling of Complex Nonequilibrium Chemical Kinetics

    Science.gov (United States)

    Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad

    2018-04-01

    Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracy with far fewer differential equations than the fully Detailed Kinetic Model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification procedure has recently been proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call the overall degree of disequilibrium (ODoD), because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields that reaction's degree of disequilibrium (DoD). The ODoD vector evolves in the (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the C largest singular values of the SVD and setting all the others to zero, we obtain the best rank-C approximation of the matrix of ODoD traces, whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
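
    The rank-C truncation step is a standard SVD computation. The sketch below uses a random stand-in for the ODoD trace matrix (a real one would have a rapidly decaying spectrum) and an assumed 1% singular-value cutoff; both are illustrative choices, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        D = rng.standard_normal((40, 200))           # stand-in for the (S-E) x T ODoD trace matrix

        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        C = int((s > 0.01 * s[0]).sum())             # keep singular values above 1% of the largest
        D_C = U[:, :C] @ np.diag(s[:C]) @ Vt[:C, :]  # best rank-C approximation (Eckart-Young)

        # the columns of U[:, :C] span the constraint subspace; projecting the ODoD
        # vector onto them gives the C constraint potentials
        print(C, np.linalg.norm(D - D_C) / np.linalg.norm(D))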

  12. Constraining supersymmetric models using Higgs physics, precision observables and direct searches

    International Nuclear Information System (INIS)

    Zeune, Lisa

    2014-08-01

    We present various complementary possibilities to exploit experimental measurements in order to test and constrain supersymmetric (SUSY) models. Direct searches for SUSY particles have not resulted in any signal so far, and limits on the SUSY parameter space have been set. Measurements of the properties of the observed Higgs boson at ≈126 GeV as well as of the W boson mass (M_W) can provide valuable indirect constraints, supplementing the ones from direct searches. This thesis is divided into three major parts: In the first part we present the currently most precise prediction for M_W in the Minimal Supersymmetric Standard Model (MSSM) with complex parameters and in the Next-to-Minimal Supersymmetric Standard Model (NMSSM). The evaluation includes the full one-loop result and all relevant available higher-order corrections of Standard Model (SM) and SUSY type. We perform a detailed scan over the MSSM parameter space, taking into account the latest experimental results, including the observation of a Higgs signal. We find that the current measurements of M_W and the top quark mass (m_t) slightly favour a non-zero SUSY contribution. The impact of different SUSY sectors on the prediction of M_W as well as the size of the higher-order SUSY corrections are analysed both in the MSSM and the NMSSM. We investigate the genuine NMSSM contribution from the extended Higgs and neutralino sectors and highlight differences between the M_W predictions in the two SUSY models. In the second part of the thesis we discuss possible interpretations of the observed Higgs signal in SUSY models. The properties of the observed Higgs boson are compatible with the SM so far, but many other interpretations are also possible. Performing scans over the relevant parts of the MSSM and the NMSSM parameter spaces and applying relevant constraints from Higgs searches, flavour physics and electroweak measurements, we find that a Higgs boson at ≈126 GeV, which decays into two photons, can in

  13. Stochastic risk-constrained short-term scheduling of industrial cogeneration systems in the presence of demand response programs

    International Nuclear Information System (INIS)

    Alipour, Manijeh; Mohammadi-Ivatloo, Behnam; Zare, Kazem

    2014-01-01

    Highlights: • The short-term self-scheduling problem of customers with CHP units is addressed. • Power demand and pool prices are forecasted using ARIMA models. • Risk management is carried out by implementing the CVaR methodology. • The demand response program is incorporated into the self-scheduling problem of CHP units. • The non-convex feasible operating region of different types of CHP units is modeled. - Abstract: This paper presents a stochastic programming framework for solving the scheduling problem faced by an industrial customer with cogeneration facilities, a conventional power production system, and heat-only units. The power and heat demands of the customer are supplied considering demand response (DR) programs. In the proposed DR program, the responsive load can vary in different time intervals. The paper takes into account the heat-power dual dependency characteristic of different types of CHP units. In addition, a heat buffer tank with the ability to store heat has been incorporated into the proposed framework. The impact of market and load uncertainties on the scheduling problem is characterized through a stochastic programming formulation. The autoregressive integrated moving average (ARIMA) technique is used to generate the electricity price and customer demand scenarios, and the daily and weekly seasonalities of demand and market prices are taken into account in the scenario generation procedure. The conditional value-at-risk (CVaR) methodology is implemented in order to limit the risk of the expected profit due to market price and load forecast volatilities.
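
    Scenario generation of this kind is easy to prototype. The sketch below uses a toy AR(1)-plus-daily-seasonality process as a simplified stand-in for the paper's ARIMA models; all numbers and names are hypothetical.

        import numpy as np

        def price_scenarios(last_price, mu, phi, sigma, season, n_scen=10, horizon=24, seed=2):
            # Monte Carlo price paths: AR(1) reversion toward mu plus an additive daily shape
            rng = np.random.default_rng(seed)
            scen = np.empty((n_scen, horizon))
            for k in range(n_scen):
                p = last_price
                for t in range(horizon):
                    p = mu + phi * (p - mu) + sigma * rng.standard_normal()
                    scen[k, t] = p + season[t % 24]
            return scen

        season = 5.0 * np.sin(2 * np.pi * np.arange(24) / 24)   # hypothetical daily shape
        s = price_scenarios(last_price=50.0, mu=48.0, phi=0.8, sigma=3.0, season=season)
        print(s.shape, s.mean())        # these scenario paths would feed the CVaR stage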

  14. Time-constrained mother and expanding market: emerging model of under-nutrition in India

    Directory of Open Access Journals (Sweden)

    S. Chaturvedi

    2016-07-01

    Background: Persistent high levels of under-nutrition in India despite economic growth continue to challenge political leadership and policy makers at the highest level. The present inductive enquiry was conducted to map the perceptions of mothers and other key stakeholders, to identify emerging drivers of childhood under-nutrition. Methods: We conducted a multi-centric qualitative investigation in six empowered action group states of India. The study sample included 509 in-depth interviews with mothers of undernourished and normally nourished children, policy makers, district-level managers, implementers and facilitators. Sixty-six focus group discussions and 72 non-formal interactions were conducted in two rounds with primary caretakers of undernourished children, Anganwadi Workers and Auxiliary Nurse Midwives. Results: Based on the perceptions of the mothers and other key stakeholders, a model evolved inductively showing core themes as drivers of under-nutrition. The most forceful emerging themes were: the multitasking, time-constrained mother with dwindling family support; fragile food security or seasonal food paucity; a child-targeted market with wide availability and consumption of ready-to-eat market food items; rising non-food expenditure in the context of rising food prices; inadequate and inappropriate feeding; delayed recognition of under-nutrition and delayed care seeking; and inadequate responsiveness of the health care system and Integrated Child Development Services (ICDS). The study emphasized that the persistence of child malnutrition in India is also tied closely to the high workload and consequent time constraint of mothers who are increasingly pursuing income-generating activities and enrolled in the paid labour force, without robust institutional support for childcare. Conclusion: The emerging framework needs to be further tested through mixed and multiple method research approaches to quantify the contribution of time limitation of

  15. Robust and Efficient Constrained DFT Molecular Dynamics Approach for Biochemical Modeling

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Levy, B.; Demachy, I.; de la Lande, A.

    2012-01-01

    Vol. 8, No. 2 (2012), pp. 418-427. ISSN 1549-9618. Institutional research plan: CEZ:AV0Z40550506. Keywords: constrained density functional theory * electron transfer * density fitting. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 5.389, year: 2012

  16. GRACE gravity data help constraining seismic models of the 2004 Sumatran earthquake

    Science.gov (United States)

    Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.

    2011-10-01

    The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Using compressible self-gravitating Earth models, which include sea-level feedback in a new self-consistent way and are designed to compute gravitational perturbations due to volume changes separately, we are able to prove that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, the gravitational effect of which is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, namely the peak-to-peak gravity anomaly and an asymmetry coefficient given by the ratio of the negative gravity anomaly to the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid moment tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle. Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided

  17. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in
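
    The infinite-medium time limit mentioned above admits a quick back-of-the-envelope check: a matrix block with penetration depth d behaves as an infinite diffusive medium only while t is well below roughly d^2/D_e. The numbers below are hypothetical, chosen only to show how the same block can look infinite at the tracer-test scale yet finite at the PA scale.

        import numpy as np

        D_e   = 1.0e-12                    # hypothetical effective diffusion coefficient (m^2/s)
        depth = np.array([0.01, 0.10])     # assumed finite matrix penetration depths (m)

        t_char = depth**2 / D_e / (3600 * 24 * 365.25)   # characteristic diffusion times (years)
        print(t_char)   # ~3 and ~320 years: "infinite" over a days-long test, finite over 1e5 years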

  18. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the conflicting training data sets, while simultaneously including parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open
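
    JuPOETs itself is a Julia package; the toy Python sketch below only illustrates the underlying idea of coupling simulated annealing with Pareto dominance to collect an ensemble, and makes no attempt to reproduce the actual JuPOETs algorithm or API.

        import numpy as np

        def dominates(a, b):
            # True if objective vector a Pareto-dominates b (minimization)
            return np.all(a <= b) and np.any(a < b)

        def pareto_sa(objectives, x0, steps=2000, T0=1.0, seed=3):
            rng = np.random.default_rng(seed)
            x, fx = x0.copy(), objectives(x0)
            archive = [(x.copy(), fx)]          # running set of non-dominated points
            for k in range(steps):
                T = T0 * (1.0 - k / steps) + 1e-6
                cand = x + 0.1 * rng.standard_normal(x.size)
                fc = objectives(cand)
                # accept dominating moves always, others with a temperature-dependent probability
                if dominates(fc, fx) or rng.random() < np.exp(-np.linalg.norm(fc - fx) / T):
                    x, fx = cand, fc
                    if not any(dominates(fa, fc) for _, fa in archive):
                        archive = [(xa, fa) for xa, fa in archive if not dominates(fc, fa)]
                        archive.append((x.copy(), fx))
            return archive

        # two conflicting objectives, as when fitting two inconsistent data sets at once
        obj = lambda p: np.array([(p[0] - 1.0)**2 + p[1]**2, (p[0] + 1.0)**2 + p[1]**2])
        print(len(pareto_sa(obj, np.zeros(2))))   # size of the recovered ensemble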

  19. Modeling Studies to Constrain Fluid and Gas Migration Associated with Hydraulic Fracturing Operations

    Science.gov (United States)

    Rajaram, H.; Birdsell, D.; Lackey, G.; Karra, S.; Viswanathan, H. S.; Dempsey, D.

    2015-12-01

    The dramatic increase in the extraction of unconventional oil and gas resources using horizontal wells and hydraulic fracturing (fracking) technologies has raised concerns about potential environmental impacts. Large volumes of hydraulic fracturing fluids are injected during fracking. Incidents of stray gas occurrence in shallow aquifers overlying shale gas reservoirs have been reported; whether these are in any way related to fracking continues to be debated. Computational models serve as useful tools for evaluating potential environmental impacts. We present modeling studies of hydraulic fracturing fluid and gas migration during the various stages of well operation, production, and subsequent plugging. The fluid migration models account for overpressure in the gas reservoir, density contrast between injected fluids and brine, imbibition into partially saturated shale, and well operations. Our results highlight the importance of representing the different stages of well operation consistently. Most importantly, well suction and imbibition both play a significant role in limiting upward migration of injected fluids, even in the presence of permeable connecting pathways. In an overall assessment, our fluid migration simulations suggest very low risk to groundwater aquifers when the vertical separation from a shale gas reservoir is of the order of 1000' or more. Multi-phase models of gas migration were developed to couple flow and transport in compromised wellbores and subsurface formations. These models are useful for evaluating both short-term and long-term scenarios of stray methane release. We present simulation results to evaluate mechanisms controlling stray gas migration, and explore relationships between bradenhead pressures and the likelihood of methane release and transport.

  20. Integrating satellite retrieved leaf chlorophyll into land surface models for constraining simulations of water and carbon fluxes

    KAUST Repository

    Houborg, Rasmus

    2013-07-01

    In terrestrial biosphere models, key biochemical controls on carbon uptake by vegetation canopies are typically assigned fixed literature-based values for broad categories of vegetation types, although in reality significant spatial and temporal variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite-based retrievals of leaf chlorophyll for estimating leaf photosynthetic capacity and for constraining model simulations of water and carbon fluxes.

  1. Constraining local 3-D models of the saturated-zone, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Barr, G.E.; Shannon, S.A.

    1994-01-01

    A qualitative three-dimensional analysis of the saturated-zone flow system was performed for an 8 km x 8 km region including the potential Yucca Mountain repository site. Certain recognized geologic features of unknown hydraulic properties were introduced to assess the general response of the flow field to these features. Two of these features, the Solitario Canyon fault and the proposed fault in Drill Hole Wash, appear to constrain flow and allow calibration

  2. Constraining the dynamics of the water budget at high spatial resolution in the world's water towers using models and remote sensing data; Snake River Basin, USA

    Science.gov (United States)

    Watson, K. A.; Masarik, M. T.; Flores, A. N.

    2016-12-01

    Mountainous, snow-dominated basins are often referred to as the water towers of the world because they store precipitation in seasonal snowpacks, which gradually melt and provide water supplies to downstream communities. Yet significant uncertainties remain in terms of quantifying the stores and fluxes of water in these regions as well as the associated energy exchanges. Constraining these stores and fluxes is crucial for advancing process understanding and managing these water resources in a changing climate. Remote sensing data are particularly important to these efforts due to the remoteness of these landscapes and high spatial variability in water budget components. We have developed a high resolution regional climate dataset extending from 1986 to the present for the Snake River Basin in the northwestern USA. The Snake River Basin is the largest tributary of the Columbia River by volume and a critically important basin for regional economies and communities. The core of the dataset was developed using a regional climate model, forced by reanalysis data. Specifically the Weather Research and Forecasting (WRF) model was used to dynamically downscale the North American Regional Reanalysis (NARR) over the region at 3 km horizontal resolution for the period of interest. A suite of satellite remote sensing products provide independent, albeit uncertain, constraint on a number of components of the water and energy budgets for the region across a range of spatial and temporal scales. For example, GRACE data are used to constrain basinwide terrestrial water storage and MODIS products are used to constrain the spatial and temporal evolution of evapotranspiration and snow cover. The joint use of both models and remote sensing products allows for both better understanding of water cycle dynamics and associated hydrometeorologic processes, and identification of limitations in both the remote sensing products and regional climate simulations.

  3. Constraining Silicate Weathering Processes in an Active Volcanic Complex: Implications for the Long-term Carbon Cycle

    Science.gov (United States)

    Washington, K.; West, A. J.; Hartmann, J.; Amann, T.; Hosono, T.; Ide, K.

    2017-12-01

    While the analysis of geochemical archives and carbon cycle modelling can further our understanding of the role of silicate weathering as a sink in the long-term carbon cycle, it is necessary to study modern weathering processes to inform these efforts. A recent compilation of data from rivers draining basaltic catchments estimates that rock weathering in active volcanic fields (AVFs) consumes atmospheric CO2 approximately three times faster than in inactive volcanic fields (IVFs), suggesting that the eruption and subsequent weathering of large igneous provinces likely played a major role in the carbon cycle in the geologic past [1]. The study demonstrates a significant correlation between catchment mean annual temperature (MAT) and atmospheric CO2 consumption rate for IVFs. However, CO2 consumption due to weathering of AVFs is not correlated with MAT, as the relationship is complicated by variability in hydrothermal fluxes, reactive surface area, and groundwater flow paths. To investigate the controls on weathering processes in AVFs, we present data for dissolved and solid weathering products from Mount Aso Caldera, Japan. Aso Caldera is an ideal site for studying how the chemistry of rivers draining an AVF is impacted by high-temperature water/rock interactions, volcanic ash weathering, and varied groundwater flow paths and residence times. Samples were collected over five field seasons from two rivers and their tributaries, cold groundwater springs, and thermal springs. These samples capture the region's temperature and precipitation seasonality. Solid samples of unaltered volcanic rocks, hydrothermally-altered materials, volcanic ash, a soil profile, and suspended and bedload river sediments were also collected. The hydrochemistry of the dissolved phases was analyzed at the University of Hamburg, while the mineralogy and geochemical compositions of the solid phases were analyzed at the Natural History Museum of Los Angeles. This work will be discussed in the context of

  4. Bone architecture adaptations after spinal cord injury: impact of long-term vibration of a constrained lower limb.

    Science.gov (United States)

    Dudley-Javoroski, S; Petrie, M A; McHenry, C L; Amelon, R E; Saha, P K; Shields, R K

    2016-03-01

    This study examined the effect of a controlled dose of vibration upon bone density and architecture in people with spinal cord injury (who eventually develop severe osteoporosis). Very sensitive computed tomography (CT) imaging revealed no effect of vibration after 12 months, but other doses of vibration may still be worth testing. The purposes of this report were to determine the effect of a controlled dose of vibratory mechanical input upon individual trabecular bone regions in people with chronic spinal cord injury (SCI) and to examine the longitudinal bone architecture changes in both the acute and the chronic state of SCI. Participants with SCI received unilateral vibration of the constrained lower limb segment while sitting in a wheelchair (0.6g, 30 Hz, 20 min, three times weekly). The opposite limb served as a control. Bone mineral density (BMD) and trabecular micro-architecture were measured with high-resolution multi-detector CT. For comparison, one participant was studied from the acute (0.14 year) to the chronic state (2.7 years). Twelve months of vibration training did not yield adaptations of BMD or trabecular micro-architecture for the distal tibia or the distal femur. BMD and trabecular network length continued to decline at several distal femur sub-regions, contrary to previous reports suggesting a "steady state" of bone in chronic SCI. In the participant followed from acute to chronic SCI, BMD and architecture decline varied systematically across different anatomical segments of the tibia and femur. This study indicates that vibration training, at this dose, is not an effective anti-osteoporosis intervention for people with chronic SCI. Using a high-spatial-resolution CT methodology and segmental analysis, we illustrate novel longitudinal changes in bone that occur after spinal cord injury.

  5. Baby Skyrme models without a potential term

    Science.gov (United States)

    Ashcroft, Jennifer; Haberichter, Mareike; Krusch, Steffen

    2015-05-01

    We develop a one-parameter family of static baby Skyrme models that do not require a potential term to admit topological solitons. This is a novel property, as the standard baby Skyrme model must contain a potential term in order to have stable soliton solutions, although the Skyrme model does not require this. Our new models satisfy an energy bound that is linear in the topological charge and can be saturated in an extreme limit. They also satisfy a virial theorem that is shared by the Skyrme model. We calculate the solitons of our new models numerically and observe that their form depends significantly on the choice of parameter. In one extreme we find compactons, while at the other there is a scale-invariant model in which solitons can be obtained exactly as solutions to a Bogomolny equation. We provide an initial investigation into these solitons and compare them with the baby Skyrmions of other models.

  6. Analysis of the Spatial Variation of Network-Constrained Phenomena Represented by a Link Attribute Using a Hierarchical Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-02-01

    The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute, in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs) for leisure activities. The methods, named local indicators of network-constrained clusters (LINCS), are then applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of the random variability resulting from small-sample estimation. The effects of spatial dependence and of the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.

  7. Modeling of Passive Constrained Layer Damping as Applied to a Gun Tube

    Directory of Open Access Journals (Sweden)

    Margaret Z. Kiehl

    2001-01-01

    We study the damping effect of a constrained viscoelastic polymer wrap on the terrain-induced vibrations of a cantilever beam system consisting of a gun tube. A time-domain solution for the forced motion of this system is developed using the GHM (Golla-Hughes-McTavish) method to incorporate the viscoelastic properties of the polymer. An impulse load is applied at the free end and the tip deflection of the cantilevered beam system is determined. The resulting GHM equations are then solved in MATLAB by transformation to the state-space domain.
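
    The final solution step (state-space transformation plus time integration) can be reproduced for a single-mode stand-in of the beam. The sketch below, in Python rather than MATLAB, omits the extra GHM dissipation coordinates that the paper appends to the state vector; the mass, damping, and stiffness values are hypothetical.

        import numpy as np
        from scipy import signal

        # 1-DOF stand-in for the wrapped-tube modal equation  m q'' + c q' + k q = f(t)
        m, c, k = 1.0, 0.05, 40.0
        A = np.array([[0.0, 1.0], [-k / m, -c / m]])   # state x = [q, q']
        B = np.array([[0.0], [1.0 / m]])
        C = np.array([[1.0, 0.0]])                     # observe tip deflection q
        D = np.array([[0.0]])

        t, y = signal.impulse((A, B, C, D), T=np.linspace(0, 20, 2000))
        print(float(y.max()))                          # peak tip deflection after the impulse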

  8. Nonfragile Robust Model Predictive Control for Uncertain Constrained Systems with Time-Delay Compensation

    Directory of Open Access Journals (Sweden)

    Wei Jiang

    2016-01-01

    This study investigates the problem of asymptotic stabilization for a class of discrete-time linear uncertain time-delayed systems with input constraints. The parametric uncertainty is assumed to be structured, and the delay is assumed to be known. Within the framework of Lyapunov stability theory, two synthesis schemes for designing nonfragile robust model predictive control (RMPC) with time-delay compensation are put forward, in which additive and multiplicative gain perturbations are, respectively, considered. First, by designing appropriate Lyapunov-Krasovskii (L-K) functionals, the robust performance index is formulated as an optimization problem that minimizes an upper bound on the infinite-horizon cost function. Then, to guarantee closed-loop stability, sufficient conditions for the existence of the desired nonfragile RMPC are obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approaches.
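
    The LMI machinery behind such synthesis conditions can be exercised with any SDP modeler. The sketch below solves only the basic discrete-time Lyapunov feasibility problem (find P > 0 with A'PA - P < 0), not the paper's actual nonfragile RMPC conditions; the plant matrix is made up.

        import cvxpy as cp
        import numpy as np

        A = np.array([[0.9, 0.2],
                      [0.0, 0.7]])           # hypothetical stable discrete-time plant
        n = A.shape[0]

        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        cons = [P >> eps * np.eye(n),                       # P positive definite
                A.T @ P @ A - P << -eps * np.eye(n)]        # Lyapunov decrease condition
        cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
        print(P.value)                       # any feasible P certifies asymptotic stability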

  9. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/-1.6)% of the IMF explodes as core-collapse supernovae (CC-SNe), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10^3 M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  10. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed with a log-normal distribution rather than a general normal distribution, so that possible deviations in the solutions caused by unrealistic distributional assumptions are avoided. Agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show the characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
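
    The key modeling step, turning a chance constraint with a log-normal random right-hand side into a deterministic equivalent, reduces to a quantile evaluation: if ln(capacity) ~ N(mu, sigma^2), then P(usage <= capacity) >= alpha is equivalent to usage <= exp(mu + sigma * Phi^(-1)(1 - alpha)). The parameters below are hypothetical and unrelated to the Erhai Lake case study.

        import numpy as np
        from scipy.stats import norm

        mu, sigma, alpha = np.log(100.0), 0.3, 0.95   # hypothetical log-normal capacity, 95% reliability

        usage_bound = np.exp(mu + sigma * norm.ppf(1 - alpha))
        print(usage_bound)    # ~61: tighter than the median capacity exp(mu) = 100

    Raising alpha (more reliability) shrinks the admissible usage, which is exactly the economy-reliability trade-off explored by the interval solutions.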

  11. Virtual Models of Long-Term Care

    Science.gov (United States)

    Phenice, Lillian A.; Griffore, Robert J.

    2012-01-01

    Nursing homes, assisted living facilities and home-care organizations, use web sites to describe their services to potential consumers. This virtual ethnographic study developed models representing how potential consumers may understand this information using data from web sites of 69 long-term-care providers. The content of long-term-care web…

  12. A Geometrically-Constrained Mathematical Model of Mammary Gland Ductal Elongation Reveals Novel Cellular Dynamics within the Terminal End Bud.

    Directory of Open Access Journals (Sweden)

    Ingrid Paine

    2016-04-01

    Mathematics is often used to model biological systems. In mammary gland development, mathematical modeling has been limited to acinar and branching morphogenesis and breast cancer, without reference to normal duct formation. We present a model of ductal elongation that exploits the geometrically-constrained shape of the terminal end bud (TEB), the growing tip of the duct, and incorporates morphometrics and region-specific proliferation and apoptosis rates. Iterative model refinement and behavior analysis, compared with biological data, indicated that the traditional metrics of nipple-to-ductal-front distance, or percent of fat pad filled, used to evaluate ductal elongation rate can be misleading, as they disregard branching events that can reduce their magnitude. Further, model-driven investigations of the fates of specific TEB cell types confirmed migration of cap cells into the body cell layer, but showed their subsequent preferential elimination by apoptosis, thus minimizing their contribution to the luminal lineage and the mature duct.

  13. Modeling of thin-walled structures interacting with acoustic media as constrained two-dimensional continua

    Science.gov (United States)

    Rabinskiy, L. N.; Zhavoronok, S. I.

    2018-04-01

    The transient interaction of acoustic media and elastic shells is considered on the basis of the transition function approach. The three-dimensional hyperbolic initial boundary-value problem is reduced to a two-dimensional problem of shell theory with integral operators approximating the acoustic medium's effect on the shell dynamics. The kernels of these integral operators are determined by the elementary solution of the problem of acoustic wave diffraction at a rigid obstacle with the same boundary shape as the wetted shell surface. The closed-form elementary solution for arbitrary convex obstacles can be obtained at the initial interaction stages on the basis of the so-called "thin layer hypothesis". Thus, the shell-wave interaction model, defined by integro-differential dynamic equations with analytically determined kernels of the integral operators, becomes two-dimensional but nonlocal in time. On the other hand, the initial interaction stage results in localized dynamic loadings and consequently in complex strain and stress states that require higher-order shell theories. Here a modified theory of I. N. Vekua - A. A. Amosov type is formulated in terms of analytical continuum dynamics. The shell model is constructed on a two-dimensional manifold within a set of field variables, a Lagrangian density, and constraint equations following from the boundary conditions "shifted" from the shell faces to its base surface. Such an approach allows one to construct consistent low-order shell models within a unified formal hierarchy. The equations of the Nth-order shell theory are singularly perturbed and contain second-order partial derivatives with respect to time and the surface coordinates, whereas the numerical integration of systems of first-order equations is more efficient. Such systems can be obtained as Hamilton-de Donder-Weyl-type equations for the Lagrangian dynamical system. The Hamiltonian formulation of the elementary Nth-order shell theory is

  14. Evolution in totally constrained models: Schrödinger vs. Heisenberg pictures

    Science.gov (United States)

    Olmedo, Javier

    2016-06-01

    We study the relation between two evolution pictures that are currently considered for totally constrained theories. Both descriptions are based on Rovelli’s evolving constants approach, where one identifies a (possibly local) degree of freedom of the system as an internal time. This method is well understood classically in several situations. The purpose of this paper is to further analyze this approach at the quantum level. Concretely, we will compare the (Schrödinger-like) picture where the physical states evolve in time with the (Heisenberg-like) picture in which one defines parametrized observables (or evolving constants of the motion). We will show that in the particular situations considered in this paper (the parametrized relativistic particle and a spatially flat homogeneous and isotropic spacetime coupled to a massless scalar field) both descriptions are equivalent. We will finally comment on possible issues and on the genericness of the equivalence between both pictures.

  15. Exploring the biological consequences of conformational changes in aspartame models containing constrained analogues of phenylalanine.

    Science.gov (United States)

    Mollica, Adriano; Mirzaie, Sako; Costante, Roberto; Carradori, Simone; Macedonio, Giorgia; Stefanucci, Azzurra; Dvoracsko, Szabolcs; Novellino, Ettore

    2016-12-01

    The dipeptide aspartame (Asp-Phe-OMe) is a sweetener widely used by the food industry in replacement of sucrose. 2',6'-Dimethyltyrosine (DMT) and 2',6'-dimethylphenylalanine (DMP) are two synthetic constrained phenylalanine analogues with limited freedom in χ-space due to the presence of methyl groups at positions 2' and 6' of the aromatic ring. These residues have been shown to increase the activity of opioid peptides such as endomorphins, improving their binding to the opioid receptors. In this work, DMT and DMP have been synthesized following a diketopiperazine-mediated route, and the corresponding aspartame derivatives (Asp-DMT-OMe and Asp-DMP-OMe) have been evaluated in vivo and in silico for their activity as synthetic sweeteners.

  16. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  17. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated for the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)

  18. Constraining the Physics of AM Canum Venaticorum Systems with the Accretion Disk Instability Model

    Science.gov (United States)

    Cannizzo, John K.; Nelemans, Gijs

    2015-01-01

    Recent work by Levitan et al. has expanded the long-term photometric database for AM CVn stars. In particular, their outburst properties are well correlated with orbital period and allow constraints to be placed on the secular mass transfer rate between secondary and primary if one adopts the disk instability model for the outbursts. We use the observed range of outbursting behavior for AM CVn systems as a function of orbital period to place a constraint on mass transfer rate versus orbital period. We infer a rate of approximately 5 x 10^(-9) M⊙ yr^(-1) (P_orb/1000 s)^(-5.2). We show that the functional form so obtained is consistent with the recurrence time-orbital period relation found by Levitan et al. using a simple theory for the recurrence time. Also, we predict that their steep dependence of outburst duration on orbital period will flatten considerably once the longer orbital period systems have more complete observations.
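
    For a sense of scale, the inferred relation can be evaluated directly (numbers taken from the abstract; the steep negative exponent makes short-period systems transfer mass orders of magnitude faster):

        # Mdot ~ 5e-9 Msun/yr * (P_orb / 1000 s)^(-5.2)
        for P_orb in (600.0, 1000.0, 2000.0):          # orbital periods in seconds
            mdot = 5e-9 * (P_orb / 1000.0) ** -5.2     # solar masses per year
            print(P_orb, mdot)                         # ~7e-8, 5e-9, ~1.4e-10 Msun/yr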

  19. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in the means of communication, and rectified in terms of the possibilities offered in the interface. How do these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problem, we examine the creative communication on an open-ended task in a highly constrained setting, a design game. Via an experiment, the relation between communicative constraints and participants’ perception of dialogue and creativity is examined. Four rounds with students preparing to form semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except

  20. Discrete choice models with multiplicative error terms

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as the sum of an index V depending on observables and an independent random term ε. In general, the universe of RUM-consistent models is much larger, even fixing some specification of V, due

  1. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete-time setting. The continuous-time setting can be seen as a special case of the discrete-time one. Recursive closed-form solutions for zero-coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time-dependent without scuppering such tractability. Model estimation does not require a r

  2. Estimation of microbial respiration rates in groundwater by geochemical modeling constrained with stable isotopes

    International Nuclear Information System (INIS)

    Murphy, E.M.

    1998-01-01

    Changes in geochemistry and stable isotopes along a well-established groundwater flow path were used to estimate in situ microbial respiration rates in the Middendorf aquifer in the southeastern United States. Respiration rates were determined for individual terminal electron acceptors including O₂, MnO₂, Fe³⁺, and SO₄²⁻. The extent of biotic reactions was constrained by the fractionation of the stable isotopes of carbon and sulfur. Sulfur isotopes and the presence of sulfur-oxidizing microorganisms indicated that sulfate is produced through the oxidation of reduced sulfur species in the aquifer and not by the dissolution of gypsum, as previously reported. The respiration rates varied along the flow path as the groundwater transitioned from primarily oxic to anoxic conditions. Iron-reducing microorganisms were the largest contributors to the oxidation of organic matter along the portion of the groundwater flow path investigated in this study. The transition zone between oxic and anoxic groundwater contained a wide range of terminal electron acceptors and showed the greatest diversity and numbers of culturable microorganisms and the highest respiration rates. A comparison of respiration rates measured from core samples and pumped groundwater suggests that variability in respiration rates may often reflect the measurement scales, both in the sample volume and in the time-frame over which the respiration measurement is averaged. Chemical heterogeneity may create a wide range of respiration rates when the scale of the observation is below the scale of the heterogeneity

  3. Nonlinear model dynamics for closed-system, constrained, maximal-entropy-generation relaxation by energy redistribution

    International Nuclear Information System (INIS)

    Beretta, Gian Paolo

    2006-01-01

    We discuss a nonlinear model for relaxation by energy redistribution within an isolated, closed system composed of noninteracting identical particles with energy levels e_i, i = 1, 2, ..., N. The time-dependent occupation probabilities p_i(t) are assumed to obey the nonlinear rate equations τ dp_i/dt = -p_i ln p_i - α(t) p_i - β(t) e_i p_i, where α(t) and β(t) are functionals of the p_i(t) that maintain invariant the mean energy E = Σ_{i=1}^N e_i p_i(t) and the normalization condition 1 = Σ_{i=1}^N p_i(t). The entropy S(t) = -k_B Σ_{i=1}^N p_i(t) ln p_i(t) is a nondecreasing function of time until the initially nonzero occupation probabilities reach a Boltzmann-like canonical distribution over the occupied energy eigenstates. Initially zero occupation probabilities, instead, remain zero at all times. The solutions p_i(t) of the rate equations are unique and well defined for arbitrary initial conditions p_i(0) and for all times. The existence and uniqueness both forward and backward in time allows the reconstruction of the ancestral or primordial lowest-entropy state. By casting the rate equations in terms not of the p_i but of their positive square roots √p_i, they unfold from the assumption that time evolution is at all times along the local direction of steepest entropy ascent or, equivalently, of maximal entropy generation. These rate equations have the same mathematical structure and basic features as the nonlinear dynamical equation proposed in a series of papers ending with G. P. Beretta, Found. Phys. 17, 365 (1987) and recently rediscovered by S. Gheorghiu-Svirschevski [Phys. Rev. A 63, 022105 (2001); 63, 054102 (2001)]. Numerical results illustrate the features of the dynamics and the differences from the rate equations recently considered for the same problem by M. Lemanska and Z. Jaeger [Physica D 170, 72 (2002)]. We also interpret the functionals k_B α(t) and k_B β(t) as nonequilibrium generalizations of the thermodynamic-equilibrium Massieu
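
    The constrained dynamics is straightforward to integrate numerically: at each step, α(t) and β(t) follow from requiring d/dt Σ p_i = 0 and d/dt Σ e_i p_i = 0, which is a 2x2 linear solve. The sketch below uses made-up energy levels and initial probabilities.

        import numpy as np
        from scipy.integrate import solve_ivp

        e = np.array([0.0, 1.0, 2.0, 3.0])       # energy levels e_i (arbitrary units)
        tau = 1.0

        def rhs(t, p):
            s = p * np.log(p)                    # the p_i ln p_i terms
            # invariance of normalization and mean energy gives M @ [alpha, beta] = rhs2
            M = np.array([[p.sum(),       (e * p).sum()],
                          [(e * p).sum(), (e**2 * p).sum()]])
            rhs2 = -np.array([s.sum(), (e * s).sum()])
            alpha, beta = np.linalg.solve(M, rhs2)
            return (-s - alpha * p - beta * e * p) / tau

        p0 = np.array([0.7, 0.2, 0.06, 0.04])    # initial occupation probabilities
        sol = solve_ivp(rhs, (0.0, 20.0), p0, rtol=1e-9, atol=1e-12)
        p_end = sol.y[:, -1]
        print(p_end, p_end.sum(), (e * p_end).sum())   # canonical-like, invariants preserved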

  4. CONSTRAINING THE GRB-MAGNETAR MODEL BY MEANS OF THE GALACTIC PULSAR POPULATION

    Energy Technology Data Exchange (ETDEWEB)

    Rea, N. [Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam (Netherlands); Gullón, M.; Pons, J. A.; Miralles, J. A. [Departament de Fisica Aplicada, Universitat d’Alacant, Ap. Correus 99, E-03080 Alacant (Spain); Perna, R. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Dainotti, M. G. [Physics Department, Stanford University, Via Pueblo Mall 382, Stanford, CA (United States); Torres, D. F. [Instituto de Ciencias de l’Espacio (ICE, CSIC-IEEC), Campus UAB, Carrer Can Magrans s/n, E-08193 Barcelona (Spain)

    2015-11-10

    A large fraction of Gamma-ray bursts (GRBs) displays an X-ray plateau phase within <10⁵ s of the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GRB-magnetar scenario. We re-analyze the X-ray plateaus of all Swift GRBs with known redshift between 2005 January and 2014 August. From the derived initial magnetic field distribution for the possible magnetars left behind by the GRBs, we study the evolution and properties of a simulated GRB-magnetar population using numerical simulations of magnetic field evolution, coupled with Monte Carlo simulations of Pulsar Population Synthesis in our Galaxy. We find that if the GRB X-ray plateaus are powered by the rotational energy of a newly formed magnetar, the current observational properties of the Galactic magnetar population are not compatible with being formed within the GRB scenario (regardless of the GRB type or rate at z = 0). Direct consequences would be that we should allow for the existence of magnetars and “super-magnetars” having different progenitors, and that Type Ib/c SNe related to Long GRBs systematically form neutron stars with higher initial magnetic fields. We put an upper limit of ≤16 “super-magnetars” formed by a GRB in our Galaxy in the past Myr (at 99% c.l.). This limit is somewhat smaller than what is roughly expected from Long GRB rates, although the very large uncertainties do not allow us to draw strong conclusions in this respect.

  5. Simulating the Range Expansion of Spartina alterniflora in Ecological Engineering through Constrained Cellular Automata Model and GIS

    Directory of Open Access Journals (Sweden)

    Zongsheng Zheng

    2015-01-01

    Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes, yet CA models focusing on the neighbor effect often fail to account for their influence. This paper proposes a constrained CA (CCA) model that enhances the CA model by integrating the constraint factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in the Chongming Dongtan wetland, China. Meanwhile, a positive feedback loop between vegetation and sedimentation is also considered in the CCA model by altering the tidal accretion rate in different vegetation communities. After being validated and calibrated, the CCA model is more accurate than a CA model that accounts only for the neighbor effect: by overlaying the remote sensing classification and the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulations, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical approach for research on salt marsh species expansion and control strategies.
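
    A minimal version of such a constrained CA update multiplies the neighbor-driven colonization probability by a local suitability score built from the constraint layers. The grid, rates, and the single elevation-based constraint below are all hypothetical, far simpler than the paper's four-factor CCA.

        import numpy as np

        rng = np.random.default_rng(4)

        def cca_step(occ, suit, p_base=0.3):
            # count occupied 4-neighbours via zero-padding, then damp by the constraint layer
            pad = np.pad(occ.astype(np.int8), 1)
            nbrs = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
            p = p_base * (nbrs / 4.0) * suit
            return occ | (rng.random(occ.shape) < p)

        occ = np.zeros((50, 50), dtype=bool)
        occ[25, 25] = True                                         # single founder patch
        suit = np.tile(1.0 - np.linspace(0.0, 1.0, 50), (50, 1))   # toy elevation constraint in [0, 1]
        for _ in range(30):
            occ = cca_step(occ, suit)
        print(occ.sum())                                           # occupied cells after 30 steps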

  6. Murine model of long term obstructive jaundice

    Science.gov (United States)

    Aoki, Hiroaki; Aoki, Masayo; Yang, Jing; Katsuta, Eriko; Mukhopadhyay, Partha; Ramanathan, Rajesh; Woelfel, Ingrid A.; Wang, Xuan; Spiegel, Sarah; Zhou, Huiping; Takabe, Kazuaki

    2016-01-01

    Background: With the recent emergence of conjugated bile acids as signaling molecules in cancer, a murine model of obstructive jaundice by cholestasis with long-term survival is needed. Here, we investigated the characteristics of three murine models of obstructive jaundice. Methods: C57BL/6J mice were used for total ligation of the common bile duct (tCL), partial common bile duct ligation (pCL), and ligation of the left and median hepatic bile ducts with gallbladder removal (LMHL) models. Survival was assessed by the Kaplan-Meier method. Fibrotic change was determined by Masson trichrome staining and collagen expression. Results: 70% (7/10) of tCL mice died by Day 7, whereas the majority (67%, 10/15) of pCL mice survived, with loss of jaundice. 19% (3/16) of LMHL mice died; however, jaundice continued beyond Day 14, with survival of more than a month. Compensatory enlargement of the right lobe was observed in both the pCL and LMHL models. The pCL model demonstrated acute inflammation due to obstructive jaundice 3 days after ligation, but jaundice rapidly decreased by Day 7. The LMHL group developed portal hypertension as well as severe fibrosis by Day 14, in addition to prolonged jaundice. Conclusion: The standard tCL model is too unstable, with high mortality, for long-term studies. pCL may be an appropriate model for acute inflammation with obstructive jaundice, but long-term survivors are no longer jaundiced. The LMHL model was identified as the most feasible model for studying the effect of long-term obstructive jaundice. PMID:27916350

  7. New high-fidelity terrain modeling method constrained by terrain semanteme.

    Directory of Open Access Journals (Sweden)

    Bo Zhou

    Production of higher-fidelity digital elevation models is important, as such models are indispensable components of spatial data infrastructure. However, loss of terrain features is a persistent problem for grid digital elevation models and undermines their direct usage as data sources in terrain modeling. Therefore, in this study, a novel concept, the terrain semanteme, is proposed to define local terrain features, and a new process for generating grid digital elevation models based on this concept is designed. A prototype system was programmed to test the proposed approach; the results indicate that terrain semantemes can be applied in the process of grid digital elevation model generation, and that usage of this new concept improves digital elevation model fidelity. Moreover, the terrain semanteme technique can be applied to the recovery of distorted digital elevation model regions containing terrain semantemes, with good recovery efficiency indicated by experiments.

  8. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  9. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  10. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  11. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    Science.gov (United States)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning between linear time invariant (LTI) models in time-varying simulations is proposed that utilizes both quadratic inequality constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to simulating the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets propagated through time. The time-invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents the problem of how to transition properly between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions, as well as maintaining elastic motion from the prior state.
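
    The core LSQI subproblem, minimizing ||Ax - b|| subject to a quadratic inequality ||x|| <= alpha, has a standard SVD-based solution (Golub and Van Loan). The sketch below illustrates that generic building block in Python/NumPy; it is a sketch of the textbook subproblem, not the authors' implementation, and the bisection bound and names are illustrative.

        import numpy as np

        def lsqi(A, b, alpha):
            # Minimize ||A x - b||_2 subject to ||x||_2 <= alpha (generic LSQI).
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            x_ls = Vt.T @ (beta / s)              # unconstrained least-squares solution
            if np.linalg.norm(x_ls) <= alpha:     # constraint inactive: done
                return x_ls
            # Active constraint: solve the secular equation ||x(lam)|| = alpha by
            # bisection, where x(lam) = V diag(s_i / (s_i^2 + lam)) U^T b.
            lo, hi = 0.0, s[0] * np.linalg.norm(beta) / alpha
            for _ in range(100):
                lam = 0.5 * (lo + hi)
                if np.linalg.norm(s * beta / (s**2 + lam)) > alpha:
                    lo = lam                      # solution still too long: more damping
                else:
                    hi = lam
            return Vt.T @ (s * beta / (s**2 + lam))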

  12. Modeling Optical and Radiative Properties of Clouds Constrained with CARDEX Observations

    Science.gov (United States)

    Mishra, S. K.; Praveen, P. S.; Ramanathan, V.

    2013-12-01

    Carbonaceous aerosols (CA) have important effects on climate by directly absorbing solar radiation and indirectly changing cloud properties. These particles tend to be a complex mixture of graphitic carbon and organic compounds. The graphitic component, known as elemental carbon (EC), is characterized by significant absorption of solar radiation. Recent studies have shown that organic carbon (OC) aerosols absorb strongly in the near-UV region; this fraction is known as brown carbon (BrC). The indirect effect of CA can occur in two ways: first, by changing the thermal structure of the atmosphere, which further affects the dynamical processes governing the cloud life cycle; second, by acting as cloud condensation nuclei (CCN) that can change cloud radiative properties. In this work, cloud optical properties have been numerically estimated by accounting for CARDEX (Cloud Aerosol Radiative Forcing Dynamics Experiment) observed cloud parameters and the physico-chemical and optical properties of aerosols. The aerosol inclusions in the cloud drop have been treated as core-shell structures, with an EC core and a shell comprising ammonium sulfate, ammonium nitrate, sea salt and organic carbon (organic acids, OA, and brown carbon, BrC). The EC/OC ratio of the inclusion particles has been constrained based on observations. Moderate and heavy pollution events were defined based on the aerosol number and BC concentration. The cloud drop co-albedo at 550 nm was found to be nearly identical for pure EC sphere inclusions and core-shell inclusions with all non-absorbing organics in the shell. However, the co-albedo was found to increase for drops having all BrC in the shell. The co-albedo of a cloud drop was found to be the maximum with all aerosol present as interstitial, compared with 50% and 0% of inclusions existing as interstitial aerosols. The co-albedo was found to be ~9.87e-4 for the drop with 100% inclusions existing as interstitial aerosols externally mixed with micron-sized mineral dust with 2

  13. Constrained creation of poetic forms during theme-driven exploration of a domain defined by an N-gram model

    Science.gov (United States)

    Gervás, Pablo

    2016-04-01

    Most poetry-generation systems apply opportunistic approaches in which algorithmic procedures explore the conceptual space defined by a given knowledge resource in search of solutions that might be aesthetically valuable. Aesthetic value is assumed to arise from compliance with a given poetic form - such as rhyme or metrical regularity - or from evidence of semantic relations between the words in the resulting poems that can be interpreted as rhetorical tropes - such as similes, analogies, or metaphors. This approach tends to fix the aesthetic parameters of the results a priori, and imposes no constraints on the message to be conveyed. The present paper describes an attempt to initiate a shift in this balance, introducing means for constraining the output to certain topics while allowing a looser mechanism for constraining form. This goal arose from the need to produce poems for a themed collection commissioned for a book. The solution adopted explores an approach to creativity where the goals are not solely aesthetic and where the results may be surprising in their poetic form. An existing computer poet, originally developed to produce poems in a given form but with no specific constraints on their content, is put to the task of producing a set of poems with explicit restrictions on content while allowing exploration of poetic form. Alternative generation methods are devised to overcome the difficulties, and the various insights arising from these new methods, and their impact on the set of resulting poems, are discussed in terms of their potential contribution to better poetry-generation systems.
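
    As a toy illustration of theme-constrained sampling from an N-gram model (not the system described above, whose resources and weighting scheme are unspecified here), candidate continuations from a bigram table can be re-weighted toward a set of theme words:

        import random

        # Toy bigram table standing in for the N-gram model that defines the domain.
        bigrams = {
            "<s>": ["the", "a"],
            "the": ["moon", "sea", "night"],
            "a": ["dream", "wave"],
            "moon": ["whispers", "sleeps"],
            "sea": ["whispers", "rises"],
            "night": ["falls"],
        }
        theme = {"moon", "sea"}  # content constraint: favour on-theme words

        def next_word(prev, boost=4.0):
            cands = bigrams.get(prev)
            if not cands:
                return None
            weights = [boost if w in theme else 1.0 for w in cands]
            return random.choices(cands, weights=weights, k=1)[0]

        def line(max_len=6):
            word, out = "<s>", []
            while len(out) < max_len:
                word = next_word(word)
                if word is None:
                    break
                out.append(word)
            return " ".join(out)

        print(line())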

  14. A Stochastic Multi-Objective Chance-Constrained Programming Model for Water Supply Management in Xiaoqing River Watershed

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2017-05-01

    Full Text Available In this paper, a stochastic multi-objective chance-constrained programming (SMOCCP) model was developed for tackling the water supply management problem. Two objectives were included in the model: minimization of leakage losses and minimization of total system cost. The traditional SCCP model requires the random variables to be expressed as normal distributions, even when their statistical characteristics are better captured by other forms. The SMOCCP model allows the random variables to be expressed as log-normal distributions rather than the general normal form. Solution deviations caused by unreasonable parameter assumptions are thus avoided, and the feasibility and accuracy of the generated solutions are ensured. The water supply system in the Xiaoqing River watershed was used as a study case for demonstration. Under various weight combinations and probabilistic levels, many types of solutions were obtained, expressed as a series of transferred amounts from water sources to treatment plants, from treatment plants to reservoirs, and from reservoirs to tributaries. It is concluded that the SMOCCP model captures the characteristics of the studied region and generates desired water supply schemes under complex uncertainties. The successful application of the proposed model is expected to serve as a good example for water resource management in other watersheds.
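
    The key computational move in chance-constrained programming is replacing a probabilistic constraint with its deterministic equivalent via a quantile of the random variable. A minimal sketch for a log-normal right-hand side follows; the distribution parameters and probability level are hypothetical, not values from the study.

        import numpy as np
        from scipy.stats import lognorm

        # Chance constraint: P(a^T x <= W) >= p, with log-normal availability W.
        # Deterministic equivalent: a^T x <= F_W^{-1}(1 - p), the (1-p)-quantile of W.
        mu, sigma = 2.0, 0.4      # hypothetical mean and std of ln(W)
        p = 0.95                  # required probability of satisfaction

        w_cap = lognorm.ppf(1 - p, s=sigma, scale=np.exp(mu))
        print(f"deterministic right-hand side: {w_cap:.3f}")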

  15. A novel robust chance constrained possibilistic programming model for disaster relief logistics under uncertainty

    Directory of Open Access Journals (Sweden)

    Maryam Rahafrooz

    2016-09-01

    Full Text Available In this paper, a novel multi-objective robust possibilistic programming model is proposed that simultaneously considers maximizing distributive justice in relief distribution, minimizing the risk of relief distribution, and minimizing total logistics costs. To cope effectively with the uncertainties of the post-disaster environment, the uncertain parameters of the proposed model are expressed as fuzzy trapezoidal numbers. The proposed model considers not only the priority of relief commodities and of demand points in relief distribution, but also the difference between the pre-disaster and post-disaster supply abilities of the suppliers. To solve the proposed model, the LP-metric and the improved augmented ε-constraint methods are used. A set of test problems is then designed to evaluate the effectiveness of the proposed robust model against its equivalent deterministic form; the results reveal the capabilities of the robust model. Finally, to illustrate the performance of the proposed robust model, a seismic region of northwestern Iran (East Azerbaijan) is selected as a case study for modeling its relief logistics in the face of future earthquakes. This investigation indicates the usefulness of the proposed model in the field of crisis management.

  16. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    Science.gov (United States)

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  17. How to constrain multi-objective calibrations of the SWAT model using water balance components

    Science.gov (United States)

    Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...

  18. Model documentation report: Short-Term Hydroelectric Generation Model

    International Nuclear Information System (INIS)

    1993-08-01

    The purpose of this report is to define the objectives of the Short-Term Hydroelectric Generation Model (STHGM), describe its basic approach, and provide details on the model structure. This report is intended as a reference document for model analysts, users, and the general public. Documentation of the model is in accordance with the Energy Information Administration's (EIA) legal obligation to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). The STHGM performs a short-term (18- to 27-month) forecast of hydroelectric generation in the United States using an autoregressive integrated moving average (ARIMA) time series model with precipitation as an explanatory variable. The model results are used as input for the Short-Term Energy Outlook.
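
    An ARIMA model with an exogenous regressor of the kind described can be sketched with statsmodels; the model order, toy series, and forecast horizon below are placeholders, not the STHGM specification.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        gen = pd.Series(rng.normal(25000.0, 2000.0, 120))  # monthly generation (GWh)
        precip = pd.Series(rng.gamma(4.0, 20.0, 120))      # monthly precipitation (mm)

        # ARIMA(1,1,1) with precipitation as an explanatory (exogenous) variable.
        fit = SARIMAX(gen, exog=precip, order=(1, 1, 1)).fit(disp=False)

        # A forecast over an 18- to 27-month horizon needs a precipitation scenario.
        future_precip = pd.Series(np.full(24, precip.mean()))
        forecast = fit.forecast(steps=24, exog=future_precip)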

  19. Constrained quadratic stabilization of discrete-time uncertain nonlinear multi-model systems using piecewise affine state-feedback

    Directory of Open Access Journals (Sweden)

    Olav Slupphaug

    1999-07-01

    Full Text Available In this paper a method for nonlinear robust stabilization based on solving a bilinear matrix inequality (BMI) feasibility problem is developed. Robustness against model uncertainty is handled. In different non-overlapping regions of the state space, called clusters, the plant is assumed to be an element of a polytope whose vertices (local models) are affine systems. In the clusters containing the origin in their closure, the local models are restricted to be linear systems. The clusters cover the region of interest in the state space. An affine state feedback is associated with each cluster. By utilizing the affinity of the local models and the state feedback, a set of linear matrix inequalities (LMIs) combined with a single nonconvex BMI is obtained which, if feasible, guarantees quadratic stability of the origin of the closed loop. The feasibility problem is attacked by a branch-and-bound based global approach. If the feasibility check is successful, the Lyapunov matrix and the piecewise affine state feedback are given directly by the feasible solution. Control constraints are shown to be representable by LMIs or BMIs, and an application of the control design method to robustify constrained nonlinear model predictive control is presented. The control design method is also applied to a simple example.
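
    For the purely linear vertices, the quadratic-stabilization condition reduces to standard LMIs that off-the-shelf solvers handle. The sketch below, with hypothetical vertex matrices, uses the usual change of variables Q = P^{-1}, Y = KQ for a discrete-time polytopic system; the paper's BMI step for the affine clusters is not shown.

        import numpy as np
        import cvxpy as cp

        # Hypothetical polytope vertices for a 2-state, 1-input discrete-time system.
        A = [np.array([[1.0, 0.10], [0.0, 1.00]]),
             np.array([[1.0, 0.20], [0.0, 0.90]])]
        B = [np.array([[0.00], [0.10]]),
             np.array([[0.00], [0.12]])]

        n, m = 2, 1
        Q = cp.Variable((n, n), symmetric=True)   # Q = P^{-1}, P the Lyapunov matrix
        Y = cp.Variable((m, n))                   # Y = K Q, K the state feedback

        # Quadratic stabilization LMI at every vertex:
        # [[Q, (A_i Q + B_i Y)^T], [A_i Q + B_i Y, Q]] >> 0, with Q >> 0.
        eps = 1e-6
        cons = [Q >> eps * np.eye(n)]
        for Ai, Bi in zip(A, B):
            X = Ai @ Q + Bi @ Y
            cons.append(cp.bmat([[Q, X.T], [X, Q]]) >> eps * np.eye(2 * n))

        cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
        K = Y.value @ np.linalg.inv(Q.value)      # common feedback u = K x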

  20. A Metabolite-Sensitive, Thermodynamically Constrained Model of Cardiac Cross-Bridge Cycling: Implications for Force Development during Ischemia

    KAUST Repository

    Tran, Kenneth; Smith, Nicolas P.; Loiselle, Denis S.; Crampin, Edmund J.

    2010-01-01

    We present a metabolically regulated model of cardiac active force generation with which we investigate the effects of ischemia on maximum force production. Our model, based on a model of cross-bridge kinetics that was developed by others, reproduces many of the observed effects of MgATP, MgADP, Pi, and H(+) on force development while retaining the force/length/Ca(2+) properties of the original model. We introduce three new parameters to account for the competitive binding of H(+) to the Ca(2+) binding site on troponin C and the binding of MgADP within the cross-bridge cycle. These parameters, along with the Pi and H(+) regulatory steps within the cross-bridge cycle, were constrained using data from the literature and validated using a range of metabolic and sinusoidal length perturbation protocols. The placement of the MgADP binding step between two strongly-bound and force-generating states leads to the emergence of an unexpected effect on the force-MgADP curve, where the trend of the relationship (positive or negative) depends on the concentrations of the other metabolites and [H(+)]. The model is used to investigate the sensitivity of maximum force production to changes in metabolite concentrations during the development of ischemia.

  1. Dissecting galaxy formation models with sensitivity analysis—a new approach to constrain the Milky Way formation history

    International Nuclear Information System (INIS)

    Gómez, Facundo A.; O'Shea, Brian W.; Coleman-Smith, Christopher E.; Tumlinson, Jason; Wolpert, Robert L.

    2014-01-01

    We present an application of a statistical tool known as sensitivity analysis to characterize the relationship between input parameters and observational predictions of semi-analytic models of galaxy formation coupled to cosmological N-body simulations. We show how a sensitivity analysis can be performed on our chemo-dynamical model, ChemTreeN, to characterize and quantify its relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. It can also be used to simplify models by identifying input parameters that have no effect on the outputs (i.e., observational predictions) of interest. Conversely, sensitivity analysis allows us to identify what model parameters can be most efficiently constrained by the given observational data set. We have applied this technique to real observational data sets associated with the Milky Way, such as the luminosity function of the dwarf satellites. The results from the sensitivity analysis are used to train specific model emulators of ChemTreeN, only involving the most relevant input parameters. This allowed us to efficiently explore the input parameter space. A statistical comparison of model outputs and real observables is used to obtain a 'best-fitting' parameter set. We consider different Milky-Way-like dark matter halos to account for the dependence of the best-fitting parameter selection process on the underlying merger history of the models. For all formation histories considered, running ChemTreeN with best-fitting parameters produced luminosity functions that tightly fit their observed counterpart. However, only one of the resulting stellar halo models was able to reproduce the observed stellar halo mass within 40 kpc of the Galactic center. On the basis of this analysis, it is possible to disregard certain models, and their
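
    A variance-based (Sobol) sensitivity analysis of the kind described can be prototyped with SALib; the parameter names, bounds, and the toy response function below are hypothetical stand-ins for ChemTreeN inputs and outputs.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["sfr_efficiency", "feedback_strength", "reionization_z"],
            "bounds": [[0.01, 0.1], [0.5, 5.0], [6.0, 12.0]],
        }

        X = saltelli.sample(problem, 1024)               # Saltelli sampling design
        Y = X[:, 0] * np.exp(-X[:, 1]) + 0.1 * X[:, 2]   # toy stand-in observable

        Si = sobol.analyze(problem, Y)                   # first-order / total indices
        print(Si["S1"], Si["ST"])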

  2. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  3. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as the implementation of ...

  4. Thermo-magnetic effects in quark matter: Nambu-Jona-Lasinio model constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Ricardo L.S. [Universidade Federal de Santa Maria, Departamento de Fisica, Santa Maria, RS (Brazil); Kent State University, Physics Department, Kent, OH (United States); Timoteo, Varese S. [Universidade Estadual de Campinas (UNICAMP), Grupo de Optica e Modelagem Numerica (GOMNI), Faculdade de Tecnologia, Limeira, SP (Brazil); Avancini, Sidney S.; Pinto, Marcus B. [Universidade Federal de Santa Catarina, Departamento de Fisica, Florianopolis, Santa Catarina (Brazil); Krein, Gastao [Universidade Estadual Paulista, Instituto de Fisica Teorica, Sao Paulo, SP (Brazil)

    2017-05-15

    The phenomenon of inverse magnetic catalysis of chiral symmetry in QCD predicted by lattice simulations can be reproduced within the Nambu-Jona-Lasinio model if the coupling G of the model decreases with the strength B of the magnetic field and temperature T. The thermo-magnetic dependence of G(B, T) is obtained by fitting recent lattice QCD predictions for the chiral transition order parameter. Different thermodynamic quantities of magnetized quark matter evaluated with G(B, T) are compared with the ones obtained at constant coupling, G. The model with G(B, T) predicts a more dramatic chiral transition as the field intensity increases. In addition, the pressure and magnetization always increase with B for a given temperature. Being parametrized by four magnetic-field-dependent coefficients and having a rather simple exponential thermal dependence, our accurate ansatz for the coupling constant can easily be implemented to improve typical model applications to magnetized quark matter. (orig.)

  5. Joint modeling of constrained path enumeration and path choice behavior: a semi-compensatory approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2010-01-01

    A behavioural and a modelling framework are proposed for representing route choice from a path set that satisfies travellers’ spatiotemporal constraints. Within the proposed framework, travellers’ master sets are constructed by path generation, consideration sets are delimited according to spatio...

  6. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but diffuse enough to reflect great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given.
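
    In symbols, the stated construction can be sketched as follows, with φ the transformed parameter in which the Jeffreys prior is uniform and μ0 the specified mean (this is a paraphrase of the idea above, not a reproduction of the report's derivation):

        \phi(\theta) = \int^{\theta} \sqrt{I(t)}\, dt, \qquad p_J(\phi) \propto 1,

        p^{*} = \arg\max_{p} \Big\{ -\int p(\phi)\ln p(\phi)\, d\phi \;:\; \int \theta(\phi)\, p(\phi)\, d\phi = \mu_0 \Big\}
        \;\Longrightarrow\; p^{*}(\phi) \propto e^{\lambda\, \theta(\phi)},

    with the Lagrange multiplier λ chosen so that the mean constraint holds.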

  7. A Three-Dimensional Model of the Marine Nitrogen Cycle during the Last Glacial Maximum Constrained by Sedimentary Isotopes

    Directory of Open Access Journals (Sweden)

    Christopher J. Somes

    2017-05-01

    Full Text Available Nitrogen is a key limiting nutrient that influences marine productivity and carbon sequestration in the ocean via the biological pump. In this study, we present the first estimates of nitrogen cycling in a coupled 3D ocean-biogeochemistry-isotope model forced with realistic boundary conditions from the Last Glacial Maximum (LGM, ~21,000 years before present) and constrained by nitrogen isotopes. The model predicts a large decrease in nitrogen loss rates due to higher oxygen concentrations in the thermocline and the sea level drop, and, in response, reduced nitrogen fixation. Model experiments are performed to evaluate the effects of hypothesized increases in atmospheric iron fluxes and in the oceanic phosphorus inventory relative to present-day conditions. Enhanced atmospheric iron deposition, which is required to reproduce the observations, fuels export production in the Southern Ocean, causing increased deep ocean nutrient storage. This reduces the transport of preformed nutrients to the tropics via mode waters, thereby decreasing productivity, oxygen deficient zones, and water column N-loss there. A global phosphorus inventory larger by up to 15% cannot be excluded from the currently available nitrogen isotope data. It stimulates additional nitrogen fixation that increases the global oceanic nitrogen inventory, productivity, and water column N-loss. Among our sensitivity simulations, the best agreement with nitrogen isotope data from LGM sediments indicates that water column and sedimentary N-loss were reduced by 17–62% and 35–69%, respectively, relative to preindustrial values. Our model demonstrates that multiple processes alter the nitrogen isotopic signal in most locations, which creates large uncertainties when quantitatively constraining individual nitrogen cycling processes. One key uncertainty is nitrogen fixation, which decreases by 25–65% in the model during the LGM, mainly in response to reduced N-loss, due to the lack of observations in the open ocean most

  8. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.; Evans, Jason P.; McCabe, Matthew

    2014-01-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve, reducing biases from -0.027 g cm-3 to -0.004 g cm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE to depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the
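
    The temperature-index backbone being modified here is the degree-day relation M = DDF * max(T - T_base, 0). The sketch below couples it to a simulated snow density through a simple linear scaling; the scaling and all constants are illustrative assumptions, not the parameterisation of this study.

        import numpy as np

        def degree_day_melt(t_mean, ddf, t_base=0.0):
            # Temperature-index melt, mm w.e. per day: M = DDF * max(T - T_base, 0).
            return ddf * np.maximum(t_mean - t_base, 0.0)

        def density_adjusted_ddf(rho_snow, ddf_min=2.0, ddf_max=6.0,
                                 rho_min=150.0, rho_max=500.0):
            # Hypothetical linear scaling of the degree-day factor with simulated
            # snow density (kg m^-3); the study's actual scheme may differ.
            w = np.clip((rho_snow - rho_min) / (rho_max - rho_min), 0.0, 1.0)
            return ddf_min + w * (ddf_max - ddf_min)

        def swe_to_depth(swe_mm, rho_snow):
            # SWE (mm w.e.) to snow depth (m): depth = SWE * rho_water / rho_snow.
            return swe_mm / rho_snow

        melt = degree_day_melt(t_mean=4.2, ddf=density_adjusted_ddf(380.0))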

  9. Constraining the uncertainty in emissions over India with a regional air quality model evaluation

    Science.gov (United States)

    Karambelas, Alexandra; Holloway, Tracey; Kiesewetter, Gregor; Heyes, Chris

    2018-02-01

    To evaluate uncertainty in the spatial distribution of air emissions over India, we compare satellite and surface observations with simulations from the U.S. Environmental Protection Agency (EPA) Community Multi-Scale Air Quality (CMAQ) model. Seasonally representative simulations were completed for January, April, July, and October 2010 at 36 km × 36 km using anthropogenic emissions from the Greenhouse Gas-Air Pollution Interaction and Synergies (GAINS) model following version 5a of the Evaluating the Climate and Air Quality Impacts of Short-Lived Pollutants project (ECLIPSE v5a). We use both tropospheric columns from the Ozone Monitoring Instrument (OMI) and surface observations from the Central Pollution Control Board (CPCB) to closely examine modeled nitrogen dioxide (NO2) biases in urban and rural regions across India. Spatially averaged evaluation against satellite retrievals indicates a low bias in the modeled tropospheric column (-63.3%), reflecting broad low biases in majority non-urban regions (-70.1% in rural areas) across the subcontinent and somewhat smaller low biases in semi-urban areas (-44.7%), with the threshold between semi-urban and rural defined as 400 people per km2. In contrast, modeled surface NO2 concentrations exhibit a slight high bias of +15.6% when compared with surface CPCB observations, which are predominantly located in urban areas. Conversely, in examining extremely population-dense urban regions with more than 5000 people per km2 (dense-urban), we find model overestimates both in the column (+57.8%) and at the surface (+131.2%) compared with observations. Based on these results, we find that existing emission fields for India may overestimate urban emissions in densely populated regions and underestimate rural emissions. However, if we rely on model evaluation with predominantly urban surface observations from the CPCB, comparisons reflect model high biases, contradicting the knowledge gained from satellite observations. Satellites thus

  10. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.

    2014-09-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve reducing biases from -0.027gcm-3 to -0.004gcm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE to depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the

  11. Attitudinal travel demand model for non-work trips of homogeneously constrained segments of a population

    Energy Technology Data Exchange (ETDEWEB)

    Recker, W.W.; Stevens, R.F.

    1977-06-01

    Market-segmentation techniques are used to capture the effects of opportunity and availability constraints on urban residents' choice of mode for trips for major grocery shopping and for visiting friends and acquaintances. Attitudinal multinomial logit choice models are estimated for each market segment. The explanatory variables are individuals' beliefs about the attributes of four modal alternatives: bus, car, taxi, and walking. Factor analysis is employed to identify latent dimensions of perception of the modal alternatives and to eliminate problems of multicollinearity in model estimation.
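
    The workhorse behind such attitudinal models is the multinomial logit form P_i = exp(V_i) / sum_j exp(V_j), where V_i is the systematic utility of mode i built from the belief or factor scores. A minimal numerically stable sketch follows; the utility values are hypothetical.

        import numpy as np

        def mnl_probabilities(V):
            # P_i = exp(V_i) / sum_j exp(V_j), max-subtracted for numerical stability.
            V = np.asarray(V, dtype=float)
            e = np.exp(V - V.max())
            return e / e.sum()

        # Hypothetical systematic utilities for bus, car, taxi, walking.
        print(mnl_probabilities([-0.2, 1.1, -1.5, 0.3]))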

  12. Constraining dark photon model with dark matter from CMB spectral distortions

    Directory of Open Access Journals (Sweden)

    Ki-Young Choi

    2017-08-01

    Full Text Available Many extensions of the Standard Model (SM) include a dark sector that can interact with the SM sector via a light mediator. We explore the possibility of probing such a dark sector by studying the distortion of the CMB spectrum from the blackbody shape due to elastic scattering between dark matter and baryons through a hidden light mediator. We focus in particular on the model in which the dark-sector gauge boson kinetically mixes with the SM, and present the future experimental prospects for a PIXIE-like experiment, along with a comparison to existing bounds from complementary terrestrial experiments.

  13. Assessing water resources in Azerbaijan using a local distributed model forced and constrained with global data

    Science.gov (United States)

    Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine

    2017-04-01

    In many countries, data are scarce, incomplete, and often not easily shared. In these cases, global satellite and reanalysis data provide an alternative for assessing water resources. To assess water resources in Azerbaijan, a fully distributed and physically based hydrological wflow-sbm model was set up for the entire Kura basin. We used SRTM elevation data, a locally available river map, and one from OpenStreetMap to derive the drainage direction network at the model resolution of approximately 1×1 km. OpenStreetMap data were also used to derive the fraction of paved area per cell to account for reduced infiltration capacity (cf. Schellekens et al., 2014). We used the results of a global study to derive root zone capacity based on climate data (Wang-Erlandsson et al., 2016). To account for the variation in vegetation cover over the year, monthly averages of Leaf Area Index, based on MODIS data, were used. For the soil-related parameters, we used global estimates as provided by Dai et al. (2013). This enabled the rapid derivation of a first estimate of parameter values for our hydrological model. Digitized local meteorological observations were scarce and available only for a limited period. Therefore, several sources of global meteorological data were evaluated: (1) EU-WATCH global precipitation, temperature, and derived potential evaporation for the period 1958-2001 (Harding et al., 2011); (2) WFDEI precipitation, temperature, and derived potential evaporation for the period 1979-2014 (Weedon et al., 2014); (3) MSWEP precipitation (Beck et al., 2016); and (4) local precipitation data from more than 200 stations in the Kura basin, available from the NOAA website for the period up to 1991. The latter, together with data archives from Azerbaijan, were used as a benchmark to evaluate the global precipitation datasets for the overlapping period 1958-1991. By comparing the datasets, we found that the monthly mean precipitation of EU-WATCH and WFDEI coincided well

  14. A frictionally and hydraulically constrained model of the convectively driven mean flow in partially enclosed seas

    Science.gov (United States)

    Maxworthy, T.

    1997-08-01

    A simple three-layer model of the dynamics of partially enclosed seas, driven by a surface buoyancy flux, is presented. It contains two major elements: a hydraulic constraint at the exit contraction, and friction in the interior of the main body of the sea; together these determine the vertical structure and magnitudes of the interior flow variables, i.e. velocity and density. Application of the model to the large-scale dynamics of the Red Sea gives results that are not in disagreement with observation once the model is also applied to predict the dense outflow from the Gulf of Suez. The latter appears to be the agent responsible for the formation of dense bottom water in this system. The model is also reasonably successful in predicting the density of the outflow from the Persian Gulf, and can be applied to any number of other examples of convectively driven flow in long, narrow channels, with or without sills and constrictions at their exits.

  15. Electron-capture Isotopes Could Constrain Cosmic-Ray Propagation Models

    Science.gov (United States)

    Benyamin, David; Shaviv, Nir J.; Piran, Tsvi

    2017-12-01

    Electron capture (EC) isotopes are known to provide constraints on the low-energy behavior of cosmic rays (CRs), such as reacceleration. Here, we study the EC isotopes within the framework of the dynamic spiral-arms CR propagation model, in which most CR sources reside in the galactic spiral arms. The model was previously used to explain the B/C and sub-Fe/Fe ratios. We show that the known inconsistency between the 49Ti/49V and 51V/51Cr ratios remains in the spiral-arms model as well. On the other hand, contrary to the conventional wisdom that the isotope ratios depend primarily on reacceleration, we find that the ratios also depend on the halo size (Zh) and, in spiral-arms models, on the time since the last spiral-arm passage (τarm). Namely, EC isotopes can, in principle, provide interesting constraints on the diffusion geometry. However, with the present uncertainties in the lab measurements of both the electron attachment rate and the fragmentation cross sections, no meaningful constraint can be placed.

  16. Using expert knowledge of the hydrological system to constrain multi-objective calibration of SWAT models

    Science.gov (United States)

    The SWAT model is a helpful tool to predict hydrological processes in a study catchment and their impact on the river discharge at the catchment outlet. For reliable discharge predictions, a precise simulation of hydrological processes is required. Therefore, SWAT has to be calibrated accurately to ...

  17. Constraining biogenic silica dissolution in marine sediments: a comparison between diagenetic models and experimental dissolution rates

    NARCIS (Netherlands)

    Khalil, K.; Rabouille, C.; Gallinari, M.; Soetaert, K.E.R.; DeMaster, D.J.; Ragueneau, O.

    2007-01-01

    The processes controlling preservation and recycling of particulate biogenic silica in sediments must be understood in order to calculate oceanic silica mass balances. The new contribution of this work is the coupled use of advanced models including reprecipitation and different phases of biogenic

  18. Effects of time-varying β in SNLS3 on constraining interacting dark energy models

    International Nuclear Information System (INIS)

    Wang, Shuang; Wang, Yong-Zhen; Geng, Jia-Jia; Zhang, Xin

    2014-01-01

    It has been found that, for the Supernova Legacy Survey three-year (SNLS3) data, there is strong evidence for redshift evolution of the color-luminosity parameter β. In this paper, adopting the w-cold-dark-matter (wCDM) model and considering its interacting extensions (with three kinds of interaction between the dark sectors), we explore the evolution of β and its effects on parameter estimation. In addition to the SNLS3 data, we also use the latest Planck distance prior data, the galaxy clustering data extracted from the Sloan Digital Sky Survey Data Release 7 and the Baryon Oscillation Spectroscopic Survey, as well as the direct measurement of the Hubble constant H0 from Hubble Space Telescope observations. We find that, for all the interacting dark energy (IDE) models, adding a parameter for the evolution of β reduces χ2 by ~34, indicating that a constant β is ruled out at the 5.8σ confidence level. Furthermore, varying β can significantly change the fitting results for various cosmological parameters: for all the dark energy models considered in this paper, varying β yields a larger fractional CDM density Ωc0 and a larger equation of state w; on the other hand, varying β yields a smaller reduced Hubble constant h for the wCDM model, but has no impact on h for the three IDE models. This implies a degeneracy between h and the coupling parameter γ. Our work shows that the evolution of β is insensitive to the interaction between the dark sectors, and thus highlights the importance of considering β's evolution in cosmological fits. (orig.)

  19. Stroke type differentiation using spectrally constrained multifrequency EIT: evaluation of feasibility in a realistic head model

    International Nuclear Information System (INIS)

    Malone, Emma; Jehl, Markus; Arridge, Simon; Betcke, Timo; Holder, David

    2014-01-01

    We investigate the application of multifrequency electrical impedance tomography (MFEIT) to imaging the brain in stroke patients. The use of MFEIT could enable early diagnosis and thrombolysis of ischaemic stroke, and therefore improve the outcome of treatment. Recent advances in the imaging methodology suggest that the use of spectral constraints could allow for the reconstruction of a one-shot image. We performed a simulation study to investigate the feasibility of imaging stroke in a head model with realistic conductivities. We introduced increasing levels of modelling errors to test the robustness of the method to the most common sources of artefact. We considered the case of errors in the electrode placement, spectral constraints, and contact impedance. The results indicate that errors in the position and shape of the electrodes can affect image quality, although our imaging method was successful in identifying tissues with sufficiently distinct spectra. (paper)

  20. A hydrodynamical model of Kepler's supernova remnant constrained by x-ray spectra

    International Nuclear Information System (INIS)

    Ballet, J.; Arnaud, M.; Rothenflug, R.; Chieze, J.P.; Magne, B.

    1988-01-01

    The remnant of the historical supernova observed by Kepler in 1604 was recently observed in x-rays by the EXOSAT satellite up to 10 keV. A strong Fe K emission line around 6.5 keV is readily apparent in the spectrum. From an analysis of the light curve of the supernova, reconstructed from historical descriptions, a previous study proposed classifying it as type I. Standard models of SN I based on carbon deflagration of a white dwarf predict the synthesis of about 0.5 M⊙ of iron in the ejecta. Observing the iron line is a crucial check for such models. It has been argued that the light curve of an SN II-L is very similar to that of an SN I and that the original observations are compatible with either type. In view of this uncertainty the authors have run a hydrodynamics-ionization code for both SN II and SN I remnants

  1. Constraining the thermal conditions of impact environments through integrated low-temperature thermochronometry and numerical modeling

    Science.gov (United States)

    Kelly, N. M.; Marchi, S.; Mojzsis, S. J.; Flowers, R. M.; Metcalf, J. R.; Bottke, W. F., Jr.

    2017-12-01

    Impacts have a significant physical and chemical influence on the surface conditions of a planet. The cratering record is used to understand a wide array of impact processes, such as the evolution of the impact flux through time. However, the relationship between impactor size and a resulting impact crater remains controversial (e.g., Bottke et al., 2016). Likewise, small variations in the impact velocity are known to significantly affect the thermal-mechanical disturbances in the aftermath of a collision. Development of more robust numerical models for impact cratering has implications for how we evaluate the disruptive capabilities of impact events, including the extent and duration of thermal anomalies, the volume of ejected material, and the resulting landscape of impacted environments. To address uncertainties in crater scaling relationships, we present an approach and methodology that integrates numerical modeling of the thermal evolution of terrestrial impact craters with low-temperature, (U-Th)/He thermochronometry. The approach uses time-temperature (t-T) paths of crust within an impact crater, generated from numerical simulations of an impact. These t-T paths are then used in forward models to predict the resetting behavior of (U-Th)/He ages in the mineral chronometers apatite and zircon. Differences between the predicted and measured (U-Th)/He ages from a modeled terrestrial impact crater can then be used to evaluate parameters in the original numerical simulations, and refine the crater scaling relationships. We expect our methodology to additionally inform our interpretation of impact products, such as lunar impact breccias and meteorites, providing robust constraints on their thermal histories. In addition, the method is ideal for sample return mission planning - robust "prediction" of ages we expect from a given impact environment enhances our ability to target sampling sites on the Moon, Mars or other solar system bodies where impacts have strongly

  2. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    Directory of Open Access Journals (Sweden)

    J. H. Nienhuis

    2017-09-01

    Full Text Available The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal dynamics) or allogenic (external forcing) processes remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change and allowing a better understanding of the future of deltaic landforms.

  3. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights: • The operation of energy service providers (ESPs) in electricity markets is modeled. • Demand response is used as a cost-effective solution for the energy service provider. • Market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of the robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, their various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs), demonstrating how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since the market price affects DR and vice versa, a new two-step sequential framework is proposed, in which the unit commitment problem (UC) is solved to forecast the expected locational marginal prices (LMPs), and subsequently a DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of power purchased from the market and from distributed generation (DG) units, the incentive cost paid to customers, and the compensation cost of power interruptions. To obtain the compensation cost, a reliability evaluation of the distribution network is embedded into the framework using innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMPs are modeled as uncertain parameters using the robust optimization technique, which is more practical than the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.

  4. Constraining Transient Climate Sensitivity Using Coupled Climate Model Simulations of Volcanic Eruptions

    KAUST Repository

    Merlis, Timothy M.; Held, Isaac M.; Stenchikov, Georgiy L.; Zeng, Fanrong; Horowitz, Larry W.

    2014-01-01

    Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1% yr-1 CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.
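
    A common idealization for separating the fast and slow components of the transient response is a two-layer energy-balance model. The sketch below integrates one under a 1% yr-1 CO2 ramp and reads off a TCR-like quantity at the time of doubling; all parameter values are illustrative, not GFDL CM2.1 values.

        import numpy as np

        lam, gamma = 1.2, 0.7          # W m^-2 K^-1: radiative restoring, heat uptake
        c_fast, c_slow = 8.0, 100.0    # layer heat capacities, W yr m^-2 K^-1
        F2x, dt = 3.7, 0.1             # forcing per CO2 doubling (W m^-2), step (yr)

        def integrate(forcing, years=140):
            t_fast = t_slow = 0.0
            out = []
            for i in range(int(years / dt)):
                f = forcing(i * dt)
                d_fast = (f - lam * t_fast - gamma * (t_fast - t_slow)) / c_fast
                d_slow = gamma * (t_fast - t_slow) / c_slow
                t_fast += d_fast * dt
                t_slow += d_slow * dt
                out.append(t_fast)
            return np.array(out)

        # 1% yr^-1 CO2 ramp: forcing grows roughly linearly, reaching F2x at year 70.
        ramp = integrate(lambda t: F2x * min(t, 70.0) / 70.0)
        tcr = ramp[int(70.0 / dt) - 1]   # GMST change at the time of CO2 doubling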

  5. Constraining Transient Climate Sensitivity Using Coupled Climate Model Simulations of Volcanic Eruptions

    KAUST Repository

    Merlis, Timothy M.

    2014-10-01

    Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1% yr-1 CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.

  6. Maximizing time from the constraining European Working Time Directive (EWTD): The Heidelberg New Working Time Model.

    Science.gov (United States)

    Schimmack, Simon; Hinz, Ulf; Wagner, Andreas; Schmidt, Thomas; Strothmann, Hendrik; Büchler, Markus W; Schmitz-Winnenthal, Hubertus

    2014-01-01

    The introduction of the European Working Time Directive (EWTD) has greatly reduced the training hours of surgical residents, which translates into 30% less surgical and clinical experience. Such a dramatic drop in attendance has serious implications, such as compromised quality of medical care. As the surgical department of the University of Heidelberg, our goal was to establish a model that was compliant with the EWTD while avoiding any reduction in the quality of patient care and surgical training. We first performed workload analyses and performance statistics for all working areas of our department (operating theater, emergency room, specialized consultations, surgical wards, and on-call duties) using personal interviews, time cards, medical documentation software, and data from the financial and personnel controlling sectors of our administration. Using that information, we designed and implemented an EWTD-compatible work model. Surgical wards and operating rooms (ORs) were not compliant with the EWTD. Between 5 pm and 8 pm, three ORs were still operating two-thirds of the time. By creating an extended work shift (7:30 am-7:30 pm), we effectively reduced the workload to less than 49% between 4 pm and 8 am, allowing the combination of an eight-hour working day with a 16-hour on-call duty; thus maximizing surgical resident training and ensuring continuity of patient care while maintaining EWTD compliance. A precise workload analysis is the key to success. The Heidelberg New Working Time Model provides a legal model which, by avoiding rotating work shifts, assures quality of patient care and surgical training.

  7. Modeling and analysis of strategic forward contracting in transmission constrained power markets

    International Nuclear Information System (INIS)

    Yu, C.W.; Chung, T.S.; Zhang, S.H.; Wang, X.

    2010-01-01

    Taking the effects of the transmission network into account, strategic forward contracting induced by the interaction of generation firms' strategies in the spot and forward markets is investigated. A two-stage game model is proposed to describe generation firms' strategic forward contracting and spot market competition. In the spot market, generation firms behave strategically by submitting bids at their nodes in the form of a linear supply function (LSF), and there are arbitrageurs who buy and resell power at different nodes where price differences exceed the costs of transmission. The owner of the grid is assumed to ration limited transmission line capacity to maximize the value of transmission services in the spot market. Cournot-type competition is assumed for the strategic forward contract market. This two-stage model is formulated as an equilibrium problem with equilibrium constraints (EPEC), in which each firm's optimization problem in the forward market is a mathematical program with equilibrium constraints (MPEC) with the parameter-dependent spot market equilibrium as the inner problem. A nonlinear complementarity method is employed to solve this EPEC model. (author)

  8. Constraining volcanic inflation at Three Sisters Volcanic Field in Oregon, USA, through microgravity and deformation modeling

    Science.gov (United States)

    Zurek, Jeffrey; William-Jones, Glyn; Johnson, Dan; Eggers, Al

    2012-10-01

    Microgravity data were collected between 2002 and 2009 at the Three Sisters Volcanic Complex, Oregon, to investigate the causes of an ongoing deformation event west of South Sister volcano. Three different conceptual models have been proposed as the causal mechanism for the deformation event: (1) hydraulic uplift due to continual injection of magma at depth, (2) pressurization of hydrothermal systems, and (3) viscoelastic response to an initial pressurization at depth. The gravitational effect of continual magma injection was modeled to be 20 to 33 μGal at the center of the deformation field, with volumes based on previous deformation studies. The gravity time series, however, did not detect a mass increase, suggesting that a viscoelastic response of the crust is the most likely cause of the deformation from 2002 to 2009. The crust deeper than 3 km in the Three Sisters region was modeled as a Maxwell viscoelastic material, and the results suggest a dynamic viscosity between 10^18 and 5 × 10^19 Pa s. This low crustal viscosity suggests that magma emplacement or stall depth is controlled by density and not by the brittle-ductile transition zone. Furthermore, these crustal properties and the observed geochemical composition gaps at Three Sisters can best be explained by different melt sources and limited magma mixing rather than by fractional crystallization. More generally, low intrusion rates, low crustal viscosity, and multiple melt sources could also explain the whole-rock compositional gaps observed at other arc volcanoes.
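
    The forward calculation behind figures like "20 to 33 μGal" is, at its simplest, the vertical attraction of a buried point-mass anomaly. The sketch below shows that textbook relation; the mass and depth are hypothetical, and the study's actual models were more elaborate.

        import numpy as np

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

        def point_mass_dg_ugal(d_mass, depth, offset=0.0):
            # Vertical gravity change from a buried point-mass anomaly:
            # dg = G * dM * z / (z^2 + r^2)^(3/2); 1 m s^-2 = 1e8 microGal.
            dg = G * d_mass * depth / (depth**2 + offset**2) ** 1.5
            return dg * 1e8

        # Hypothetical: 5e10 kg of new magma at 5 km depth, directly below the station.
        print(point_mass_dg_ugal(5e10, 5000.0))   # ~13 microGal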

  9. Modeling and Economic Analysis of Power Grid Operations in a Water Constrained System

    Science.gov (United States)

    Zhou, Z.; Xia, Y.; Veselka, T.; Yan, E.; Betrie, G.; Qiu, F.

    2016-12-01

    The power sector is the largest water user in the United States. Depending on the cooling technology employed at a facility, steam-electric power stations withdraw and consume large amounts of water for each megawatt hour of electricity generated. The amounts depend on many factors, including ambient air and water temperatures, cooling technology, etc. Water demands from most economic sectors are typically highest during summertime. For most systems, this coincides with peak electricity demand and consequently a high demand for thermal power plant cooling water. Supplies, however, are sometimes limited due to seasonal precipitation fluctuations, including sporadic droughts that lead to water scarcity. When this occurs, both unit commitment and real-time dispatch are affected. In this work, we model the cooling efficiency of several different types of thermal power generation technologies as a function of power output level and daily temperature profiles. Unit-specific relationships are then integrated in a power grid operational model that minimizes total grid production cost while reliably meeting hourly loads. Grid operation is subject to power plant physical constraints, transmission limitations, water availability and environmental constraints such as power plant water exit temperature limits. The model is applied to a standard IEEE-118 bus system under various water availability scenarios. Results show that water availability has a significant impact on power grid economics.
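
    The economics-water coupling can be illustrated with a one-hour economic dispatch under a water budget, a heavily reduced sketch of the grid optimization described above (the three units, their costs, water intensities, and the budget are invented for illustration, not the IEEE-118 setup):

        import numpy as np
        from scipy.optimize import linprog

        # Minimal single-hour economic dispatch with a water-withdrawal budget.
        cost = np.array([20.0, 35.0, 50.0])           # $/MWh (assumed)
        capacity = np.array([400.0, 300.0, 300.0])    # MW (assumed)
        water_per_mwh = np.array([500.0, 300.0, 100.0])  # gal/MWh; cheap units thirsty
        load = 600.0          # MW to be served this hour
        water_budget = 2.2e5  # gallons available for cooling

        res = linprog(
            c=cost,
            A_ub=water_per_mwh[None, :], b_ub=[water_budget],  # water constraint
            A_eq=np.ones((1, 3)), b_eq=[load],                 # load balance
            bounds=list(zip([0.0] * 3, capacity)),
        )
        print(res.x, res.fun)
        # With an unconstrained water budget the cheapest units run first; when
        # the budget binds, dispatch shifts toward low-water (costlier) units,
        # which is the economic impact the study quantifies at grid scale.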

  10. Coupling geophysical investigation with hydrothermal modeling to constrain the enthalpy classification of a potential geothermal resource.

    Science.gov (United States)

    White, Jeremy T.; Karakhanian, Arkadi; Connor, Chuck; Connor, Laura; Hughes, Joseph D.; Malservisi, Rocco; Wetmore, Paul

    2015-01-01

    An appreciable challenge in volcanology and geothermal resource development is to understand the relationships between volcanic systems and low-enthalpy geothermal resources. The enthalpy of an undeveloped geothermal resource in the Karckar region of Armenia is investigated by coupling geophysical and hydrothermal modeling. The results of 3-dimensional inversion of gravity data provide key inputs into a hydrothermal circulation model of the system and associated hot springs, which is used to evaluate possible geothermal system configurations. Hydraulic and thermal properties are specified using maximum a priori estimates. Limited constraints provided by temperature data collected from an existing down-gradient borehole indicate that the geothermal system can most likely be classified as low-enthalpy and liquid dominated. We find the heat source for the system is likely cooling quartz monzonite intrusions in the shallow subsurface and that meteoric recharge in the pull-apart basin circulates to depth, rises along basin-bounding faults and discharges at the hot springs. While other combinations of subsurface properties and geothermal system configurations may fit the temperature distribution equally well, we demonstrate that the low-enthalpy system is reasonably explained based largely on interpretation of surface geophysical data and relatively simple models.

  11. Constraining the Timescales of Rehydration in Nominally Anhydrous Minerals Using 3D Numerical Diffusion Models

    Science.gov (United States)

    Lynn, K. J.; Warren, J. M.

    2017-12-01

    Nominally anhydrous minerals (NAMs) are important for characterizing deep-Earth water reservoirs, but the water contents of olivine (ol), orthopyroxene (opx), and clinopyroxene (cpx) in peridotites generally do not reflect mantle equilibrium conditions. Ol is typically "dry" and decoupled from H in cpx and opx, which is inconsistent with models of partial melting and/or diffusive loss of H during upwelling beneath mid-ocean ridges. The rehydration of mantle pyroxenes via late-stage re-fertilization has been invoked to explain their relatively high water contents. Here, we use sophisticated 3D diffusion models (after Shea et al., 2015, Am Min) of H in ol, opx, and cpx to investigate the timescales of rehydration across a range of conditions relevant for melt-rock interaction and serpentinization of peridotites. Numerical crystals with 1 mm c-axis lengths and realistic crystal morphologies are modeled using recent H diffusivities that account for compositional variation and diffusion anisotropy. Models were run over timescales of minutes to millions of years and temperatures from 300 to 1200°C. Our 3D models show that, at the high-T end of the range, H concentrations in the cores of NAMs are partially re-equilibrated in as little as a few minutes, and completely re-equilibrated within hours to weeks. At low T (300°C), serpentinization can induce considerable diffusion in cpx and opx. H contents are 30% re-equilibrated after continuous exposure to hydrothermal fluids for 10^2 and 10^5 years, respectively, which is inconsistent with previous interpretations that there is no effect on H in opx under similar conditions. Ol is unaffected after 1 Myr due to the slower diffusivity of the proton-vacancy mechanism at 300°C (2-4 log units lower than for opx). In the middle of the T range (700-1000°C), rehydration of opx and cpx occurs over hours to days, while ol is somewhat slower to respond (days to weeks), potentially allowing the decoupling observed in natural samples to develop.
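
    The temperature sensitivity behind these timescales follows from the scaling t ≈ a²/D with an Arrhenius diffusivity D = D0·exp(-Ea/RT). The sketch below uses placeholder values for D0 and Ea (not the calibrated diffusivities cited above) purely to show how strongly the characteristic time spreads across the modeled temperature range:

        import numpy as np

        # Characteristic re-equilibration time t ~ a^2 / D over half-dimension a.
        # D0 and Ea are illustrative placeholders, not the study's calibrations.
        R = 8.314              # J/(mol K)
        D0, Ea = 1e-4, 2.1e5   # m^2/s, J/mol (assumed)
        a = 0.5e-3             # m, half-length of a 1 mm crystal

        for T_c in (300, 700, 1000, 1200):
            D = D0 * np.exp(-Ea / (R * (T_c + 273.15)))
            t_s = a**2 / D
            print(f"T = {T_c:4d} C -> D = {D:.2e} m^2/s, t ~ {t_s / 3.156e7:.2e} yr")
        # The Arrhenius factor alone spans hours near magmatic temperatures to
        # geologic timescales at 300 C, the qualitative contrast reported above.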

  12. Present mantle flow in North China Craton constrained by seismic anisotropy and numerical modelling

    Science.gov (United States)

    Qu, W.; Guo, Z.; Zhang, H.; Chen, Y. J.

    2017-12-01

    North China Craton (NCC) has undergone complicated geodynamic processes during the Cenozoic, including the westward subduction of the Pacific plate to its east and the collision of the India-Eurasia plates to its southwest. Shear wave splitting measurements in NCC reveal distinct seismic anisotropy patterns at different tectonic blocks, that is, the predominantly NW-SE trending alignment of fast directions in the western and eastern NCC, weak anisotropy within the Ordos block, and N-S fast polarization beneath the Trans-North China Orogen (TNCO). To better understand the origin of seismic anisotropy from SKS splitting in NCC, we obtain a high-resolution dynamic model that integrates multiple geophysical observations with state-of-the-art numerical methods. We calculate the mantle flow using a recent version of the software ASPECT (Kronbichler et al., 2012) with high-resolution temperature and density structures from a recent 3-D thermal-chemical model by Guo et al. (2016). The thermal-chemical model is obtained by multi-observable probabilistic inversion using high-quality surface wave measurements, potential fields, topography, and surface heat flow (Guo et al., 2016). The viscosity is then estimated by combining dislocation creep, diffusion creep, and plasticity, and depends on temperature, pressure, and chemical composition. We then calculate the seismic anisotropy resulting from the shear deformation of the mantle flow with DREX, and predict the fast direction and delay time of SKS splitting. We find that when complex boundary conditions are applied, including the far-field effects of the deep subduction of the Pacific plate and the eastward escape of the Tibetan Plateau, our model can successfully predict the observed shear wave splitting patterns. Our model indicates that the seismic anisotropy revealed by SKS splitting results primarily from the lattice-preferred orientation (LPO) of olivine due to shear deformation from asthenospheric flow. We suggest that two branches of mantle flow may contribute to the

  13. CONSTRAINING A MODEL OF TURBULENT CORONAL HEATING FOR AU MICROSCOPII WITH X-RAY, RADIO, AND MILLIMETER OBSERVATIONS

    International Nuclear Information System (INIS)

    Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.

    2013-01-01

    Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms.

  14. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9) reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of partial derivatives of Rv with respect to the proportions immune, p_i, in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions are obtained in the case of n = 2 sub-populations, and bounds for the optimal solutions are derived for n > 2 sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that, for general mixing schemes, both R0 and Rv are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
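
    The effect of heterogeneous mixing on R0 is easy to reproduce with a next-generation-matrix calculation. The sketch below builds the matrix for two groups under proportionate mixing and compares the result with the homogeneous-mixing value; contact rates, group fractions, and epidemiological rates are illustrative assumptions, not the paper's parameters:

        import numpy as np

        # Next-generation matrix for a two-group SIR model, proportionate mixing.
        a = np.array([15.0, 5.0])   # contacts per unit time in groups 1, 2 (assumed)
        n = np.array([0.3, 0.7])    # population fractions (assumed)
        p, gamma = 0.05, 0.1        # transmission prob. per contact, recovery rate

        def reproduction_number(immune):
            # K[i, j]: infections produced in group i by one infective in group j,
            # with a fraction immune[i] of group i vaccinated/immune.
            mixing = a * n / np.sum(a * n)   # proportionate-mixing contact shares
            K = (p / gamma) * np.outer((1 - immune) * mixing, a)
            return np.max(np.abs(np.linalg.eigvals(K)))

        R0 = reproduction_number(np.zeros(2))
        R0_homog = (p / gamma) * np.sum(a * n)   # homogeneous-mixing equivalent
        print(R0, R0_homog)   # heterogeneity raises R0 above the homogeneous value
        print(reproduction_number(np.array([0.5, 0.2])))  # an effective R_v

    The gradient-based optimization discussed above differentiates this Rv with respect to the immune proportions p_i, which is what can push the formal optimum outside the unit interval.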

  15. Constraining Carbonaceous Aerosol Climate Forcing by Bridging Laboratory, Field and Modeling Studies

    Science.gov (United States)

    Dubey, M. K.; Aiken, A. C.; Liu, S.; Saleh, R.; Cappa, C. D.; Williams, L. R.; Donahue, N. M.; Gorkowski, K.; Ng, N. L.; Mazzoleni, C.; China, S.; Sharma, N.; Yokelson, R. J.; Allan, J. D.; Liu, D.

    2014-12-01

    Biomass and fossil fuel combustion emits black (BC) and brown carbon (BrC) aerosols that absorb sunlight to warm climate, and organic carbon (OC) aerosols that scatter sunlight to cool climate. The net forcing depends strongly on the composition, mixing state and transformations of these carbonaceous aerosols. Complexities arising from the large variability of fuel types, combustion conditions and aging processes have confounded their treatment in models. We analyse recent laboratory and field measurements to uncover the fundamental mechanisms that control the chemical, optical and microphysical properties of carbonaceous aerosols, elaborated below: The wavelength dependence of absorption and the single scattering albedo (ω) of fresh biomass burning aerosols produced from many fuels during FLAME-4 was analysed to determine the factors that control the variability in ω. Results show that ω varies strongly with fire-integrated modified combustion efficiency (MCEFI): higher MCEFI results in lower ω values and greater spectral dependence of ω (Liu et al GRL 2014). A parameterization of ω as a function of MCEFI for fresh BB aerosols is derived from the laboratory data and is evaluated with field data, including BBOP. Our laboratory studies also demonstrate that BrC production correlates with BC, indicating that they are produced by a common mechanism that is driven by MCEFI (Saleh et al NGeo 2014). We show that BrC absorption is concentrated in the extremely low volatility component that favours long-range transport. We observe substantial absorption enhancement for internally mixed BC from diesel and wood combustion near London during ClearFlo. While the absorption enhancement is due to BC particles coated by co-emitted OC in urban regions, it increases with photochemical age in rural areas and is simulated by core-shell models. We measure BrC absorption that is concentrated in the extremely low volatility components and attribute it to wood burning. Our results support

  16. Constraining Gamma-Ray Pulsar Gap Models with a Simulated Pulsar Population

    Science.gov (United States)

    Pierbattista, Marco; Grenier, I. A.; Harding, A. K.; Gonthier, P. L.

    2012-01-01

    With the large sample of young gamma-ray pulsars discovered by the Fermi Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four gamma-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying gamma-ray and radio visibility criteria, we normalise the simulation to the number of radio pulsars detected by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of Fermi detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of 10 in the predicted luminosity to produce a reasonable number of gamma-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power (E), we skew the initial magnetic field and period distributions in an attempt to account for the high-E Fermi pulsars. While we compromise the agreement between simulated and detected distributions of radio pulsars, the simulations fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high E, and they cannot explain the high probability of detecting both the radio and gamma-ray beams at high E. The beaming factor remains close to 1.0 over 4 decades in E evolution for the slot gap, whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with E is compatible with the large dispersion of gamma-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to polar cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of gamma-ray pulsars

  17. Quantifying slip balance in the earthquake cycle: Coseismic slip model constrained by interseismic coupling

    KAUST Repository

    Wang, Lifeng; Hainzl, Sebastian; Mai, Paul Martin

    2015-11-11

    The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.
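
    The essence of the proposed inversion, reduced to a toy calculation: if coseismic slip is proportional to the interseismic slip-deficit rate, a least-squares fit of the proportionality coefficient yields the product of release level and accumulation time. The patch values below are invented for illustration, not the Parkfield or Tohoku-Oki data:

        import numpy as np

        # Toy slip balance: coseismic slip s ~ r * T * v_def, with v_def the
        # interseismic slip-deficit rate, T the strain-accumulation time, and
        # r the seismic release level. All patch values are illustrative.
        slip = np.array([0.55, 0.72, 0.60])             # m, coseismic slip
        deficit_rate = np.array([0.028, 0.033, 0.030])  # m/yr, slip-deficit rate

        # Least-squares coefficient linking the two fields (the product r * T):
        rT = slip @ deficit_rate / (deficit_rate @ deficit_rate)
        print(f"r*T ~ {rT:.1f} yr")  # if slip fully releases the deficit (r = 1),
                                     # this is the required accumulation time T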

  18. The Next Generation of Numerical Modeling in Mergers - Constraining the Star Formation Law

    Science.gov (United States)

    Chien, Li-Hsin

    2010-09-01

    Spectacular images of colliding galaxies like the "Antennae", taken with the Hubble Space Telescope, have revealed that a burst of star/cluster formation occurs whenever gas-rich galaxies interact. The ages and locations of these clusters reveal the interaction history and provide crucial clues to the process of star formation in galaxies. We propose to carry out state-of-the-art numerical simulations to model six nearby galaxy mergers (Arp 256, NGC 7469, NGC 4038/39, NGC 520, NGC 2623, NGC 3256), hence increasing the number with this level of sophistication by a factor of 3. These simulations provide specific predictions for the age and spatial distributions of young star clusters. The comparison between these simulation results and the observations will allow us to answer a number of fundamental questions, including: (1) is shock-induced or density-dependent star formation the dominant mechanism; (2) are the demographics (i.e., mass and age distributions) of the clusters in different mergers similar, i.e., "universal", or very different; and (3) will it be necessary to include other mechanisms, e.g., locally triggered star formation, in the models to better match the observations?

  19. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
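
    For reference, the rotation curve implied by an NFW potential follows directly from its enclosed mass; a short sketch (the halo parameters are illustrative, not fits to the LSB sample):

        import numpy as np

        G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

        def v_nfw(r_kpc, rho0, r_s):
            # NFW circular velocity:
            # V^2(r) = 4*pi*G*rho0*r_s^3 * [ln(1 + x) - x/(1 + x)] / r,  x = r/r_s
            x = r_kpc / r_s
            m_enc = 4 * np.pi * rho0 * r_s**3 * (np.log(1 + x) - x / (1 + x))
            return np.sqrt(G * m_enc / r_kpc)

        # Illustrative halo: rho0 in Msun/kpc^3, scale radius r_s in kpc.
        r = np.linspace(0.5, 30.0, 10)
        print(v_nfw(r, rho0=6e6, r_s=15.0))  # km/s
        # The cuspy inner profile makes NFW curves rise faster at small radii
        # than the slowly rising rotation curves observed for LSB galaxies,
        # which is the mismatch the mock velocity fields quantify.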

  1. Constraining SUSY models with Fittino using measurements before, with and beyond the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Desch, Klaus; Uhlenbrock, Mathias; Wienemann, Peter [Bonn Univ. (Germany). Physikalisches Inst.

    2009-07-15

    We investigate the constraints on Supersymmetry (SUSY) arising from available precision measurements using a global fit approach. When interpreted within minimal supergravity (mSUGRA), the data provide significant constraints on the masses of supersymmetric particles (sparticles), which are predicted to be light enough for an early discovery at the Large Hadron Collider (LHC). We provide predicted mass spectra including, for the first time, full uncertainty bands. The most stringent constraint is from the measurement of the anomalous magnetic moment of the muon. Using the results of these fits, we investigate to which precision mSUGRA and more general MSSM parameters can be measured by the LHC experiments with three different integrated luminosities, for a parameter point which approximately lies in the region preferred by current data. The impact of the already available measurements on these precisions, when combined with LHC data, is also studied. We develop a method to treat ambiguities arising from different interpretations of the data within one model and provide a way to differentiate between values of different discrete parameters of a model (e.g., sign(μ) within mSUGRA). Finally, we show how measurements at a linear collider with up to 1 TeV centre-of-mass energy will help to improve the precision by an order of magnitude. (orig.)

  2. DNA and dispersal models highlight constrained connectivity in a migratory marine megavertebrate

    Science.gov (United States)

    Naro-Maciel, Eugenia; Hart, Kristen M.; Cruciata, Rossana; Putman, Nathan F.

    2016-01-01

    Population structure and spatial distribution are fundamentally important fields within ecology, evolution, and conservation biology. To investigate pan-Atlantic connectivity of globally endangered green turtles (Chelonia mydas) from two National Parks in Florida, USA, we applied a multidisciplinary approach comparing genetic analysis and ocean circulation modeling. The Everglades (EP) is a juvenile feeding ground, whereas the Dry Tortugas (DT) is used for courtship, breeding, and feeding by adults and juveniles. We sequenced two mitochondrial segments from 138 turtles sampled there from 2006-2015, and simulated oceanic transport to estimate their origins. Genetic and ocean connectivity data revealed northwestern Atlantic rookeries as the major natal sources, while southern and eastern Atlantic contributions were negligible. However, specific rookery estimates differed between genetic and ocean transport models. The combined analyses suggest that post-hatchling drift via ocean currents poorly explains the distribution of neritic juveniles and adults, but juvenile natal homing and population history likely play important roles. DT and EP were genetically similar to feeding grounds along the southern US coast, but highly differentiated from most other Atlantic groups. Despite expanded mitogenomic analysis and correspondingly increased ability to detect genetic variation, no significant differentiation between DT and EP, or among years, sexes or stages was observed. This first genetic analysis of a North Atlantic green turtle courtship area provides rare data supporting local movements and male philopatry. The study highlights the applications of multidisciplinary approaches for ecological research and conservation.

  3. Lightning NOx emissions over the USA constrained by TES ozone observations and the GEOS-Chem model

    Science.gov (United States)

    Jourdain, L.; Kulawik, S. S.; Worden, H. M.; Pickering, K. E.; Worden, J.; Thompson, A. M.

    2010-01-01

    Improved estimates of NOx from lightning sources are required to understand tropospheric NOx and ozone distributions, the oxidising capacity of the troposphere and corresponding feedbacks between chemistry and climate change. In this paper, we report new satellite ozone observations from the Tropospheric Emission Spectrometer (TES) instrument that can be used to test and constrain the parameterization of the lightning source of NOx in global models. Using the National Lightning Detection Network (NLDN) and the Long Range Lightning Detection Network (LRLDN) data as well as the HYSPLIT transport and dispersion model, we show that TES provides direct observations of ozone-enhanced layers downwind of convective events over the USA in July 2006. We find that the GEOS-Chem global chemistry-transport model with a parameterization based on cloud top height, scaled regionally and monthly to the OTD/LIS (Optical Transient Detector/Lightning Imaging Sensor) climatology, captures the ozone enhancements seen by TES. We show that the model's ability to reproduce the location of the enhancements is due to the fact that it reproduces the pattern of convective event occurrence on a daily basis during the summer of 2006 over the USA, even though it does not represent the relative distribution of lightning intensities well. However, this model with a value of 6 Tg N/yr for the lightning source (i.e., with a mean production of 260 moles NO/flash over the USA in summer) underestimates the intensities of the ozone enhancements seen by TES. By imposing a production of 520 moles NO/flash for lightning occurring in midlatitudes, which agrees better with the values proposed by the most recent studies, we decrease the bias between TES and GEOS-Chem ozone over the USA in July 2006 by 40%. However, our conclusion on the strength of the lightning source of NOx is limited by the fact that the contribution from the stratosphere is underestimated in the GEOS-Chem simulations.

  4. Modeling of the Inner Coma of Comet 67P/Churyumov-Gerasimenko Constrained by VIRTIS and ROSINA Observations

    Science.gov (United States)

    Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.

    2015-12-01

    As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable a good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question regarding the link between the coma measurements and the nucleus activity remains relatively open notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least squared method, minimizing the sum of the square residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling juxtaposition of the measurements from these two instruments.
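
    The surface-activity fit described above is, at its core, a linear least-squares problem in the spherical-harmonic coefficients. The sketch below illustrates the idea on synthetic data; the actual study fits the coefficients through an analytical coma model against DFMS measurements rather than sampling surface activity directly:

        import numpy as np
        from scipy.special import sph_harm

        # Fit a low-degree spherical-harmonic expansion of a surface field by
        # linear least squares. All "data" here are synthetic.
        rng = np.random.default_rng(1)
        n_obs, l_max = 500, 3
        theta = rng.uniform(0, 2 * np.pi, n_obs)    # azimuth of sampled points
        phi = np.arccos(rng.uniform(-1, 1, n_obs))  # colatitude, uniform on sphere

        # Real-valued basis built from the complex harmonics Y_l^m:
        basis = []
        for l in range(l_max + 1):
            for m in range(-l, l + 1):
                y = sph_harm(abs(m), l, theta, phi)
                basis.append(y.imag if m < 0 else y.real)
        A = np.column_stack(basis)                  # (n_obs, n_coeff) design matrix

        truth = rng.normal(size=A.shape[1])
        data = A @ truth + 0.05 * rng.normal(size=n_obs)  # synthetic activity map
        coeff, *_ = np.linalg.lstsq(A, data, rcond=None)
        print(np.round(coeff - truth, 2))           # recovered minus true coefficients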

  5. Constraining Methane Emissions from Natural Gas Production in Northeastern Pennsylvania Using Aircraft Observations and Mesoscale Modeling

    Science.gov (United States)

    Barkley, Z.; Davis, K.; Lauvaux, T.; Miles, N.; Richardson, S.; Martins, D. K.; Deng, A.; Cao, Y.; Sweeney, C.; Karion, A.; Smith, M. L.; Kort, E. A.; Schwietzke, S.

    2015-12-01

    Leaks in natural gas infrastructure release methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated fugitive emission rate associated with the production phase varies greatly between studies, hindering our understanding of natural gas energy efficiency. This study presents a new application of inverse methodology for estimating regional fugitive emission rates from natural gas production. Methane observations across the Marcellus region in northeastern Pennsylvania were obtained during a three-week flight campaign in May 2015 performed by a team from the National Oceanic and Atmospheric Administration (NOAA) Global Monitoring Division and the University of Michigan. In addition to these data, CH4 observations were obtained from automobile campaigns during various periods from 2013-2015. An inventory of CH4 emissions was then created for various sources in Pennsylvania, including coal mines, enteric fermentation, industry, waste management, and unconventional and conventional wells. As a first-guess emission rate for natural gas activity, a leakage rate equal to 2% of natural gas production was assigned to the locations of unconventional wells across PA. These emission rates were coupled to the Weather Research and Forecasting model with the chemistry module (WRF-Chem), and atmospheric CH4 concentration fields at 1-km resolution were generated. Projected atmospheric enhancements from WRF-Chem were compared to observations, and the emission rate from unconventional wells was adjusted to minimize errors between observations and simulation. We show that the modeled CH4 plume structures match observed plumes downwind of unconventional wells, providing confidence in the methodology. In all cases, the fugitive emission rate was found to be lower than our first guess. In this initial emission configuration, each well has been assigned the same fugitive emission rate, which can potentially impair our ability to match the observed spatial variability.
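
    Because the modeled enhancements scale linearly with a uniform per-well leakage rate, the adjustment step has a closed form; a toy sketch with invented numbers:

        import numpy as np

        # With every well assigned the same fugitive rate, modeled enhancements
        # scale linearly with that rate, so the best-fit multiplier is analytic.
        # The arrays below are illustrative, not campaign data.
        observed = np.array([42.0, 55.0, 38.0, 61.0])      # ppb CH4 enhancements
        modeled_2pct = np.array([60.0, 71.0, 50.0, 90.0])  # model, 2% leakage guess

        alpha = observed @ modeled_2pct / (modeled_2pct @ modeled_2pct)
        print(f"scale {alpha:.2f} -> leakage ~ {2.0 * alpha:.2f}% of production")
        # alpha < 1 mirrors the study's finding that the first-guess 2% rate
        # overestimates the regional fugitive emissions.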

  6. Dynamical phase diagrams of a love capacity constrained prey-predator model

    Science.gov (United States)

    Simin, P. Toranj; Jafari, Gholam Reza; Ausloos, Marcel; Caiafa, Cesar Federico; Caram, Facundo; Sonubi, Adeyemi; Arcagni, Alberto; Stefani, Silvana

    2018-02-01

    One interesting question in love relationships is: what, finally, is the end of this love relationship, and when? Using a prey-predator Verhulst-Lotka-Volterra (VLV) model, we incorporate cooperation and competition tendencies between people in order to describe a "love dilemma game". We select the simplest yet already complex case for studying the set of nonlinear differential equations, i.e., that involving three persons, each being at the same time prey and predator. We describe four different scenarios in such a love game, containing either a one-way love or a love triangle. Our results show that it is hard to love more than one person simultaneously. Moreover, loving several people simultaneously is an unstable state. We find conditions under which persons tend to have a friendly relationship and love someone in spite of their antagonistic interaction. We demonstrate the dynamics by displaying flow diagrams.
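
    A minimal sketch of dynamics in this spirit: three coupled Verhulst-Lotka-Volterra-type equations integrated with SciPy. The interaction matrix below encodes one illustrative love-triangle scenario, not the paper's parameter choices:

        import numpy as np
        from scipy.integrate import solve_ivp

        # dx_i/dt = r_i * x_i * (1 - x_i) + x_i * sum_j A_ij * x_j, where x_i is
        # person i's "feeling", A_ij > 0 encodes cooperation (love) and A_ij < 0
        # competition. All coefficients are illustrative.
        r = np.array([1.0, 0.8, 0.9])
        A = np.array([[ 0.0,  0.6, -0.5],   # 1 loves 2, competes with 3
                      [ 0.4,  0.0, -0.6],   # 2 loves 1, competes with 3
                      [ 0.7, -0.3,  0.0]])  # 3 loves 1, competes with 2

        def rhs(t, x):
            return r * x * (1 - x) + x * (A @ x)

        sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.2, 0.15])
        print(sol.y[:, -1])  # long-time state: who is still "in the game"

    Scanning the initial conditions and the signs/magnitudes of A over a grid is the numerical analogue of the flow diagrams and phase boundaries the paper reports.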

  7. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    Science.gov (United States)

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the state couplings in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into the local controllers. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable.

  8. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  9. Water-Constrained Electric Sector Capacity Expansion Modeling Under Climate Change Scenarios

    Science.gov (United States)

    Cohen, S. M.; Macknick, J.; Miara, A.; Vorosmarty, C. J.; Averyt, K.; Meldrum, J.; Corsi, F.; Prousevitch, A.; Rangwala, I.

    2015-12-01

    Over 80% of U.S. electricity generation uses a thermoelectric process, which requires significant quantities of water for power plant cooling. This water requirement exposes the electric sector to vulnerabilities related to shifts in water availability driven by climate change as well as reductions in power plant efficiencies. Electricity demand is also sensitive to climate change, which in most of the United States leads to warming temperatures that increase total cooling-degree days. The resulting demand increase is typically greater for peak demand periods. This work examines the sensitivity of the development and operations of the U.S. electric sector to the impacts of climate change using an electric sector capacity expansion model that endogenously represents seasonal and local water resource availability as well as climate impacts on water availability, electricity demand, and electricity system performance. Capacity expansion portfolios and water resource implications from 2010 to 2050 are shown at high spatial resolution under a series of climate scenarios. Results demonstrate the importance of water availability for future electric sector capacity planning and operations, especially under more extreme hotter and drier climate scenarios. In addition, region-specific changes in electricity demand and water resources require region-specific responses that depend on local renewable resource availability and electricity market conditions. Climate change and the associated impacts on water availability and temperature can affect the types of power plants that are built, their location, and their impact on regional water resources.

  10. A transition-constrained discrete hidden Markov model for automatic sleep staging

    Directory of Open Access Journals (Sweden)

    Pan Shing-Tai

    2012-08-01

    Background: Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings, including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable. Method: The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized, including temporal and spectrum analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, 2-fold cross-validation was performed during this experiment. Results: Overall agreement between the expert and the presented results is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurately classified stage was SWS (94.9%), and the least-accurately classified stage was S1. Conclusion: The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies.
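
    The "transition-constrained" ingredient can be sketched compactly: zero entries in the HMM transition matrix forbid physiologically implausible stage jumps, and Viterbi decoding in log space then never selects them. Stages, probabilities, and the observation sequence below are illustrative, not the trained model:

        import numpy as np

        stages = ["Wake", "S1", "S2", "SWS", "REM"]
        A = np.array([[.80, .20, .00, .00, .00],   # Wake -> {Wake, S1} only
                      [.10, .60, .30, .00, .00],
                      [.00, .05, .75, .15, .05],
                      [.00, .00, .20, .80, .00],
                      [.05, .05, .10, .00, .80]])  # zeros = forbidden transitions
        B = np.array([[.70, .10, .10, .05, .05],
                      [.20, .50, .20, .05, .05],
                      [.05, .20, .50, .15, .10],
                      [.00, .05, .25, .70, .00],
                      [.10, .20, .10, .00, .60]])  # P(feature symbol | stage)
        pi = np.array([.9, .1, 0, 0, 0])
        obs = [0, 1, 2, 2, 3, 3, 2, 4, 4, 0]       # quantized EEG/EOG/EMG features

        # Viterbi decoding in log space; log(0) -> -inf blocks forbidden paths.
        with np.errstate(divide="ignore"):
            logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
        delta = logpi + logB[:, obs[0]]
        back = []
        for o in obs[1:]:
            trans = delta[:, None] + logA
            back.append(np.argmax(trans, axis=0))  # best predecessor per stage
            delta = np.max(trans, axis=0) + logB[:, o]
        path = [int(np.argmax(delta))]
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        print([stages[s] for s in reversed(path)])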

  11. 3Es System Optimization under Uncertainty Using Hybrid Intelligent Algorithm: A Fuzzy Chance-Constrained Programming Model

    Directory of Open Access Journals (Sweden)

    Jiekun Song

    2016-01-01

    Harmonious development of the 3Es (economy-energy-environment) system is the key to realizing regional sustainable development. The structure and components of the 3Es system are analyzed. Based on the analysis of a causality diagram, GDP and industrial structure are selected as the target parameters of the economy subsystem, energy consumption intensity is selected as the target parameter of the energy subsystem, and the emissions of COD, ammonia nitrogen, SO2 and NOX, together with CO2 emission intensity, are selected as the target parameters of the environment subsystem. Fixed-asset investment of the three industries, total energy consumption, and investment in environmental pollution control are selected as the decision variables. By regarding the parameters of the 3Es system optimization as fuzzy numbers, a fuzzy chance-constrained goal programming (FCCGP) model is constructed, and a hybrid intelligent algorithm including fuzzy simulation and a genetic algorithm is proposed for solving it. The results of an empirical analysis of Shandong province, China, show that the FCCGP model can reflect the inherent relationships and evolution law of the 3Es system and provide effective decision-making support for 3Es system optimization.

  12. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    Science.gov (United States)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land-surface parameterizations that is available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" to extend the land-surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions combined with extremely heterogeneous land-use that result in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, too, hydrological modelling benefitted greatly from constraining parameters with such distributed, citizen-science-generated streamflow data. This study demonstrates regional hydrological modelling and prediction that integrates citizen science with a land surface model; within this framework, the usual limitation of data scarcity can indeed be addressed. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.

  13. Constraining the Dynamics of Periodic Behavior at Mt. Semeru, Indonesia, Combining Numerical Modeling and Field Measurements of Gas emission

    Science.gov (United States)

    Smekens, J.; Clarke, A. B.; De'Michieli Vitturi, M.; Moore, G. M.

    2012-12-01

    Mt. Semeru is one of the most active explosive volcanoes on the island of Java in Indonesia. The current eruption style consists of small but frequent explosions and/or gas releases (several times a day) accompanied by continuous lava effusion that sporadically produces block-and-ash flows down the SE flank of the volcano. Semeru presents a unique opportunity to investigate the magma ascent conditions that produce this kind of persistent periodic behavior and the coexistence of explosive and effusive eruptions. In this work we use DOMEFLOW, a 1.5D transient isothermal numerical model, to investigate the dynamics of lava extrusion at Semeru. Petrologic observations from tephra and ballistic samples collected at the summit help us constrain the initial conditions of the system. Preliminary model runs produced periodic lava extrusion and pulses of gas release at the vent, with a cycle period on the order of hours, even though a steady magma supply rate was prescribed at the bottom of the conduit. Enhanced shallow permeability implemented in the model appears to create a dense plug in the shallow subsurface, which in turn plays a critical role in creating and controlling the observed periodic behavior. We measured SO2 fluxes just above the vent, using a custom UV imaging system. The device consists of two high-sensitivity CCD cameras with narrow UV filters centered at 310 and 330 nm, and a USB2000+ spectrometer for calibration and distance correction. The method produces high-frequency flux series with an accurate determination of the wind speed and plume geometry. The model results, when combined with gas measurements, and measurements of sulfur in both the groundmass and melt inclusions in eruptive products, could be used to create a volatile budget of the system. Furthermore, a well-calibrated model of the system will ultimately allow the characteristic periodicity and corresponding gas flux to be used as a proxy for magma supply rate.

  14. Refined Use of Satellite Aerosol Optical Depth Snapshots to Constrain Biomass Burning Emissions in the GOCART Model

    Science.gov (United States)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James

    2017-10-01

    Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide location and strength for BB aerosol sources. Our previous work shows that, to first order, satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the smoke source strength. We now refine the satellite-snapshot method and investigate where applying simple multiplicative emission adjustment factors alone to the widely used Global Fire Emission Database version 3 emission inventory can achieve regional-scale consistency between Moderate Resolution Imaging Spectroradiometer (MODIS) AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport model. The model and satellite AOD are compared globally, over a set of BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. Regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. We refine our approach to address physically based limitations of our earlier work (1) by expanding the number of fire cases from 124 to almost 900, (2) by using scaled reanalysis-model simulations to fill missing AOD retrievals in the MODIS observations, (3) by distinguishing the BB components of the total aerosol load from background aerosol in the near-source regions, and (4) by including emissions from fires too small to be identified explicitly in the satellite observations. The small-fire emission adjustment shows the complementary nature of correcting for source strength and adding geographically distinct missing sources. Our analysis indicates that the method works best for fire cases where the BB fraction of total AOD is high, primarily evergreen or deciduous forests. In heavily polluted or agricultural burning regions, where smoke and background AOD values tend to be comparable, this approach

  15. Use of stratigraphic models as soft information to constrain stochastic modeling of rock properties: Development of the GSLIB-Lynx integration module

    International Nuclear Information System (INIS)

    Cromer, M.V.; Rautman, C.A.

    1995-10-01

    Rock properties in volcanic units at Yucca Mountain are controlled largely by relatively deterministic geologic processes related to the emplacement, cooling, and alteration history of the tuffaceous lithologic sequence. Differences in the lithologic character of the rocks have been used to subdivide the rock sequence into stratigraphic units, and the deterministic nature of the processes responsible for the character of the different units can be used to infer the rock material properties likely to exist in unsampled regions. This report proposes a quantitative, theoretically justified method of integrating interpretive geometric models, showing the three-dimensional distribution of different stratigraphic units, with numerical stochastic simulation techniques drawn from geostatistics. This integration of soft, constraining geologic information with hard, quantitative measurements of various material properties can produce geologically reasonable, spatially correlated models of rock properties that are free from stochastic artifacts for use in subsequent physical-process modeling, such as the numerical representation of ground-water flow and radionuclide transport. Prototype modeling conducted using the GSLIB-Lynx Integration Module computer program, known as GLINTMOD, has successfully demonstrated the proposed integration technique. The method involves the selection of stratigraphic-unit-specific material-property expected values that are then used to constrain the probability function from which a material property of interest at an unsampled location is simulated.

  16. Constraining the heat flux between Enceladus’ tiger stripes: numerical modeling of funiscular plains formation

    Science.gov (United States)

    Bland, Michael T.; McKinnon, William B; Schenk, Paul M.

    2015-01-01

    The Cassini spacecraft’s Composite Infrared Spectrometer (CIRS) has observed at least 5 GW of thermal emission at Enceladus’ south pole. The vast majority of this emission is localized on the four long, parallel, evenly-spaced fractures dubbed tiger stripes. However, the thermal emission from regions between the tiger stripes has not been determined. These spatially localized regions have a unique morphology consisting of short-wavelength (∼1 km) ridges and troughs with topographic amplitudes of ∼100 m, and a generally ropy appearance that has led to them being referred to as “funiscular terrain.” Previous analysis pursued the hypothesis that the funiscular terrain formed via thin-skinned folding, analogous to that occurring on a pahoehoe flow top (Barr, A.C., Preuss, L.J. [2010]. Icarus 208, 499–503). Here we use finite element modeling of lithospheric shortening to further explore this hypothesis. Our best-case simulations reproduce funiscular-like morphologies, although our simulated fold wavelengths after 10% shortening are 30% longer than those observed. Reproducing short-wavelength folds requires high effective surface temperatures (∼185 K), an ice lithosphere (or high-viscosity layer) with a low thermal conductivity (one-half to one-third that of intact ice or lower), and very high heat fluxes (perhaps as great as 400 mW m−2). These conditions are driven by the requirement that the high-viscosity layer remain extremely thin (≲200 m). Whereas the required conditions are extreme, they can be met if a layer of fine grained plume material 1–10 m thick, or a highly fractured ice layer >50 m thick insulates the surface, and the lithosphere is fractured throughout as well. The source of the necessary heat flux (a factor of two greater than previous estimates) is less obvious. We also present evidence for an unusual color/spectral character of the ropy terrain, possibly related to its unique surface texture. Our simulations demonstrate

  17. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone
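
    The quantitative link between the two techniques is a footprint convolution: a receptor's CO2 enhancement is the footprint-weighted sum of the surface fluxes. A schematic sketch with synthetic fields (grid sizes, units, and values are illustrative, not WRF-STILT or Biome-BGC output):

        import numpy as np

        # Enhancement dC = sum_ij H_ij * F_ij, with H the transport footprint
        # (ppm per (umol m-2 s-1)) and F the bottom-up surface flux field.
        ny, nx = 40, 60
        rng = np.random.default_rng(0)
        footprint = rng.gamma(0.5, 5e-4, (ny, nx))  # decays away from the receptor
        flux_bgc = rng.normal(-2.0, 1.5, (ny, nx))  # umol m-2 s-1 (uptake < 0)

        enhancement = np.sum(footprint * flux_bgc)  # ppm relative to the boundary
        print(f"modeled CO2 enhancement: {enhancement:.2f} ppm")
        # Comparing such modeled enhancements with tower CO2 time series is what
        # lets the atmospheric data evaluate (or optimize) the bottom-up fluxes.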

  18. Geologic modeling constrained by seismic and dynamical data; Modelisation geologique contrainte par les donnees sismiques et dynamiques

    Energy Technology Data Exchange (ETDEWEB)

    Pianelo, L.

    2001-09-01

    Matching procedures are often used in reservoir production to improve geological models. In reservoir engineering, history matching leads to updating petrophysical parameters in fluid flow simulators to fit the results of the calculations with observed data. In the same line, seismic parameters are inverted to allow the numerical recovery of seismic acquisitions. However, it is well known that these inverse problems are poorly constrained. The idea of this original work is to simultaneously match both the permeability and the acoustic impedance of the reservoir, for an enhancement of the resulting geological model. To do so, both parameters are linked using observed relations and/or the classic Wyllie (porosity-impedance) and Carman-Kozeny (porosity-permeability) relationships. Hence production data are added to the seismic match, and seismic observations are used for the permeability recovery. The work consists of developing numerical prototypes of a 3-D fluid flow simulator and a 3-D seismic acquisition simulator, and then implementing the coupled inversion loop of the permeability and the acoustic impedance of the two models. We can then test the approach on a realistic 3-D case. Comparison of the coupled matching with the two classical ones demonstrates the efficiency of our method. We significantly reduce the number of possible solutions, and hence the number of scenarios. In addition, the added information leads to a natural improvement of the obtained models, especially in the spatial localization of the permeability contrasts. The improvement is significant both in the distribution of the two inverted parameters and in the speed of the operation. This work is an important step toward data integration, and leads to better reservoir characterization. This original algorithm could also be useful in reservoir monitoring, history matching and in optimization of production. This new and original method is patented and
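
    The two classical relationships that couple the inversions are straightforward to evaluate; the sketch below maps a common porosity field to acoustic impedance (Wyllie time-average) and permeability (Carman-Kozeny), with illustrative sandstone/brine properties (not values from the thesis):

        import numpy as np

        phi = np.linspace(0.05, 0.30, 6)        # porosity

        # Wyllie time-average: 1/V = phi/V_fluid + (1 - phi)/V_matrix
        v_fluid, v_matrix = 1500.0, 5500.0      # m/s (assumed)
        rho_fluid, rho_matrix = 1000.0, 2650.0  # kg/m^3 (assumed)
        v = 1.0 / (phi / v_fluid + (1 - phi) / v_matrix)
        rho = phi * rho_fluid + (1 - phi) * rho_matrix
        impedance = rho * v                     # kg m-2 s-1

        # Carman-Kozeny: k = phi^3 / (c * S^2 * (1 - phi)^2)
        c_k, S = 5.0, 1.5e5                     # Kozeny constant, specific surface 1/m
        k_m2 = phi**3 / (c_k * S**2 * (1 - phi)**2)

        for p, z, k in zip(phi, impedance, k_m2):
            print(f"phi={p:.2f}  Z={z:.3e}  k={k / 9.869e-13:.3f} D")
        # A single porosity field thus ties impedance (seismic) to permeability
        # (production), which is what makes the coupled matching reduce the
        # space of admissible geological models.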

  19. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
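
    The core idea of constrained evolution, periodically projecting the evolved state back onto the constraint surface, can be illustrated well away from general relativity. In the toy Python sketch below, a forward-Euler harmonic oscillator drifts off its energy shell (standing in for growing constraint violations), and a cheap periodic 'constraint solve' keeps the violation small. This is an analogy only, not the ADM or Frittelli-Reula system.

      import numpy as np

      # Forward-Euler oscillator: its conserved energy plays the role of the
      # Einstein constraints. Free evolution drifts off the constraint surface;
      # a periodic, cheap projection (rescaling) keeps the violation small.
      def evolve(steps, dt=1e-3, project_every=None):
          x, v = 1.0, 0.0
          e0 = 0.5 * (x * x + v * v)            # the "constraint" value
          for n in range(1, steps + 1):
              x, v = x + dt * v, v - dt * x     # unconstrained update
              if project_every and n % project_every == 0:
                  s = np.sqrt(e0 / (0.5 * (x * x + v * v)))
                  x, v = s * x, s * v           # constraint solve by projection
          return abs(0.5 * (x * x + v * v) - e0)

      print("free evolution violation:       ", evolve(100_000))
      print("constrained evolution violation:", evolve(100_000, project_every=100))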

  20. Constraining Parameters in Pulsar Models of Repeating FRB 121102 with High-energy Follow-up Observations

    International Nuclear Information System (INIS)

    Xiao, Di; Dai, Zi-Gao

    2017-01-01

    Recently, a precise (sub-arcsecond) localization of the repeating fast radio burst (FRB) 121102 led to the discovery of persistent radio and optical counterparts, the identification of a host dwarf galaxy at a redshift of z = 0.193, and several campaigns of searches for higher-frequency counterparts, which gave only upper limits on the emission flux. Although the origin of FRBs remains unknown, most of the existing theoretical models are associated with pulsars, or more specifically, magnetars. In this paper, we explore persistent high-energy emission from a rapidly rotating highly magnetized pulsar associated with FRB 121102 if internal gradual magnetic dissipation occurs in the pulsar wind. We find that the efficiency of converting the spin-down luminosity to the high-energy (e.g., X-ray) luminosity is generally much smaller than unity, even for a millisecond magnetar. This provides an explanation for the non-detection of high-energy counterparts to FRB 121102. We further constrain the spin period and surface magnetic field strength of the pulsar with the current high-energy observations. In addition, we compare our results with the constraints given by the other methods in previous works and expect to apply our new method to some other open issues in the future.
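
    For scale, arguments of this kind start from the spin-down luminosity budget of the pulsar. The sketch below evaluates the standard vacuum dipole formula for a millisecond magnetar; the field strength, spin period, stellar radius, and the sub-unity X-ray efficiency factor are illustrative assumptions, not the paper's fitted values.

      import numpy as np

      def spindown_luminosity(b_gauss, period_s, radius_cm=1e6):
          """Vacuum dipole spin-down luminosity L = B^2 R^6 Omega^4 / (6 c^3), cgs."""
          c = 2.998e10
          omega = 2.0 * np.pi / period_s
          return b_gauss**2 * radius_cm**6 * omega**4 / (6.0 * c**3)

      L_sd = spindown_luminosity(b_gauss=1e15, period_s=1e-3)  # millisecond magnetar
      eta = 1e-3   # assumed X-ray conversion efficiency, << 1 per the abstract
      print(f"L_sd ~ {L_sd:.2e} erg/s, L_X ~ {eta * L_sd:.2e} erg/s")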

  2. Establishing a regulatory value chain model: An innovative approach to strengthening medicines regulatory systems in resource-constrained settings.

    Science.gov (United States)

    Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin

    2016-05-01

    Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.

  3. Using Dynamic Contrast-Enhanced Magnetic Resonance Imaging Data to Constrain a Positron Emission Tomography Kinetic Model: Theory and Simulations

    Directory of Open Access Journals (Sweden)

    Jacob U. Fluckiger

    2013-01-01

    Full Text Available We show how dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data can constrain a compartmental model for analyzing dynamic positron emission tomography (PET) data. We first develop the theory that enables the use of DCE-MRI data to separate whole tissue time activity curves (TACs) available from dynamic PET data into individual TACs associated with the blood space, the extravascular-extracellular space (EES), and the extravascular-intracellular space (EIS). Then we simulate whole tissue TACs over a range of physiologically relevant kinetic parameter values and show that using appropriate DCE-MRI data can separate the PET TAC into the three components with accuracy that is noise dependent. The simulations show that accurate blood, EES, and EIS TACs can be obtained, as evidenced by concordance correlation coefficients >0.9 between the true and estimated TACs. Additionally, provided that the estimated DCE-MRI parameters are within 10% of their true values, the errors in the PET kinetic parameters are within approximately 20% of their true values. The parameters returned by this approach may provide new information on the transport of a tracer in a variety of dynamic PET studies.
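
    The agreement metric quoted above, the concordance correlation coefficient, penalizes both scatter and systematic offset between a true and an estimated curve. A minimal sketch using Lin's usual formula on a synthetic time activity curve (curve shape and noise level are arbitrary illustrations):

      import numpy as np

      def concordance_ccc(x, y):
          """Lin's concordance correlation coefficient between two curves."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          cov = np.mean((x - x.mean()) * (y - y.mean()))
          return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      t = np.linspace(0.0, 60.0, 200)              # minutes
      tac_true = 5.0 * t * np.exp(-t / 15.0)       # toy tissue activity curve
      noise = np.random.default_rng(1).normal(0.0, 0.5, t.size)
      tac_est = 1.05 * tac_true + noise            # estimated curve: slight bias + noise
      print(f"CCC = {concordance_ccc(tac_true, tac_est):.3f}")  # >0.9 = good agreement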

  4. An Interval Fuzzy-Stochastic Chance-Constrained Programming Based Energy-Water Nexus Model for Planning Electric Power Systems

    Directory of Open Access Journals (Sweden)

    Jing Liu

    2017-11-01

    Full Text Available In this study, an interval fuzzy-stochastic chance-constrained programming based energy-water nexus (IFSCP-WEN) model is developed for planning electric power systems (EPS). The IFSCP-WEN model can tackle uncertainties expressed as possibility and probability distributions, as well as interval values. Different credibility (i.e., γ) levels and probability (i.e., qi) levels are set to reflect relationships among water supply, electricity generation, system cost, and constraint-violation risk. Results reveal that different γ and qi levels can lead to changed system cost, imported electricity, electricity generation, and water supply. Results also disclose that the studied EPS would tend to transition from coal-dominated to clean-energy-dominated generation. Gas-fired units would be the main electric utility supplying electricity at the end of the planning horizon, occupying [28.47, 30.34]% (where 28.47% and 30.34% represent the lower and upper bounds of the interval value, respectively) of the total electricity generation. Correspondingly, water allocated to gas-fired units would reach the highest share, occupying [33.92, 34.72]% of the total water supply. Surface water would be the main water source, accounting for more than [40.96, 43.44]% of the total water supply. The ratio of recycled water to total water supply would increase by about [11.37, 14.85]%. Results of the IFSCP-WEN model present its potential for sustainable EPS planning by co-optimizing energy and water resources.
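
    The workhorse behind such models is the conversion of a probabilistic constraint into a deterministic equivalent: for a normally distributed water availability W, Pr(w·x ≤ W) ≥ q tightens the cap by σ·Φ⁻¹(q). A two-technology sketch of that step is below; costs, water intensities, and demand are invented for illustration, and the full IFSCP-WEN model additionally handles fuzzy and interval parameters.

      import numpy as np
      from scipy.optimize import linprog
      from scipy.stats import norm

      cost = np.array([30.0, 45.0])    # $/MWh for coal- and gas-fired units (illustrative)
      water = np.array([2.0, 0.9])     # m3/MWh water intensity (illustrative)
      demand = 1000.0                  # MWh that must be supplied

      # Chance constraint Pr(water @ x <= W) >= q with W ~ N(mu, sigma) becomes
      # the deterministic equivalent water @ x <= mu - sigma * Phi^-1(q).
      mu_w, sigma_w, q = 1600.0, 150.0, 0.95
      w_cap = mu_w - sigma_w * norm.ppf(q)

      res = linprog(c=cost,
                    A_ub=np.array([water, [-1.0, -1.0]]),   # water cap; meet demand
                    b_ub=np.array([w_cap, -demand]),
                    bounds=[(0.0, None), (0.0, None)])
      print(res.x.round(1), round(res.fun, 1))   # generation mix and cost at 95% reliability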

  5. Constraining drivers of basin exhumation in the Molasse Basin by combining low-temperature thermochronology, thermal history and kinematic modeling

    Science.gov (United States)

    Luijendijk, Elco; von Hagke, Christoph; Hindle, David

    2017-04-01

    Due to a wealth of geological and thermochronology data, the northern foreland basin of the European Alps is an ideal natural laboratory for understanding the dynamics of foreland basins and their interaction with surface and geodynamic processes. The northern foreland basin of the Alps has been exhumed since the Miocene. The timing, rate and cause of this phase of exhumation are still enigmatic. We compile all available thermochronology and organic maturity data and use a new thermal history model, PyBasin, to quantify the rate and timing of exhumation that can explain these data. In addition, we quantify the amount of tectonic exhumation using a new kinematic model for the part of the basin that is passively moved above the detachment of the Jura Mountains. Our results show that the vitrinite reflectance, apatite fission track data and cooling rates show no clear difference between the thrusted and folded part of the foreland basin and the undeformed part of the foreland basin. The undeformed plateau Molasse shows a high amount of cooling during the Neogene, of 40 to 100 °C, which corresponds to >1.0 km of exhumation. Calculated rates of exhumation suggest that drainage reorganization can only explain a small part of the observed exhumation and cooling. Similarly, tectonic transport over a detachment ramp cannot explain the magnitude, timing and wavelength of the observed cooling signal. We conclude that the observed cooling rates suggest large-wavelength exhumation that is probably caused by lithospheric-scale processes. In contrast to previous studies, we find that the timing of exhumation is poorly constrained. Uncertainty analysis shows that models with exhumation starting as early as 12 Ma or as late as 2 Ma can all explain the observed data.

  6. p53 constrains progression to anaplastic thyroid carcinoma in a Braf-mutant mouse model of papillary thyroid cancer

    Science.gov (United States)

    McFadden, David G.; Vernon, Amanda; Santiago, Philip M.; Martinez-McFaline, Raul; Bhutkar, Arjun; Crowley, Denise M.; McMahon, Martin; Sadow, Peter M.; Jacks, Tyler

    2014-01-01

    Anaplastic thyroid carcinoma (ATC) has among the worst prognoses of any solid malignancy. The low incidence of the disease has in part precluded systematic clinical trials and tissue collection, and there has been little progress in developing effective therapies. v-raf murine sarcoma viral oncogene homolog B (BRAF) and tumor protein p53 (TP53) mutations co-occur in a high proportion of ATCs, particularly those associated with a precursor papillary thyroid carcinoma (PTC). To develop an adult-onset model of BRAF-mutant ATC, we generated a thyroid-specific CreER transgenic mouse. We used a Cre-regulated BrafV600E mouse and a conditional Trp53 allelic series to demonstrate that p53 constrains progression from PTC to ATC. Gene expression and immunohistochemical analyses of murine tumors identified the cardinal features of human ATC including loss of differentiation, local invasion, distant metastasis, and rapid lethality. We used small-animal ultrasound imaging to monitor autochthonous tumors and showed that treatment with the selective BRAF inhibitor PLX4720 improved survival but did not lead to tumor regression or suppress signaling through the MAPK pathway. The combination of PLX4720 and the MAPK/ERK kinase (MEK) inhibitor PD0325901 more completely suppressed MAPK pathway activation in mouse and human ATC cell lines and improved the structural response and survival of ATC-bearing animals. This model expands the limited repertoire of autochthonous models of clinically aggressive thyroid cancer, and these data suggest that small-molecule MAPK pathway inhibitors hold clinical promise in the treatment of advanced thyroid carcinoma. PMID:24711431

  7. Revising the retrieval technique of a long-term stratospheric HNO3 data set. From a constrained matrix inversion to the optimal estimation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy

    2011-07-01

    The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5 N, 68.8 W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100±20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles
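
    The averaging kernels and the sensitivity diagnostic mentioned above come from the standard optimal estimation formalism: with forward model K, noise covariance Se and a priori covariance Sa, the gain is G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 and the averaging kernel matrix is A = GK. A small numerical sketch with an invented smoothing-kernel forward model (all matrices are illustrative, not GBMS quantities):

      import numpy as np

      n = 30                                    # retrieval grid levels
      idx = np.arange(n)
      K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)   # smoothing forward model
      Se = 0.1 * np.eye(n)                      # measurement-noise covariance
      Sa = 1.0 * np.eye(n)                      # a priori covariance

      # Optimal estimation gain and averaging kernel:
      #   G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1,   A = G K
      G = np.linalg.solve(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa),
                          K.T @ np.linalg.inv(Se))
      A = G @ K
      sensitivity = A.sum(axis=1)               # ~1 where the retrieval is measurement-driven
      print(sensitivity.round(2))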

  8. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  9. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data support the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Evaluating terrestrial water storage variations from regionally constrained GRACE mascon data and hydrological models over Southern Africa – Preliminary results

    DEFF Research Database (Denmark)

    Krogh, Pernille Engelbredt; Andersen, Ole Baltazar; Michailovsky, Claire Irene B.

    2010-01-01

    In this paper we explore an experimental set of regionally constrained mascon blocks over Southern Africa where a system of 1.25° × 1.5° and 1.5° × 1.5° blocks has been designed. The blocks are divided into hydrological regions based on drainage patterns of the largest river basins, and are constrained...... Malawi with water level from altimetry. Results show that weak constraints across regions in addition to intra-regional constraints are necessary to reach reasonable mass variations....

  11. Modeling biomass burning over the South, South East and East Asian Monsoon regions using a new, satellite constrained approach

    Science.gov (United States)

    Lan, R.; Cohen, J. B.

    2017-12-01

    Biomass burning over the South, South East and East Asian Monsoon regions is a crucial contributor to the total local aerosol loading. Furthermore, the impact of the ITCZ and Monsoonal circulation patterns, coupled with complex topography, also has a prominent effect on the aerosol loading throughout much of the Northern Hemisphere. However, at the present time, biomass burning emissions are highly underestimated over this region, in part due to under-reported emissions in space and time, and in part due to an incomplete understanding of the physics and chemistry of the aerosols emitted in fires and formed downwind from them. Hence, a better understanding of the four-dimensional source distribution, plume rise, and in-situ processing, in particular in regions with significant quantities of urban air pollutants, is essential to advance our knowledge of this problem. This work uses a new modeling methodology based on the simultaneous constraints of measured AOD and some trace gases over the region. The results of the 4-D constrained emissions are further expanded upon using different fire plume rise and in-situ processing assumptions. Comparisons between the results and additional ground-based and remotely sensed measurements, including AERONET, CALIOP, and NOAA and other ground networks, are included. The end results reveal a trio of insights into the nonlinear processes most important for understanding the impacts of biomass burning in this part of the world. Model-measurement comparisons are found to be consistent during the typical burning year of 2016. First, the model performs better under the new emissions representations than it does using any of the standard hotspot-based approaches currently employed by the community. Second, long-range transport and mixing between the boundary layer and free troposphere contribute to the spatial-temporal variations. Third, we indicate some source regions that are new, either because of increased urbanization, or of

  12. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    for the discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions...... in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation...

  13. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    Science.gov (United States)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
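
    A skeletal version of the Latin-hypercube-based behavioural filtering used in such model-data fusion: sample the parameter priors, run the model, and keep only runs whose error against observations falls below an acceptance threshold. The toy model, prior ranges, and RMSE threshold below are invented stand-ins, not LandscapeDNDC parameters.

      import numpy as np
      from scipy.stats import qmc

      def toy_model(params, t):
          """Stand-in for the process model: two parameters -> a flux series."""
          a, b = params
          return a * np.exp(-b * t)

      t = np.linspace(0.0, 10.0, 50)
      observed = 2.0 * np.exp(-0.3 * t) + np.random.default_rng(3).normal(0.0, 0.05, t.size)

      # Latin hypercube sample of the prior parameter ranges.
      sampler = qmc.LatinHypercube(d=2, seed=3)
      samples = qmc.scale(sampler.random(2000), l_bounds=[0.5, 0.05], u_bounds=[4.0, 1.0])

      # Keep only "behavioural" runs: RMSE below an acceptance threshold.
      rmse = np.array([np.sqrt(np.mean((toy_model(p, t) - observed) ** 2)) for p in samples])
      behavioural = samples[rmse < 0.1]
      print(f"{len(behavioural)} behavioural runs of 2000; parameter ranges:",
            behavioural.min(axis=0).round(2), behavioural.max(axis=0).round(2))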

  14. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    Science.gov (United States)

    Zoeller, G.

    2017-12-01

    Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. Taking these shortcomings into account, long-term recurrence intervals are usually unstable as long as no additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.
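
    The distributional core of the approach can be sketched directly: the Brownian Passage Time density with mean recurrence μ and aperiodicity α (tied, per the abstract, to a b-value near one) yields a time-dependent conditional event probability by numerical integration. The numbers below are illustrative, not the Southern California estimates.

      import numpy as np

      def bpt_pdf(t, mean, alpha):
          """Brownian Passage Time density with mean recurrence `mean` (years)
          and aperiodicity `alpha` (related to the Gutenberg-Richter b-value)."""
          return (np.sqrt(mean / (2.0 * np.pi * alpha**2 * t**3))
                  * np.exp(-((t - mean) ** 2) / (2.0 * mean * alpha**2 * t)))

      mean, alpha = 150.0, 0.5                  # illustrative values, years
      t = np.linspace(1e-3, 1500.0, 200_000)
      pdf, dt = bpt_pdf(t, mean, alpha), t[1] - t[0]

      elapsed = 120.0                           # years since the last large event
      survival = pdf[t > elapsed].sum() * dt
      in_window = pdf[(t > elapsed) & (t <= elapsed + 50.0)].sum() * dt
      print(f"P(event in next 50 yr | quiet {elapsed:.0f} yr) = {in_window / survival:.2f}")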

  15. Constraining the Timing of Lobate Debris Apron Emplacement at Martian Mid-Latitudes Using a Numerical Model of Ice Flow

    Science.gov (United States)

    Parsons, R. A.; Nimmo, F.

    2010-03-01

    SHARAD observations constrain the thickness and dust content of lobate debris aprons (LDAs). Simulations of dust-free ice-sheet flow over a flat surface at 205 K for 10-100 m.y. give LDA lengths and thicknesses that are consistent with observations.

  16. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level-custodial technicolor-and argue that these models...

  17. An Architecturally Constrained Model of Random Number Generation and its Application to Modelling the Effect of Generation Rate

    Directory of Open Access Journals (Sweden)

    Nicholas J. Sexton

    2014-07-01

    Full Text Available Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage) the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behaviour (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control ('executive') processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes.

  18. A fuzzy chance-constrained programming model with type 1 and type 2 fuzzy sets for solid waste management under uncertainty

    Science.gov (United States)

    Ma, Xiaolin; Ma, Chi; Wan, Zhifang; Wang, Kewei

    2017-06-01

    Effective management of municipal solid waste (MSW) is critical for urban planning and development. This study aims to develop an integrated type 1 and type 2 fuzzy sets chance-constrained programming (ITFCCP) model for tackling regional MSW management problem under a fuzzy environment, where waste generation amounts are supposed to be type 2 fuzzy variables and treated capacities of facilities are assumed to be type 1 fuzzy variables. The evaluation and expression of uncertainty overcome the drawbacks in describing fuzzy possibility distributions as oversimplified forms. The fuzzy constraints are converted to their crisp equivalents through chance-constrained programming under the same or different confidence levels. Regional waste management of the City of Dalian, China, was used as a case study for demonstration. The solutions under various confidence levels reflect the trade-off between system economy and reliability. It is concluded that the ITFCCP model is capable of helping decision makers to generate reasonable waste-allocation alternatives under uncertainties.

  19. Hybrid Active/Passive Control of Sound Radiation from Panels with Constrained Layer Damping and Model Predictive Feedback Control

    Science.gov (United States)

    Cabell, Randolph H.; Gibbs, Gary P.

    2000-01-01

    make the controller adaptive. For example, a mathematical model of the plant could be periodically updated as the plant changes, and the feedback gains recomputed from the updated model. To be practical, this approach requires a simple plant model that can be updated quickly with reasonable computational requirements. A recent paper by the authors discussed one way to simplify a feedback controller, by reducing the number of actuators and sensors needed for good performance. The work was done on a tensioned aircraft-style panel excited on one side by TBL flow in a low speed wind tunnel. Actuation was provided by a piezoelectric (PZT) actuator mounted on the center of the panel. For sensing, the responses of four accelerometers, positioned to approximate the response of the first radiation mode of the panel, were summed and fed back through the controller. This single input-single output topology was found to have nearly the same noise reduction performance as a controller with fifteen accelerometers and three PZT patches. This paper extends the previous results by looking at how constrained layer damping (CLD) on a panel can be used to enhance the performance of the feedback controller thus providing a more robust and efficient hybrid active/passive system. The eventual goal is to use the CLD to reduce sound radiation at high frequencies, then implement a very simple, reduced order, low sample rate adaptive controller to attenuate sound radiation at low frequencies. Additionally this added damping smoothes phase transitions over the bandwidth which promotes robustness to natural frequency shifts. Experiments were conducted in a transmission loss facility on a clamped-clamped aluminum panel driven on one side by a loudspeaker. A generalized predictive control (GPC) algorithm, which is suited to online adaptation of its parameters, was used in single input-single output and multiple input-single output configurations. Because this was a preliminary look at the potential

  20. Modeling the Land Use/Cover Change in an Arid Region Oasis City Constrained by Water Resource and Environmental Policy Change using Cellular Automata Model

    Science.gov (United States)

    Hu, X.; Li, X.; Lu, L.

    2017-12-01

    Land use/cover change (LUCC) is an important subject in research on global environmental change and sustainable development, while spatial simulation of land use/cover change is one of its key topics and remains difficult owing to the complexity of the system. The cellular automata (CA) model has an irreplaceable role in simulating land use/cover change processes because of its powerful spatial computing capability. However, the majority of current CA land use/cover models are binary-state models that cannot provide more general information about the overall spatial pattern of land use/cover change. Here, a multi-state logistic-regression-based Markov cellular automata (MLRMCA) model and a multi-state artificial-neural-network-based Markov cellular automata (MANNMCA) model were developed and used to simulate the complex land use/cover evolutionary process in an arid region oasis city constrained by water resources and environmental policy change, the Zhangye city, during the period 1990-2010. The results indicated that the MANNMCA model was superior to the MLRMCA model in simulation accuracy. This indicates that combining an artificial neural network with CA can more effectively capture the complex relationships between land use/cover change and a set of spatial variables. Although the MLRMCA model also has some advantages, the MANNMCA model is more appropriate for simulating complex land use/cover dynamics. The two proposed models are effective and reliable, and can reflect the spatial evolution of regional land use/cover changes. They also have potential implications for the impact assessment of water resources, ecological restoration, and sustainable urban development in arid areas.
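
    A stripped-down sketch of the logistic-regression Markov CA idea: each non-urban cell converts with a probability given by a logistic function of spatial drivers plus the classic CA neighbourhood term. The grid, drivers, and coefficients below are synthetic illustrations; the actual MLRMCA/MANNMCA models are calibrated on Zhangye data and include the water-resource and policy constraints.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 100
      land = (rng.random((n, n)) < 0.2).astype(int)   # 1 = urban, 0 = non-urban
      dist_water = rng.random((n, n))                 # illustrative spatial driver
      slope = rng.random((n, n))                      # illustrative spatial driver
      beta = np.array([-2.0, -1.5, -1.0, 3.0])        # assumed logistic coefficients

      def step(land):
          pad = np.pad(land, 1)                       # 8-neighbour urban count (CA term)
          neigh = sum(pad[1 + dy:n + 1 + dy, 1 + dx:n + 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - land
          z = beta[0] + beta[1] * dist_water + beta[2] * slope + beta[3] * neigh / 8.0
          p = 1.0 / (1.0 + np.exp(-z))                # logistic transition probability
          return np.where(land == 1, 1, (rng.random((n, n)) < p).astype(int))

      for _ in range(20):
          land = step(land)
      print("urban fraction after 20 steps:", round(land.mean(), 3))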

  1. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    Science.gov (United States)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2016-02-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions

  2. Modelling the flooding capacity of a Polish Carpathian river: A comparison of constrained and free channel conditions

    Science.gov (United States)

    Czech, Wiktoria; Radecki-Pawlik, Artur; Wyżga, Bartłomiej; Hajdukiewicz, Hanna

    2016-11-01

    The gravel-bed Biała River, Polish Carpathians, was heavily affected by channelization and channel incision in the twentieth century. Not only were these impacts detrimental to the ecological state of the river, but they also adversely modified the conditions of floodwater retention and flood wave passage. Therefore, a few years ago an erodible corridor was delimited in two sections of the Biała to enable restoration of the river. In these sections, short, channelized reaches located in the vicinity of bridges alternate with longer, unmanaged channel reaches, which either avoided channelization or in which the channel has widened after the channelization scheme ceased to be maintained. Effects of these alternating channel morphologies on the conditions for flood flows were investigated in a study of 10 pairs of neighbouring river cross sections with constrained and freely developed morphology. Discharges of particular recurrence intervals were determined for each cross section using an empirical formula. The morphology of the cross sections together with data about channel slope and roughness of particular parts of the cross sections were used as input data to the hydraulic modelling performed with the one-dimensional steady-flow HEC-RAS software. The results indicated that freely developed cross sections, usually with multithread morphology, are typified by significantly lower water depth but larger width and cross-sectional flow area at particular discharges than single-thread, channelized cross sections. They also exhibit significantly lower average flow velocity, unit stream power, and bed shear stress. The pattern of differences in the hydraulic parameters of flood flows apparent between the two types of river cross sections varies with the discharges of different frequency, and the contrasts in hydraulic parameters between unmanaged and channelized cross sections are most pronounced at low-frequency, high-magnitude floods. However, because of the deep
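
    The hydraulic contrast reported here can be reproduced in miniature with Manning's equation, the basis of one-dimensional steady-flow computations of the kind performed with HEC-RAS. The cross-section geometries, slope, and roughness below are invented to mimic a narrow incised channel versus a wide multithread one, not measured Biała River data.

      import numpy as np

      def manning(area, wetted_perimeter, slope, n):
          """Manning's equation (SI): v = (1/n) R^(2/3) S^(1/2), Q = v A, R = A/P."""
          r_h = area / wetted_perimeter
          v = (1.0 / n) * r_h ** (2.0 / 3.0) * np.sqrt(slope)
          return v, v * area

      # Narrow channelized vs. wide multithread cross section (illustrative only).
      for label, area, perimeter in [("channelized", 40.0, 24.0), ("multithread", 70.0, 90.0)]:
          v, q = manning(area, perimeter, slope=0.004, n=0.035)
          print(f"{label:12s} v = {v:.2f} m/s, Q = {q:.0f} m3/s")

    With these illustrative numbers the wide section conveys a similar discharge at markedly lower velocity, the same qualitative pattern the study reports for freely developed cross sections.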

  3. Using finite element modelling to examine the flow process and temperature evolution in HPT under different constraining conditions

    International Nuclear Information System (INIS)

    Pereira, P H R; Langdon, T G; Figueiredo, R B; Cetlin, P R

    2014-01-01

    High-pressure torsion (HPT) is a metal-working technique used to impose severe plastic deformation into disc-shaped samples under high hydrostatic pressures. Different HPT facilities have been developed and they may be divided into three distinct categories depending upon the configuration of the anvils and the restriction imposed on the lateral flow of the samples. In the present paper, finite element simulations were performed to compare the flow process, temperature, strain and hydrostatic stress distributions under unconstrained, quasi-constrained and constrained conditions. It is shown there are distinct strain distributions in the samples depending on the facility configurations and a similar trend in the temperature rise of the HPT workpieces
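
    For orientation, the idealized strain that HPT nominally imposes, and against which simulated distributions are typically compared, follows from simple torsion geometry. A minimal sketch (turn count and disc dimensions are arbitrary):

      import numpy as np

      def hpt_equivalent_strain(n_turns, radius_mm, thickness_mm):
          """Idealized torsion strain in HPT: gamma = 2*pi*N*r/h, eps = gamma/sqrt(3).
          Real strain fields deviate from this under unconstrained and
          quasi-constrained conditions, which is what the FEM comparison probes."""
          gamma = 2.0 * np.pi * n_turns * radius_mm / thickness_mm
          return gamma / np.sqrt(3.0)

      for r in (0.0, 2.5, 5.0):   # disc centre to edge
          print(f"r = {r} mm: equivalent strain = {hpt_equivalent_strain(5, r, 0.8):.1f}")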

  4. Western Lake Erie Basin: Soft-data-constrained, NHDPlus resolution watershed modeling and exploration of applicable conservation scenarios.

    Science.gov (United States)

    Yen, Haw; White, Michael J; Arnold, Jeffrey G; Keitzer, S Conor; Johnson, Mari-Vaughn V; Atwood, Jay D; Daggupati, Prasad; Herbert, Matthew E; Sowa, Scott P; Ludsin, Stuart A; Robertson, Dale M; Srinivasan, Raghavan; Rewa, Charles A

    2016-11-01

    Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relationships between land use and water, nutrient, and sediment dynamics. This manuscript evaluated the capacity of the current Soil and Water Assessment Tool (SWAT) to predict hydrological and water quality processes within WLEB at the finest resolution watershed boundary unit (NHDPlus) along with the current conditions and conservation scenarios. The process based SWAT model was capable of the fine-scale computation and complex routing used in this project, as indicated by measured data at five gaging stations. The level of detail required for fine-scale spatial simulation made the use of both hard and soft data necessary in model calibration, alongside other model adaptations. Limitations to the model's predictive capacity were due to a paucity of data in the region at the NHDPlus scale rather than due to SWAT functionality. Results of treatment scenarios demonstrate variable effects of structural practices and nutrient management on sediment and nutrient loss dynamics. Targeting treatment to acres with critical outstanding conservation needs provides the largest return on investment in terms of nutrient loss reduction per dollar spent, relative to treating acres with lower inherent nutrient loss vulnerabilities. Importantly, this research raises considerations about use of models to guide land management decisions at very fine spatial scales. Decision makers using these results should be aware of data limitations that hinder fine-scale model interpretation. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Modelling carbon and water exchange of a grazed pasture in New Zealand constrained by eddy covariance measurements.

    Science.gov (United States)

    Kirschbaum, Miko U F; Rutledge, Susanna; Kuijper, Isoude A; Mudge, Paul L; Puche, Nicolas; Wall, Aaron M; Roach, Chris G; Schipper, Louis A; Campbell, David I

    2015-04-15

    We used two years of eddy covariance (EC) measurements collected over an intensively grazed dairy pasture to better understand the key drivers of changes in soil organic carbon stocks. Analysing grazing systems with EC measurements poses significant challenges as the respiration from grazing animals can result in large short-term CO2 fluxes. As paddocks are grazed only periodically, EC observations derive from a mosaic of paddocks with very different exchange rates. This violates the assumptions implicit in the use of EC methodology. To test whether these challenges could be overcome, and to develop a tool for wider scenario testing, we compared EC measurements with simulation runs with the detailed ecosystem model CenW 4.1. Simulations were run separately for 26 paddocks around the EC tower and coupled to a footprint analysis to estimate net fluxes at the EC tower. Overall, we obtained good agreement between modelled and measured fluxes, especially for the comparison of evapotranspiration rates, with model efficiency of 0.96 for weekly averaged values of the validation data. For net ecosystem productivity (NEP) comparisons, observations were omitted when cattle grazed the paddocks immediately around the tower. With those points omitted, model efficiencies for weekly averaged values of the validation data were 0.78, 0.67 and 0.54 for daytime, night-time and 24-hour NEP, respectively. While not included for model parameterisation, simulated gross primary production also agreed closely with values inferred from eddy covariance measurements (model efficiency of 0.84 for weekly averages). The study confirmed that CenW simulations could adequately model carbon and water exchange in grazed pastures. It highlighted the critical role of animal respiration for net CO2 fluxes, and showed that EC studies of grazed pastures need to consider the best approach of accounting for this important flux to avoid unbalanced accounting. Copyright © 2015. Published by Elsevier B.V.
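
    The 'model efficiency' scores quoted above are read here as Nash-Sutcliffe efficiencies computed on weekly averages; that reading is an assumption. A minimal sketch of the metric on synthetic weekly series:

      import numpy as np

      def model_efficiency(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means no better
          than predicting the mean of the observations."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rng = np.random.default_rng(5)
      obs = np.sin(np.linspace(0.0, 8.0 * np.pi, 104)) + 2.0   # two years of weekly values
      sim = obs + rng.normal(0.0, 0.3, obs.size)               # simulated counterpart
      print(f"NSE = {model_efficiency(obs, sim):.2f}")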

  7. Petrologically-constrained thermo-chemical modelling of cratonic upper mantle consistent with elevation, geoid, surface heat flow, seismic surface waves and MT data

    Science.gov (United States)

    Jones, A. G.; Afonso, J. C.

    2015-12-01

    The Earth comprises a single physio-chemical system that we interrogate from its surface and/or from space making observations related to various physical and chemical parameters. A change in one of those parameters affects many of the others; for example a change in velocity is almost always indicative of a concomitant change in density, which results in changes to elevation, gravity and geoid observations. Similarly, a change in oxide chemistry affects almost all physical parameters to a greater or lesser extent. We have now developed sophisticated tools to model/invert data in our individual disciplines to such an extent that we are obtaining high resolution, robust models from our datasets. However, in the vast majority of cases the different datasets are modelled/inverted independently of each other, and often even without considering other data in a qualitative sense. The LitMod framework of Afonso and colleagues presents integrated inversion of geoscientific data to yield thermo-chemical models that are petrologically consistent and constrained. Input data can comprise any combination of elevation, geoid, surface heat flow, seismic surface wave (Rayleigh and Love) data and receiver function data, and MT data. The basis of LitMod is characterization of the upper mantle in terms of five oxides in the CFMAS system and a thermal structure that is conductive to the LAB and convective along the adiabat below the LAB to the 410 km discontinuity. Candidate solutions are chosen from prior distributions of the oxides. For the crust, candidate solutions are chosen from distributions of crustal layering, velocity and density parameters. Those candidate solutions that fit the data within prescribed error limits are kept, and are used to establish broad posterior distributions from which new candidate solutions are chosen. Examples will be shown of application of this approach fitting data from the Kaapvaal Craton in South Africa and the Rae Craton in northern Canada. I

  8. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....

  9. Evaluation of HOx sources and cycling using measurement-constrained model calculations in a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated ecosystem

    Directory of Open Access Journals (Sweden)

    S. B. Henry

    2013-02-01

    Full Text Available We present a detailed analysis of OH observations from the BEACHON (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen)-ROCS (Rocky Mountain Organic Carbon Study) 2010 field campaign at the Manitou Forest Observatory (MFO), which is a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated forest environment. A comprehensive suite of measurements was used to constrain primary production of OH via ozone photolysis, OH recycling from HO2, and OH chemical loss rates, in order to estimate the steady-state concentration of OH. In addition, the University of Washington Chemical Model (UWCM) was used to evaluate the performance of a near-explicit chemical mechanism. The diurnal cycle in OH from the steady-state calculations is in good agreement with measurement. A comparison between the photolytic production rates and the recycling rates from the HO2 + NO reaction shows that recycling rates are ~20 times faster than the photolytic OH production rates from ozone. Thus, we find that direct measurement of the recycling rates and the OH loss rates can provide accurate predictions of OH concentrations. More importantly, we also conclude that a conventional OH recycling pathway (HO2 + NO) can explain the observed OH levels in this non-isoprene environment. This is in contrast to observations in isoprene-dominated regions, where investigators have observed significant underestimation of OH and have speculated that unknown sources of OH are responsible. The highly-constrained UWCM calculation under-predicts observed HO2 by as much as a factor of 8. As HO2 maintains oxidation capacity by recycling to OH, UWCM underestimates observed OH by as much as a factor of 4. When the UWCM calculation is constrained by measured HO2, model-calculated OH is in better agreement with the observed OH levels. Conversely, constraining the model to observed OH only slightly reduces the model-measurement HO2 discrepancy, implying unknown HO2
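
    The steady-state budget described above balances OH production against loss, [OH]ss = (P_primary + k[HO2][NO]) / k_loss. The sketch below evaluates it with illustrative magnitudes chosen so that recycling exceeds primary production by roughly the reported factor of ~20; none of the numbers are BEACHON-ROCS measurements.

      import numpy as np

      # Steady-state OH: production (primary photolytic + HO2 + NO recycling)
      # balanced against total loss. All magnitudes are illustrative only.
      k_ho2_no = 8.1e-12   # cm3 molec-1 s-1, approximate 298 K rate coefficient
      p_primary = 1.2e5    # molec cm-3 s-1, OH from O3 photolysis
      ho2 = 3.0e8          # molec cm-3
      no = 1.0e9           # molec cm-3 (~0.04 ppbv at surface number density)
      k_loss = 4.0         # s-1, total OH reactivity

      recycling = k_ho2_no * ho2 * no
      oh_ss = (p_primary + recycling) / k_loss
      print(f"recycling/primary = {recycling / p_primary:.0f}, [OH]ss = {oh_ss:.2e} molec cm-3")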

  10. Behavioural Models of Motor Control and Short-Term Memory

    OpenAIRE

    Imanaka, Kuniyasu; Funase, Kozo; Yamauchi, Masaki

    1995-01-01

    We examined in this review article the behavioural and conceptual models of motor control and short-term memory which have intensively been investigated since the 1970s. First, we reviewed both the dual-storage model of short-term memory in which movement information is stored and a typical model of motor control which emphasizes the importance of efferent factors. We then examined two models of preselection effects: a cognitive model and a cognitive/ efferent model. Following this we reviewe...

  11. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  12. A constrained dispersive optical model for the neutron-nucleus interaction from -80 to +80 MeV for the mass region 27≤A≤32

    International Nuclear Information System (INIS)

    Al-Ohali, M.A.; Howell, C.R.; Tornow, W.; Walter, R.L.

    1995-01-01

    A Constrained Dispersive Optical Model (CDOM) analysis was performed for the neutron-nucleus interaction in the energy domain from -80 to 80 MeV for three nuclei in the center of the 2s-1d shell. The CDOM incorporates the dispersion relation which connects the real and imaginary parts of the nuclear mean field. Parameters for the model were derived by fitting the neutron differential elastic cross-section, the total cross-section, and the analyzing power data for 27Al, 28Si, and 32S. The parameters were also adjusted slightly to improve the overall agreement with single-particle bound-state energies.

  13. Short term load forecasting: two stage modelling

    Directory of Open Access Journals (Sweden)

    SOARES, L. J.

    2009-06-01

    Full Text Available This paper studies the hourly electricity load demand in the area covered by a utility situated in Seattle, USA, called the Puget Sound Power and Light Company. Our proposal is tested on the well-known dataset from this company. We propose a stochastic model which employs ANN (Artificial Neural Networks) to model short-run dynamics and the dependence among adjacent hours. The proposed model treats each hour's load separately as an individual single series. This approach avoids modeling the intricate intra-day pattern (load profile) displayed by the load, which varies throughout days of the week and seasons. The forecasting performance of the model is evaluated in a similar manner to the TLSAR (Two-Level Seasonal Autoregressive) model proposed by Soares (2003), using the years 1995 and 1996 as the holdout sample. Moreover, we conclude that non-linearity is present in some series of these data. The model results are analyzed. The experiment shows that our tool can be used to produce load forecasts in places with tropical climates.

  14. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    International Nuclear Information System (INIS)

    Wels, Michael; Hornegger, Joachim; Zheng Yefeng; Comaniciu, Dorin; Huber, Martin

    2011-01-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average

  15. Modelled long term trends of surface ozone over South Africa

    CSIR Research Space (South Africa)

    Naidoo, M

    2011-10-01

    Full Text Available timescale seeks to provide a spatially comprehensive view of trends while also creating a baseline for comparisons with future projections of air quality through the forcing of air quality models with modelled predicted long term meteorology. Previous...

  16. A viable D-term hybrid inflation model

    Science.gov (United States)

    Kadota, Kenji; Kobayashi, Tatsuo; Sumita, Keigo

    2017-11-01

    We propose a new model of D-term hybrid inflation in the framework of supergravity. Although our model introduces, analogously to conventional D-term inflation, an inflaton and a pair of scalar fields charged under a U(1) gauge symmetry, we adopt logarithmic and exponential dependence on the inflaton field for the Kähler potential and the superpotential, respectively. This results in a characteristic one-loop scalar potential consisting of linear and exponential terms, which realizes small-field inflation dominated by the Fayet-Iliopoulos term. With reasonable values for the coupling coefficients and, in particular, with a U(1) gauge coupling constant comparable to that of the Standard Model, our D-term inflation model can solve the notorious problems of conventional D-term inflation, namely the CMB constraints on the spectral index and the generation of cosmic strings.

  17. Merons in a generally covariant model with Gursey term

    International Nuclear Information System (INIS)

    Akdeniz, K.G.; Smailagic, A.

    1982-10-01

    We study meron solutions of the generally covariant and Weyl invariant fermionic model with Gursey term. We find that, due to the presence of this term, merons can exist even without the cosmological constant. This is a new feature compared to previously studied models. (author)

  18. Early cosmology constrained

    Energy Technology Data Exchange (ETDEWEB)

    Verde, Licia; Jimenez, Raul [Institute of Cosmos Sciences, University of Barcelona, IEEC-UB, Martí Franquès, 1, E08028 Barcelona (Spain); Bellini, Emilio [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Pigozzo, Cassio [Instituto de Física, Universidade Federal da Bahia, Salvador, BA (Brazil); Heavens, Alan F., E-mail: liciaverde@icc.ub.edu, E-mail: emilio.bellini@physics.ox.ac.uk, E-mail: cpigozzo@ufba.br, E-mail: a.heavens@imperial.ac.uk, E-mail: raul.jimenez@icc.ub.edu [Imperial Centre for Inference and Cosmology (ICIC), Imperial College, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2017-04-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter Ω{sub MR} < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < N {sub eff} < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is r {sub s} = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to r {sub s} = 150 ± 5 Mpc.

  19. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    International Nuclear Information System (INIS)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle

    2017-01-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10{sup −11.4} m{sup 2} to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.
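
    The inversion machinery itself can be sketched generically. Below is a plain Metropolis sampler over a parameter vector, with `forward_model`, `observations`, and `obs_sigma` as hypothetical stand-ins for the 1-D conduit model and the extrusion-flux/porosity data; it is a sketch under those assumptions, not the authors' sampler.

```python
# Hedged sketch of a Bayesian inversion step with uniform priors.
import numpy as np

def log_posterior(theta, forward_model, observations, obs_sigma, prior_bounds):
    lo, hi = prior_bounds
    if np.any(theta < lo) or np.any(theta > hi):   # outside prior support
        return -np.inf
    residual = forward_model(theta) - observations
    return -0.5 * np.sum((residual / obs_sigma) ** 2)

def metropolis(theta0, step, n_samples, **kwargs):
    rng = np.random.default_rng(0)
    theta, logp = theta0.copy(), log_posterior(theta0, **kwargs)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_new = log_posterior(proposal, **kwargs)
        if np.log(rng.random()) < logp_new - logp:  # Metropolis accept/reject
            theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)
```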

  20. Constraining Secluded Dark Matter models with the public data from the 79-string IceCube search for dark matter in the Sun

    Energy Technology Data Exchange (ETDEWEB)

    Ardid, M.; Felis, I.; Martínez-Mora, J.A. [Institut d' Investigació per a la Gestió Integrada de les Zones Costaneres (IGIC), Universitat Politècnica de València, C/Paranimf 1, 46730 Gandia (Spain); Herrero, A., E-mail: mardid@fis.upv.es, E-mail: ivfeen@upv.es, E-mail: aherrero@mat.upv.es, E-mail: jmmora@fis.upv.es [Institut de Matemàtica Multidisciplinar, Universitat Politècnica de València, Camí de Vera s/n, 46022 València (Spain)

    2017-04-01

    Public data from the 79-string IceCube search for dark matter in the Sun are used to test Secluded Dark Matter models. No significant excess over background is observed and constraints on the parameters of the models are derived. Moreover, the search is also used to constrain the dark photon model in the region of the parameter space with dark photon masses between 0.22 and ∼ 1 GeV and a kinetic mixing parameter ε ∼ 10{sup −9}, a region that was previously unconstrained. These are the first constraints on dark photons from neutrino telescopes. It is expected that neutrino telescopes will be efficient tools to test dark photons by means of different searches in the Sun, Earth and Galactic Center, which could complement constraints from direct detection, accelerators, astrophysics and indirect detection with other messengers, such as gamma rays or antiparticles.

  1. Thermal-based modeling of coupled carbon, water, and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    KAUST Repository

    Schull, M. A.

    2015-03-11

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.
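
    The abstract reports that Chl relates nonlinearly to βn but does not give the fitted form, so the saturating curve below is only a placeholder illustration of how such an empirical relationship could feed seasonally varying βn into the model; the shape and coefficients are assumptions.

```python
# Illustrative only: a saturating Chl-to-beta_n mapping with made-up constants.
import numpy as np

def nominal_lue_from_chl(chl, beta_max=0.07, k=30.0):
    """Map leaf chlorophyll (e.g., ug cm-2) to a nominal LUE that rises with
    Chl and saturates at beta_max; k sets the half-saturation point."""
    chl = np.asarray(chl, dtype=float)
    return beta_max * chl / (k + chl)
```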

  2. A constrained approximation for nuclear barrier penetration and fission

    International Nuclear Information System (INIS)

    Tang, H.H.K.; Negele, J.W.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge

    1983-01-01

    An approximation to the time-dependent mean-field theory for barrier penetration by a nucleus is obtained in terms of constrained Hartree-Fock wave functions and a coherent velocity field. A discrete approximation to the continuum theory suitable for practical numerical calculations is presented and applied to three illustrative models. Potential application of the theory to the study of nuclear fission is discussed. (orig.)

  3. 'Semi-realistic' F-term inflation model building in supergravity

    International Nuclear Information System (INIS)

    Kain, Ben

    2008-01-01

    We describe methods for building 'semi-realistic' models of F-term inflation. By semi-realistic we mean that they are built in, and obey the requirements of, 'semi-realistic' particle physics models. The particle physics models are taken to be effective supergravity theories derived from orbifold compactifications of string theory, and their requirements are taken to be modular invariance, absence of mass terms and stabilization of moduli. We review the particle physics models, their requirements, and tools and methods for building inflation models.

  4. Constraining dark sectors with monojets and dijets

    International Nuclear Information System (INIS)

    Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai; McCullough, Matthew

    2015-03-01

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  5. Constraining dark sectors with monojets and dijets

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); McCullough, Matthew [European Organization for Nuclear Research (CERN), Geneva (Switzerland). Theory Div.

    2015-03-15

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  6. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its ability to bridge the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  7. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    International Nuclear Information System (INIS)

    Han, Jing-Cheng; Huang, Guohe; Huang, Yuefei; Zhang, Hua; Li, Zhong; Chen, Qiuwen

    2015-01-01

    Lack of hydrologic process representation at short time-scales leads to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by overland flow generation, which is closely related to rainfall characteristics at a sub-daily time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed in the China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. Rainfall thresholds proved crucial for reflecting the scaling effect of rainfall intensity; the optimal level of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although only slight differences in overland flow simulations between SLURP and SLURP-TGR were found, SLURP-TGR helped improve the simulation of peak flows and would improve overall modeling efficiency by adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and would be expected to have potential for wide applications. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A

  8. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jing-Cheng [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Huang, Guohe, E-mail: huang@iseis.org [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Huang, Yuefei [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Zhang, Hua [College of Science and Engineering, Texas A& M University — Corpus Christi, Corpus Christi, TX 78412-5797 (United States); Li, Zhong [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Chen, Qiuwen [Center for Eco-Environmental Research, Nanjing Hydraulics Research Institute, Nanjing 210029 (China)

    2015-08-15

    Lack of hydrologic process representation at short time-scales leads to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by overland flow generation, which is closely related to rainfall characteristics at a sub-daily time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed in the China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. Rainfall thresholds proved crucial for reflecting the scaling effect of rainfall intensity; the optimal level of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although only slight differences in overland flow simulations between SLURP and SLURP-TGR were found, SLURP-TGR helped improve the simulation of peak flows and would improve overall modeling efficiency by adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and would be expected to have potential for wide applications. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A
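
    As a conceptual sketch of the chance-constrained idea in this record (and its duplicate above), the snippet below represents sub-daily intensity with an assumed exponential distribution and generates infiltration-excess overland flow when the upper-tail design intensity exceeds infiltration capacity; the distribution choice and all parameters are assumptions, not the SLURP-TGR formulation.

```python
# Hedged sketch of chance-constrained overland-flow generation.
import numpy as np

def overland_flow(daily_rain_mm, mean_intensity_mm_h, infil_capacity_mm_h,
                  alpha=0.05):
    """Overland-flow depth when the alpha-level sub-daily intensity
    exceeds infiltration capacity (alpha = level of significance)."""
    # Upper alpha-quantile of an exponential intensity distribution.
    design_intensity = -mean_intensity_mm_h * np.log(alpha)
    if design_intensity <= infil_capacity_mm_h:
        return 0.0  # infiltration absorbs the design-storm intensity
    # The fraction of rain arriving above capacity runs off.
    excess_fraction = 1.0 - infil_capacity_mm_h / design_intensity
    return excess_fraction * daily_rain_mm
```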

  9. A phenomenological memristor model for short-term/long-term memory

    International Nuclear Information System (INIS)

    Chen, Ling; Li, Chuandong; Huang, Tingwen; Ahmad, Hafiz Gulfam; Chen, Yiran

    2014-01-01

    Memristors are considered natural electrical synapses because of their distinct memory property and nanoscale size. In recent years, more and more behaviors similar to those of biological synapses have been observed in memristors, e.g., short-term memory (STM) and long-term memory (LTM). Traditional mathematical models are unable to capture these newly emerging behaviors. In this article, an updated phenomenological model based on the Hewlett–Packard (HP) Labs model is proposed to capture such behaviors. The new dynamical memristor model, with an improved ion diffusion term, can emulate synapse behavior with a forgetting effect and exhibit the transformation between STM and LTM. Further, this model can be used to build a new type of neural network with forgetting ability similar to biological systems, as we verify in an experiment with a Hopfield neural network. - Highlights: • We take the Fick diffusion and the Soret diffusion into account in the ion drift theory. • We develop a new model based on the old HP model. • The new model can describe the forgetting effect and the spike-rate-dependent property of memristors. • The new model can solve the boundary effect of all window functions discussed in [13]. • A new Hopfield neural network with forgetting ability is built with the new memristor model
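
    A minimal sketch of the modelling ingredient this record adds, under stated assumptions: the classic HP linear-drift memristor with an extra relaxation (ion-diffusion) term that pulls the state back toward rest, producing a forgetting effect. Parameter values are illustrative, not those of the paper.

```python
# Hedged sketch: HP drift model plus a diffusion/forgetting term.
import numpy as np

def simulate_memristor(v, dt=1e-4, D=10e-9, mu=1e-14,
                       r_on=100.0, r_off=16e3, tau=0.5):
    w = D / 2  # state: doped-region width, initialised mid-device
    resistance = np.empty(len(v))
    for n, vn in enumerate(v):
        r = r_on * (w / D) + r_off * (1 - w / D)
        i = vn / r
        drift = mu * r_on * i / D        # HP linear ion drift
        diffusion = -(w - D / 2) / tau   # relaxation toward rest: forgetting
        w = np.clip(w + dt * (drift + diffusion), 0.0, D)
        resistance[n] = r
    return resistance
```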

  10. Increased accuracy in mineral and hydrogeophysical modelling of HTEM data via detailed description of system transfer function and constrained inversion

    DEFF Research Database (Denmark)

    Viezzoli, Andrea; Christiansen, Anders Vest; Auken, Esben

    This paper aims to provide more insight into the parameters that need to be modelled during inversion of helicopter TEM data for accurate results, in both hydrogeophysical and exploration applications. We use synthetic data to show in detail the effect, both in data and in model space...

  11. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    In this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Despite recent advancements in the research...... of prediction models, it has been observed that different models have different capabilities and that no single model is suitable in all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset...

  12. Using a data-constrained model of home range establishment to predict abundance in spatially heterogeneous habitats.

    Directory of Open Access Journals (Sweden)

    Mark C Vanderwel

    Full Text Available Mechanistic modelling approaches that explicitly translate from individual-scale resource selection to the distribution and abundance of a larger population may be better suited to predicting responses to spatially heterogeneous habitat alteration than commonly-used regression models. We developed an individual-based model of home range establishment that, given a mapped distribution of local habitat values, estimates species abundance by simulating the number and position of viable home ranges that can be maintained across a spatially heterogeneous area. We estimated parameters for this model from data on red-backed vole (Myodes gapperi) abundances in 31 boreal forest sites in Ontario, Canada. The home range model had considerably more support from these data than both non-spatial regression models based on the same original habitat variables and a mean-abundance null model. It had nearly equivalent support to a non-spatial regression model that, like the home range model, scaled an aggregate measure of habitat value from local associations with habitat resources. The home range and habitat-value regression models gave similar predictions for vole abundance under simulations of light- and moderate-intensity partial forest harvesting, but the home range model predicted lower abundances than the regression model under high-intensity disturbance. Empirical regression-based approaches for predicting species abundance may overlook processes that affect habitat use by individuals, and often extrapolate poorly to novel habitat conditions. Mechanistic home range models that can be parameterized against abundance data from different habitats permit appropriate scaling from individual- to population-level habitat relationships, and can potentially provide better insights into responses to disturbance.

  13. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2017-01-01

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed....

  14. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed....
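
    For readers who want to experiment with CVAR models including deterministic terms, a rough usage sketch via statsmodels' reduced-rank VECM estimator is shown below on simulated data; the built-in `deterministic` option covers constants and trends only, and the general Z(t) regressors of the paper would instead enter through the `exog`/`exog_coint` arguments.

```python
# Usage sketch on simulated I(1) data, not the paper's estimator.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal((200, 2)), axis=0)  # two random-walk series

model = VECM(y, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()  # reduced rank regression
print(res.alpha)   # adjustment coefficients
print(res.beta)    # cointegration vector
```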

  15. Order-Constrained Reference Priors with Implications for Bayesian Isotonic Regression, Analysis of Covariance and Spatial Models

    Science.gov (United States)

    Gong, Maozhen

    Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, and the conditionally autoregressive (CAR) and simultaneous autoregressive (SAR) models with a spatial autoregression parameter ρ. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered.

  16. Constraining models of postglacial rebound using space geodesy: a detailed assessment of model ICE-5G (VM2) and its relatives

    Science.gov (United States)

    Argus, Donald F.; Peltier, W. Richard

    2010-05-01

    Using global positioning system, very long baseline interferometry, satellite laser ranging and Doppler Orbitography and Radiopositioning Integrated by Satellite observations, including the Canadian Base Network and Fennoscandian BIFROST array, we constrain, in models of postglacial rebound, the thickness of the ice sheets as a function of position and time and the viscosity of the mantle as a function of depth. We test model ICE-5G VM2 T90 Rot, which well fits many hundred Holocene relative sea level histories in North America, Europe and worldwide. ICE-5G is the deglaciation history having more ice in western Canada than ICE-4G; VM2 is the mantle viscosity profile having a mean upper mantle viscosity of 0.5 × 10²¹ Pa s and a mean uppermost-lower mantle viscosity of 1.6 × 10²¹ Pa s; T90 is an elastic lithosphere thickness of 90 km; and Rot designates that the model includes rotational feedback, Earth's response to the wander of the North Pole of Earth's spin axis towards Canada at a speed of ~1° Myr⁻¹. The vertical observations in North America show that, relative to ICE-5G, the Laurentide ice sheet at last glacial maximum (LGM) at ~26 ka was (1) much thinner in southern Manitoba, (2) thinner near Yellowknife (Northwest Territories), (3) thicker in eastern and southern Quebec and (4) thicker along the northern British Columbia-Alberta border, or that ice was unloaded from these areas later (thicker) or earlier (thinner) than in ICE-5G. The data indicate that the western Laurentide ice sheet was intermediate in mass between ICE-5G and ICE-4G. The vertical observations and GRACE gravity data together suggest that the western Laurentide ice sheet was nearly as massive as that in ICE-5G but distributed more broadly across northwestern Canada. VM2 poorly fits the horizontal observations in North America, predicting places along the margins of the Laurentide ice sheet to be moving laterally away from the ice centre at 2 mm yr⁻¹ in ICE-4G and 3 mm yr⁻¹ in ICE-5G, in

  17. Robust self-triggered model predictive control for constrained discrete-time LTI systems based on homothetic tubes

    NARCIS (Netherlands)

    Aydiner, E.; Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2015-01-01

    In this paper we present a robust self-triggered model predictive control (MPC) scheme for discrete-time linear time-invariant systems subject to input and state constraints and additive disturbances. In self-triggered model predictive control, at every sampling instant an optimization problem based

  18. How to constrain multi-objective calibrations using water balance components for an improved realism of model results

    Science.gov (United States)

    Accurate discharge simulation is one of the most common objectives of hydrological modeling studies. However, a good simulation of discharge is not necessarily the result of a realistic simulation of hydrological processes within the catchment. To enhance the realism of model results, we propose an ...

  19. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    International Nuclear Information System (INIS)

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    A quantitative local computed tomography approach combined with data-constrained modelling has been developed. The method can distinctly improve the spatial resolution and the composition resolution in a sample larger than the field of view, for quantitative characterization of three-dimensional distributions of material compositions and voids. Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of revealing dramatically finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials.

  20. A Team Building Model for Software Engineering Courses Term Projects

    Science.gov (United States)

    Sahin, Yasar Guneri

    2011-01-01

    This paper proposes a new model for team building, which enables teachers to build coherent teams rapidly and fairly for the term projects of software engineering courses. Moreover, the model can also be used to build teams for any type of project, if the team member candidates are students, or if they are inexperienced on a certain subject. The…

  1. Simple model for crop photosynthesis in terms of weather variables ...

    African Journals Online (AJOL)

    A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.

  2. Model for expressing leaf photosynthesis in terms of weather variables

    African Journals Online (AJOL)

    A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...
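
    Both of these records describe the same multiplicative structure: a potential photosynthetic rate scaled by a product of dimensionless efficiency parameters. A hedged sketch of that structure follows; the individual response values and numbers are placeholders, not the papers' formulations.

```python
# Illustrative multiplicative efficiency-parameter model.
def photosynthesis_rate(p_potential, efficiencies):
    """p_potential: unconstrained rate; efficiencies: fractions in [0, 1]
    for light, temperature, water status, canopy architecture, etc."""
    rate = p_potential
    for e in efficiencies:
        rate *= e
    return rate

# Example: photosynthesis_rate(40.0, [0.9, 0.75, 0.6]) -> 16.2
```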

  3. The Starobinsky model from superconformal D-term inflation

    International Nuclear Information System (INIS)

    Buchmuller, W.; Domcke, V.; Kamada, K.

    2013-06-01

    We point out that in the large field regime, the recently proposed superconformal D-term inflation model coincides with the Starobinsky model. In this regime, the inflaton field dominates over the Planck mass in the gravitational kinetic term in the Jordan frame. Slow-roll inflation is realized in the large field regime for sufficiently large gauge couplings. The Starobinsky model generally emerges as an effective description of slow-roll inflation if a Jordan frame exists where, for large inflaton field values, the action is scale invariant and the ratio λ of the inflaton self-coupling and the nonminimal coupling to gravity is tiny. The interpretation of this effective coupling is different in different models. In superconformal D-term inflation it is determined by the scale of grand unification, λ ∝ (Λ{sub GUT}/M{sub P}){sup 4}.

  4. The Starobinsky model from superconformal D-term inflation

    Energy Technology Data Exchange (ETDEWEB)

    Buchmuller, W.; Domcke, V.; Kamada, K.

    2013-06-15

    We point out that in the large field regime, the recently proposed superconformal D-term inflation model coincides with the Starobinsky model. In this regime, the inflaton field dominates over the Planck mass in the gravitational kinetic term in the Jordan frame. Slow-roll inflation is realized in the large field regime for sufficiently large gauge couplings. The Starobinsky model generally emerges as an effective description of slow-roll inflation if a Jordan frame exists where, for large inflaton field values, the action is scale invariant and the ratio {lambda} of the inflaton self-coupling and the nonminimal coupling to gravity is tiny. The interpretation of this effective coupling is different in different models. In superconformal D-term inflation it is determined by the scale of grand unification, {lambda}{proportional_to}({Lambda}{sub GUT}/M{sub P}){sup 4}.

  5. Integrating satellite retrieved leaf chlorophyll into land surface models for constraining simulations of water and carbon fluxes

    KAUST Repository

    Houborg, Rasmus; Cescatti, Alessandro; Gitelson, Anatoly A.

    2013-01-01

    variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite based

  6. Constraining Controls on the Emplacement of Long Lava Flows on Earth and Mars Through Modeling in ArcGIS

    Science.gov (United States)

    Golder, K.; Burr, D. M.; Tran, L.

    2017-12-01

    Regional volcanic processes shaped many planetary surfaces in the Solar System, often through the emplacement of long, voluminous lava flows. Terrestrial examples of this type of lava flow have been used as analogues for extensive martian flows, including those within the circum-Cerberus outflow channels. This analogy is based on similarities in morphology, extent, and inferred eruptive style between terrestrial and martian flows, which raises the question of how these lava flows appear comparable in size and morphology on different planets. The parameters that influence the areal extent of silicate lavas during emplacement may be categorized as either inherent or external to the lava. The inherent parameters include the lava yield strength, density, composition, water content, crystallinity, exsolved gas content, pressure, and temperature. Each inherent parameter affects the overall viscosity of the lava, and for this work can be considered a subset of the viscosity parameter. External parameters include the effusion rate, total erupted volume, regional slope, and gravity. To investigate which parameters may control the development of long lava flows on Mars, we are applying a computational numerical-modelling approach to reproduce the observed lava flow morphologies. Using a matrix of boundary conditions in the model enables us to investigate the possible range of emplacement conditions that can yield the observed morphologies. We have constructed the basic model framework in Model Builder within ArcMap, including all governing equations and parameters that we seek to test, and initial implementation and calibration have been performed. The base model is currently capable of generating a lava flow that propagates along a pathway governed by the local topography. At AGU, the results of model calibration using the Eldgjá and Laki lava flows in Iceland will be presented, along with the application of the model to lava flows within the Cerberus plains on Mars. We then

  7. CONSTRAINING MODELS OF TWIN-PEAK QUASI-PERIODIC OSCILLATIONS WITH REALISTIC NEUTRON STAR EQUATIONS OF STATE

    Energy Technology Data Exchange (ETDEWEB)

    Török, Gabriel; Goluchová, Katerina; Urbanec, Martin, E-mail: gabriel.torok@gmail.com, E-mail: katka.g@seznam.cz, E-mail: martin.urbanec@physics.cz [Research Centre for Computational Physics and Data Processing, Institute of Physics, Faculty of Philosophy and Science, Silesian University in Opava, Bezručovo nám. 13, CZ-746, 01 Opava (Czech Republic); and others

    2016-12-20

    Twin-peak quasi-periodic oscillations (QPOs) are observed in the X-ray power-density spectra of several accreting low-mass neutron star (NS) binaries. In our previous work we have considered several QPO models. We have identified and explored mass–angular-momentum relations implied by individual QPO models for the atoll source 4U 1636-53. In this paper we extend our study and confront QPO models with various NS equations of state (EoS). We start with simplified calculations assuming Kerr background geometry and then present results of detailed calculations considering the influence of NS quadrupole moment (related to rotationally induced NS oblateness) assuming Hartle–Thorne spacetimes. We show that the application of concrete EoS together with a particular QPO model yields a specific mass–angular-momentum relation. However, we demonstrate that the degeneracy in mass and angular momentum can be removed when the NS spin frequency inferred from the X-ray burst observations is considered. We inspect a large set of EoS and discuss their compatibility with the considered QPO models. We conclude that when the NS spin frequency in 4U 1636-53 is close to 580 Hz, we can exclude 51 of the 90 considered combinations of EoS and QPO models. We also discuss additional restrictions that may exclude even more combinations. Namely, 13 EoS are compatible with the observed twin-peak QPOs and the relativistic precession model. However, when considering the low-frequency QPOs and Lense–Thirring precession, only 5 EoS are compatible with the model.

  8. GRAVITATIONAL-WAVE OBSERVATIONS MAY CONSTRAIN GAMMA-RAY BURST MODELS: THE CASE OF GW150914–GBM

    Energy Technology Data Exchange (ETDEWEB)

    Veres, P. [CSPAR, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Preece, R. D. [Dept. of Space Science, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Goldstein, A.; Connaughton, V. [Universities Space Research Association, 320 Sparkman Dr. Huntsville, AL 35806 (United States); Mészáros, P. [Dept. of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States); Burns, E., E-mail: peter.veres@uah.edu [Physics Dept., University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States)

    2016-08-20

    The possible short gamma-ray burst (GRB) observed by Fermi /GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW 150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (∼10{sup −3} cm{sup −3}) and a high Lorentz factor (∼2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford–Znajek model. If future joint observations confirm the GW–short-GRB association we can provide similar but more detailed tests for prompt emission models.

  9. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    Science.gov (United States)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
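
    The OI analysis step itself is standard and can be sketched compactly; the thesis' CMAQ/MODIS specifics (error covariances, observation operator, temporal averaging) are reduced here to generic matrices, so this is a schematic rather than the actual assimilation system.

```python
# Generic optimal-interpolation (OI) update with numpy.
import numpy as np

def oi_update(x_b, B, y, R, H):
    """x_b: background model state; B: background error covariance;
    y: observations; R: observation error covariance; H: obs operator."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return x_b + K @ (y - H @ x_b)                # analysis state
```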

  10. Accounting for disturbance history in models: using remote sensing to constrain carbon and nitrogen pool spin-up.

    Science.gov (United States)

    Hanan, Erin J; Tague, Christina; Choate, Janet; Liu, Mingliang; Kolden, Crystal; Adam, Jennifer

    2018-03-24

    Disturbances such as wildfire, insect outbreaks, and forest clearing play an important role in regulating carbon, nitrogen, and hydrologic fluxes in terrestrial watersheds. Evaluating how watersheds respond to disturbance requires understanding mechanisms that interact over multiple spatial and temporal scales. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by uncertainties in the initial state of plant carbon and nitrogen stores. Watershed models typically use one of two methods to initialize these stores: spin-up to steady state or remote sensing with allometric relationships. Spin-up involves running a model until vegetation reaches equilibrium based on climate. This approach assumes that vegetation across the watershed has reached maturity and is of uniform age, which fails to account for landscape heterogeneity and non-steady-state conditions. By contrast, remote sensing can provide data for initializing such conditions. However, methods for assimilating remote sensing into model simulations can also be problematic. They often rely on empirical allometric relationships between a single vegetation variable and modeled carbon and nitrogen stores. Because allometric relationships are species- and region-specific, they do not account for the effects of local resource limitation, which can influence carbon allocation (to leaves, stems, roots, etc.). To address this problem, we developed a new initialization approach using the catchment-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. It uses remote sensing to define spatially explicit targets for one or several vegetation state variables, such as leaf area index, across a watershed. The model then simulates the growth of carbon and nitrogen stores until the defined targets are met for all locations. We evaluated this approach in a mixed pine-dominated watershed in
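
    The target-driven initialization can be summarized in pseudocode-like form. In the hypothetical sketch below, `step_one_year` stands in for a RHESSys-like annual growth step, and patches grow until their simulated LAI reaches the remotely sensed target; all names are illustrative.

```python
# Conceptual sketch of remote-sensing-targeted spin-up.
def spin_up_to_targets(patches, lai_targets, step_one_year, max_years=500):
    """Advance vegetation growth patch by patch until LAI targets are met."""
    done = {pid: False for pid in patches}
    for _ in range(max_years):
        for pid, state in patches.items():
            if not done[pid]:
                step_one_year(state)  # updates carbon/nitrogen stores in place
                done[pid] = state["lai"] >= lai_targets[pid]
        if all(done.values()):
            break
    return patches
```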

  11. A Parametric Factor Model of the Term Structure of Mortality

    DEFF Research Database (Denmark)

    Haldrup, Niels; Rosenskjold, Carsten Paysen T.

    The prototypical Lee-Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper we propose a factor model for the term structure of mortality where multiple factors are designed to influence the age groups differently via...... on the loading functions, the factors are not designed to be orthogonal but can be dependent and can possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson-Siegel term structure model. First, a two-step nonlinear least squares...... procedure based on cross-section regressions together with a separate model to estimate the dynamics of the factors. Second, we suggest a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put on state space form. We demonstrate the methodology for US...

  12. Models with oscillator terms in noncommutative quantum field theory

    International Nuclear Information System (INIS)

    Kronberger, E.

    2010-01-01

    The main focus of this Ph.D. thesis is on noncommutative models involving oscillator terms in the action. The first one historically is the successful Grosse-Wulkenhaar (G.W.) model, which has already been proven to be renormalizable to all orders of perturbation theory. Remarkably, it is furthermore capable of solving the Landau ghost problem. In a first step, we have generalized the G.W. model to gauge theories in a very straightforward way, where the action is BRS invariant and exhibits the good damping properties of the scalar theory by using the same propagator, the so-called Mehler kernel. To be able to handle some more involved one-loop graphs we have programmed a powerful Mathematica package, which is capable of analytically computing Feynman graphs with many terms. The result of those investigations is that new terms originally not present in the action arise, which led us to the conclusion that we should start from a theory where those terms are already built in. Fortunately there is an action containing this complete set of terms. It can be obtained by coupling a gauge field to the scalar field of the G.W. model, integrating out the latter, and thus 'inducing' a gauge theory. Hence the model is called Induced Gauge Theory. Despite the advantage that it is by construction completely gauge invariant, it contains some unphysical terms linear in the gauge field. We were able to get rid of these terms using a special gauge dedicated to this purpose. Within this gauge we could again establish the Mehler kernel as the gauge field propagator. Furthermore we were able to calculate the ghost propagator, which turned out to be very involved. Thus we were able to start with the first few loop computations, which show the expected behavior. The next step is to show renormalizability of the model, and some hints in this direction are also given. (author)

  13. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  14. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.; Kadoura, A.; Valstar, J.; Sun, S.; Hoteit, Ibrahim

    2014-01-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainties because of possible incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach becomes beneficial only when using large enough ensembles.

  15. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.

    2014-03-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainties because of possible incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach becomes beneficial only when using large enough ensembles.
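
    A bare-bones stochastic EnKF analysis step is sketched below with numpy only; in the dual scheme described in this record (and its duplicate above), such an update would be applied twice per cycle, once for states and once for parameters, which is why it costs twice the forward-model integrations. The function names and array shapes are assumptions.

```python
# Hedged sketch of a stochastic EnKF update.
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, observe,
                rng=np.random.default_rng(0)):
    """ensemble: (n_members, n_dim) states and/or parameters; observe: maps a
    member to its predicted observations (the forward model's data equivalent)."""
    predicted = np.array([observe(m) for m in ensemble])  # (n_members, n_obs)
    A = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = predicted - predicted.mean(axis=0)                # predicted-obs anomalies
    n = len(ensemble)
    cov_xy = A.T @ Y / (n - 1)
    cov_yy = Y.T @ Y / (n - 1) + np.diag(obs_err_std ** 2)
    K = cov_xy @ np.linalg.inv(cov_yy)                    # ensemble Kalman gain
    perturbed = obs + obs_err_std * rng.standard_normal((n, obs.size))
    return ensemble + (perturbed - predicted) @ K.T
```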

  16. Constraining Dark Sectors with Monojets and Dijets

    CERN Document Server

    Chala, Mikael; McCullough, Matthew; Nardini, Germano; Schmidt-Hoberg, Kai

    2015-01-01

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  17. Long-term predictive capability of erosion models

    Science.gov (United States)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview is presented of long-term cavitation and liquid-impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach. A table highlights the number of variables each model requires in order to compute the erosion-versus-time curves. A power-law relation based on the average erosion rate is suggested, which may solve several modeling problems.

  18. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  19. Robust stability in constrained predictive control through the Youla parameterisations

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2011-01-01

    In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in the face of norm-bounded uncertainties. Under special conditions, guarantees are also given for hard input constraints. In more detail, we parameterise the MPC predictions in terms of the primary Youla parameter and use this parameter as the on-line optimisation variable. The uncertainty is parameterised in terms of the dual Youla parameter. Stability can then be guaranteed through small-gain arguments ...

  20. An embodied biologically constrained model of foraging: from classical and operant conditioning to adaptive real-world behavior in DAC-X.

    Science.gov (United States)

    Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J

    2015-12-01

    Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles accounts for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures.

  1. Utilizing the Updated Gamma-Ray Bursts and Type Ia Supernovae to Constrain the Cardassian Expansion Model and Dark Energy

    Directory of Open Access Journals (Sweden)

    Jun-Jie Wei

    2015-01-01

    We update gamma-ray burst (GRB) luminosity relations among certain spectral and light-curve features with 139 GRBs. The distance modulus of 82 GRBs at z > 1.4 can be calibrated with the sample at z ≤ 1.4 by using the cubic spline interpolation method from the Union2.1 Type Ia supernovae (SNe Ia) set. We investigate the joint constraints on the Cardassian expansion model and dark energy with the 580 Union2.1 SNe Ia sample (z < 1.4) and the data of the 82 calibrated GRBs (1.4 < z ≤ 8.2). For the Cardassian expansion model, the best fit is Ωm = 0.24 ± 0.15 and n = 0.16 (+0.30/−0.52) (1σ), which is consistent with the ΛCDM cosmology (n = 0) in the 1σ confidence region. We also discuss two dark energy models in which the equation of state w(z) is parameterized as w(z) = w0 and w(z) = w0 + w1 z/(1+z), respectively. Based on our analysis, we see that our universe at higher redshift up to z = 8.2 is consistent with the concordance model within the 1σ confidence level.
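
    The low-redshift calibration step lends itself to a short sketch: fit a cubic spline to the SNe Ia Hubble diagram and evaluate it at the redshifts of the low-z GRBs. A minimal version with placeholder data (the real analysis uses the Union2.1 moduli and their errors) might look like this:

      import numpy as np
      from scipy.interpolate import CubicSpline

      rng = np.random.default_rng(1)

      # Stand-in for the Union2.1 Hubble diagram: sorted (z, mu) pairs;
      # the mu values below are fake and only illustrate the shape.
      z_sn = np.sort(rng.uniform(0.015, 1.4, 580))
      mu_sn = 43.2 + 5.0 * np.log10(z_sn)

      spline = CubicSpline(z_sn, mu_sn)

      # Model-independent distance moduli for GRBs inside the SN range;
      # these anchor the luminosity relations, which are then applied
      # to the 82 GRBs at z > 1.4.
      z_grb_low = np.array([0.35, 0.84, 1.24])
      mu_grb_low = spline(z_grb_low)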

  2. Modeling and Event-Driven Simulation of Coordinated Multi-Point in LTE-Advanced with Constrained Backhaul

    DEFF Research Database (Denmark)

    Artuso, Matteo; Christiansen, Henrik Lehrmann

    2014-01-01

    ... coordinated multi-point joint transmission (CoMP JT). Field tests of CoMP JT are generally considered impractical and costly, hence the need for a comprehensive, high-fidelity computer model to understand the impact of different design attributes and the applicable use cases. This paper presents ...

  3. A system identification approach for developing and parameterising an agroforestry system model under constrained availability of data

    NARCIS (Netherlands)

    Keesman, K.J.; Graves, A.; Werf, van der W.; Burgess, P.J.; Palma, J.; Dupraz, C.; Keulen, van H.

    2011-01-01

    This paper introduces a system identification approach to overcome the problem of insufficient data when developing and parameterising an agroforestry system model. Typically, for these complex systems the number of available data points from actual systems is less than the number of parameters in a model.

  4. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    Science.gov (United States)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. Facilities that are located in this article concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To attack this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network leading to high complexities. Since the collection centres play an important role in this network, the reliability concept of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constraint mixed integer linear programming (BOIFPCCMILP). Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.

  5. Dynamics of Saxothuringian subduction channel/wedge constrained by phase equilibria modelling and micro-fabric analysis

    Czech Academy of Sciences Publication Activity Database

    Collett, S.; Štípská, P.; Kusbach, Vladimír; Schulmann, K.; Marciniak, G.

    2017-01-01

    Vol. 35, No. 3 (2017), pp. 253-280 ISSN 0263-4929 Institutional support: RVO:67985530 Keywords: eclogite * Bohemian Massif * thermodynamic modelling * micro-fabric analysis * subduction and exhumation dynamics Subject RIV: DB - Geology; Mineralogy OECD discipline: Geology Impact factor: 3.594, year: 2016

  6. A vertically resolved, global, gap-free ozone database for assessing or constraining global climate model simulations

    Directory of Open Access Journals (Sweden)

    G. E. Bodeker

    2013-02-01

    High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the (N+1)-th vertical level. The initial guess field for the first fit level (20 km / 58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not include interactive ozone chemistry.
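
    The latitude-month structure of the fit can be sketched in a few lines: Legendre polynomials in (scaled) latitude crossed with Fourier harmonics in month, solved by ordinary least squares. The degrees, latitude scaling, and synthetic data below are assumptions for illustration only, not the database's actual configuration.

      import numpy as np
      from numpy.polynomial import legendre

      def design_matrix(lat_deg, month, n_leg=4, n_fourier=2):
          x = lat_deg / 90.0                        # map latitude to [-1, 1]
          cols = []
          for l in range(n_leg + 1):
              P = legendre.legval(x, [0] * l + [1])  # Legendre P_l(x)
              cols.append(P)                         # annual-mean term
              for k in range(1, n_fourier + 1):      # seasonal harmonics
                  phase = 2 * np.pi * k * month / 12.0
                  cols.append(P * np.cos(phase))
                  cols.append(P * np.sin(phase))
          return np.column_stack(cols)

      rng = np.random.default_rng(2)
      lat = rng.uniform(-87.5, 87.5, 500)
      mon = rng.integers(1, 13, 500)
      ozone = 300 + 50 * np.cos(np.deg2rad(lat))    # synthetic stand-in data
      A = design_matrix(lat, mon)
      coef, *_ = np.linalg.lstsq(A, ozone, rcond=None)
      # coef from level N would seed the fit at level N+1, as described.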

  7. New Spectral Model for Constraining Torus Covering Factors from Broadband X-Ray Spectra of Active Galactic Nuclei

    Science.gov (United States)

    Baloković, M.; Brightman, M.; Harrison, F. A.; Comastri, A.; Ricci, C.; Buchner, J.; Gandhi, P.; Farrah, D.; Stern, D.

    2018-02-01

    The basic unified model of active galactic nuclei (AGNs) invokes an anisotropic obscuring structure, usually referred to as a torus, to explain AGN obscuration as an angle-dependent effect. We present a new grid of X-ray spectral templates based on radiative transfer calculations in neutral gas in an approximately toroidal geometry, appropriate for CCD-resolution X-ray spectra (FWHM ≥ 130 eV). Fitting the templates to broadband X-ray spectra of AGNs provides constraints on two important geometrical parameters of the gas distribution around the supermassive black hole: the average column density and the covering factor. Compared to the currently available spectral templates, our model is more flexible, and capable of providing constraints on the main torus parameters in a wider range of AGNs. We demonstrate the application of this model using hard X-ray spectra from NuSTAR (3–79 keV) for four AGNs covering a variety of classifications: 3C 390.3, NGC 2110, IC 5063, and NGC 7582. This small set of examples was chosen to illustrate the range of possible torus configurations, from disk-like to sphere-like geometries with column densities below, as well as above, the Compton-thick threshold. This diversity of torus properties challenges the simple assumption of a standard geometrically and optically thick toroidal structure commonly invoked in the basic form of the unified model of AGNs. Finding broad consistency between our constraints and those from infrared modeling, we discuss how the approach from the X-ray band complements similar measurements of AGN structures at other wavelengths.

  8. Modeling long-term dynamics of electricity markets

    International Nuclear Information System (INIS)

    Olsina, Fernando; Garces, Francisco; Haubrich, H.-J.

    2006-01-01

    In the last decade, many countries have restructured their electricity industries by introducing competition in their power generation sectors. Although some restructuring has been regarded as successful, the short experience accumulated with liberalized power markets does not yet allow any well-founded assertion about their long-term behavior. Long-term prices and long-term supply reliability are now at the center of interest. This concerns firms considering investments in generation capacity and regulatory authorities interested in assuring long-term supply adequacy and the stability of power markets. In order to gain significant insight into the long-term behavior of liberalized power markets, in this paper a simulation model based on system dynamics is proposed and the underlying mathematical formulations extensively discussed. Unlike classical market models based on the assumption that market outcomes replicate the results of a centrally made optimization, the approach presented here focuses on replicating the system structure of power markets and the logic of relationships among system components in order to derive its dynamical response. The simulations suggest that there might be serious problems in adjusting, early enough, the generation capacity necessary to maintain stable reserve margins, and consequently stable long-term price levels. Because of feedback loops embedded in the structure of power markets and the existence of some time lags, the long-term market development might exhibit quite volatile behavior. By varying some exogenous inputs, a sensitivity analysis is carried out to assess the influence of these factors on the long-run market dynamics.
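
    The kind of feedback loop the paper describes can be caricatured in a few lines: price rises when the reserve margin falls, investment follows price, and a multi-year construction lag lets capacity overshoot and undershoot. All parameter values below are invented for illustration, not taken from the paper's model.

      # Toy system-dynamics loop: scarcity pricing, an investment rule,
      # and a construction delay that together produce boom-bust cycles.
      years, lag = 40, 4                 # horizon and construction delay
      demand, capacity = 100.0, 120.0
      pipeline = [0.0] * lag             # capacity under construction
      history = []
      for t in range(years):
          margin = (capacity - demand) / demand
          price = max(20.0, 60.0 - 300.0 * margin)   # scarcity pricing
          build = max(0.0, 0.4 * (price - 40.0))     # investment rule
          pipeline.append(build)                     # enters the queue
          capacity += pipeline.pop(0) - 0.02 * capacity  # commission, retire
          demand *= 1.02                             # exogenous growth
          history.append((t, capacity, price))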

  9. Data-constrained models of quiet and storm-time geosynchronous magnetic field based on observations in the near geospace

    Science.gov (United States)

    Andreeva, V. A.; Tsyganenko, N. A.

    2017-12-01

    The geosynchronous orbit is unique in that its nightside segment skims along the boundary, separating the inner magnetosphere with a predominantly dipolar configuration from the magnetotail, where the Earth's magnetic field becomes small relative to the contribution from external sources. The ability to accurately reconstruct the magnetospheric configuration at GEO is important to understand the behavior of plasma and energetic particles, which critically affect space weather in the area densely populated by a host of satellites. To that end, we have developed a dynamical empirical model of the geosynchronous magnetic field with forecasting capability, based on a multi-year set of data taken by THEMIS, Polar, Cluster, Geotail, and Van Allen missions. The model's mathematical structure is devised using a new approach [Andreeva and Tsyganenko, 2016, doi:10.1002/2015JA022242], in which the toroidal/poloidal components of the field are represented using the radial and azimuthal basis functions. The model describes the field as a function of solar-magnetic coordinates, geodipole tilt angle, solar wind pressure, and a set of dynamic variables, quantifying the magnetosphere's response to external driving/loading and internal relaxation/dissipation during the disturbance recovery. The response variables are introduced following the approach by Tsyganenko and Sitnov [2005, doi:10.1029/2004JA010798], in which the electric current dynamics was described as a result of competition between the external energy input and the subsequent internal losses of the injected energy. The model's applicability range extends from quiet to moderately disturbed conditions, with peak Sym-H values down to −150 nT. The obtained results have been validated using independent GOES magnetometer data, taken during the maximum of the 23rd solar cycle and its declining phase.

  10. Short-term forecasting model for aggregated regional hydropower generation

    International Nuclear Information System (INIS)

    Monteiro, Claudio; Ramirez-Rosado, Ignacio J.; Fernandez-Jimenez, L. Alfredo

    2014-01-01

    Highlights:
    • Original short-term forecasting model for the hourly hydropower generation.
    • The use of NWP forecasts allows horizons of several days.
    • New variable to represent the capacity level for generating hydroelectric energy.
    • The proposed model significantly outperforms the persistence model.

    Abstract: This paper presents an original short-term forecasting model of the hourly electric power production for aggregated regional hydropower generation. The inputs of the model are previously recorded values of the aggregated hourly production of hydropower plants and hourly water precipitation forecasts using Numerical Weather Prediction tools, as well as other hourly data (load demand and wind generation). This model is composed of three modules: the first one gives the prediction of the “monthly” hourly power production of the hydropower plants; the second module gives the prediction of hourly power deviation values, which are added to that obtained by the first module to achieve the final forecast of the hourly hydropower generation; the third module allows a periodic adjustment of the prediction of the first module to improve its BIAS error. The model has been applied successfully to the real-life case study of the short-term forecasting of the aggregated hydropower generation in Spain and Portugal (Iberian Peninsula Power System), achieving satisfactory results for the next-day forecasts. The model can be valuable for agents involved in electricity markets and useful for power system operations.

  11. Spatial organization of the budding yeast genome in the cell nucleus and identification of specific chromatin interactions from multi-chromosome constrained chromatin model.

    Science.gov (United States)

    Gürsoy, Gamze; Xu, Yun; Liang, Jie

    2017-07-01

    Nuclear landmarks and biochemical factors play important roles in the organization of the yeast genome. The interaction pattern of budding yeast as measured from genome-wide 3C studies are largely recapitulated by model polymer genomes subject to landmark constraints. However, the origin of inter-chromosomal interactions, specific roles of individual landmarks, and the roles of biochemical factors in yeast genome organization remain unclear. Here we describe a multi-chromosome constrained self-avoiding chromatin model (mC-SAC) to gain understanding of the budding yeast genome organization. With significantly improved sampling of genome structures, both intra- and inter-chromosomal interaction patterns from genome-wide 3C studies are accurately captured in our model at higher resolution than previous studies. We show that nuclear confinement is a key determinant of the intra-chromosomal interactions, and centromere tethering is responsible for the inter-chromosomal interactions. In addition, important genomic elements such as fragile sites and tRNA genes are found to be clustered spatially, largely due to centromere tethering. We uncovered previously unknown interactions that were not captured by genome-wide 3C studies, which are found to be enriched with tRNA genes, RNAPIII and TFIIS binding. Moreover, we identified specific high-frequency genome-wide 3C interactions that are unaccounted for by polymer effects under landmark constraints. These interactions are enriched with important genes and likely play biological roles.

  12. Fire emissions constrained by the synergistic use of formaldehyde and glyoxal SCIAMACHY columns in a two-compound inverse modelling framework

    Science.gov (United States)

    Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.

    2008-12-01

    Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005 and provided by the BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that the information gained from measurements of one species constrains the sources of both compounds, due to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.

  13. A facilitated diffusion model constrained by the probability isotherm: a pedagogical exercise in intuitive non-equilibrium thermodynamics.

    Science.gov (United States)

    Chapman, Brian

    2017-06-01

    This paper seeks to develop a more thermodynamically sound pedagogy for students of biological transport than is currently available from either of the competing schools of linear non-equilibrium thermodynamics (LNET) or Michaelis-Menten kinetics (MMK). To this end, a minimal model of facilitated diffusion was constructed comprising four reversible steps: cis-substrate binding, cis→trans bound-enzyme shuttling, trans-substrate dissociation and trans→cis free-enzyme shuttling. All model parameters were subject to the second law constraint of the probability isotherm, which determined the unidirectional and net rates for each step and for the overall reaction through the law of mass action. Rapid equilibration scenarios require sensitive 'tuning' of the thermodynamic binding parameters to the equilibrium substrate concentration. All non-equilibrium scenarios show sigmoidal force-flux relations, with only a minority of cases having their quasi-linear portions close to equilibrium. Few cases fulfil the expectations of MMK relating reaction rates to enzyme saturation. This new approach illuminates and extends the concept of rate-limiting steps by focusing on the free energy dissipation associated with each reaction step and thereby deducing its respective relative chemical impedance. The crucial importance of an enzyme's being thermodynamically 'tuned' to its particular task, dependent on the cis- and trans-substrate concentrations with which it deals, is consistent with the occurrence of numerous isoforms for enzymes that transport a given substrate in physiologically different circumstances. This approach to kinetic modelling, being aligned with neither MMK nor LNET, is best described as intuitive non-equilibrium thermodynamics, and is recommended as a useful adjunct to the design and interpretation of experiments in biotransport.
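
    The mechanics of such a model reduce to linear algebra once the rates are fixed. The sketch below solves the steady state of the four-state carrier cycle by mass action and checks the second-law (probability isotherm) style constraint that the product of forward-to-reverse rate ratios equals the overall substrate concentration ratio; all rate constants are illustrative assumptions, not the paper's parameter set.

      import numpy as np

      S_cis, S_trans = 2.0, 1.0                  # fixed substrate pools (mM)
      kf = np.array([5.0, 1.0, 4.0, 1.0])        # forward: bind, shuttle, release, return
      kr = np.array([2.0, 1.0, 10.0, 1.0])       # matching reverse rate constants

      # Pseudo-first-order rates around the cycle E_c -> ES_c -> ES_t -> E_t -> E_c
      f = np.array([kf[0] * S_cis, kf[1], kf[2], kf[3]])
      r = np.array([kr[0], kr[1], kr[2] * S_trans, kr[3]])

      # Steady state of the 4-state cycle: solve M p = 0 with sum(p) = 1
      M = np.zeros((4, 4))
      for i in range(4):
          j = (i + 1) % 4
          M[i, i] -= f[i]; M[j, i] += f[i]       # transition i -> j
          M[j, j] -= r[i]; M[i, j] += r[i]       # transition j -> i
      A = np.vstack([M, np.ones(4)])
      b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
      p, *_ = np.linalg.lstsq(A, b, rcond=None)  # occupancies of the 4 states

      J = f[1] * p[1] - r[1] * p[2]              # net flux across the shuttle step
      # With these constants prod(kf/kr) = 1, so the cycle is driven only by
      # the substrate gradient: prod(f/r) must equal S_cis/S_trans.
      assert np.isclose(np.prod(f / r), S_cis / S_trans)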

  14. Constraining Genome-Scale Models to Represent the Bow Tie Structure of Metabolism for 13C Metabolic Flux Analysis

    Directory of Open Access Journals (Sweden)

    Tyler W. H. Backman

    2018-01-01

    Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods. We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
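
    The first step is ordinary linear programming. A toy version with an invented four-reaction network (not one of the paper's genome-scale models, and not the limitfluxtocore API) shows the pattern: minimize total flux crossing into the designated core subject to steady state and a measured uptake rate.

      import numpy as np
      from scipy.optimize import linprog

      # Reactions: v0 uptake -> A, v1 A -> core, v2 A -> biomass precursor,
      # v3 core -> product.  Metabolites (rows): A, core pool.
      S = np.array([
          [1, -1, -1,  0],   # A:    v0 - v1 - v2 = 0
          [0,  1,  0, -1],   # core: v1 - v3 = 0
      ])
      c = np.array([0, 1, 0, 0])          # minimize flux across the core boundary (v1)
      bounds = [(10, 10), (0, None), (0, None), (0, None)]  # measured uptake = 10
      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print(res.x)   # flux distribution with minimal influx into the core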

  15. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    OpenAIRE

    Paul Lundgren; M. Nikkhoo; Sergey V. Samsonov; Pietro Milillo; Fernando Gil-Cruz; Jonathan Lazo

    2017-01-01

    Tar files for each of the InSAR time series (interferograms used in the GIAnT time series computation as well as the input files and outputs from using GIAnT). GIAnT is an open source InSAR time series code developed at Caltech. The UAVSAR_*.tgz files contain the interferograms from the UAVSAR airborne system that were used in the analysis. The actual model input files require some additional down-sampling using resamptool.m, a Matlab code developed by Prof. R. Lohman, Cornell Univ.

  16. Dynamic Output Feedback Robust Model Predictive Control via Zonotopic Set-Membership Estimation for Constrained Quasi-LPV Systems

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2015-01-01

    For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, the bounds of the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.
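
    A zonotope is just a center plus a generator matrix, which makes the set operations used here short to write down. The sketch below (matrices invented for illustration) propagates a state zonotope through linear dynamics, inflates it by a disturbance zonotope via a Minkowski sum, and extracts interval bounds of the kind used to check feasibility at the next sampling time.

      import numpy as np

      class Zonotope:
          def __init__(self, center, generators):
              self.c = np.asarray(center, float)       # center, shape (n,)
              self.G = np.asarray(generators, float)   # generators, shape (n, m)

          def linear_map(self, A):
              return Zonotope(A @ self.c, A @ self.G)

          def minkowski_sum(self, other):
              # Minkowski sum: add centers, concatenate generators
              return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

          def interval_hull(self):
              r = np.abs(self.G).sum(axis=1)           # per-axis radius
              return self.c - r, self.c + r

      A = np.array([[0.9, 0.1], [0.0, 0.8]])          # assumed dynamics
      X = Zonotope([1.0, 0.0], np.eye(2) * 0.2)       # state uncertainty set
      W = Zonotope([0.0, 0.0], np.eye(2) * 0.05)      # bounded disturbance set
      X_next = X.linear_map(A).minkowski_sum(W)
      lo, hi = X_next.interval_hull()                  # bounds on the next state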

  17. Effect of time-varying tropospheric models on near-regional and regional infrasound propagation as constrained by observational data

    Science.gov (United States)

    McKenna, Mihan H.; Stump, Brian W.; Hayward, Chris

    2008-06-01

    The Chulwon Seismo-Acoustic Array (CHNAR) is a regional seismo-acoustic array with co-located seismometers and infrasound microphones on the Korean peninsula. Data from forty-two days over the course of a year between October 1999 and August 2000 were analyzed; 2052 infrasound-only arrivals and 23 seismo-acoustic arrivals were observed over the six week study period. A majority of the signals occur during local working hours, hour 0 to hour 9 UT and appear to be the result of cultural activity located within a 250 km radius. Atmospheric modeling is presented for four sample days during the study period, one in each of November, February, April, and August. Local meteorological data sampled at six hour intervals is needed to accurately model the observed arrivals and this data produced highly temporally variable thermal ducts that propagated infrasound signals within 250 km, matching the temporal variation in the observed arrivals. These ducts change dramatically on the order of hours, and meteorological data from the appropriate sampled time frame was necessary to interpret the observed arrivals.

  18. Murine model of long-term obstructive jaundice.

    Science.gov (United States)

    Aoki, Hiroaki; Aoki, Masayo; Yang, Jing; Katsuta, Eriko; Mukhopadhyay, Partha; Ramanathan, Rajesh; Woelfel, Ingrid A; Wang, Xuan; Spiegel, Sarah; Zhou, Huiping; Takabe, Kazuaki

    2016-11-01

    With the recent emergence of conjugated bile acids as signaling molecules in cancer, a murine model of obstructive jaundice by cholestasis with long-term survival is needed. Here, we investigated the characteristics of three murine models of obstructive jaundice. C57BL/6J mice were used for total ligation of the common bile duct (tCL), partial common bile duct ligation (pCL), and ligation of the left and median hepatic bile duct with gallbladder removal (LMHL) models. Survival was assessed by the Kaplan-Meier method. Fibrotic change was determined by Masson-Trichrome staining and collagen expression. Overall, 70% (7 of 10) of tCL mice died by day 7, whereas the majority, 67% (10 of 15), of pCL mice survived with loss of jaundice. A total of 19% (3 of 16) of LMHL mice died; however, jaundice continued beyond day 14, with survival of more than a month. Compensatory enlargement of the right lobe was observed in both pCL and LMHL models. The pCL model demonstrated acute inflammation due to obstructive jaundice 3 d after ligation, but jaundice rapidly decreased by day 7. The LMHL group developed portal hypertension and severe fibrosis by day 14 in addition to prolonged jaundice. The standard tCL model is too unstable, with high mortality, for long-term studies. pCL may be an appropriate model for acute inflammation with obstructive jaundice, but long-term survivors are no longer jaundiced. The LMHL model was identified to be the most feasible model to study the effect of long-term obstructive jaundice.

  19. New approach to study mobility in the vicinity of dynamical arrest; exact application to a kinetically constrained model

    Science.gov (United States)

    DeGregorio, P.; Lawlor, A.; Dawson, K. A.

    2006-04-01

    We introduce a new method to describe systems in the vicinity of dynamical arrest. This involves a map that transforms mobile systems at one length scale to mobile systems at a longer length. This map is capable of capturing the singular behavior accrued across very large length scales, and provides a direct route to the dynamical correlation length and other related quantities. The ideas are immediately applicable in two spatial dimensions, and have been applied to a modified Kob-Andersen type model. For such systems the map may be derived in an exact form, and readily solved numerically. We obtain the asymptotic behavior across the whole physical domain of interest in dynamical arrest.

  20. Correlation-constrained and sparsity-controlled vector autoregressive model for spatio-temporal wind power forecasting

    DEFF Research Database (Denmark)

    Zhao, Yongning; Ye, Lin; Pinson, Pierre

    2018-01-01

    The ever-increasing number of wind farms has brought both challenges and opportunities in the development of wind power forecasting techniques to take advantage of interdependencies between tens or hundreds of spatially distributed wind farms, e.g., over a region. In this paper, a Sparsity-Controlled Vector Autoregressive (SC-VAR) model is proposed to obtain sparse coefficient matrices in a direct manner. However, this original SC-VAR is difficult to implement due to its complicated constraints and the lack of guidelines for setting its parameters. To reduce the complexity of this MINLP and to make it possible to incorporate prior expert knowledge to benefit model building ...
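
    The sparsity idea can be approximated without the mixed-integer machinery: fit each farm's next-step output on all farms' lagged outputs with an L1 penalty, which drives most coefficients to zero. This lasso stand-in (synthetic data, arbitrary penalty) is not the authors' SC-VAR, but it conveys the structure of a sparse VAR(1) coefficient matrix.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      T, n = 500, 12                          # time steps, wind farms
      Y = rng.standard_normal((T, n)).cumsum(axis=0)  # synthetic farm outputs

      X_lag, X_now = Y[:-1], Y[1:]
      coefs = np.zeros((n, n))
      for i in range(n):                      # one sparse regression per farm
          model = Lasso(alpha=0.1).fit(X_lag, X_now[:, i])
          coefs[i] = model.coef_              # row i: farm i's dependence on lags
      # Most entries of coefs end up exactly zero, mimicking the sparsity
      # that the SC-VAR formulation controls explicitly.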

  1. Theory and practice in sport psychology and motor behaviour needs to be constrained by integrative modelling of brain and behaviour.

    Science.gov (United States)

    Keil, D; Holmes, P; Bennett, S; Davids, K; Smith, N

    2000-06-01

    Because of advances in technology, the non-invasive study of the human brain has enhanced the knowledge base within the neurosciences, resulting in an increased impact on the psychological study of human behaviour. We argue that application of this knowledge base should be considered in theoretical modelling within sport psychology and motor behaviour alongside existing ideas. We propose that interventions founded on current theoretical and empirical understanding in both psychology and the neurosciences may ultimately lead to greater benefits for athletes during practice and performance. As vehicles for exploring the arguments of a greater integration of psychology and neurosciences research, imagery and perception-action within the sport psychology and motor behaviour domains will serve as exemplars. Current neuroscience evidence will be discussed in relation to theoretical developments; the implications for sport scientists will be considered.

  2. Constraining the climate and ocean pH of the early Earth with a geological carbon cycle model

    Science.gov (United States)

    Krissansen-Totton, Joshua; Arney, Giada N.; Catling, David C.

    2018-04-01

    The early Earth’s environment is controversial. Climatic estimates range from hot to glacial, and inferred marine pH spans strongly alkaline to acidic. Better understanding of early climate and ocean chemistry would improve our knowledge of the origin of life and its coevolution with the environment. Here, we use a geological carbon cycle model with ocean chemistry to calculate self-consistent histories of climate and ocean pH. Our carbon cycle model includes an empirically justified temperature and pH dependence of seafloor weathering, allowing the relative importance of continental and seafloor weathering to be evaluated. We find that the Archean climate was likely temperate (0–50 °C) due to the combined negative feedbacks of continental and seafloor weathering. Ocean pH evolves monotonically from 6.6 (+0.6/−0.4, 2σ) at 4.0 Ga to 7.0 (+0.7/−0.5, 2σ) at the Archean–Proterozoic boundary, and to 7.9 (+0.1/−0.2, 2σ) at the Proterozoic–Phanerozoic boundary. This evolution is driven by the secular decline of pCO2, which in turn is a consequence of increasing solar luminosity, but is moderated by carbonate alkalinity delivered from continental and seafloor weathering. Archean seafloor weathering may have been a comparable carbon sink to continental weathering, but is less dominant than previously assumed, and would not have induced global glaciation. We show how these conclusions are robust to a wide range of scenarios for continental growth, internal heat flow evolution and outgassing history, greenhouse gas abundances, and changes in the biotic enhancement of weathering.

  4. Modelling the short term herding behaviour of stock markets

    International Nuclear Information System (INIS)

    Shapira, Yoash; Berman, Yonatan; Ben-Jacob, Eshel

    2014-01-01

    Modelling the behaviour of stock markets has been of major interest in the past century. The market can be treated as a network of many investors reacting in accordance with their group behaviour, as manifested by the index and affected by the flow of external information into the system. Here we devise a model that encapsulates the behaviour of stock markets. The model consists of two terms, demonstrating quantitatively the effect of the individual tendency to follow the group and the effect of the individual reaction to the available information. Using the above factors we were able to explain several key features of the stock market: the high correlations between the individual stocks and the index; the Epps effect; and the highly fluctuating nature of the market, similar to real market behaviour. Furthermore, intricate long-term phenomena are also described by this model, such as bursts of synchronized average correlation and the dominance of the index as demonstrated through partial correlation.

  5. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  6. A Long-Term Mathematical Model for Mining Industries

    OpenAIRE

    Achdou , Yves; Giraud , Pierre-Noel; Lasry , Jean-Michel; Lions , Pierre-Louis

    2016-01-01

    A parsimonious long-term model is proposed for a mining industry. Knowing the dynamics of the global reserve, the strategy of each production unit consists of an optimal control problem with two controls: first, the flux invested into prospection and the building of new extraction facilities; second, the production rate. In turn, the dynamics of the global reserve depends on the individual strategies of the producers, so the model leads to an equilibrium, which is descr...

  7. Multivariate Term Structure Models with Level and Heteroskedasticity Effects

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    The paper introduces and estimates a multivariate level-GARCH model for the long rate and the term-structure spread where the conditional volatility is proportional to the γth power of the variable itself (level effects) and the conditional covariance matrix evolves according to a multivariate GARCH structure ... and the level model. GARCH effects are more important than level effects. The results are robust to the maturity of the interest rates. Publication date: May.
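
    A univariate simulation makes the two effects concrete: the conditional standard deviation is the product of a GARCH(1,1) component and the lagged rate raised to the power γ. The mean-reversion form and all parameter values below are illustrative assumptions, not estimates from the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      T = 1000
      kappa, theta, gamma = 0.1, 0.05, 0.5       # mean reversion and level power
      omega, alpha, beta = 1e-6, 0.05, 0.90      # GARCH(1,1) parameters

      r = np.empty(T); h = np.empty(T)
      r[0], h[0] = 0.05, 1e-5
      for t in range(1, T):
          vol = np.sqrt(h[t - 1]) * r[t - 1] ** gamma   # level effect x GARCH
          eps = vol * rng.standard_normal()
          r[t] = max(1e-6, r[t - 1] + kappa * (theta - r[t - 1]) + eps)
          # GARCH recursion on the level-scaled residual eps / r^gamma
          z2 = eps ** 2 / max(r[t - 1] ** (2 * gamma), 1e-12)
          h[t] = omega + alpha * z2 + beta * h[t - 1]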

  8. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.
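
    The core of such a model is a choice probability that integrates a logistic link over a random perception error on travel time. A Monte Carlo version with invented coefficients (the paper itself estimates the model with Gibbs sampling) looks like this; restricting the draws to be non-negative would mimic the sign-constrained extension.

      import numpy as np

      rng = np.random.default_rng(5)

      def choice_prob(beta0, beta_t, t_obs, sigma, n_draws=2000):
          """P(choose transit) with perceived time = observed time + error."""
          err = sigma * rng.standard_normal(n_draws)
          # For the sign-constrained variant, use e.g. np.abs(err) instead.
          t_perceived = t_obs + err                 # traveler's belief
          u = beta0 + beta_t * t_perceived          # systematic utility
          return np.mean(1.0 / (1.0 + np.exp(-u)))  # average over the error

      p = choice_prob(beta0=1.0, beta_t=-0.05, t_obs=30.0, sigma=5.0)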

  9. Constraining Dark Matter Models from a Combined Analysis of Milky Way Satellites with the Fermi Large Area Telescope

    Science.gov (United States)

    Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; et al.

    2011-01-01

    Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 × 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (~3 × 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.

  10. Constraining Dark Matter Models from a Combined Analysis of Milky Way Satellites with the Fermi Large Area Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W.B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; et al.

    2012-09-14

    Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 x 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (~3 x 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.

  11. Modelling the Long-term Periglacial Imprint on Mountain Landscapes

    DEFF Research Database (Denmark)

    Andersen, Jane Lund; Egholm, David Lundbek; Knudsen, Mads Faurschou

    Studies of periglacial processes usually focus on small-scale, isolated phenomena, leaving less explored the question of how such processes shape vast areas of Earth’s surface. Here we use numerical surface process modelling to better understand how periglacial processes drive large-scale, long-term ...

  12. Viscous cosmological models with a variable cosmological term ...

    African Journals Online (AJOL)

    Einstein's field equations for a Friedmann-Lemaître-Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term Λ described by the full Israel-Stewart theory are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust-free model ...

  13. A shell-model calculation in terms of correlated subsystems

    International Nuclear Information System (INIS)

    Boisson, J.P.; Silvestre-Brac, B.

    1979-01-01

    A method for solving the shell-model equations in terms of a basis which includes correlated subsystems is presented. It is shown that the method allows drastic truncations of the basis to be made. The corresponding calculations are easy to perform and can be carried out rapidly

  14. Risk factors and prognostic models for perinatal asphyxia at term

    NARCIS (Netherlands)

    Ensing, S.

    2015-01-01

    This thesis will focus on the risk factors and prognostic models for adverse perinatal outcome at term, with a special focus on perinatal asphyxia and obstetric interventions during labor to reduce adverse pregnancy outcomes. For the majority of the studies in this thesis we were allowed to use data

  15. The Cluster Lens SDSS 1004+4112: Constraining World Models With its Multiply-Imaged Quasar and Galaxies

    Science.gov (United States)

    Kochanek, C.

    2005-07-01

    We will use deep ACS imaging of the giant {15 arcsec} four-image z_s=1.734 lensed quasar SDSS 1004+4112, and its z_l=0.68 lensing galaxy cluster, to identify many additional multiply-imaged background galaxies. Combining the existing single-orbit ACS I-band image with ground-based data, we have definitively identified two multiply imaged galaxies with estimated redshifts of 2.6 and 4.3, about 15 probable images of background galaxies, and a point source in the core of the central cD galaxy, which is likely to be the faint, fifth image of the quasar. The new data will provide accurate photometric redshifts, confirm that the candidate fifth image has the same spectral energy distribution as the other quasar images, allow secure identification of additional multiply-lensed galaxies for improving the mass model, and permit identification of faint cluster members. Due to the high lens redshift and the broad redshift distribution of the lensed background sources, we should be able to use the source-redshift scaling of the Einstein radius, which depends on {d_ls/d_os}, to derive a direct, geometric estimate of Omega_Lambda. The deeper images will also allow a weak lensing analysis to extend the mass distribution to larger radii. Unlike any other cluster lenses, the time delay between the lensed quasar images {already measured for the A-B images, and measurable for the others over the next few years} breaks the so-called kappa-degeneracies that complicate weak-lensing analyses.

  16. A model for Long-term Industrial Energy Forecasting (LIEF)

    Energy Technology Data Exchange (ETDEWEB)

    Ross, M. [Lawrence Berkeley Lab., CA (United States)]|[Michigan Univ., Ann Arbor, MI (United States). Dept. of Physics]|[Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Hwang, R. [Lawrence Berkeley Lab., CA (United States)

    1992-02-01

    The purpose of this report is to establish the content and structural validity of the Long-term Industrial Energy Forecasting (LIEF) model, and to provide estimates for the model's parameters. The model is intended to provide decision makers with a relatively simple, yet credible tool to forecast the impacts of policies which affect long-term energy demand in the manufacturing sector. Particular strengths of this model are its relative simplicity, which facilitates both ease of use and understanding of results, and the inclusion of relevant causal relationships which provide useful policy handles. The modeling approach of LIEF is intermediate between top-down econometric modeling and bottom-up technology models. It relies on the following simple concept: trends in aggregate energy demand are dependent upon the factors (1) trends in total production; (2) sectoral or structural shift, that is, changes in the mix of industrial output from energy-intensive to energy non-intensive sectors; and (3) changes in real energy intensity due to technical change and energy-price effects, as measured by the amount of energy used per unit of manufacturing output (kBtu per constant $ of output). The manufacturing sector is first disaggregated into subsectors according to their historic output growth rates, energy intensities and recycling opportunities. Exogenous, macroeconomic forecasts of individual subsector growth rates and energy prices can then be combined with endogenous forecasts of real energy intensity trends to yield forecasts of overall energy demand. 75 refs.
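
    The accounting identity at the heart of this kind of model fits in a few lines: demand is a sum over subsectors of output times real energy intensity, with intensity declining through an autonomous technical-change trend and a price-elasticity term. The two-subsector numbers below are invented for illustration and are not LIEF's calibrated parameters.

      import numpy as np

      years = np.arange(1990, 2011)
      growth = {"steel": 0.00, "food": 0.02}           # subsector output growth
      intensity0 = {"steel": 20.0, "food": 5.0}        # kBtu per $ of output
      output0 = {"steel": 100.0, "food": 100.0}
      tech_decline = 0.01                              # autonomous intensity trend
      elasticity, price_growth = -0.3, 0.02            # energy-price response

      demand = np.zeros_like(years, dtype=float)
      for s in growth:
          t = years - years[0]
          output = output0[s] * (1 + growth[s]) ** t
          # Intensity falls with technical change and responds to rising prices:
          # relative price (1+g)^t enters with exponent equal to the elasticity.
          intensity = intensity0[s] * (1 - tech_decline) ** t \
                      * (1 + price_growth) ** (elasticity * t)
          demand += output * intensity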

  17. Constraining the Influence of Natural Variability to Improve Estimates of Global Aerosol Indirect Effects in a Nudged Version of the Community Atmosphere Model 5

    Energy Technology Data Exchange (ETDEWEB)

    Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn

    2012-12-11

    Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2 for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world’s area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m2 and -0.56 W/m2, respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m2), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
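
    Nudging itself is one extra tendency term. In the sketch below (toy dynamics and relaxation timescale assumed), a pre-industrial and a present-day run relaxed toward the same reference state would share most of their meteorological noise, which is why differencing them isolates the forcing more cheaply.

      import numpy as np

      def step(u, forcing, u_ref, dt=1800.0, tau=6 * 3600.0):
          """One model step with a Newtonian relaxation (nudging) tendency."""
          du_model = forcing(u)                 # the model's own tendency
          du_nudge = (u_ref - u) / tau          # relaxation toward reference
          return u + dt * (du_model + du_nudge)

      u = np.zeros(10)                          # toy prognostic field
      u_ref = np.linspace(-1.0, 1.0, 10)        # prescribed reference meteorology
      for _ in range(48):                       # one day of half-hour steps
          u = step(u, lambda x: -0.1 * x, u_ref)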

  18. Constraining Aerosol Optical Models Using Ground-Based, Collocated Particle Size and Mass Measurements in Variable Air Mass Regimes During the 7-SEAS/Dongsha Experiment

    Science.gov (United States)

    Bell, Shaun W.; Hansell, Richard A.; Chow, Judith C.; Tsay, Si-Chee; Wang, Sheng-Hsiang; Ji, Qiang; Li, Can; Watson, John G.; Khlystov, Andrey

    2012-01-01

    During the spring of 2010, NASA Goddard's COMMIT ground-based mobile laboratory was stationed on Dongsha Island off the southwest coast of Taiwan, in preparation for the upcoming 2012 7-SEAS field campaign. The measurement period offered a unique opportunity for conducting detailed investigations of the optical properties of aerosols associated with different air mass regimes, including background maritime and those contaminated by anthropogenic air pollution and mineral dust. For what appears to be the first time in this region, a shortwave optical closure experiment for both scattering and absorption was attempted over a 12-day period during which aerosols exhibited the most change. Constraints to the optical model included combined SMPS and APS number concentration data for a continuum of fine- and coarse-mode particle sizes up to PM2.5. We also take advantage of an IMPROVE chemical sampler to help constrain aerosol composition and the mass partitioning of key elemental species, including sea-salt, particulate organic matter, soil, non sea-salt sulphate, nitrate, and elemental carbon. Our results demonstrate that the observed aerosol scattering and absorption for these diverse air masses are reasonably captured by the model, where peak aerosol events and transitions between key aerosol types are evident. Signatures of heavily polluted aerosol, composed mostly of ammonium and non sea-salt sulphate mixed with some dust, with transitions to background sea-salt conditions, are apparent in the absorption data, which is particularly reassuring owing to the large variability in the imaginary component of the refractive indices. Extinctive features at significantly smaller time scales than the one-day sample period of IMPROVE are more difficult to reproduce, as this requires further knowledge concerning the source apportionment of major chemical components in the model. Consistency between the measured and modeled optical parameters serves as an important link for advancing remote

  19. Constrained minimization in C++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of the possible approaches was used. The widely known program FUMILI was reimplemented in the C++ language. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany)
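
    The constraint handling described above can be sketched compactly. The following is a minimal illustration of the same idea (not the FUMILI code; the fit model, constraint function phi and penalty weight are hypothetical placeholders): an inequality phi(theta) >= a is rewritten with a slack variable t, and the resulting equality phi(theta) = t is enforced by a quadratic penalty, while t >= a is kept as a simple bound.

        # Minimal sketch of inequality handling via slack variable + quadratic penalty.
        import numpy as np
        from scipy.optimize import minimize

        def chi2(theta, x, y, sigma):
            model = theta[0] * x + theta[1]          # hypothetical linear fit model
            return np.sum(((y - model) / sigma) ** 2)

        def phi(theta):
            return theta[0]                          # hypothetical constraint: phi(theta) >= a

        a, mu = 0.5, 1.0e4                           # bound and penalty weight (illustrative)

        def penalized(z, x, y, sigma):
            theta, t = z[:2], z[2]
            # equality phi(theta) = t enforced by a quadratic penalty
            return chi2(theta, x, y, sigma) + mu * (phi(theta) - t) ** 2

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 50)
        y = 0.8 * x + 0.1 + rng.normal(0.0, 0.05, x.size)
        res = minimize(penalized, x0=[0.0, 0.0, a], args=(x, y, 0.05),
                       bounds=[(None, None), (None, None), (a, None)])  # t >= a as a bound
        print(res.x[:2])                             # fitted parameters honouring phi >= a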

  20. Modeling Wettability Variation during Long-Term Water Flooding

    Directory of Open Access Journals (Sweden)

    Renyi Cao

    2015-01-01

    Full Text Available The surface properties of rock affect oil recovery during water flooding. Oil-wet polar substances adsorbed on the rock surface are gradually desorbed during water flooding, so the original reservoir wettability changes towards water-wet; this change reduces the residual oil saturation and improves the oil displacement efficiency. However, an accurate model of wettability alteration during long-term water flooding has been lacking, which leads to difficulties in history matching and to unreliable forecasts from reservoir simulators. This paper summarizes the mechanisms of wettability variation, characterizes the adsorption of polar substances during long-term water flooding from injected water or the aquifer, and relates the residual oil saturation and relative permeability to the amount of polar substance adsorbed on clay and to the pore volumes of flooding water. A mathematical model is presented to simulate long-term water flooding, and the model is validated with experimental results. The simulation results of long-term water flooding are also discussed.

  1. A model for Long-term Industrial Energy Forecasting (LIEF)

    Energy Technology Data Exchange (ETDEWEB)

    Ross, M. (Lawrence Berkeley Lab., CA (United States) Michigan Univ., Ann Arbor, MI (United States). Dept. of Physics Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.); Hwang, R. (Lawrence Berkeley Lab., CA (United States))

    1992-02-01

    The purpose of this report is to establish the content and structural validity of the Long-term Industrial Energy Forecasting (LIEF) model, and to provide estimates for the model's parameters. The model is intended to provide decision makers with a relatively simple, yet credible tool to forecast the impacts of policies which affect long-term energy demand in the manufacturing sector. Particular strengths of this model are its relative simplicity, which facilitates both ease of use and understanding of results, and the inclusion of relevant causal relationships which provide useful policy handles. The modeling approach of LIEF is intermediate between top-down econometric modeling and bottom-up technology models. It relies on the simple concept that trends in aggregate energy demand depend upon three factors: (1) trends in total production; (2) sectoral or structural shift, that is, changes in the mix of industrial output from energy-intensive to energy non-intensive sectors; and (3) changes in real energy intensity due to technical change and energy-price effects, as measured by the amount of energy used per unit of manufacturing output (KBtu per constant $ of output). The manufacturing sector is first disaggregated into subsectors according to their historic output growth rates, energy intensities and recycling opportunities. Exogenous, macroeconomic forecasts of individual subsector growth rates and energy prices can then be combined with endogenous forecasts of real energy intensity trends to yield forecasts of overall energy demand. 75 refs.
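
    In symbols, the three-factor decomposition described above amounts to an identity of the form

        E(t) = Q(t) \sum_i s_i(t)\, I_i(t),

    where E is aggregate manufacturing energy demand, Q total manufacturing output (factor 1), s_i the output share of subsector i (factor 2, structural shift), and I_i its real energy intensity in KBtu per constant dollar (factor 3). This schematic identity is offered only to make the three factors concrete; LIEF's actual equations are specified in the report itself.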

  2. Potentials of satellite derived SIF products to constrain GPP simulated by the new ORCHIDEE-FluOR terrestrial model at the global scale

    Science.gov (United States)

    Bacour, C.; Maignan, F.; Porcar-Castell, A.; MacBean, N.; Goulas, Y.; Flexas, J.; Guanter, L.; Joiner, J.; Peylin, P.

    2016-12-01

    A new era for improving our knowledge of the terrestrial carbon cycle at the global scale has begun with recent studies on the relationships between remotely sensed Sun-Induced Fluorescence (SIF) and plant photosynthetic activity (GPP), and with the availability of such satellite-derived products now "routinely" produced from GOSAT, GOME-2, or OCO-2 observations. Assimilating SIF data into terrestrial ecosystem models (TEMs) represents a novel opportunity to reduce the uncertainty of their predictions with respect to carbon-climate feedbacks, in particular the uncertainties resulting from inaccurate parameter values. A prerequisite is a correct representation in TEMs of the several drivers of plant fluorescence from the leaf to the canopy scale, in particular the competing processes of photochemistry and non-photochemical quenching (NPQ). In this study, we present the first results of a global-scale assimilation of GOME-2 SIF products within a new version of the ORCHIDEE land surface model including a physical module of plant fluorescence. At the leaf level, the regulation of fluorescence yield is simulated both by the photosynthesis module of ORCHIDEE, to calculate the photochemical yield, and by a parametric model to estimate NPQ. The latter has been calibrated on leaf fluorescence measurements performed for boreal coniferous and Mediterranean vegetation species. A parametric representation of the SCOPE radiative transfer model is used to model the plant fluorescence fluxes for PSI and PSII and the scaling up to the canopy level. The ORCHIDEE-FluOR model is first evaluated with respect to in situ measurements of plant fluorescence flux and photochemical yield for Scots pine and wheat. The potential of SIF data to constrain the modelled GPP is evaluated by assimilating one year of GOME-2 SIF products within ORCHIDEE-FluOR. We investigate in particular the changes in the spatial patterns of GPP following the optimization of the photosynthesis and phenology parameters.

  3. Constraining surface emissions of air pollutants using inverse modelling: method intercomparison and a new two-step two-scale regularization approach

    Energy Technology Data Exchange (ETDEWEB)

    Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))

    2011-07-15

    When constraining surface emissions of air pollutants using inverse modelling, one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives to the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. An intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, the detailed information can change greatly according to the method used, ranging from smooth, isotropic and short-range modifications to not-so-smooth, non-isotropic and long-range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but in the Poisson case the emissions are naturally restricted to be positive and the changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution.
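
    For orientation, most of the regularization variants listed above can be seen as reweightings of a generic Bayesian cost function of the form

        J(\sigma) = \frac{1}{2} (y - H\sigma)^{\top} R^{-1} (y - H\sigma)
                  + \frac{1}{2} (\sigma - \sigma_b)^{\top} B^{-1} (\sigma - \sigma_b),

    where \sigma are the gridded emissions, \sigma_b the prior inventory, H the transport operator, and R and B the observation and background error covariances. Coarsening, spatial correlations, derivative penalties and weighting factors all amount to different choices of B (or of extra terms added to J). This generic form is given here for context and is not quoted from the paper.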

  4. The geometric phase and the Schwinger term in some models

    International Nuclear Information System (INIS)

    Grosse, H.; Langmann, E.

    1991-01-01

    We discuss quantization of fermions interacting with external fields and observe the occurrence of equivalent as well as inequivalent representations of the canonical anticommutation relations. Implementability of gauge and axial gauge transformations leads to generators which fulfill an algebra of charges with a Schwinger term. This term can be written as a cocycle and leads to the boson-fermion correspondence. Transport of a quantum mechanical system along a closed loop in parameter space may yield a geometric phase. We discuss models for which nonintegrable phase factors are obtained from the adiabatic parallel transport. After second quantization one obtains, in addition, a Schwinger term. Depending on the type of transformation, a subtle relationship between these two obstructions can occur. Finally, we indicate how density matrices may be transported along closed loops in parameter space. (authors)

  5. The Schwinger term and the Berry phase in simple models

    International Nuclear Information System (INIS)

    Grosse, H.

    1989-01-01

    We discuss quantization of fermions interacting with external fields and observe the occurrence of equivalent as well as inequivalent representations of the canonical anticommutation relations. Implementability of gauge and axial gauge transformations leads to generators which fulfill an algebra of charges with Schwinger term. This term can be written as a cocycle and leads to the boson-fermion correspondence. During an adiabatic transport along closed loops in a parameter space we may pick up a nonintegrable phase factor, usually called the Berry phase. We study the occurrence of such a topological phase in a model and give the parallel transport for density matrices. After second quantization one may pick up both a Berry phase and a Schwinger term. 13 refs. (Author)

  6. Intrinsically motivated action-outcome learning and goal-based action recall: a system-level bio-constrained computational model.

    Science.gov (United States)

    Baldassarre, Gianluca; Mannella, Francesco; Fiore, Vincenzo G; Redgrave, Peter; Gurney, Kevin; Mirolli, Marco

    2013-05-01

    Reinforcement (trial-and-error) learning in animals is driven by a multitude of processes. Most animals have evolved several sophisticated systems of 'extrinsic motivations' (EMs) that guide them to acquire behaviours allowing them to maintain their bodies, defend against threat, and reproduce. Animals have also evolved various systems of 'intrinsic motivations' (IMs) that allow them to acquire actions in the absence of extrinsic rewards. These actions are used later to pursue such rewards when they become available. Intrinsic motivations have been studied in Psychology for many decades and their biological substrates are now being elucidated by neuroscientists. In the last two decades, investigators in computational modelling, robotics and machine learning have proposed various mechanisms that capture certain aspects of IMs. However, we still lack models of IMs that attempt to integrate all key aspects of intrinsically motivated learning and behaviour while taking into account the relevant neurobiological constraints. This paper proposes a bio-constrained system-level model that contributes a major step towards this integration. The model focusses on three processes related to IMs and on the neural mechanisms underlying them: (a) the acquisition of action-outcome associations (internal models of the agent-environment interaction) driven by phasic dopamine signals caused by sudden, unexpected changes in the environment; (b) the transient focussing of visual gaze and actions on salient portions of the environment; (c) the subsequent recall of actions to pursue extrinsic rewards based on goal-directed reactivation of the representations of their outcomes. The tests of the model, including a series of selective lesions, show how the focussing processes lead to a faster learning of action-outcome associations, and how these associations can be recruited for accomplishing goal-directed behaviours. The model, together with the background knowledge reviewed in the paper

  7. PSA modeling of long-term accident sequences

    International Nuclear Information System (INIS)

    Georgescu, Gabriel; Corenwinder, Francois; Lanore, Jeanne-Marie

    2014-01-01

    In the context of the extension of PSA scope to include external hazards, in France both the operator (EDF) and IRSN are working on improved methods to better account in the PSA for accident sequences induced by initiators which affect a whole site containing several nuclear units (reactors, fuel pools, ...). These methodological improvements represent an essential prerequisite for the development of external hazards PSA. It should be noted, however, that in French PSA, even before Fukushima, long-term accident sequences were taken into account: many insights were therefore used, as complementary information, to enhance the safety level of the plants. IRSN proposed an external events PSA development program. One of the first steps of the program is the development of methods to model long-term accident sequences in the PSA, based on the experience gained. In the short term, IRSN intends to enhance the modeling of the 'long term' accident sequences induced by the loss of the heat sink or/and the loss of external power supply. The experience gained by IRSN and EDF from the development of several probabilistic studies treating long-term accident sequences shows that simply extending the mission time of the mitigation systems from 24 hours to longer times is not sufficient to realistically quantify the risk and to obtain a correct ranking of the risk contributions, and that treatment of recoveries is also necessary. IRSN intends to develop a generic study which can be used as a general methodology for the assessment of long-term accident sequences, mainly generated by external hazards and their combinations. This first attempt to develop the generic study allowed the identification of some aspects, whether hazards (or combinations of hazards) or initial boundary conditions, which should be taken into account in further developments. (authors)

  8. Maximal monotone model with delay term of convolution

    Directory of Open Access Journals (Sweden)

    Claude-Henri Lamarque

    2005-01-01

    Full Text Available Mechanical models are governed either by partial differential equations with boundary conditions and initial conditions (e.g., in the frame of continuum mechanics) or by ordinary differential equations (e.g., after discretization via a Galerkin procedure, or directly from the model description) with initial conditions. In order to study the dynamical behavior of mechanical systems with a finite number of degrees of freedom including nonsmooth terms (e.g., friction), we consider here problems governed by differential inclusions. To describe the effects of particular constitutive laws, we add a delay term. In contrast to previous papers, we introduce the delay via a Volterra kernel. We provide existence and uniqueness results by using an Euler implicit numerical scheme; convergence with its order is then established. A few numerical examples are given.

  9. A Simple Hybrid Model for Short-Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Suseelatha Annamareddi

    2013-01-01

    Full Text Available The paper proposes a simple hybrid model to forecast electrical load data based on the wavelet transform technique and double exponential smoothing. The historical noisy load series data is decomposed into deterministic and fluctuation components using suitable wavelet coefficient thresholds and wavelet reconstruction. The variation characteristics of the resulting series are analyzed to arrive at reasonable thresholds that yield good denoising results. The constituent series are then forecast using appropriate adaptive exponential smoothing models. A case study performed on California energy market data demonstrates that the proposed method can offer high forecasting precision for very short-term forecasts, considering a time horizon of two weeks.
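
    A minimal sketch of the two-stage scheme described above: wavelet denoising of the load series followed by double exponential (Holt) smoothing of the deterministic component. The wavelet, decomposition level, threshold rule and smoothing constants below are illustrative placeholders, not the values calibrated in the paper.

        import numpy as np
        import pywt

        def holt_forecast(y, alpha=0.5, beta=0.1, horizon=24):
            """Double exponential (Holt) smoothing, extrapolated 'horizon' steps ahead."""
            level, trend = y[0], y[1] - y[0]
            for v in y[1:]:
                prev_level = level
                level = alpha * v + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return level + trend * np.arange(1, horizon + 1)

        rng = np.random.default_rng(1)
        t = np.arange(512)
        load = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)  # synthetic load

        # wavelet decomposition, soft-threshold the detail coefficients, reconstruct
        coeffs = pywt.wavedec(load, 'db4', level=3)
        thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(load.size))
        denoised = pywt.waverec(
            [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]],
            'db4')[:load.size]

        print(holt_forecast(denoised))   # forecast of the deterministic component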

  10. Short-Term Memory and Its Biophysical Model

    Science.gov (United States)

    Wang, Wei; Zhang, Kai; Tang, Xiao-wei

    1996-12-01

    The capacity of short-term memory has been studied using an integrate-and-fire neuronal network model. It is found that the storage of events depends on the manner of correlation between the events, and that the capacity is dominated by the value of the after-depolarization potential. There is a monotonically increasing relationship between the value of the after-depolarization potential and the number of stored memories. The biophysical relevance of the network model is discussed and different kinds of information processes are studied as well.
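
    A minimal single-neuron sketch (all parameter values hypothetical, and much simpler than the paper's network) of the mechanism the abstract highlights: an after-depolarization (ADP) that lifts the membrane potential after each spike instead of resetting it to rest, which makes renewed firing easier after a transient input.

        import numpy as np

        dt, tau_m, v_th, v_reset, adp = 0.1, 10.0, 1.0, 0.0, 0.6   # illustrative constants
        v = 0.0
        drive = np.zeros(3000)
        drive[200:400] = 1.5                  # transient event to be "remembered"
        spikes = []

        for i, inp in enumerate(drive):
            v += (-v + inp) / tau_m * dt      # leaky integrate-and-fire dynamics
            if v >= v_th:
                spikes.append(i * dt)
                v = v_reset + adp             # ADP: reset above rest, not to it

        print(len(spikes), "spikes; last at t =", spikes[-1] if spikes else None)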

  11. Modeling flood events for long-term stability

    International Nuclear Information System (INIS)

    Schruben, T.; Portillo, R.

    1985-01-01

    The primary objective for the disposal of uranium mill tailings in the Uranium Mill Tailings Remedial Action (UMTRA) Project is isolation and stabilization to prevent their misuse by man and their dispersal by natural forces such as wind, rain, and flood waters (40 CFR-192). Stabilization of sites that are located in or near flood plains presents unique problems in designing for long-term performance. This paper discusses the process involved in the selection and hydrologic modeling of the design flood event, and in the hydraulic modeling, with geomorphic considerations, of that event. The Gunnison, Colorado, and Riverton, Wyoming, sites are used as examples in describing the process.

  12. D-term Spectroscopy in Realistic Heterotic-String Models

    CERN Document Server

    Dedes, Athanasios

    2000-01-01

    The emergence of free fermionic string models with solely the MSSM charged spectrum below the string scale provides further evidence for the assertion that the true string vacuum is connected to the Z_2 x Z_2 orbifold in the vicinity of the free fermionic point in the Narain moduli space. An important property of the Z_2 x Z_2 orbifold is the cyclic permutation symmetry between the three twisted sectors. If preserved in three-generation models, the cyclic permutation symmetry results in a family-universal anomalous U(1)_A, which is instrumental in explaining squark degeneracy, provided that the dominant component of supersymmetry breaking arises from the U(1)_A D-term. Interestingly, the contribution of the family-universal D_A-term to the squark masses may be intra-family non-universal, and may differ from the usual (universal) boundary conditions assumed in the MSSM. We contemplate how D_A-term spectroscopy may be instrumental in studying superstring models irrespective of our ignorance of the details ...

  13. The lithospheric-scale 3D structural configuration of the North Alpine Foreland Basin constrained by gravity modelling and the calculation of the 3D load distribution

    Science.gov (United States)

    Przybycin, Anna M.; Scheck-Wenderoth, Magdalena; Schneider, Michael

    2014-05-01

    The North Alpine Foreland Basin is situated at the northern front of the European Alps and extends over parts of France, Switzerland, Germany and Austria. It has formed as a wedge-shaped depression since the Tertiary, as a consequence of the Euro-Adriatic continental collision and the Alpine orogeny. The basin is filled with clastic sediments, the Molasse, originating from erosion of the Alps, and is underlain by Mesozoic sedimentary successions and a Paleozoic crystalline crust. For our study we have focused on the German part of the basin. To investigate the deep structure, the isostatic state and the load distribution of this region, we have constructed a 3D structural model of the basin and the Alpine area using available depth and thickness maps, regional-scale 3D structural models, as well as seismic and well data for the sedimentary part. The crust (from the top Paleozoic down to the Moho (Grad et al. 2008)) has been considered as two layers, a lighter upper crust and a denser lower crust; the partition has been calculated following the isostatic-equilibrium approach of Pratt (1855). By implementing a seismic Lithosphere-Asthenosphere Boundary (LAB) (Tesauro 2009), the crustal-scale model has been extended to the lithospheric scale. The layer geometry and the assigned bulk densities of this starting model have been constrained by means of 3D gravity modelling (BGI, 2012). Afterwards, the 3D load distribution has been calculated using a 3D finite element method. Our results show that the North Alpine Foreland Basin is not isostatically balanced and that the configuration of the crystalline crust strongly controls the gravity field in this area. Furthermore, our results show that the basin area is influenced by varying lateral load differences down to a depth of more than 150 km, which allows a first-order statement of the compensating horizontal stress required to prevent gravitational collapse of the system. BGI (2012). The International

  14. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q. This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different...
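
    The join predicate follows directly from the definition above: the smallest circle enclosing p and q is the circle with segment pq as its diameter, so (p, q) qualifies iff no other point falls inside that circle. A brute-force sketch (purely illustrative; the paper's contribution is evaluating RCJ efficiently):

        import numpy as np

        def rcj(P, Q, eps=1e-12):
            pts = np.vstack([P, Q])
            result = []
            for p in P:
                for q in Q:
                    c = (p + q) / 2.0                      # center of the diameter circle
                    r = np.linalg.norm(p - q) / 2.0        # its radius
                    d = np.linalg.norm(pts - c, axis=1)
                    # exclude p and q themselves, then require no other point strictly inside
                    others = d[(np.linalg.norm(pts - p, axis=1) > eps)
                               & (np.linalg.norm(pts - q, axis=1) > eps)]
                    if np.all(others >= r):
                        result.append((tuple(p), tuple(q)))
            return result

        P = np.array([[0.0, 0.0], [2.0, 0.0]])
        Q = np.array([[1.0, 0.1], [5.0, 5.0]])
        print(rcj(P, Q))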

  15. Model for low temperature oxidation during long term interim storage

    Energy Technology Data Exchange (ETDEWEB)

    Desgranges, Clara; Bertrand, Nathalie; Gauvain, Danielle; Terlain, Anne [Service de la Corrosion et du Comportement des Materiaux dans leur Environnement, CEA/Saclay - 91191 Gif-sur-Yvette Cedex (France); Poquillon, Dominique; Monceau, Daniel [CIRIMAT UMR 5085, ENSIACET-INPT, 31077 Toulouse Cedex 4 (France)

    2004-07-01

    For high-level nuclear waste containers in long-term interim storage, dry oxidation will be the first and main degradation mode for about one century. The metal lost to dry oxidation over such a long period must be evaluated with good reliability. To achieve this goal, modelling of the oxide scale growth is necessary, and this is the aim of the dry oxidation studies performed in the frame of the COCON program. An advanced model, based on the description of the elementary mechanisms involved in scale growth at low temperatures, such as partial interfacial control of the oxidation kinetics and/or grain boundary diffusion, is developed in order to increase the reliability of long-term extrapolations deduced from basic models built on short-term experiments. Since only few experimental data on dry oxidation are available in the temperature range of interest, experiments have also been performed to evaluate the relevant input parameters for the models, such as the grain size of the oxide scale, considering iron as a simplified material. (authors)

  16. Model for low temperature oxidation during long term interim storage

    International Nuclear Information System (INIS)

    Desgranges, Clara; Bertrand, Nathalie; Gauvain, Danielle; Terlain, Anne; Poquillon, Dominique; Monceau, Daniel

    2004-01-01

    For high-level nuclear waste containers in long-term interim storage, dry oxidation will be the first and main degradation mode for about one century. The metal lost to dry oxidation over such a long period must be evaluated with good reliability. To achieve this goal, modelling of the oxide scale growth is necessary, and this is the aim of the dry oxidation studies performed in the frame of the COCON program. An advanced model, based on the description of the elementary mechanisms involved in scale growth at low temperatures, such as partial interfacial control of the oxidation kinetics and/or grain boundary diffusion, is developed in order to increase the reliability of long-term extrapolations deduced from basic models built on short-term experiments. Since only few experimental data on dry oxidation are available in the temperature range of interest, experiments have also been performed to evaluate the relevant input parameters for the models, such as the grain size of the oxide scale, considering iron as a simplified material. (authors)

  17. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and the SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection is relatively sensitive to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study from March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analysis/reanalysis products, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observations specified, we compare SCM results with observations and find that the models have large biases in cloud properties which cannot be fully explained by the uncertainty from the large-scale forcing
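
    For reference, the apparent heat source Q1 and apparent moisture sink Q2 diagnosed in such budget studies are conventionally defined (following Yanai and co-workers) as

        Q_1 = \frac{\partial \bar{s}}{\partial t} + \bar{\mathbf{V}} \cdot \nabla \bar{s} + \bar{\omega} \frac{\partial \bar{s}}{\partial p},
        \qquad
        Q_2 = -L \left( \frac{\partial \bar{q}}{\partial t} + \bar{\mathbf{V}} \cdot \nabla \bar{q} + \bar{\omega} \frac{\partial \bar{q}}{\partial p} \right),

    where s = c_p T + gz is the dry static energy, q the specific humidity, \omega the pressure velocity, L the latent heat of vaporization, and overbars denote grid-scale averages. These standard definitions are supplied for context; the abstract itself does not spell them out.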

  18. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
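
    As a toy illustration of the time and maturity discretization discussed above (not the authors' estimator), consider an HJM forward-rate curve with constant volatility sigma, for which the no-arbitrage drift reduces to sigma^2 (T - t); an Euler-Maruyama step then advances the whole curve, and a Monte Carlo mean of a functional of the curve gives the weak approximation.

        import numpy as np

        sigma, dt, n_steps, n_paths = 0.01, 1.0 / 252, 252, 2_000   # illustrative values
        maturities = np.linspace(0.0, 5.0, 51)                      # maturity grid T
        rng = np.random.default_rng(42)

        f = np.full((n_paths, maturities.size), 0.03)               # flat initial curve f(0, T)
        for k in range(n_steps):
            t = k * dt
            drift = sigma ** 2 * np.clip(maturities - t, 0.0, None) # HJM drift, constant-vol case
            dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, 1))    # one driving Brownian motion
            f += drift * dt + sigma * dW                            # Euler-Maruyama step

        # weak approximation: Monte Carlo mean of f(t=1y, T=5y), with its statistical error
        print(f[:, -1].mean(), f[:, -1].std() / np.sqrt(n_paths))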

  19. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.

    2012-01-01

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.

  20. Nonlinear Kalman Filtering in Affine Term Structure Models

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Dorion, Christian; Jacobs, Kris

    When the relationship between security prices and state variables in dynamic term structure models is nonlinear, existing studies usually linearize this relationship because nonlinear filtering is computationally demanding. We conduct an extensive investigation of this linearization and analyze the potential of the unscented Kalman filter to properly capture nonlinearities. To illustrate the advantages of the unscented Kalman filter, we analyze the cross section of swap rates, which are relatively simple non-linear instruments, and cap prices, which are highly nonlinear in the states. An extensive...
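
    At the heart of the unscented Kalman filter is the unscented transform: sigma points of a Gaussian state are propagated through the nonlinear pricing map and recombined, instead of linearizing that map. A generic sketch (the dimensions, weights convention and toy "pricing" map h are illustrative, not taken from the paper):

        import numpy as np

        def unscented_transform(m, P, h, kappa=1.0):
            n = m.size
            S = np.linalg.cholesky((n + kappa) * P)       # columns are sigma-point offsets
            sigma = np.vstack([m, m + S.T, m - S.T])      # 2n + 1 sigma points
            w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
            w[0] = kappa / (n + kappa)
            Y = np.array([h(s) for s in sigma])           # propagate through nonlinearity
            mean = w @ Y
            cov = (w[:, None] * (Y - mean)).T @ (Y - mean)
            return mean, cov

        # toy map that is nonlinear in the state, loosely analogous to cap prices
        h = lambda x: np.array([np.exp(-x[0]), x[0] * x[1] ** 2])
        m, P = np.array([0.5, 1.0]), np.diag([0.04, 0.09])
        print(unscented_transform(m, P, h))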

  1. Source term identification in atmospheric modelling via sparse optimization

    Science.gov (United States)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first, the discrepancy is regularized by adding additional terms. Such terms may include a Tikhonov regularization, a distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time profile of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the
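
    A compact sketch of the sparsity-plus-nonnegativity idea described above (the transport matrix G, observations y and penalty lam are synthetic placeholders, and this projected iterative soft-thresholding solver is one generic choice, not the paper's method): minimize ||G x - y||^2 + lam * sum(x) subject to x >= 0, noting that for x >= 0 the l1 norm is just the sum of entries.

        import numpy as np

        rng = np.random.default_rng(3)
        G = rng.random((40, 120))                    # source-receptor (transport) matrix
        x_true = np.zeros(120)
        x_true[[10, 57, 99]] = [2.0, 1.0, 3.0]       # few true release points
        y = G @ x_true + rng.normal(0, 0.01, 40)     # noisy observations

        lam = 0.05
        L = np.linalg.norm(G, 2) ** 2                # Lipschitz constant of the gradient
        x = np.zeros(120)
        for _ in range(2000):
            grad = G.T @ (G @ x - y)                 # gradient of the data-misfit term
            x = np.maximum(x - (grad + lam) / L, 0.0)  # gradient step + projection onto x >= 0

        print(np.nonzero(x > 1e-3)[0])               # approximately recovered release locations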

  2. Could a secular increase in organic burial explain the rise of oxygen? Insights from a geological carbon cycle model constrained by the carbon isotope record

    Science.gov (United States)

    Krissansen-Totton, J.; Kipp, M.; Catling, D. C.

    2017-12-01

    The stable isotopes of carbon in marine sedimentary rock provide a window into the evolution of the Earth system. Conventionally, a relatively constant carbon isotope ratio in marine sedimentary rocks has been interpreted as implying constant organic carbon burial relative to total carbon burial. Because organic carbon burial corresponds to net oxygen production from photosynthesis, it follows that secular changes in the oxygen source flux cannot explain the dramatic rise of oxygen over Earth history. Instead, secular declines in oxygen sink fluxes are often invoked as causes for the rise of oxygen. However, constant fractional organic burial is difficult to reconcile with tentative evidence for low phosphate concentrations in the Archean ocean, which would imply lower marine productivity and, all else being equal, less organic carbon burial than today. The conventional interpretation of the carbon isotope record rests on the untested assumption that the isotopic ratios of carbon inputs into the ocean reflect mantle isotopic values throughout Earth history. In practice, differing rates of carbonate and organic weathering allow for changes in isotopic inputs, as suggested by [1] and [2]. However, these inputs cannot vary freely, because large changes in isotopic inputs would induce secular trends in carbon reservoirs which are not observed in the isotope record. We apply a geological carbon cycle model to all of Earth history, tracking carbon isotopes in crustal, mantle, and ocean reservoirs. Our model is constrained by the carbon isotope record such that we can determine the extent to which large changes in organic burial are permitted. We find that both constant organic burial and 3-5 fold increases in organic burial since 4.0 Ga can be reconciled with the carbon isotope record. Changes in the oxygen source flux thus need to be reconsidered as a possible contributor to Earth's oxygenation. [1] L. A. Derry, Organic carbon cycling and the lithosphere, in Treatise on
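
    The conventional reading referred to above follows from the standard steady-state isotope mass balance; with \delta_{in} the isotopic composition of carbon inputs to the ocean and f_org the organic fraction of total carbon burial,

        \delta_{\mathrm{in}} = f_{\mathrm{org}}\,\delta_{\mathrm{org}} + (1 - f_{\mathrm{org}})\,\delta_{\mathrm{carb}}
        \quad\Longrightarrow\quad
        f_{\mathrm{org}} = \frac{\delta_{\mathrm{in}} - \delta_{\mathrm{carb}}}{\delta_{\mathrm{org}} - \delta_{\mathrm{carb}}},

    so a near-constant carbonate record implies constant f_org only if \delta_{in} stays at the mantle value (about -5 permil). This textbook balance is stated here to make the argument explicit; the abstract's point is precisely that \delta_{in} may have drifted with changing weathering inputs.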

  3. Modelling of long term nitrogen retention in surface waters

    Science.gov (United States)

    Halbfaß, S.; Gebel, M.; Bürger, S.

    2010-12-01

    In order to derive measures to reduce nutrient loads into waters in Saxony, we calculated nitrogen inputs with the model STOFFBILANZ on the regional scale. In doing so, we have to compare our modelling results to measured loads at the river basin outlets, considering long-term nutrient retention in surface waters. The most important mechanism of nitrogen retention is denitrification in the contact zone of water and sediment, which is controlled by hydraulic and micro-biological processes. Retention capacity is derived on the basis of the nutrient spiralling concept, using water residence time (hydraulic aspect) and time-specific N-uptake by microorganisms (biological aspect). Short-term processes of mobilization and immobilization are neglected, because they are of minor importance for the derivation of measures on the regional scale.
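
    One common way to combine the hydraulic and biological aspects in such retention estimates is a first-order relation of the form

        R = 1 - \exp\!\left(-\frac{v_f}{H_L}\right), \qquad H_L = \frac{z}{\tau},

    where v_f is an effective nutrient uptake velocity, z the water depth and \tau the residence time, so that H_L is the hydraulic load. This form is offered only as an illustration of the nutrient spiralling concept; the abstract does not state which specific formulation STOFFBILANZ uses.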

  4. Low-level radioactive waste performance assessments: Source term modeling

    International Nuclear Information System (INIS)

    Icenhour, A.S.; Godbee, H.W.; Miller, L.F.

    1995-01-01

    Low-level radioactive wastes (LLW) generated by government and commercial operations need to be isolated from the environment for at least 300 to 500 yr. Most existing sites for the storage or disposal of LLW employ the shallow-land burial approach. However, the U.S. Department of Energy currently emphasizes the use of engineered systems (e.g., packaging, concrete and metal barriers, and water collection systems). Future commercial LLW disposal sites may include such systems to mitigate radionuclide transport through the biosphere. Performance assessments must be conducted for LLW disposal facilities. These studies include comprehensive evaluations of radionuclide migration from the waste package, through the vadose zone, and within the water table. Atmospheric transport mechanisms are also studied. Figure I illustrates the performance assessment process. Estimates of the release of radionuclides from the waste packages (i.e., source terms) are used for subsequent hydrogeologic calculations required by a performance assessment. Computer models are typically used to describe the complex interactions of water with LLW and to determine the transport of radionuclides. Several commonly used computer programs for evaluating source terms include GWSCREEN, BLT (Breach-Leach-Transport), DUST (Disposal Unit Source Term), BARRIER (Ref. 5), as well as SOURCE1 and SOURCE2 (which are used in this study). The SOURCE1 and SOURCE2 codes were prepared by Rogers and Associates Engineering Corporation for the Oak Ridge National Laboratory (ORNL). SOURCE1 is designed for tumulus-type facilities, and SOURCE2 is tailored for silo, well-in-silo, and trench-type disposal facilities. This paper focuses on the source term for ORNL disposal facilities, and it describes improved computational methods for determining radionuclide transport from waste packages

  5. A new Expert Finding model based on Term Correlation Matrix

    Directory of Open Access Journals (Sweden)

    Ehsan Pornour

    2015-09-01

    Full Text Available Due to the enormous volume of unstructured information available on the Web and inside organizations, finding an answer to a knowledge need in a short time is difficult. For this reason, besides search engines, which do not consider users' individual characteristics, recommender systems were created, which use a user's previous activities and other individual characteristics to help users find the knowledge they need. The usage of recommender systems is increasing every day. Expert finder systems, by introducing expert people instead of recommending information, additionally give users the possibility to ask their questions of experts. Having a relation with experts not only enables information transfer, but, through the transfer of experiences and insight, also enables knowledge transfer. In this paper we used university professors' academic resumes as expert profiles and then proposed a new expert finding model that recommends experts for a user's query. We used a Term Correlation Matrix, the Vector Space Model and the PageRank algorithm, and propose a new hybrid model which outperforms conventional methods. This model can be used in Internet environments, organizations and universities where experts have a resume dataset.

  6. A Long-Term Mathematical Model for Mining Industries

    International Nuclear Information System (INIS)

    Achdou, Yves; Giraud, Pierre-Noel; Lasry, Jean-Michel; Lions, Pierre-Louis

    2016-01-01

    A parsimonious long-term model is proposed for a mining industry. Knowing the dynamics of the global reserve, the strategy of each production unit consists of an optimal control problem with two controls: first, the flux invested into prospection and the building of new extraction facilities; second, the production rate. In turn, the dynamics of the global reserve depends on the individual strategies of the producers, so the model leads to an equilibrium, which is described by low-dimensional systems of partial differential equations. The dimensionality depends on the number of technologies that a mining producer can choose. In some cases, the systems may be reduced to a Hamilton–Jacobi equation which is degenerate at the boundary and whose right-hand side may blow up at the boundary. A mathematical analysis is supplied. Numerical simulations for models with one or two technologies are then described. In particular, a numerical calibration of the model to fit the historical data is carried out.
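
    Schematically, equations of the Hamilton–Jacobi type mentioned here take the form

        \partial_t u + H(x, \nabla u) = 0,

    where u is the value function of a producer's optimal control problem and the Hamiltonian H encodes the optimized trade-off between investment and production; degeneracy and blow-up of the right-hand side then occur as the reserve variable approaches the boundary, i.e., exhaustion. This generic form is given for orientation only; the paper's precise system couples such equations with the reserve dynamics.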

  7. A Long-Term Mathematical Model for Mining Industries

    Energy Technology Data Exchange (ETDEWEB)

    Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr [Univ. Paris Diderot, Sorbonne Paris Cité, Laboratoire Jacques-Louis Lions, UMR 7598, UPMC, CNRS (France); Giraud, Pierre-Noel [CERNA, Mines ParisTech (France); Lasry, Jean-Michel [Univ. Paris Dauphine (France); Lions, Pierre-Louis [Collège de France (France)

    2016-12-15

    A parsimonious long-term model is proposed for a mining industry. Knowing the dynamics of the global reserve, the strategy of each production unit consists of an optimal control problem with two controls: first, the flux invested into prospection and the building of new extraction facilities; second, the production rate. In turn, the dynamics of the global reserve depends on the individual strategies of the producers, so the model leads to an equilibrium, which is described by low-dimensional systems of partial differential equations. The dimensionality depends on the number of technologies that a mining producer can choose. In some cases, the systems may be reduced to a Hamilton–Jacobi equation which is degenerate at the boundary and whose right-hand side may blow up at the boundary. A mathematical analysis is supplied. Numerical simulations for models with one or two technologies are then described. In particular, a numerical calibration of the model to fit the historical data is carried out.

  8. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J.

    2010-01-01

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration minimized the Euclidean distance of the corresponding nodal points from the global transformation of the deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied to six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. Distance-based and volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to
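
    The global registration objective named above, mutual information, can be estimated from a joint intensity histogram of the two images. A generic sketch of that estimator (not the authors' implementation; image sizes and bin count are placeholders):

        import numpy as np

        def mutual_information(a, b, bins=32):
            """Histogram-based mutual information between two images of equal size."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = hist / hist.sum()                       # joint intensity distribution
            px, py = p.sum(axis=1), p.sum(axis=0)       # marginals
            nz = p > 0
            return np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz]))

        rng = np.random.default_rng(7)
        fixed = rng.random((64, 64))
        moving = 0.9 * fixed + 0.1 * rng.random((64, 64))    # roughly aligned image
        # MI is higher for the aligned pair than for an unrelated image
        print(mutual_information(fixed, moving),
              mutual_information(fixed, rng.random((64, 64))))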

  9. Constraining magma physical properties and its temporal evolution from InSAR and topographic data only: a physics-based eruption model for the effusive phase of the Cordon Caulle 2011-2012 rhyodacitic eruption

    Science.gov (United States)

    Delgado, F.; Kubanek, J.; Anderson, K. R.; Lundgren, P.; Pritchard, M. E.

    2017-12-01

    The 2011-2012 eruption of Cordón Caulle volcano in Chile is the best scientifically observed rhyodacitic eruption and is thus a key place to understand the dynamics of these rare but powerful explosive rhyodacitic eruptions. Because the volatile phase controls both the temporal evolution of the eruption and the eruptive style, whether explosive or effusive, it is important to constrain the physical parameters that drive these eruptions. The eruption began explosively and after two weeks evolved into a hybrid explosive - lava flow effusion whose volume-time evolution we constrain with a series of TanDEM-X Digital Elevation Models. Our data show the intrusion of a large-volume laccolith or cryptodome during the first 2.5 months of the eruption and lava flow effusion only afterwards, with a total volume of 1.4 km3. InSAR data from the ENVISAT and TerraSAR-X missions show more than 2 m of subsidence during the effusive eruption phase, produced by deflation of a finite spheroidal source at a depth of 5 km. In order to constrain the magma total H2O content, crystal cargo, and reservoir pressure drop, we numerically solve the coupled set of equations of a pressurized magma reservoir, magma conduit flow and time-dependent density, volatile exsolution and viscosity, which we use to invert the InSAR and topographic data time series. We compare the best-fit model parameters with independent estimates of magma viscosity and total gas content measured from lava samples. Preliminary modeling shows that although it is not possible to model both the InSAR and the topographic data during the onset of the laccolith emplacement, it is possible to constrain the magma H2O and crystal content, to 4 wt% and 30%, which agree well with published literature values.

  10. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Science.gov (United States)

    Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

    In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ∼ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ∼ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1. The average enthalpy of vaporization of secondary organic aerosol
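
    The equilibrium partitioning that any VBS scheme builds on can be sketched compactly: each volatility bin i condenses with fraction xi_i = (1 + C*_i / C_OA)^-1, the condensed mass C_OA is found self-consistently, and C* shifts with temperature via Clausius-Clapeyron. The bin values, concentrations and enthalpy below are placeholders, not the calibrated values of this study.

        import numpy as np

        R = 8.314e-3          # gas constant, kJ mol-1 K-1
        T0 = 300.0            # reference temperature, K

        def cstar(cstar_300, dH, T):
            """Clausius-Clapeyron shift of saturation concentrations with temperature."""
            return cstar_300 * (T0 / T) * np.exp(dH / R * (1.0 / T0 - 1.0 / T))

        def partition(c_tot, cstar_T, n_iter=200):
            """Fixed-point iteration for the condensed-phase mass C_OA."""
            c_oa = c_tot.sum()                      # start from total condensation
            for _ in range(n_iter):
                xi = 1.0 / (1.0 + cstar_T / c_oa)   # condensed fraction per bin
                c_oa = np.sum(xi * c_tot)
            return xi, c_oa

        cstar_300 = np.array([0.1, 1.0, 10.0, 100.0])   # ug m-3, placeholder bins
        c_tot = np.array([2.0, 4.0, 6.0, 8.0])          # ug m-3 total in each bin
        dH = 85.0                                       # kJ mol-1, assumed enthalpy

        for T in (263.0, 288.0):                        # the two chamber temperatures
            xi, c_oa = partition(c_tot, cstar(cstar_300, dH, T))
            print(T, round(float(c_oa), 2), xi.round(2))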

  11. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2017-06-01

    Full Text Available In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ∼ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ∼ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1

  12. A less-constrained (2,0) super-Yang-Mills model: the coupling to non-linear σ-models

    International Nuclear Information System (INIS)

    Almeida, C.A.S.; Doria, R.M.

    1990-01-01

    Considering a class of (2,0) super-Yang-Mills multiplets characterised by the appearance of a pair of independent gauge potentials, we present here their coupling to non-linear σ-models in (2,0) superspace. Contrary to the case of the coupling to (2,0) matter superfields, the extra gauge potential present in the Yang-Mills sector does not decouple from the theory in the case where one gauges isometry groups of σ-models. (author)

  13. Modeling of long-term energy system of Japan

    International Nuclear Information System (INIS)

    Gotoh, Yoshitaka; Sato, Osamu; Tadokoro, Yoshihiro

    1999-07-01

    In order to analyze the future potential for reducing carbon dioxide emissions, the long-term energy system of Japan was modeled following the framework of the MARKAL model, and a database of energy technology characteristics was developed. First, a reference energy system was built by incorporating all important energy sources and technologies that will be available until the year 2050. This system consists of 25 primary energy sources, 33 technologies for electric power generation and/or low-temperature heat production, 97 technologies for energy transformation, storage, and distribution, and 170 end-use technologies. Second, a database was developed for the characteristics of the individual technologies in the system. The characteristic data consist of the input and output of energy carriers, efficiency, availability, lifetime, investment cost, operation and maintenance cost, CO2 emission coefficient, and others. Since a large number of technologies are included in the system, this report focuses on the modeling of the supply side, and covers the database of energy technologies other than those for end-use purposes. (author)

  14. The IEA Model of Short-term Energy Security

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    Ensuring energy security has been at the centre of the IEA mission since its inception, following the oil crises of the early 1970s. While the security of oil supplies remains important, contemporary energy security policies must address all energy sources and cover a comprehensive range of natural, economic and political risks that affect energy sources, infrastructures and services. In response to this challenge, the IEA is currently developing a Model Of Short-term Energy Security (MOSES) to evaluate the energy security risks and resilience capacities of its member countries. The current version of MOSES covers short-term security of supply for primary energy sources and secondary fuels among IEA countries. It also lays the foundation for analysis of vulnerabilities of electricity and end-use energy sectors. MOSES contains a novel approach to analysing energy security, which can be used to identify energy security priorities, as a starting point for national energy security assessments and to track the evolution of a country's energy security profile. By grouping together countries with similar 'energy security profiles', MOSES depicts the energy security landscape of IEA countries. By extending the MOSES methodology to electricity security and energy services in the future, the IEA aims to develop a comprehensive policy-relevant perspective on global energy security. This Working Paper is intended for readers who wish to explore the MOSES methodology in depth; there is also a brochure which provides an overview of the analysis and results.

  15. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such a formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  16. Modelling the long-term vertical dynamics of salt marshes

    Science.gov (United States)

    Zoccarato, Claudia; Teatini, Pietro

    2017-04-01

    Salt marshes are vulnerable environments hosting complex interactions between physical and biological processes with a strong influence on the dynamics of the marsh evolution. The estimation and prediction of the elevation of a salt-marsh platform is crucial to forecast the marsh growth or regression under different scenarios considering, for example, the potential climate changes. The long-term vertical dynamics of a salt marsh is predicted with the aid of an original finite-element (FE) numerical model accounting for the marsh accretion and compaction and for the variation rates of the relative sea level rise, i.e., land subsidence of the marsh basement and eustatic rise of the sea level. The accretion term considers the vertical sedimentation of organic and inorganic material over the marsh surface, whereas the compaction reflects the progressive consolidation of the porous medium under the increasing load of the overlying younger deposits. The modelling approach is based on a 2D groundwater flow simulator, which provides the pressure evolution within a compacting/accreting vertical cross-section of the marsh assuming that the groundwater flow obeys the relative Darcy's law, coupled to a 1D vertical geomechanical module following Terzaghi's principle of effective intergranular stress. Soil porosity, permeability, and compressibility may vary with the effective intergranular stress according to empirically based relationships. The model also takes into account the geometric non-linearity arising from the consideration of large solid grain movements by using a Lagrangian approach with an adaptive FE mesh. The element geometry changes in time to follow the deposit consolidation and the element number increases in time to follow the sedimentation of new material. The numerical model is tested on different realistic configurations considering the influence of (i) the spatial distribution of the sedimentation rate in relation to the distance from the marsh margin, (ii
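
    A drastically simplified sketch of the elevation budget that the FE model resolves in full: the marsh surface gains elevation by accretion and loses it through compaction and relative sea-level rise. All rates below are hypothetical placeholders; the actual model couples 2D groundwater flow to Terzaghi consolidation instead of using constant rates.

        # Toy forward model: elevation change = accretion - compaction - RSLR.
        # Rates in mm/yr are invented for illustration only.
        def marsh_elevation(z0_mm, years, accretion=5.0, compaction=1.5, rslr=3.0):
            z, history = z0_mm, [z0_mm]
            for _ in range(years):
                z += accretion - compaction - rslr
                history.append(z)
            return history

        # A platform starting 100 mm above mean sea level, run for 50 years:
        print(marsh_elevation(100.0, 50)[-1])   # net +0.5 mm/yr -> 125 mm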

  17. Using isotopes to constrain water flux and age estimates in snow-influenced catchments using the STARR (Spatially distributed Tracer-Aided Rainfall–Runoff) model

    Directory of Open Access Journals (Sweden)

    P. Ala-aho

    2017-10-01

    Full Text Available Tracer-aided hydrological models are increasingly used to reveal fundamentals of runoff generation processes and water travel times in catchments. Modelling studies integrating stable water isotopes as tracers are mostly based in temperate and warm climates, leaving catchments with strong snow influences underrepresented in the literature. Such catchments are challenging, as the isotopic tracer signals in water entering the catchments as snowmelt are typically distorted from incoming precipitation due to fractionation processes in the seasonal snowpack. We used the Spatially distributed Tracer-Aided Rainfall–Runoff (STARR) model to simulate fluxes, storage, and mixing of water and tracers, as well as to estimate water ages, in three long-term experimental catchments with varying degrees of snow influence and contrasting landscape characteristics. In the context of northern catchments the sites have exceptionally long and rich data sets of hydrometric data and – most importantly – stable water isotopes for both rain and snow conditions. To adapt the STARR model for sites with strong snow influence, we used a novel parsimonious calculation scheme that takes into account the isotopic fractionation through snow sublimation and snowmelt. The modified STARR setup simulated the streamflows, isotope ratios, and snowpack dynamics quite well in all three catchments. From this, our simulations indicated contrasting median water ages and water age distributions between catchments, brought about mainly by differences in topography and soil characteristics. However, the variable degree of snow influence in catchments also had a major influence on the stream hydrograph, storage dynamics, and water age distributions, which was captured by the model. Our study suggested that snow sublimation fractionation processes can be important to include in tracer-aided modelling for catchments with seasonal snowpack, while the influence of fractionation during snowmelt

  18. REMI and ROUSE: Quantitative Models for Long-Term and Short-Term Priming in Perceptual Identification

    NARCIS (Netherlands)

    E.J. Wagenmakers (Eric-Jan); R. Zeelenberg (René); D.E. Huber (David); J.G.W. Raaijmakers (Jeroen)

    2003-01-01

    The REM model originally developed for recognition memory (Shiffrin & Steyvers, 1997) has recently been extended to implicit memory phenomena observed during threshold identification of words. We discuss two REM models based on Bayesian principles: a model for long-term priming (REMI;

  19. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model – Part 1: Assessing the influence of constrained multi-generational ageing

    Directory of Open Access Journals (Sweden)

    S. H. Jathar

    2016-02-01

    Full Text Available Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under
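
    For readers unfamiliar with the baseline, the traditional two-product model referred to above computes SOA mass by absorptive partitioning of two surrogate oxidation products. The sketch below, with invented yields alpha_i, partitioning coefficients K_i and a hypothetical seed aerosol, solves the standard partitioning balance by fixed-point iteration; it is not the UCD/CIT implementation.

        # Two-product SOA partitioning: C_OA satisfies
        # C_OA = seed + sum_i dHC * alpha_i * K_i * C_OA / (1 + K_i * C_OA).
        def two_product_soa(delta_hc, alpha=(0.05, 0.2), K=(0.5, 0.002), seed=0.5):
            c_oa = seed + 1e-6                      # ug/m3, initial guess
            for _ in range(200):                    # fixed-point iteration
                c_oa = seed + sum(delta_hc * a * k * c_oa / (1.0 + k * c_oa)
                                  for a, k in zip(alpha, K))
            return c_oa

        print(two_product_soa(delta_hc=50.0))   # SOA from 50 ug/m3 reacted VOC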

  20. Modelling substorm chorus events in terms of dispersive azimuthal drift

    Directory of Open Access Journals (Sweden)

    A. B. Collier

    2004-12-01

    Full Text Available The Substorm Chorus Event (SCE) is a radio phenomenon observed on the ground after the onset of the substorm expansion phase. It consists of a band of VLF chorus with rising upper and lower cutoff frequencies. These emissions are thought to result from Doppler-shifted cyclotron resonance between whistler mode waves and energetic electrons which drift into a ground station's field of view from an injection site around midnight. The increasing frequency of the emission envelope has been attributed to the combined effects of energy dispersion due to gradient and curvature drifts, and the modification of resonance conditions and variation of the half-gyrofrequency cutoff resulting from the radial component of the E×B drift.

    A model is presented which accounts for the observed features of the SCE in terms of the growth rate of whistler mode waves due to anisotropy in the electron distribution. This model provides an explanation for the increasing frequency of the SCE lower cutoff, as well as reproducing the general frequency-time signature of the event. In addition, the results place some restrictions on the injected particle source distribution which might lead to a SCE.

    Key words. Space plasma physics (Wave-particle interaction) – Magnetospheric physics (Plasma waves and instabilities; Storms and substorms)

  2. Short-Term Power Plant GHG Emissions Forecasting Model

    International Nuclear Information System (INIS)

    Vidovic, D.

    2016-01-01

    In 2010, the share of greenhouse gas (GHG) emissions from power generation in total emissions at the global level was about 25 percent. Since January 1st, 2013, Croatian facilities have been included in the European Union Emissions Trading System (EU ETS). The share of the ETS sector in total GHG emissions in Croatia in 2012 was about 30 percent, to which power plants and heat generation facilities contributed almost 50 percent. Since 2013, power plants have been obliged to purchase all emission allowances. The paper describes a short-term forecasting model of greenhouse gas emissions from power plants that covers the daily load diagram of the system. Forecasting is done on an hourly domain, typically for one day ahead, although forecasts for several days ahead are also possible. Forecasting GHG emissions in this way would enable power plant operators to purchase additional allowances, or to sell surplus allowances, on the market in good time. An example describing the operation of the forecasting model is given at the end of the paper. (author)
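
    The core bookkeeping behind such a forecast is simple once the hourly dispatch is known: emissions are the dispatched generation times plant-specific emission factors. The plants and numbers below are invented placeholders, not data from the paper.

        # Hourly CO2 forecast from a (forecast) dispatch, in t CO2 per hour.
        dispatch = {                        # MWh for three example hours
            "coal_unit": [300, 320, 310],
            "gas_cc":    [150, 180, 200],
            "hydro":     [100,  90,  80],
        }
        emission_factor = {"coal_unit": 0.95, "gas_cc": 0.37, "hydro": 0.0}

        hourly_emissions = [
            sum(dispatch[p][h] * emission_factor[p] for p in dispatch)
            for h in range(3)
        ]
        print(hourly_emissions)   # basis for buying or selling allowances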

  3. Model and economic uncertainties in balancing short-term and long-term objectives in water-flooding optimization.

    NARCIS (Netherlands)

    Siraj, M.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2015-01-01

    Model-based optimization of oil production has a significant scope to increase ultimate recovery or financial life-cycle performance. The Net Present Value (NPV) objective in such an optimization framework, because of its nature, focuses on the long-term gains while the short-term production is not

  4. Constraints on the affinity term for modeling long-term glass dissolution rates

    International Nuclear Information System (INIS)

    Bourcier, W.L.; Carroll, S.A.; Phillips, B.L.

    1993-11-01

    Predictions of long-term glass dissolution rates are highly dependent on the form of the affinity term in the rate expression. Analysis of the quantitative effect of saturation state on the glass dissolution rate for CSG glass (a simple analog of SRL-165 glass) shows that a simple (1 - Q/K) affinity term does not match experimental results. Our data at 100°C are better fit by an affinity term of the form (1 - (Q/K)^(1/σ)) where σ = 10
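
    The difference between the two affinity terms is easy to see numerically: because (Q/K)^(1/σ) rises quickly at low saturation when σ = 10, the modified term suppresses the rate well before the solution approaches saturation. A minimal standalone comparison (not the authors' code):

        # Compare affinity terms as the saturation index Q/K increases.
        def affinity_simple(q_over_k):
            return 1.0 - q_over_k

        def affinity_sigma(q_over_k, sigma=10.0):
            return 1.0 - q_over_k ** (1.0 / sigma)

        for s in (0.1, 0.5, 0.9, 0.99):
            print(f"Q/K={s}: simple={affinity_simple(s):.3f}, "
                  f"sigma=10: {affinity_sigma(s):.3f}")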

  5. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that based on a set of conditional probabilities can help in choosing the large numbers of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  6. A modelling study of long term green roof retention performance.

    Science.gov (United States)

    Stovin, Virginia; Poë, Simon; Berretta, Christian

    2013-12-15

    This paper outlines the development of a conceptual hydrological flux model for the long-term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per-event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance. Copyright © 2013 Elsevier Ltd. All rights reserved.
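
    The essence of such a conceptual flux model, a single moisture store whose retention capacity is restored by moisture-dependent evapotranspiration, can be sketched in a few lines. The capacity, maximum ET rate and linear ET-moisture link below are illustrative assumptions, not the paper's calibrated parameters, and the paper runs at hourly rather than daily steps.

        # Daily bucket model of a green roof: spill above capacity is runoff,
        # ET scales with relative moisture content and restores capacity.
        def green_roof(rain_mm, capacity=30.0, et_max=2.0, s0=15.0):
            s, runoff = s0, []
            for r in rain_mm:
                s += r
                spill = max(0.0, s - capacity)   # excess leaves as runoff
                s -= spill
                s -= et_max * (s / capacity)     # moisture-limited ET
                runoff.append(spill)
            return runoff

        print(green_roof([0, 12, 25, 0, 0, 8]))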

  7. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

    2017-11-01

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

  8. Considering extraction constraints in long-term oil price modelling

    Energy Technology Data Exchange (ETDEWEB)

    Rehrl, Tobias; Friedrich, Rainer; Voss, Alfred

    2005-12-15

    Apart from divergence about the remaining global oil resources, the peak oil discussion can be reduced to a dispute about the time rate at which these resources can be supplied. On the one hand, it is problematic to project oil supply trends without explicitly taking both prices and supply costs into account. On the other hand, supply cost estimates are themselves heavily dependent on the underlying extraction rates and are actually only valid within a certain business-as-usual extraction rate scenario (which is itself the quantity to be determined). In fact, even after enhanced recovery technologies have been applied, the rate at which an oil field can be exploited is quite restricted. Above a certain level, an additional increase in the extraction rate can be achieved only at high cost and at the risk of losses in the overall recoverable amounts of the oil reservoir, and causes much higher marginal cost. This inflexibility in extraction can in principle be overcome by access to new oil fields. This indicates why the discovery trend may roughly shape the long-term oil production curve, at least for price-taking suppliers. The long-term oil discovery trend itself can be described as a logistic process with the two opposed effects of learning and depletion, which leads to the well-known Hubbert curve. Several attempts have been made to incorporate economic variables econometrically into the Hubbert model. With this work we follow a somewhat inverse approach and integrate Hubbert curves into our Long-term Oil Price and EXtraction model LOPEX. In LOPEX we assume that non-OPEC oil production - as long as the oil can be profitably discovered and extracted - is restricted to follow self-regulative discovery trends described by Hubbert curves. Non-OPEC production in LOPEX therefore consists of those Hubbert cycles that are profitable, depending on supply cost and price. Endogenous and exogenous technical progress are additionally integrated in different ways. LOPEX determines extraction and price
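
    The Hubbert curve at the heart of this approach has a compact closed form: cumulative discovery follows a logistic in time, so the extraction rate is its bell-shaped derivative. The sketch below uses invented values for the ultimately recoverable resources (URR), peak year and steepness; LOPEX fits such cycles per region instead.

        import math

        # Annual production as the derivative of a logistic cumulative curve.
        def hubbert_rate(t, urr=2000.0, t_peak=2010.0, b=0.06):
            e = math.exp(-b * (t - t_peak))
            return urr * b * e / (1.0 + e) ** 2   # e.g. Gb/yr

        for year in (1990, 2010, 2030):
            print(year, round(hubbert_rate(year), 1))  # peaks at urr*b/4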

  10. Coherent and incoherent giant dipole resonance γ-ray emission induced by heavy ion collisions: Study of the 40Ca+48Ca system by means of the constrained molecular dynamics model

    International Nuclear Information System (INIS)

    Papa, Massimo; Cardella, Giuseppe; Bonanno, Antonio; Pappalardo, Giuseppe; Rizzo, Francesca; Amorini, Francesca; Bonasera, Aldo; Di Pietro, Alessia; Figuera, Pier Paolo; Tudisco, Salvatore; Maruyama, Toshiki

    2003-01-01

    Coherent and incoherent dipolar γ-ray emission is studied in a fully dynamical approach by means of the constrained molecular dynamics model. The study is focused on the system 40Ca+48Ca, for which experimental data have recently been collected at 25 MeV/nucleon. The approach allows us to explain the experimental results in a self-consistent way without using statistical or hybrid models. Moreover, calculations performed at higher energy show interesting correlations between the fragment formation process, the degree of collectivity, and the coherence degree of the γ-ray emission process

  11. Long-term durum wheat monoculture: modelling and future projection

    Directory of Open Access Journals (Sweden)

    Ettore Bernardoni

    2012-03-01

    Full Text Available The potential effects of future climate change on grain production of a winter durum wheat cropping system were investigated. Based on future climate change projections, derived from a statistical downscaling process applied to the HadCM3 general circulation model and referred to two IPCC scenarios (A2 and B1), the response in yield and aboveground biomass (AGB) and the variation in total organic carbon (TOC) were explored. The software used in this work is a hybrid dynamic simulation model able to simulate, under different pedoclimatic conditions, the processes involved in cropping systems, such as crop growth and development and the water and nitrogen balance. It implements different approaches in order to ensure accurate simulation of the main processes related to the soil-crop-atmosphere continuum. The model was calibrated using soil data, crop yield, AGB and phenology coming from a long-term experiment located in the Apulia region. The calibration was performed using data collected in the period 1978–1990; validation was carried out on the 1991–2009 data. Phenology simulation was sufficiently accurate, showing some limitation only in predicting physiological maturity. Yields and AGBs were predicted with acceptable accuracy during both calibration and validation. CRM was always close to the optimum value, EF scored a positive value in every case, and the value of the index r2 was good, although in some cases values lower than 0.6 were calculated. The slope of the linear regression equation between measured and simulated values was always close to 1, indicating an overall good performance of the model. Both future climate scenarios led to a general increase in yields but a slight decrease in AGB values. Data showed variations in the total production and yield among the different periods due to the climate variation. TOC evolution suggests that the combination of temperature and precipitation is the main factor affecting TOC variation under future scenarios
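
    The goodness-of-fit statistics quoted above (CRM, EF and r2) are standard and quick to compute; the sketch below uses their usual definitions (CRM = 0 and EF = 1 are optimal) on invented observed/simulated pairs.

        # CRM (coefficient of residual mass), Nash-Sutcliffe EF, and r2.
        def fit_stats(obs, sim):
            n = len(obs)
            mean_o, mean_s = sum(obs) / n, sum(sim) / n
            crm = (sum(obs) - sum(sim)) / sum(obs)
            ef = 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                        / sum((o - mean_o) ** 2 for o in obs))
            cov = sum((o - mean_o) * (s - mean_s) for o, s in zip(obs, sim))
            var_o = sum((o - mean_o) ** 2 for o in obs)
            var_s = sum((s - mean_s) ** 2 for s in sim)
            return crm, ef, cov * cov / (var_o * var_s)

        print(fit_stats([3.1, 4.2, 2.8, 5.0], [3.0, 4.5, 2.6, 4.8]))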

  12. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
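
    The two-stage logic, first maximize the ordinal fit and only then use least squares to discriminate among the ordinally best solutions, can be illustrated with a brute-force toy version. The coarse grid, the data and the outlier are invented, and the published OCLO estimator is far more sophisticated than this sketch.

        # Toy OCLO: maximize Kendall's tau of (fitted, observed), then break
        # ties among the tau-maximizing weight vectors by least squares.
        def kendall_tau(u, v):
            pairs = [(i, j) for i in range(len(u)) for j in range(i + 1, len(u))]
            s = sum(1 if (u[i] - u[j]) * (v[i] - v[j]) > 0 else -1
                    for i, j in pairs)
            return s / len(pairs)

        x1 = [1, 2, 3, 4, 5, 6]
        x2 = [2, 1, 4, 3, 6, 5]
        y  = [2.0, 2.1, 4.1, 4.3, 6.4, 9.9]          # last point: outlier

        grid = [(b1 / 10, b2 / 10) for b1 in range(-10, 11)
                                   for b2 in range(-10, 11)]

        def fit(b):
            return [b[0] * a + b[1] * c for a, c in zip(x1, x2)]

        def sse(b):
            return sum((yy - ff) ** 2 for yy, ff in zip(y, fit(b)))

        best_tau = max(kendall_tau(fit(b), y) for b in grid)
        winners = [b for b in grid if kendall_tau(fit(b), y) == best_tau]
        print(min(winners, key=sse))                 # ordinal fit, then LS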

  13. Constraining CO2 tower measurements in an inhomogeneous area with anthropogenic emissions using a combination of car-mounted instrument campaigns, aircraft profiles, transport modeling and neural networks

    Science.gov (United States)

    Schmidt, A.; Rella, C.; Conley, S. A.; Goeckede, M.; Law, B. E.

    2013-12-01

    The NOAA CO2 observation network in Oregon was enhanced by 3 new towers in 2012. The tallest tower in the network (270 m), located at Silverton in the Willamette Valley, is affected by anthropogenic emissions from Oregon's busiest traffic routes and urban centers. In summer 2012, we conducted a measurement campaign using a car-mounted PICARRO CRDS CO2/CO analyzer. Over 3 days, the instrument was driven over 1000 miles throughout the northwestern portion of Oregon, measuring the CO/CO2 ratios on main highways, back roads in forests, agricultural sites, and Oregon's biggest urban centers. By geospatial analyses we obtained ratios of CO/CO2 over distinct land cover types divided into 10 classes represented in the study area. Using the coupled WRF-STILT transport model we calculated the footprints of nearby CO/CO2 observation towers for the corresponding days of mobile road measurements. Spatiotemporally assigned source areas in combination with the land use classification were then used to calculate specific ratios of CO (of anthropogenic origin) and CO2 to separate the anthropogenic portion of CO2 from the mixing ratio time series measured at the tower in Silverton. The WRF-modeled boundary layer heights used in our study showed some differences compared to the boundary layer heights derived from profile data of wind, temperature, and humidity measured with an airplane in August, September, and November 2012, repeatedly over 5 tower locations. A Bayesian Regularized Artificial Neural Network (BRANN) was used to correct the boundary layer height calculated with WRF with a temporal resolution of 20 minutes and a horizontal resolution of 4 km. For that purpose the BRANN was trained using height profile data from the flight campaigns and spatiotemporally corresponding meteorological data from WRF. Our analyses provide information needed to run inverse modeling of CO2 exchange in an area that is affected by sources that cannot easily be considered by biospheric models

  14. Creating a Long-Term Diabetic Rabbit Model

    Directory of Open Access Journals (Sweden)

    Jianpu Wang

    2010-01-01

    Full Text Available This study aimed to create a long-term rabbit model of diabetes mellitus for medical studies of up to one year or longer and to evaluate the effects of chronic hyperglycemia on damage to major organs. A single dose of alloxan monohydrate (100 mg/kg) was given intravenously to 20 young New Zealand White rabbits. Another 12 age-matched normal rabbits were used as controls. Hyperglycemia developed within 48 hours after treatment with alloxan. Insulin was given daily after diabetes developed. All animals gained some body weight, but the gain was much less than in the age-matched nondiabetic rabbits. Hyperlipidemia and higher blood urea nitrogen and creatinine were found in the diabetic animals. Histologically, the pancreas showed marked beta cell damage. The kidneys showed significantly thickened afferent glomerular arterioles with narrowed lumens along with glomerular atrophy. Lipid accumulation in the cytoplasm of hepatocytes appeared as vacuoles. Full-thickness skin wound healing was delayed. In summary, with careful management, alloxan-induced diabetic rabbits can be maintained for one year or longer in reasonably good health for diabetic studies.

  15. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States); Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States); Young, David [Electric Power Research Inst., Palo Alto, CA (United States); Marcy, Cara [Energy Information Administration, Washington, DC (United States); Namovicz, Chris [Energy Information Administration, Washington, DC (United States); Edelman, Risa [Environmental Protection Agency, Washington, DC (United States); Meroney, Bill [Environmental Protection Agency; Sims, Ryan [Environmental Protection Agency; Stenhouse, Jeb [Environmental Protection Agency; Donohoo-Vallett, Paul [U.S. Department of Energy

    2017-11-03

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops of VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.

  16. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations

  17. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...... complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices....

  18. Marine and Coastal Morphology: medium term and long-term area modelling

    DEFF Research Database (Denmark)

    Kristensen, Sten Esbjørn

    This thesis documents development and application of a modelling concept developed in collaboration between DTU and DHI. The modelling concept is used in morphological modelling in coastal areas where the governing sediment transport processes are due to wave action. The modelling concept...... is defined: Hybrid morphological modelling and it is based on coupling calculated sediment transport fields from a traditional process based coastal area model with a parametrised morphological evolution model. The focus of this study is to explore possible parametric formulations of the morphological...... solution has a two dimensional nature. 1.5D shoreline model A so-called “1.5D” implementation which introduces redistribution of sediment within a coastal profile in response to horizontal 2D currents makes it possible to simulate the morphological development in areas where 2D evolution occurs...

  19. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    Full Text Available For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
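
    For orientation, the classical exact-penalty idea that such constructions refine: add a multiple of the constraint violation to the objective, and for any sufficiently large finite weight the unconstrained minimizer coincides with the constrained one. The toy problem below (minimize x^2 + y^2 subject to x + y >= 1, exact solution (0.5, 0.5)) is an invented illustration, not one of the penalty functions proposed in the paper.

        from scipy.optimize import minimize

        # Classical exact l1 penalty: f(x) + rho * constraint violation.
        def penalized(z, rho=10.0):
            x, y = z
            violation = max(0.0, 1.0 - (x + y))   # g(x,y) = 1 - x - y <= 0
            return x ** 2 + y ** 2 + rho * violation

        res = minimize(penalized, x0=[0.0, 0.0], method="Nelder-Mead")
        print(res.x)   # approx [0.5, 0.5] once rho exceeds the multiplier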

  20. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial...... reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals...... are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...

  1. Modelling long-term oil price and extraction with a Hubbert approach: The LOPEX model

    International Nuclear Information System (INIS)

    Rehrl, Tobias; Friedrich, Rainer

    2006-01-01

    The LOPEX (Long-term Oil Price and EXtraction) model generates long-term scenarios about future world oil supply and corresponding price paths up to the year 2100. In order to determine oil production in non-OPEC countries, the model uses Hubbert curves. Hubbert curves reflect the logistic nature of the discovery process and the associated constraint on the temporal availability of oil. Extraction paths and the world oil price path are both derived endogenously from OPEC's intertemporally optimal cartel behaviour, whereby OPEC faces both the price-dependent production of the non-OPEC competitive fringe and price-dependent world oil demand. World oil demand is modelled with a constant price elasticity function and refers to a scenario from ACROPOLIS-POLES. LOPEX results indicate a significantly higher oil price from around 2020 onwards compared to the reference scenario, and a stagnating market share of at most 50% to be optimal for OPEC
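
    The constant-price-elasticity demand mentioned above is a one-liner: demand scales a reference level by the price ratio raised to the negative elasticity. The reference demand, price and elasticity below are illustrative, not the POLES scenario values.

        # World oil demand (e.g. Mb/d) with constant price elasticity -eps.
        def oil_demand(price, d0=85.0, p0=50.0, eps=0.3):
            return d0 * (price / p0) ** (-eps)

        for p in (25, 50, 100):
            print(p, round(oil_demand(p), 1))  # doubling the price cuts
                                               # demand by the factor 2**eps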

  2. Slope constrained Topology Optimization

    DEFF Research Database (Denmark)

    Petersson, J.; Sigmund, Ole

    1998-01-01

    The problem of minimum compliance topology optimization of an elastic continuum is considered. A general continuous density-energy relation is assumed, including variable thickness sheet models and artificial power laws. To ensure existence of solutions, the design set is restricted by enforcing...

  3. Quantity Constrained General Equilibrium

    NARCIS (Netherlands)

    Babenko, R.; Talman, A.J.J.

    2006-01-01

    In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In

  4. Constraining curvatonic reheating

    Energy Technology Data Exchange (ETDEWEB)

    Hardwick, Robert J.; Vennin, Vincent; Koyama, Kazuya; Wands, David, E-mail: robert.hardwick@port.ac.uk, E-mail: vincent.vennin@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: david.wands@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom)

    2016-08-01

    We derive the first systematic observational constraints on reheating in models of inflation where an additional light scalar field contributes to primordial density perturbations and affects the expansion history during reheating. This encompasses the original curvaton model but also covers a larger class of scenarios. We find that, compared to the single-field case, lower values of the energy density at the end of inflation and of the reheating temperature are preferred when an additional scalar field is introduced. For instance, if inflation is driven by a quartic potential, which is one of the most favoured models when a light scalar field is added, the upper bound T_reh < 5 × 10^4 GeV on the reheating temperature T_reh is derived, and the implications of this value on post-inflationary physics are discussed. The information gained about reheating is also quantified and it is found that it remains modest in plateau inflation (though still larger than in the single-field version of the model) but can become substantial in quartic inflation. The role played by the vev of the additional scalar field at the end of inflation is highlighted, and opens interesting possibilities for exploring stochastic inflation effects that could determine its distribution.

  5. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To meet this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  6. D-term contributions and CEDM constraints in E6 × SU(2)F × U(1)A SUSY GUT model

    Science.gov (United States)

    Shigekami, Yoshihiro

    2017-11-01

    We focus on an E6 × SU(2)F × U(1)A supersymmetric (SUSY) grand unified theory (GUT) model. In this model, realistic Yukawa hierarchies and mixings are realized by introducing all allowed interactions with 𝓞(1) coefficients. Moreover, we can take the stop mass to be smaller than the other sfermion masses. This type of spectrum, called a natural-SUSY-type sfermion mass spectrum, can suppress the SUSY contributions to flavor changing neutral currents (FCNCs) and stabilize the weak scale at the same time. However, a light stop predicts a large up quark CEDM, and the stop contributions are not decoupled. Since there is a Kobayashi-Maskawa phase, the stop contributions to the up quark CEDM are severely constrained even if all SUSY breaking parameters and the Higgsino mass parameter μ are real. In this model, real up Yukawa couplings are realized at the GUT scale because of spontaneous CP violation; therefore the CEDM bounds are satisfied, although the up Yukawa couplings become complex at the SUSY scale through renormalization group equation effects. We calculated the CEDMs and found that the EDM constraints can be satisfied even if the stop mass is 𝓞(1) TeV. In addition, we investigate the size of the D-terms in this model. Since these D-term contributions are flavor dependent, the degeneracy of the sfermion mass spectrum is destroyed and the size of the D-term is strongly constrained by FCNCs when the SUSY breaking scale is the weak scale. However, the SUSY breaking scale must be larger than 1 TeV in order to obtain a 125 GeV Higgs mass, and therefore a sizable D-term contribution is allowed. Furthermore, we obtained a non-trivial prediction for the difference of squared sfermion masses.

  7. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...

  8. A long term model of circulation. [human body

    Science.gov (United States)

    White, R. J.

    1974-01-01

    A quantitative approach to modeling human physiological function, with a view toward ultimate application to long duration space flight experiments, was undertaken. Data was obtained on the effect of weightlessness on certain aspects of human physiological function during 1-3 month periods. Modifications in the Guyton model are reviewed. Design considerations for bilateral interface models are discussed. Construction of a functioning whole body model was studied, as well as the testing of the model versus available data.

  9. Essays on financial econometrics : modeling the term structure of interest rates

    NARCIS (Netherlands)

    Bouwman, Kees Evert

    2008-01-01

    This dissertation bundles five studies in financial econometrics that are related to the theme of modeling the term structure of interest rates. The main contribution of this dissertation is a new arbitrage-free term structure model that is applied in an empirical analysis of the US term structure.

  10. Location constrained resource interconnection

    International Nuclear Information System (INIS)

    Hawkins, D.

    2008-01-01

    This presentation discussed issues related to wind integration from the perspective of the California Independent System Operator (ISO). Issues related to transmission, reliability, and forecasting were reviewed. Renewable energy sources currently used by the ISO were listed, and details of a new transmission financing plan, designed to address the location constraints of renewable energy sources and provide for new transmission infrastructure, were presented. The new facilities will be financed by participating transmission owners through their revenue requirements. New transmission interconnections will include network facilities and generator tie-lines. Tariff revisions have also been implemented to recover the costs of new facilities and generators. The new transmission project will permit wholesale transmission access to areas where there are significant energy resources that are not transportable. A rate impact cap of 15 per cent will be imposed on transmission owners to mitigate short-term costs to ratepayers. The presentation also outlined energy resource area designation plans, renewable energy forecasts, and new wind technologies. Ramping issues were also discussed. It was concluded that the ISO expects to ensure that 20 per cent of its energy will be derived from renewable energy sources. tabs., figs

  11. Using High Resolution Simulations with WRF/SSiB Regional Climate Model Constrained by In Situ Observations to Assess the Impacts of Dust in Snow in the Upper Colorado River Basin

    Science.gov (United States)

    Oaida, C. M.; Skiles, M.; Painter, T. H.; Xue, Y.

    2015-12-01

    The mountain snowpack is an essential resource for both the environment and society. Observational and energy-balance modeling work has shown that dust on snow (DOS) in the western U.S. (WUS) is a major contributor to snow processes, including snowmelt timing and runoff amount, in regions like the Upper Colorado River Basin (UCRB). In order to accurately estimate the impact of DOS on the hydrologic cycle and water resources, now and under a changing climate, we need to be able to (1) adequately simulate the snowpack (accumulation), and (2) realistically represent DOS processes in models. Energy-balance models do not capture the impact on a broader local or regional scale, nor the land-atmosphere feedbacks, while GCM studies cannot resolve orographic-related precipitation processes, and therefore snowpack accumulation, owing to coarse spatial resolution and smoother terrain. All this implies that the impacts of dust on snow on the mountain snowpack and other hydrologic processes are likely not well captured in current modeling studies. Recent increases in computing power allow RCMs to be used at higher spatial resolutions, while recent in situ observations of dust-in-snow properties can help constrain modeling simulations. Therefore, in the work presented here, we take advantage of these latest resources to address some of the challenges outlined above. We employ the newly enhanced WRF/SSiB regional climate model at 4 km horizontal resolution. This scale has been shown by others to be adequate for capturing orographic processes over the WUS. We also constrain the magnitude of dust deposition provided by a global chemistry and transport model with in situ measurements taken at sites in the UCRB. Furthermore, we adjust the dust absorptive properties based on observed values at these sites, as opposed to generic global ones. This study aims to improve simulation of the impact of dust in snow on the hydrologic cycle and related water resources.

  12. In vitro-analysis of kinematics and intradiscal pressures in cervical arthroplasty versus fusion--A biomechanical study in a sheep model with two semi-constrained prosthesis.

    Science.gov (United States)

    Daentzer, Dorothea; Welke, Bastian; Hurschler, Christof; Husmann, Nathalie; Jansen, Christina; Flamme, Christian Heinrich; Richter, Berna Ida

    2015-03-24

    As an alternative technique to arthrodesis of the cervical spine, total disc replacement (TDR) has increasingly been used with the aim of restoring the physiological function of the treated and adjacent motion segments. The purpose of this experimental study was to analyze the kinematics of the target level as well as of the adjacent segments, and to measure the pressures in the proximal and distal discs, after arthrodesis as well as after arthroplasty with two different semi-constrained types of prosthesis. Twelve cadaveric ovine cervical spines underwent polysegmental (C2-5) multidirectional flexibility testing with a sensor-guided industrial serial robot. Additionally, pressures were recorded in the proximal and distal disc. The following three conditions were tested: (1) intact specimen, (2) single-level arthrodesis C3/4, (3) single-level TDR C3/4 using the Discover® in the first six specimens and the activ® C in the other six cadavers. Statistical analysis was performed for the total range of motion (ROM), the intervertebral ROM (iROM) and the intradiscal pressures (IDP) to compare both the three different conditions and the two disc prostheses among each other. The relative iROM in the target level was always lowered after fusion in the three directions of motion. The relative iROM of the adjacent segments was almost always higher than in the physiologic condition. After arthroplasty, we found increased relative iROM in the treated level in comparison to the intact state in almost all cases, with the relative iROM in the adjacent segments observed to be lower in almost all situations. The IDP in both adjacent discs always increased in flexion and extension after arthrodesis. In all but five cases, the IDP in each of the adjacent levels was decreased below the values of the intact specimens after TDR. Overall, in none of the analyzed parameters were there statistically significant differences between the two types of prostheses

  13. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
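
    One way such source-parameter estimates can enter the objective function is through a depth weighting whose decay is tied to the estimated depth-to-top and structural index. The form below follows the familiar Li-Oldenburg style w(z) = (z + z0)^(-beta/2); the specific link between beta and the structural index is a hypothetical illustration, not necessarily the weighting used by the authors.

        # Hypothetical depth weighting built from estimated source parameters.
        def depth_weight(z, z_top, struct_index):
            z0 = max(z_top, 1e-3)            # anchor decay at the source top
            beta = struct_index + 1.0        # assumed link to field decay
            return (z + z0) ** (-beta / 2.0)

        for depth in (0.0, 0.5, 1.0, 2.0):   # km below surface
            print(depth, round(depth_weight(depth, 0.5, 2.0), 3))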

  14. Ukraine National Energy Current State and Modelling its Long-Term Development

    International Nuclear Information System (INIS)

    Shulzhenko, S.

    2016-01-01

    The paper presents the structure of the Ukrainian energy sector, its current challenges, the drivers of its development and possible long-term pathways, as well as methodological approaches to the mathematical modelling of long-term national energy development. (author)

  15. Constraining monodromy inflation

    International Nuclear Information System (INIS)

    Peiris, Hiranya V.; Easther, Richard; Flauger, Raphael

    2013-01-01

    We use cosmic microwave background (CMB) data from the 9-year WMAP release to derive constraints on monodromy inflation, which is characterized by a linear inflaton potential with a periodic modulation. We identify two possible periodic modulations that significantly improve the fit, lowering χ 2 by approximately 10 and 20. However, standard Bayesian model selection criteria assign roughly equal odds to the modulated potential and the unmodulated case. A modulated inflationary potential can generate substantial primordial non-Gaussianity with a specific and characteristic form. For the best-fit parameters to the WMAP angular power spectrum, the corresponding non-Gaussianity might be detectable in upcoming CMB data, allowing nontrivial consistency checks on the predictions of a modulated inflationary potential

  16. Risk management under a two-factor model of the term structure of interest rates

    OpenAIRE

    Manuel Moreno

    1997-01-01

    This paper presents several applications to interest rate risk management based on a two-factor continuous-time model of the term structure of interest rates previously presented in Moreno (1996). This model assumes that default-free discount bond prices are determined by the time to maturity and two factors, the long-term interest rate and the spread (difference between the long-term rate and the short-term (instantaneous) riskless rate). Several new measures of "generalized duration" are p...
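
    In a two-factor setting, duration generalizes from a single number to a vector of sensitivities, one per factor. The sketch below illustrates this numerically for a hypothetical exponential-affine bond price; the factor loadings are placeholders, not the pricing formulas of Moreno (1996).

```python
# Minimal sketch: "generalized duration" as the vector of bond-price
# sensitivities to the long rate L and the spread s. The affine loadings
# kL, ks are illustrative assumptions.
import numpy as np

def bond_price(tau, L, s, kL=0.5, ks=0.8):
    """Hypothetical exponential-affine discount bond price."""
    B = (1 - np.exp(-kL * tau)) / kL   # loading on the long rate
    C = (1 - np.exp(-ks * tau)) / ks   # loading on the spread
    return np.exp(-B * L - C * s)

def generalized_durations(tau, L, s, h=1e-6):
    """Factor durations -(1/P) dP/dfactor via finite differences."""
    P = bond_price(tau, L, s)
    dur_L = -(bond_price(tau, L + h, s) - P) / (h * P)
    dur_s = -(bond_price(tau, L, s + h) - P) / (h * P)
    return dur_L, dur_s

print(generalized_durations(tau=5.0, L=0.04, s=0.01))
```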

  17. Uncertainty Assessment in Long Term Urban Drainage Modelling

    DEFF Research Database (Denmark)

    Thorndahl, Søren

    the probability of system failures (defined as either flooding or surcharge of manholes or combined sewer overflow); (2) an application of the Generalized Likelihood Uncertainty Estimation methodology in which an event based stochastic calibration is performed; and (3) long term Monte Carlo simulations...
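
    The GLUE step in point (2) follows a standard recipe: sample parameter sets from a prior, score each set with an informal likelihood against observations, discard non-behavioural sets below a subjective threshold, and weight the survivors to obtain prediction bounds. A minimal self-contained sketch, with a stand-in for the drainage model and an assumed Nash-Sutcliffe likelihood:

```python
# Minimal GLUE sketch. The "drainage model" is a toy linear reservoir;
# priors, the likelihood measure and the behavioural threshold are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def drainage_model(theta, rain):
    """Stand-in model: linear reservoir with an initial loss."""
    k, loss = theta
    runoff, storage = np.zeros_like(rain), 0.0
    for t, r in enumerate(rain):
        storage += max(r - loss, 0.0)
        runoff[t] = k * storage
        storage -= runoff[t]
    return runoff

rain = rng.exponential(2.0, size=200)
obs = drainage_model((0.3, 0.5), rain) + rng.normal(0, 0.1, size=200)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal likelihood."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# 1. Monte Carlo sampling from uniform priors on (k, loss)
samples = np.column_stack([rng.uniform(0.05, 0.95, 5000),
                           rng.uniform(0.0, 2.0, 5000)])
L = np.array([nse(drainage_model(th, rain), obs) for th in samples])

# 2. Behavioural sets above a (subjective) threshold, likelihood-weighted
behavioural = samples[L > 0.6]
weights = L[L > 0.6] / L[L > 0.6].sum()
pred = np.average([drainage_model(th, rain) for th in behavioural],
                  axis=0, weights=weights)
```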

  18. A Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success

    Science.gov (United States)

    Luong, Ming; Stevens, Jeff

    2015-01-01

    The Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success, a theoretical stages-of-growth model, explains long-term success in IT outsourcing relationships. Research showed the IT outsourcing relationship life cycle consists of four distinct, sequential stages: contract, transition, support, and partnership. The model was…

  19. Long-term Morphological Modeling at Coastal Inlets

    Science.gov (United States)

    2015-05-15

    that of Humboldt Bay, CA. The model reproduces reasonably well several geomorphic and hydrodynamic features of the inlet at Humboldt Bay. The...geometries, and model setup (e.g., sediment transport formulas) to investigate the controlling geomorphic parameters and the applicability of the CMS...The model reproduces the general geomorphic features of Humboldt Bay. The ebb shoal volume is in the lower range of the estimated amount

  20. Modeling Long-Term Fluvial Incision : Shall we Care for the Details of Short-Term Fluvial Dynamics?

    Science.gov (United States)

    Lague, D.; Davy, P.

    2008-12-01

    Fluvial incision laws used in numerical models of coupled climate, erosion and tectonics systems are mainly based on the family of stream power laws, for which the rate of local erosion E is a power function of the topographic slope S and the local mean discharge Q: E = K Q^m S^n. The exponents m and n are generally taken as (0.35, 0.7) or (0.5, 1), and K is chosen such that the predicted topographic elevation, given the prevailing rates of precipitation and tectonics, stays within realistic values. The resulting topographies are reasonably realistic, and the coupled system dynamics behaves broadly as expected: more precipitation induces increased erosion and localization of the deformation. Yet, if we now focus on smaller-scale fluvial dynamics (the reach scale), recent advances have suggested that discharge variability, channel width dynamics or sediment flux effects may play a significant role in controlling incision rates. These are not factored into the simple stream power law model. In this work, we study how these short-term details propagate into long-term incision dynamics within the framework of surface/tectonics coupled numerical models. To upscale the short-term dynamics to geological timescales, we use a numerical model of a trapezoidal river in which vertical and lateral incision processes are computed from fluid shear stress at a daily timescale; sediment transport and protection effects are factored in, as well as a variable discharge. We show that the stream power law model might still be a valid model, but that as soon as realistic effects are included, such as a threshold for sediment transport, variable discharge and a dynamic width, the resulting exponents m and n can be as high as 2 and 4. This high non-linearity has a profound consequence on the sensitivity of fluvial relief to incision rate. We also show that additional complexity does not systematically translate into more non-linear behaviour. For instance, considering only a dynamical width
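
    The upscaling argument can be made concrete with a small numerical experiment: computing erosion daily with an incision threshold and a variable discharge, then averaging, generally gives a long-term rate quite different from the one implied by the mean discharge alone. The sketch below is illustrative only; parameter values and the discharge distribution are assumptions, not those of the authors.

```python
# Minimal sketch: long-term erosion under a threshold stream power law
# with variable daily discharge, versus erosion evaluated at the mean
# discharge. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def daily_erosion(Q, S, K=1e-4, m=0.5, n=1.0, tau_c=4e-6):
    """Stream-power erosion E = K Q^m S^n with an incision threshold."""
    return np.maximum(K * Q**m * S**n - tau_c, 0.0)

# Heavy-tailed daily discharge: many small events, rare large floods
Q = rng.lognormal(mean=0.0, sigma=1.2, size=365 * 100)   # 100 years, daily
S = 0.02                                                  # constant slope

E_longterm = daily_erosion(Q, S).mean()                 # driven by rare floods
E_mean_Q = daily_erosion(np.array([Q.mean()]), S)[0]    # mean discharge only
print(E_longterm, E_mean_Q)   # the two rates differ markedly
```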

  1. Loss terms in free-piston Stirling engine models

    Science.gov (United States)

    Gordon, Lloyd B.

    1992-01-01

    Various models for free-piston Stirling engines are reviewed. Initial models were developed primarily for design purposes and to predict operating parameters, especially efficiency. More recently, however, such models have been used to predict engine stability. Free-piston Stirling engines have no kinematic constraints, and stability may be sensitive not only to the load, but also to various nonlinear loss and spring constraints. The present understanding of various loss mechanisms for free-piston Stirling engines is reviewed, and how they have been incorporated into engine models is discussed.

  2. Using eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and PhenoCams to constrain a process-based biogeochemical model for carbon market-funded wetland restoration

    Science.gov (United States)

    Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.

    2015-12-01

    We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere, and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are functions of two soil carbon (C) pools (i.e. recently-fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters, including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parameterize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high data-model agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1
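
    The parameter-estimation step can be sketched generically. Below is a minimal random-walk Metropolis-Hastings sampler whose proposal scale adapts during burn-in toward a reasonable acceptance rate; the one-parameter "model" and Gaussian likelihood are stand-ins, not PEPRMT itself.

```python
# Minimal adaptive Metropolis-Hastings sketch. The model is a single
# constant flux parameter; observations, priors and tuning choices are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
obs = 2.5 + rng.normal(0, 0.3, size=50)   # synthetic flux observations

def log_likelihood(theta):
    """Gaussian misfit between modelled and observed fluxes."""
    return -0.5 * np.sum((obs - theta) ** 2 / 0.3**2)

theta, step, accepted, chain = 0.0, 1.0, 0, []
for i in range(5000):
    proposal = theta + rng.normal(0, step)
    if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(theta):
        theta, accepted = proposal, accepted + 1
    # Adapt the proposal scale during burn-in only (adaptation is frozen
    # afterwards so the chain targets the correct posterior)
    if i < 2000 and (i + 1) % 100 == 0:
        step *= 1.1 if accepted / (i + 1) > 0.3 else 0.9
    chain.append(theta)

posterior = np.array(chain[2000:])   # discard burn-in
print(posterior.mean(), posterior.std())
```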

  3. Testing Affine Term Structure Models in Case of Transaction Costs

    NARCIS (Netherlands)

    Driessen, J.J.A.G.; Melenberg, B.; Nijman, T.E.

    1999-01-01

    In this paper we empirically analyze the impact of transaction costs on the performance of affine interest rate models. We test the implied (no arbitrage) Euler restrictions, and we calculate the specification error bound of Hansen and Jagannathan to measure the extent to which a model is

  4. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  5. A fuzzy inference model for short-term load forecasting

    International Nuclear Information System (INIS)

    Mamlook, Rustum; Badran, Omar; Abdulhadi, Emad

    2009-01-01

    This paper is concerned with short-term load forecasting (STLF) in power system operations. It provides load predictions for generation scheduling and unit commitment decisions, and therefore precise load forecasting plays an important role in reducing the generation cost and the spinning reserve capacity. Short-term electricity demand forecasting (i.e., the prediction of hourly loads (demand)) is one of the most important tools by which an electric utility/company plans and dispatches the loading of generating units in order to meet system demand. The accuracy of the dispatching system, which derives from the accuracy of the forecasting algorithm used, determines the economics of operating the power system. Inaccuracy or large error in the forecast simply means that load matching is not optimized, and consequently the generation and transmission systems are not being operated in an efficient manner. In the present study, a methodology is proposed to decrease the forecast error and the processing time by using a fuzzy logic controller on an hourly basis. It predicts the effect of different conditional parameters (i.e., weather, time, historical data, and random disturbances) on load forecasting in terms of fuzzy sets during the generation process. These parameters are chosen with respect to their priority and importance. The forecast values obtained by the fuzzy method were compared with conventionally forecast ones. The results showed that the fuzzy implementation of STLF has higher accuracy and better outcomes.
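
    A fuzzy inference pass of the kind described above can be sketched in a few lines: fuzzify the inputs with membership functions, fire a small rule base, and defuzzify to a crisp load estimate. The membership breakpoints, rules, and consequent values below are illustrative assumptions, not the authors' rule base.

```python
# Minimal fuzzy-inference sketch for hourly load forecasting. Inputs are
# temperature and hour of day; the rule base and load values are invented
# for illustration.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def forecast_load(temp_c, hour):
    # Fuzzify the inputs
    hot = tri(temp_c, 20, 35, 50)
    mild = tri(temp_c, 5, 18, 30)
    peak_hours = tri(hour, 16, 19, 22)
    off_hours = tri(hour, -1, 3, 8)
    # Rule base: firing strength = min of the antecedents
    rules = [
        (min(hot, peak_hours), 950.0),   # hot evening  -> high load (MW)
        (min(mild, peak_hours), 700.0),  # mild evening -> medium load
        (min(hot, off_hours), 500.0),    # hot night    -> moderate load
        (min(mild, off_hours), 350.0),   # mild night   -> low load
    ]
    # Weighted-average (singleton) defuzzification
    den = sum(w for w, _ in rules)
    return sum(w * y for w, y in rules) / den if den > 0 else float("nan")

print(forecast_load(temp_c=32.0, hour=18))   # hot early evening
```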