WorldWideScience

Sample records for model termed constrained

  1. A constrained model for MSMA

    Energy Technology Data Exchange (ETDEWEB)

    Capella, Antonio [Instituto de Matematicas, Universidad Nacional Autonoma de Mexico (Mexico); Mueller, Stefan [Hausdorff Center for Mathematics and Institute for Applied Mathematics, Universitaet Bonn (Germany); Otto, Felix [Max Planck Institute for Mathematics in the Sciences, Leipzig (Germany)

    2012-08-15

    A mathematical description of transformation processes in magnetic shape memory alloys (MSMA) under applied stresses and external magnetic fields needs a combination of micromagnetics and continuum elasticity theory. In this note, we discuss the so-called constrained theories, i.e., models where the state described by the pair (linear strain, magnetization) is at every point of the sample constrained to assume one of only finitely many values (that reflect the material symmetries). Furthermore, we focus on large body limits, i.e., models that are formulated in terms of (local) averages of a microstructured state, such as the one proposed by DeSimone and James. We argue that the effect of an interfacial energy associated with the twin boundaries survives on the level of the large body limit in the form of a (local) rigidity of twins. This leads to an alternative (i.e., with respect to reference 1) large body limit. The new model has the advantage of qualitatively explaining the occurrence of a microstructure with charged magnetic walls, as observed in SPP experiments in reference 2. (Copyright 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  2. Constraints on Timeon Model

    CERN Document Server

    Araki, Takeshi

    2009-01-01

    The timeon model recently proposed by Friedberg and Lee has a potential problem of flavor changing neutral currents (FCNCs) if the mass of the timeon is small. In order to avoid the problem without fine-tuning, we introduce a small dimensionless parameter to suppress FCNCs. Even in this case, we find that the timeon mass must be larger than 151 GeV to satisfy all the constraints from processes involving FCNCs in the quark sector. We also extend the timeon model to the lepton sector and examine the leptonic processes.

  3. Observational constraints on the DGP brane-world model with a Gauss-Bonnet term in the bulk

    Energy Technology Data Exchange (ETDEWEB)

    He Jianhua [Department of Physics, Fudan University, 200433 Shanghai (China); Wang Bin [Department of Physics, Fudan University, 200433 Shanghai (China)], E-mail: wangb@fudan.edu.cn; Papantonopoulos, Eleftherios [Department of Physics, National Technical University of Athens, GR 157 73, Athens (Greece)], E-mail: lpapa@central.ntua.gr

    2007-10-18

    Using the data coming from the new 182 Gold type Ia supernova sample, the baryon acoustic oscillation measurement from the Sloan Digital Sky Survey and the H(z) data, we have performed a statistical joint analysis of the DGP brane-world model with a high curvature Gauss-Bonnet term in the bulk. Consistent parameter estimations show that the Gauss-Bonnet-induced gravity model is a viable candidate to explain the observed acceleration of our universe.

  4. Bayesian Constrained-Model Selection for Factor Analytic Modeling

    OpenAIRE

    Peeters, Carel F.W.

    2016-01-01

    My dissertation revolves around Bayesian approaches towards constrained statistical inference in the factor analysis (FA) model. Two interconnected types of restricted-model selection are considered. These types have a natural connection to selection problems in the exploratory FA (EFA) and confirmatory FA (CFA) model and are termed Type I and Type II model selection. Type I constrained-model selection is taken to mean the determination of the appropriate dimensionality of a model. This type ...

  5. Parametrization consequences of constraining soil organic matter models by total carbon and radiocarbon using long-term field data

    Science.gov (United States)

    Menichetti, Lorenzo; Kätterer, Thomas; Leifeld, Jens

    2016-05-01

    Soil organic carbon (SOC) dynamics result from different interacting processes and controls on spatial scales from sub-aggregate to pedon to the whole ecosystem. These complex dynamics are translated into models as abundant degrees of freedom. This high number of not directly measurable variables and, on the other hand, the very limited data at our disposal result in equifinality and parameter uncertainty. Carbon radioisotope measurements are a proxy for SOC age both at annual to decadal (bomb peak based) and centennial to millennial timescales (radioactive decay based), and thus can be used in addition to total organic C for constraining SOC models. By considering this additional information, uncertainties in model structure and parameters may be reduced. To test this hypothesis we studied SOC dynamics and their defining kinetic parameters in the Zürich Organic Fertilization Experiment (ZOFE), a > 60-year-old controlled cropland experiment in Switzerland, by utilizing SOC and SO14C time series. To represent different processes we applied five model structures, all stemming from a simple mother model (Introductory Carbon Balance Model - ICBM): (I) two decomposing pools, (II) an inert pool added, (III) three decomposing pools, (IV) two decomposing pools with a substrate control feedback on decomposition, (V) as (IV) but with an inert pool added. These structures were extended to explicitly represent total SOC and 14C pools. The use of different model structures allowed us to explore model structural uncertainty and the impact of 14C on kinetic parameters. We considered parameter uncertainty by calibrating in a formal Bayesian framework. By varying the relative importance of total SOC and SO14C data in the calibration, we could quantify the effect of the information from these two data streams on estimated model parameters. The weighting of the two data streams was crucial for determining model outcomes, and we suggest including it in future modeling efforts whenever SO14C…
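
    The mother model mentioned above, ICBM, is a two-pool structure. Below is a minimal sketch of structure (I), assuming the published ICBM notation (young pool Y, old pool O, input rate i, humification coefficient h, decay rates kY and kO); the parameter values are placeholders, not the study's calibrated estimates:

        # Minimal two-pool ICBM sketch (structure I); illustrative parameters only.
        def icbm_step(Y, O, i, h, kY, kO, dt=1.0):
            """One Euler step of dY/dt = i - kY*Y and dO/dt = h*kY*Y - kO*O."""
            dY = i - kY * Y
            dO = h * kY * Y - kO * O
            return Y + dt * dY, O + dt * dO

        Y, O = 0.3, 4.0                    # initial C stocks (e.g., kg C/m^2)
        for year in range(60):             # a ZOFE-length (>60-year) run
            Y, O = icbm_step(Y, O, i=0.1, h=0.13, kY=0.8, kO=0.006)
        print(round(Y + O, 3))             # total simulated SOC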

  6. Observational constraints on a decaying cosmological term

    CERN Document Server

    Nakamura, R; Ichiki, K; Nakamura, Riou; Hashimoto, Masa-aki; Ichiki, Kiyotomo

    2006-01-01

    We investigate the evolution of a universe with a decaying cosmological term (vacuum energy) that is assumed to be a function of the scale factor. In this model, while the cosmological term increases toward the early universe, the radiation energy density is lower than in the model with the cosmological "constant". We find that the effect of the decaying cosmological term on the expansion rate at redshift z < 2 is negligible. However, the decrease in the radiation density affects the thermal history of the universe; e.g., photon decoupling occurs at higher z compared to the case of the standard ΛCDM model. As a consequence, a decaying cosmological term affects the cosmic microwave background anisotropy. We show the angular power spectrum in the DΛCDM model and compare it with the Wilkinson Microwave Anisotropy Probe (WMAP) data.
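
    A schematic of the expansion law such a model modifies (the standard flat Friedmann equation with a scale-factor-dependent vacuum term; the specific functional form of Λ(a) is the paper's and is not reproduced here):

        H^2(a) = \frac{8\pi G}{3}\left[\rho_r(a) + \rho_m(a)\right] + \frac{\Lambda(a)}{3}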

  7. Inverse modeling of the (137)Cs source term of the Fukushima Dai-ichi Nuclear Power Plant accident constrained by a deposition map monitored by aircraft.

    Science.gov (United States)

    Yumimoto, Keiya; Morino, Yu; Ohara, Toshimasa; Oura, Yasuji; Ebihara, Mitsuru; Tsuruta, Haruo; Nakajima, Teruyuki

    2016-11-01

    The amount of (137)Cs released by the Fukushima Dai-ichi Nuclear Power Plant accident of 11 March 2011 was inversely estimated by integrating an atmospheric dispersion model, an a priori source term, and a map of deposition recorded by aircraft. The a posteriori source term resolved finer (hourly) variations compared with the a priori term, and estimated the (137)Cs released from 11 March to 2 April to be 8.12 PBq. Although the time series of the a posteriori source term was generally similar to that of the a priori source term, notable modifications were found in the periods when the a posteriori source term was well constrained by the observations. The spatial pattern of (137)Cs deposition with the a posteriori source term showed better agreement with the (137)Cs deposition monitored by aircraft. The a posteriori source term increased (137)Cs deposition in the Naka-dori region (the central part of Fukushima Prefecture) by 32.9%, and considerably improved the underestimated a priori (137)Cs deposition. Observed values of deposition measured at 16 stations and surface atmospheric concentrations collected on a filter tape of suspended particulate matter were used for validation of the a posteriori results. A great improvement was found in surface atmospheric concentration on 15 March; the a posteriori source term reduced the root mean square error, normalized mean error, and normalized mean bias by 13.4, 22.3, and 92.0% for the hourly values, respectively. However, limited improvements were observed in some periods and areas due to the difficulty in simulating accurate wind fields and the lack of observational constraints.
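
    For reference, the three validation metrics quoted above are conventionally defined as in the sketch below (the paper may use minor variants):

        import numpy as np

        def validation_metrics(sim, obs):
            """RMSE, normalized mean error and normalized mean bias (conventional forms)."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            rmse = np.sqrt(np.mean((sim - obs) ** 2))
            nme = np.sum(np.abs(sim - obs)) / np.sum(obs)   # normalized mean error
            nmb = np.sum(sim - obs) / np.sum(obs)           # normalized mean bias
            return rmse, nme, nmb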

  8. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini

    A mesoscale numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element method for calculating stresses. The sintering behavior of a sample constrained by a rigid substrate ...

  9. Reflections on How Color Term Acquisition Is Constrained

    Science.gov (United States)

    Pitchford, Nicola J.

    2006-01-01

    Compared with object word learning, young children typically find learning color terms to be a difficult linguistic task. In this reflections article, I consider two questions that are fundamental to investigations into the developmental acquisition of color terms. First, I consider what constrains color term acquisition and how stable these…

  10. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini

    2014-01-01

    A numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress as well as the FEM calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of a sample constrained by a rigid substrate is simulated. The constrained sintering results in a larger number of pores near the substrate, as well as anisotropic sintering shrinkage…

  11. Modeling the Microstructural Evolution During Constrained Sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini

    2015-01-01

    A numerical model able to simulate solid-state constrained sintering is presented. The model couples an existing kinetic Monte Carlo model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress as well as the FEM calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of a sample constrained by a rigid substrate is simulated. The constrained sintering results in a larger number of pores near the substrate, as well as anisotropic sintering shrinkage…

  12. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field as well as the FE calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number…

  13. Constraining groundwater modeling with magnetic resonance soundings.

    Science.gov (United States)

    Boucher, Marie; Favreau, Guillaume; Nazoumou, Yahaya; Cappelaere, Bernard; Massuel, Sylvain; Legchenko, Anatoly

    2012-01-01

    Magnetic resonance sounding (MRS) is a noninvasive geophysical method that allows estimation of the free water content and transmissivity of aquifers. In this article, the ability of MRS to improve the reliability of a numerical groundwater model is assessed. Thirty-five sites were investigated by MRS over a ∼5000 km² domain of the sedimentary Continental Terminal aquifer in SW Niger. Time domain electromagnetic soundings were jointly carried out to estimate the aquifer thickness. A groundwater model had previously been built for this section of the aquifer and forced by the outputs from a distributed surface hydrology model, to simulate the observed long-term (1992 to 2003) rise in the water table. Uncertainty analysis had shown that independent estimates of the free water content and transmissivity values of the aquifer would facilitate cross-evaluation of the surface-water and groundwater models. MRS results indicate ranges for permeability (K = 1 × 10⁻⁵ to 3 × 10⁻⁴ m/s) and for free water content (w = 5% to 23% m³/m³) narrowed by two orders of magnitude (K) and by ∼50% (w), respectively, compared to the ranges of permeability and specific yield values previously considered. These shorter parameter ranges result in a reduction in the model's equifinality (whereby multiple combinations of the model's parameters are able to represent the same observed piezometric levels), allowing a better constrained estimate to be derived for net aquifer recharge (∼22 mm/year).

  14. Inequality constrained normal linear models

    NARCIS (Netherlands)

    Klugkist, I.G.

    2005-01-01

    This dissertation deals with normal linear models with inequality constraints among model parameters. It consists of an introduction and four chapters that are papers submitted for publication. The first chapter introduces the use of inequality constraints. Scientists often have one or more theories

  15. Cardinality constrained portfolio selection via factor models

    OpenAIRE

    Monge, Juan Francisco

    2017-01-01

    In this paper we propose and discuss different 0-1 linear models in order to solve the cardinality constrained portfolio problem by using factor models. Factor models are used to build portfolios that track indexes, together with other objectives, and they also need a smaller number of parameters to estimate than the classical Markowitz model. The addition of the cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities in the portfolio allows us to o...

  16. The medium term outcome of the Omnifit constrained acetabular cup.

    Science.gov (United States)

    Bigsby, Ewan; Whitehouse, Michael R; Bannister, Gordon C; Blom, Ashley W

    2012-01-01

    Recurrent dislocation requiring revision surgery occurs in approximately 4% of primary total hip arthroplasties (THAs). To reduce this risk, or to treat those patients who recurrently dislocate, a constrained acetabular component may be used; however, there are concerns over the success of such components due to increased mechanical stresses. The purpose of this study was to analyse the survivorship and radiological results of the Omnifit constrained acetabular component, providing a longer patient-reported outcome follow-up than previous studies. 117 patients (median age 82 years) underwent a THA with an Omnifit constrained acetabular component. Of these, 45 were primary replacements and 72 were revisions. Survivorship analysis was performed and patients were assessed both radiologically and functionally. At follow-up, 53 patients (45.3%) had died, at a median time of 33 months from operation. The median overall follow-up was 7.0 (5.5-8.2) years. Survivors (median age 83 years) reported a median Oxford Hip Score (OHS) of 16.6 (0-48), and 87.8% were satisfied with their surgery. 45 (91.8%) of the acetabular components were stable radiologically; 48 (96%) of the femoral components were stable (5 uncemented, 43 cemented) and two were possibly unstable. Four of the 117 patients underwent further surgery. Only one required revision of the prosthesis, and this was for a periprosthetic fracture. In the medium term, the Omnifit constrained acetabular component prevents dislocation and does not cause excessive loosening of either the acetabular or femoral components in our patient population. Our results support the use of the Omnifit constrained acetabular component in elderly patients at risk of dislocation with low functional demand.

  17. A constrained supersymmetric left-right model

    CERN Document Server

    Hirsch, Martin; Opferkuch, Toby; Porod, Werner; Staub, Florian

    2016-01-01

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model's capability to explain current anomalies observed at the LHC.

  18. Cosmogenic photons strongly constrain UHECR source models

    CERN Document Server

    van Vliet, Arjen

    2016-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  19. Cosmogenic photons strongly constrain UHECR source models

    Science.gov (United States)

    van Vliet, Arjen

    2017-03-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  20. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  1. The canonical equilibrium of constrained molecular models

    CERN Document Server

    Echenique, Pablo; García-Risueño, Pablo

    2011-01-01

    In order to increase the efficiency of computer simulations of biological molecules, it is very common to impose holonomic constraints on the fastest degrees of freedom; normally bond lengths, but also possibly bond angles. However, as with any other element that affects the physical model, the imposition of constraints must be assessed from the point of view of accuracy: both the dynamics and the equilibrium statistical mechanics are model-dependent, and they will be changed if constraints are used. In this review, we investigate the accuracy of constrained models at the level of the equilibrium statistical mechanics distributions produced by the different dynamics. We carefully derive the canonical equilibrium distributions of both the constrained and unconstrained dynamics, comparing the two of them by means of a "stiff" approximation to the latter. We do so both in the case of flexible and hard constraints, i.e., when the value of the constrained coordinates depends on the conformation and when it is a constant…

  2. Mantle Convection Models Constrained by Seismic Tomography

    Science.gov (United States)

    Durbin, C. J.; Shahnas, M.; Peltier, W. R.; Woodhouse, J. H.

    2011-12-01

    …the Perovskite-post-Perovskite transition (Murakami et al., 2004, Science) that appears to define the D″ layer at the base of the mantle. In this initial phase of what will be a longer term project, we are assuming that the internal mantle viscosity structure is spherically symmetric and compatible with the recent inferences of Peltier and Drummond (2010, Geophys. Res. Lett.) based upon glacial isostatic adjustment and Earth rotation constraints. The internal density structure inferred from the tomography model is assimilated into the convection model by continuously "nudging" the modification to the input density structure predicted by the convection model back towards the tomographic constraint at the long wavelengths that the tomography specifically resolves, leaving the shorter wavelength structure free to evolve, essentially "slaved" to the large scale structure. We focus upon the ability of the nudged model to explain observed plate velocities, including both their poloidal (divergence related) and toroidal (strike slip fault related) components. The true plate velocity field is then used as an additional field towards which the tomographically constrained solution is nudged.
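
    Schematically, the nudging described above relaxes only the long-wavelength part of the model density toward tomography (a generic nudging form with relaxation timescale τ, assumed rather than quoted from the abstract):

        \partial_t \rho_L = \mathcal{F}_L[\rho] - \frac{\rho_L - \rho_L^{\mathrm{tomo}}}{\tau}

    where the subscript L denotes the tomographically resolved long wavelengths and F is the convection model's tendency.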

  3. Balance of payments constrained growth models: history and overview

    Directory of Open Access Journals (Sweden)

    Anthony P. Thirlwall

    2011-12-01

    Thirlwall's 1979 balance of payments constrained growth model predicts that a country's long-run growth of GDP can be approximated by the ratio of the growth of real exports to the income elasticity of demand for imports, assuming negligible effects from real exchange rate movements. The paper surveys developments of the model since then, allowing for capital flows, interest payments on debt, terms of trade movements, and disaggregation of the model by commodities and trading partners. Various tests of the model are discussed, and an extensive list of papers that have examined the model is presented.
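
    In the standard notation (not taken from this record), with x the growth rate of real exports and π the income elasticity of demand for imports, the growth rule summarized above reads

        y_B = \frac{x}{\pi}

    where y_B is the balance of payments constrained growth rate of GDP.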

  4. QCD strings as constrained grassmannian sigma model

    CERN Document Server

    Viswanathan, K S; Viswanathan, K S; Parthasarathy, R

    1995-01-01

    We present calculations for the effective action of the string world sheet in R^3 and R^4, utilizing its correspondence with the constrained Grassmannian sigma model. Minimal surfaces describe the dynamics of open strings, while harmonic surfaces describe that of closed strings. The one-loop effective action for these is calculated with instanton and anti-instanton background, representing N-string interactions at the tree level. The effective action is found to be the partition function of a classical modified Coulomb gas in the confining phase, with a dynamically generated mass gap.

  5. A Constrained CA Model for Planning Simulation Incorporating Institutional Constraints

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In recent years, it has become prevalent to simulate urban growth by means of cellular automata (CA) modeling, which is based on self-organizing theories and differs from system dynamics modeling. Since the urban system is definitely complex, the CA models applied in urban growth simulation should take into consideration not only the neighborhood influence, but also other factors influencing urban development. We bring forward the term complex constrained CA (CC-CA) model, which integrates the constrained conditions of neighborhood, macro socio-economy, space and institution. Particularly, constrained construction zoning, as one institutional constraint, is considered in the CC-CA modeling. In the paper, the conceptual CC-CA model is introduced together with the transition rules. Based on the CC-CA model for Beijing, we discuss the complex constraints to the urban development of Beijing, and we show how to set institutional constraints in a planning scenario to control the urban growth pattern of Beijing.

  6. Constraining anisotropic models of early Universe with WMAP9 data

    CERN Document Server

    Ramazanov, Sabir

    2013-01-01

    We constrain several models of the early Universe that predict statistical anisotropy of the CMB sky. We make use of WMAP9 maps deconvolved with beam asymmetries. As compared to previous releases of WMAP data, they do not exhibit the anomalously large quadrupole of the statistical anisotropy. This allows us to strengthen limits on parameters of models established earlier in the literature. In particular, the amplitude of the special quadrupole, whose direction is aligned with the ecliptic poles, is now constrained as g_* = 0.002 ± 0.041 at 95% CL (± 0.020 at 68% CL). An upper limit is obtained on the total number of e-folds in anisotropic inflation with the Maxwellian term non-minimally coupled to the inflaton, namely N_tot… We also constrain models of the (pseudo-)Conformal Universe. The strongest constraint is obtained for spectator scenarios involving a long stage of subhorizon evolution after conformal rolling, which reads h² < 0.006 at 95% CL, in terms…

  7. Constraining Cosmological Models with Different Observations

    Science.gov (United States)

    Wei, J. J.

    2016-07-01

    With the observations of Type Ia supernovae (SNe Ia), scientists discovered that the Universe is experiencing an accelerated expansion, and then revealed the existence of dark energy in 1998. Since this amazing discovery, cosmology has become a hot topic in the physical research field. Cosmology is a subject that strongly depends on astronomical observations. Therefore, constraining different cosmological models with all kinds of observations is one of the most important research tasks in modern cosmology. The goal of this thesis is to investigate cosmology using the latest observations. The observations include SNe Ia, Type Ic Super Luminous supernovae (SLSN Ic), Gamma-ray bursts (GRBs), angular diameter distances of galaxy clusters, strong gravitational lensing, and age measurements of old passive galaxies, etc. In Chapter 1, we briefly review the research background of cosmology, and introduce some cosmological models. Then we summarize the progress on cosmology from all kinds of observations in more detail. In Chapter 2, we present the results of our studies on supernova cosmology. The main difficulty with the use of SNe Ia as standard candles is that one must optimize three or four nuisance parameters characterizing SN luminosities simultaneously with the parameters of an expansion model of the Universe. We have confirmed that one should optimize all of the parameters by carrying out the method of maximum likelihood estimation in any situation where the parameters include an unknown intrinsic dispersion. The commonly used method, which estimates the dispersion by requiring the reduced χ² to equal unity, does not take into account all possible variances among the parameters. We carry out such a comparison of the standard ΛCDM cosmology and the R_h = ct Universe using the SN Legacy Survey sample of 252 SN events, and show that each model fits its individually reduced data very well. Moreover, it is quite evident that SLSNe Ic may be useful…
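
    The estimation issue described for Chapter 2 can be made concrete with a short sketch (a generic Gaussian likelihood with an unknown intrinsic dispersion; the toy "model" and all numbers are illustrative, not the thesis's):

        import numpy as np
        from scipy.optimize import minimize

        def neg2_log_like(params, z, mu_obs, sigma_obs):
            # params = (offset, sigma_int); the ln(variance) term lets sigma_int be
            # optimized jointly with the model parameters, unlike forcing reduced chi^2 = 1.
            offset, sigma_int = params
            mu_model = 5.0 * np.log10(3e5 * z) + offset      # toy distance modulus
            var = sigma_obs ** 2 + sigma_int ** 2
            return np.sum(np.log(var) + (mu_obs - mu_model) ** 2 / var)

        rng = np.random.default_rng(0)
        z = rng.uniform(0.1, 1.0, 252)                       # sample size as in the SNLS set
        mu_obs = 5.0 * np.log10(3e5 * z) + 25.0 + rng.normal(0.0, 0.15, z.size)
        res = minimize(neg2_log_like, x0=(24.0, 0.1),
                       args=(z, mu_obs, np.full(z.size, 0.1)))
        print(res.x)                                         # joint offset and dispersion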

  8. Logical consistency and sum-constrained linear models

    NARCIS (Netherlands)

    van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.

    2006-01-01

    A topic that has received quite some attention in the seventies and eighties is logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a

  9. Generation of Granulites Constrained by Thermal Modeling

    Science.gov (United States)

    Depine, G. V.; Andronicos, C. L.; Phipps-Morgan, J.

    2006-12-01

    The heat source needed to generate granulite facies metamorphism is still an unsolved problem in geology. There is a close spatial relationship between granulite terrains and extensive silicic plutonism, suggesting heat advection by melts is critical to their formation. To investigate the role of heat advection by melt in the generation of granulites, we use numerical 1-D models which include the movement of melt from the base of the crust to the middle crust. The model is in part constrained by petrological observations from the Coast Plutonic Complex (CPC) in British Columbia, Canada at ~54° N, where migmatite and granulite are widespread. The model takes into account time dependent heat conduction and advection of melts generated at the base of the crust. The model starts with a crust of 55 km, consistent with petrologic and geochemical data from the CPC. The lower crust is assumed to be amphibolite in composition, consistent with seismologic and geochemical constraints for the CPC. An initial geothermal gradient estimated from metamorphic P-T-t paths in this region is ~37°C/km, hotter than normal geothermal gradients. The parameters used for the model are a coefficient of thermal conductivity of 2.5 W/(m·°C), a density for the crust of 2700 kg/m³ and a heat capacity of 1170 J/(kg·°C). Using the above starting conditions, a temperature of 1250°C is assumed for the mantle below 55 km, equivalent to placing asthenosphere in contact with the base of the crust to simulate delamination, basaltic underplating and/or asthenospheric exposure by a sudden steepening of the slab. This condition at 55 km results in melting of the amphibolite in the lower crust. Once a melt fraction of 10% is reached, the melt is allowed to migrate to a depth of 13 km, while material at 13 km is displaced downwards to replace the ascending melts. The steady-state profile has a very steep geothermal gradient of more than 50°C/km from the surface to 13 km, consistent with the generation of andalusite…

  10. Constraining Logotropic Unified Dark Energy Models

    CERN Document Server

    Ferreira, V M C

    2016-01-01

    A unification of dark matter and dark energy in terms of a logotropic perfect dark fluid has recently been proposed, where deviations with respect to the standard ΛCDM model are dependent on a single parameter B. In this paper we show that the requirement that the linear growth of cosmic structures on comoving scales larger than 8 h⁻¹ Mpc is not significantly affected with respect to the standard ΛCDM result provides the strongest constraint to date on the model (B < 6 × 10⁻⁷), an improvement of more than three orders of magnitude over previous constraints on the value of B. We further show that this constraint rules out the logotropic Unified Dark Energy model as a possible solution to the small scale problems of the ΛCDM model, including the cusp problem of Dark Matter halos or the missing satellite problem, as well as the original version of the model where the Planck energy density was taken as one of the two parameters characterizing the…

  11. DAE for Frictional Contact Modeling of Constrained Multi-Flexible Body Systems

    Institute of Scientific and Technical Information of China (English)

    Ray P.S.Han; S. G. Mao

    2004-01-01

    A general formulation for modeling frictional contact interactions in a constrained multi-flexible body system is outlined in this paper. The governing differential-algebraic equations (DAE) for the constrained motion contain not only a frictional term but also the unknown contact conditions. These contact conditions are characterized by a set of nonlinear complementarity equations. To demonstrate the model, a falling, spinning beam impacting a rough elastic ground with damping is solved, and a comparison with Stewart and Trinkle's results is provided.

  12. Dark matter candidates in the constrained exceptional supersymmetric standard model

    Science.gov (United States)

    Athron, P.; Thomas, A. W.; Underwood, S. J.; White, M. J.

    2017-02-01

    The exceptional supersymmetric standard model is a low energy alternative to the minimal supersymmetric standard model (MSSM) with an extra U(1) gauge symmetry and three generations of matter filling complete 27-plet representations of E6. This provides both new D and F term contributions that raise the Higgs mass at tree level, and a compelling solution to the μ-problem of the MSSM by forbidding such a term with the extra U(1) symmetry. Instead, an effective μ-term is generated from the vacuum expectation value of an SM singlet which breaks the extra U(1) symmetry at low energies, giving rise to a massive Z'. We explore the phenomenology of the constrained version of this model in substantially more detail than has been carried out previously, performing a ten dimensional scan that reveals a large volume of viable parameter space. We classify the different mechanisms for generating the measured relic density of dark matter found in the scan, including the identification of a new mechanism involving mixed bino/inert-Higgsino dark matter. We show which mechanisms can evade the latest direct detection limits from the LUX 2016 experiment. Finally we present benchmarks consistent with all the experimental constraints and which could be discovered with the XENON1T experiment.

  13. Stabilizing model predictive control for constrained nonlinear distributed delay systems.

    Science.gov (United States)

    Mahboobi Esfanjani, R; Nikravesh, S K Y

    2011-04-01

    In this paper, a model predictive control scheme with guaranteed closed-loop asymptotic stability is proposed for a class of constrained nonlinear time-delay systems with discrete and distributed delays. A suitable terminal cost functional and an appropriate terminal region are utilized to achieve asymptotic stability. To determine the terminal cost, a locally asymptotically stabilizing controller is designed, and an appropriate Lyapunov-Krasovskii functional of the locally stabilized system is employed as the terminal cost. Furthermore, an invariant set for the locally stabilized system, established by using the Razumikhin Theorem, is used as the terminal region. Simple conditions are derived to obtain the terminal cost and terminal region in terms of Bilinear Matrix Inequalities. The method is illustrated by a numerical example.
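
    In the generic form of such schemes (standard notation, assumed rather than quoted from the paper), the optimization solved at each sampling instant is

        \min_{u(\cdot)} \int_t^{t+T} \ell(x(\tau), u(\tau))\, d\tau + F(x(t+T)) \quad \text{subject to} \quad x(t+T) \in \Omega

    where F is the terminal cost (here a Lyapunov-Krasovskii functional of the locally stabilized system) and Ω is the invariant terminal region.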

  14. Dark matter candidates in the constrained Exceptional Supersymmetric Standard Model

    CERN Document Server

    Athron, P; Underwood, S J; White, M J

    2016-01-01

    The Exceptional Supersymmetric Standard Model (E6SSM) is a low energy alternative to the MSSM with an extra U(1) gauge symmetry and three generations of matter filling complete 27-plet representations of E6. This provides both new D and F term contributions that raise the Higgs mass at tree level, and a compelling solution to the μ-problem of the MSSM by forbidding such a term with the extra U(1) symmetry. Instead, an effective μ-term is generated from the VEV of an SM singlet which breaks the extra U(1) symmetry at low energies, giving rise to a massive Z'. We explore the phenomenology of the constrained version of this model (cE6SSM) in substantially more detail than has been carried out previously, performing a ten dimensional scan that reveals a large volume of viable parameter space. We classify the different mechanisms for generating the measured relic density of dark matter found in the scan, including the identification of a new mechanism involving mixed bino/inert-Higgs…

  15. Constrained regression models for optimization and forecasting

    Directory of Open Access Journals (Sweden)

    P.J.S. Bruwer

    2003-12-01

    Linear regression models and the interpretation of such models are investigated. In practice, problems often arise with the interpretation and use of a given regression model, in spite of the fact that researchers may be quite "satisfied" with the model. In this article methods are proposed which overcome these problems. This is achieved by constructing a model where the "area of experience" of the researcher is taken into account. This area of experience is represented as a convex hull of the available data points. With the aid of a linear programming model it is shown how conclusions can be formed in a practical way regarding aspects such as optimal levels of decision variables and forecasting.
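
    The "area of experience" test above reduces to a small linear program: a query point x lies in the convex hull of the data rows X iff there exist weights λ ≥ 0 with Σλ = 1 and X^T λ = x. A minimal sketch (illustrative, not the authors' code):

        import numpy as np
        from scipy.optimize import linprog

        def in_experience_region(X, x):
            """LP feasibility test: is x in the convex hull of the rows of X?"""
            n = X.shape[0]
            A_eq = np.vstack([X.T, np.ones(n)])   # X^T lambda = x and sum(lambda) = 1
            b_eq = np.append(x, 1.0)
            res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
            return res.success

        X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # observed data points
        print(in_experience_region(X, np.array([0.2, 0.2])))  # True: interpolation
        print(in_experience_region(X, np.array([1.0, 1.0])))  # False: extrapolation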

  16. Testing inequality constrained hypotheses in SEM Models

    NARCIS (Netherlands)

    Van de Schoot, R.; Hoijtink, H.J.A.; Dekovic, M.

    2010-01-01

    Researchers often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model. It is currently not possible to test these so-called informative hypotheses in structural equation modeling software. We offer a solution to this problem…

  17. Constraining models with a large scalar multiplet

    CERN Document Server

    Earl, Kevin; Logan, Heather E; Pilkington, Terry

    2013-01-01

    Models in which the Higgs sector is extended by a single electroweak scalar multiplet X can possess an accidental global U(1) symmetry at the renormalizable level if X has isospin T greater than or equal to 2. We show that all such U(1)-symmetric models are excluded by the interplay of the cosmological relic density of the lightest (neutral) component of X and its direct detection cross section via Z exchange. The sole exception is the T = 2 multiplet, whose lightest member decays on a few-day to few-year timescale via a Planck-suppressed dimension-5 operator.

  18. Using Diagnostic Text Information to Constrain Situation Models

    NARCIS (Netherlands)

    Dutke, S.; Baadte, C.; Hähnel, A.; Hecker, U. von; Rinck, M.

    2010-01-01

    During reading, the model of the situation described by the text is continuously accommodated to new text input. The hypothesis was tested that readers are particularly sensitive to diagnostic text information that can be used to constrain their existing situation model. In 3 experiments, adult participants…

  19. Constraining Numerical Geodynamo Modeling with Surface Observations

    Science.gov (United States)

    Kuang, Weijia; Tangborn, Andrew

    2006-01-01

    Numerical dynamo solutions have traditionally been generated entirely by a set of self-consistent differential equations that govern the spatial-temporal variation of the magnetic field, velocity field and other fields related to dynamo processes. In particular, those solutions are obtained with parameters very different from those appropriate for the Earth's core. Geophysical application of the numerical results therefore depends on a correct understanding of the differences (errors) between the model outputs and the true states (truth) in the outer core. Part of the truth can be observed at the surface in the form of the poloidal magnetic field. To understand these differences, or errors, we generate a new initial model state (analysis) by sequentially assimilating the model outputs with the surface geomagnetic observations using an optimal interpolation scheme. The time evolution of the core state is then controlled by our MoSST core dynamics model. The final outputs (forecasts) are then compared with the surface observations as a means to test the success of the assimilation. We use surface geomagnetic data back to year 1900 for our studies, with 5-year forecast and 20-year analysis periods. We intend to use the results to understand the time variation of the errors with the assimilation sequences, and the impact of the assimilation on other unobservable quantities, such as the toroidal field and the fluid velocity in the core.
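
    The optimal interpolation step mentioned above has the standard textbook form (notation assumed here, not quoted from the abstract): the analysis x_a combines the forecast x_f with observations y via

        x_a = x_f + K\,(y - H x_f), \qquad K = P H^{T} \left(H P H^{T} + R\right)^{-1}

    where H maps the core state to the observed surface poloidal field, and P and R are the forecast and observation error covariances.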

  20. Infrared Constraints on AGN Tori Models

    CERN Document Server

    Hatziminaoglou, E

    2006-01-01

    This work focuses on the properties of dusty tori in active galactic nuclei (AGN) derived from the comparison of SDSS type 1 quasars with mid-infrared (MIR) counterparts and a new, detailed torus model. The infrared data were taken by the Spitzer Wide-area InfraRed Extragalactic (SWIRE) Survey. Basic model parameters are constrained, such as the density law of the graphite and silicate grains, the torus size and its opening angle. A whole variety of optical depths is supported. The favoured models are those with density decreasing with distance from the centre, while there is no clear tendency as to the covering factor, i.e. small, medium and large covering factors are almost equally distributed. Based on the models that better describe the observed SEDs, properties such as the accretion luminosity, the mass of dust, the inner to outer radius ratio and the hydrogen column density are computed. The properties of the tori, as derived by fitting the observed SEDs, are independent of the redshift, once observational…

  1. A Novel Approach to Constraining Uncertain Stellar Evolution Models

    Science.gov (United States)

    Rosenfield, Philip; Girardi, Leo; Dalcanton, Julianne; Johnson, L. C.; Williams, Benjamin F.; Weisz, Daniel R.; Bressan, Alessandro; Fouesneau, Morgan

    2017-01-01

    Stellar evolution models are fundamental to nearly all studies in astrophysics. They are used to interpret spectral energy distributions of distant galaxies, to derive the star formation histories of nearby galaxies, and to understand fundamental parameters of exoplanets. Despite the success in using stellar evolution models, some important aspects of stellar evolution remain poorly constrained and their uncertainties rarely addressed. We present results using archival Hubble Space Telescope observations of 10 stellar clusters in the Magellanic Clouds to simultaneously constrain the values and uncertainties of the strength of core convective overshooting, metallicity, interstellar extinction, cluster distance, binary fraction, and age.

  2. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    …and forth in order to develop enough momentum to swing the pendulum up. The state of the system, x = [x, v, θ, ω], is described by the position of the cart, the velocity of the cart, the angle of the pendulum and its angular velocity. A horizontal force u… by assuming a no-slip model. The state space is described by the vector [x, y, v, φ], where x and y denote the position of the car, v the longitudinal velocity of the car, and φ the car's orientation. The control signal consists of the PWM duty cycle…

  3. Constraining multi-Higgs flavour models

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Felipe, R.; Silva, Joao P. [Rua Conselheiro Emidio Navarro 1, Instituto Superior de Engenharia de Lisboa-ISEL, Lisbon (Portugal); Universidade de Lisboa, Centro de Fisica Teorica de Particulas (CFTP), Instituto Superior Tecnico, Lisbon (Portugal); Ivanov, I.P. [Universite de Liege, IFPA, Liege (Belgium); Sobolev Institute of Mathematics, Novosibirsk (Russian Federation); Ghent University, Department of Physics and Astronomy, Ghent (Belgium); Nishi, C.C. [Universidade Federal do ABC-UFABC, Santo Andre, SP (Brazil); Serodio, Hugo [Universitat de Valencia-CSIC, Departament de Fisica Teorica and IFIC, Burjassot (Spain)

    2014-07-15

    To study a flavour model with a non-minimal Higgs sector one must first define the symmetries of the fields; then identify what types of vacua exist and how they may break the symmetries; and finally determine whether the remnant symmetries are compatible with the experimental data. Here we address all these issues in the context of flavour models with any number of Higgs doublets. We stress the importance of analysing the Higgs vacuum expectation values that are pseudo-invariant under the generators of all subgroups. It is shown that the only way of obtaining a physical CKM mixing matrix and, simultaneously, non-degenerate and non-zero quark masses is requiring the vacuum expectation values of the Higgs fields to break completely the full flavour group, except possibly for some symmetry belonging to baryon number. The application of this technique to some illustrative examples, such as the flavour groups Δ(27), A4 and S3, is also presented. (orig.)

  4. Constraining Emission Models of Luminous Blazar Sources

    Energy Technology Data Exchange (ETDEWEB)

    Sikora, Marek; /Warsaw, Copernicus Astron. Ctr.; Stawarz, Lukasz; /Kipac, Menlo Park /Jagiellonian U., Astron. Observ. /SLAC; Moderski, Rafal; Nalewajko, Krzysztof; /Warsaw, Copernicus Astron. Ctr.; Madejski, Greg; /KIPAC, Menlo Park /SLAC

    2009-10-30

    Many luminous blazars which are associated with quasar-type active galactic nuclei display broad-band spectra characterized by a large luminosity ratio of their high-energy (γ-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also typically have very hard X-ray spectra, and those in turn seem to challenge hadronic scenarios for the high energy blazar emission. As shown in this paper, no such problems are faced by models which involve Comptonization of radiation provided by a broad-line region or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of γ-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars to be days/weeks, consistent with the variability patterns observed in such systems at infrared, optical and γ-ray frequencies. We also propose that the parsec-scale blazar activity can occasionally be accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.

  5. Quasi Maximum Likelihood Analysis of High Dimensional Constrained Factor Models

    OpenAIRE

    Li, Kunpeng; Li,Qi; Lu, Lina

    2016-01-01

    Factor models have been widely used in practice. However, an undesirable feature of a high dimensional factor model is that the model has too many parameters. An effective way to address this issue, proposed in a seminal work by Tsai and Tsay (2010), is to decompose the loadings matrix into a high-dimensional known matrix multiplied by a low-dimensional unknown matrix, which Tsai and Tsay (2010) name constrained factor models. This paper investigates the estimation and inferential theory…

  6. Bayesian model selection for constrained multivariate normal linear models

    NARCIS (Netherlands)

    Mulder, J.

    2010-01-01

    The expectations that researchers have about the structure in the data can often be formulated in terms of equality constraints and/or inequality constraints on the parameters in the model that is used. In a (M)AN(C)OVA model, researchers have expectations about the differences between the

  7. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data used to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also…

  8. 3D facial geometric features for constrained local model

    NARCIS (Netherlands)

    Cheng, Shiyang; Zafeiriou, Stefanos; Asthana, Akshay; Pantic, Maja

    2014-01-01

    We propose a 3D Constrained Local Model framework for deformable face alignment in depth images. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate the fusion…

  9. Robust discriminative response map fitting with constrained local models

    NARCIS (Netherlands)

    Asthana, Akshay; Zafeiriou, Stefanos; Cheng, Shiyang; Pantic, Maja

    2013-01-01

    We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, u…

  10. Constraining the long-term climate response to stratospheric sulfate aerosols injection by the short-term volcanic climate response

    Science.gov (United States)

    Plazzotta, M.; Seferian, R.; Douville, H.; Kravitz, B.; Tilmes, S.; Tjiputra, J.

    2016-12-01

    Rising greenhouse gas emissions are leading to global warming and climate change, which will have multiple impacts on human society. Geoengineering methods like solar radiation management by stratospheric sulfate aerosol injection (SSA-SRM) aim at treating the symptoms of climate change by reducing the global temperature. Since real-world testing cannot be implemented, Earth System Models (ESMs) are useful tools to assess the climate impacts of such geoengineering methods. However, coordinated simulations performed within the Geoengineering Model Intercomparison Project (GeoMIP) have shown that climate cooling in response to a continuous injection of 5 Tg of SO2 per year under the RCP4.5 future projection (the so-called G4 experiment) differs substantially between ESMs. Here, we employ a volcano analog approach to constrain the climate response in SSA-SRM geoengineering simulations across an ensemble of 10 ESMs. We identify an emergent relationship between the long-term cooling in response to the mitigation of the clear-sky surface downwelling shortwave radiation (RSDSCS), and the short-term cooling related to the change in RSDSCS during the major tropical volcanic eruptions observed over the historical period (1850-2005). This relationship explains almost 80% of the multi-model spread. Combined with contemporary observations of the latest volcanic eruptions (satellite observations and model reanalyses), this relationship provides a tight constraint on the climate impacts of SSA-SRM. We estimate that a continuous injection of SO2 aerosols into the stratosphere will reduce the global average temperature of the continental land surface by 0.47 K per W m⁻², impacting both the hydrological and carbon cycles. Compared with the unconstrained ESM ensemble (range from 0.32 to 0.92 K per W m⁻²), our estimate represents a much higher-confidence way to assess the impacts of SSA-SRM on the climate, while ruling out the most extreme projections of the unconstrained ensemble as extremely unlikely.
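
    The emergent-constraint logic above amounts to an across-ensemble regression plus an observed predictor value (a generic sketch; the numbers are illustrative, not the study's):

        import numpy as np

        # Short-term (volcanic) and long-term (SSA-SRM) cooling per model, K per W m^-2.
        short_term = np.array([0.20, 0.25, 0.30, 0.35, 0.40])   # illustrative ensemble
        long_term = np.array([0.35, 0.45, 0.55, 0.70, 0.85])

        # Fit the emergent relationship across the model ensemble...
        slope, intercept = np.polyfit(short_term, long_term, 1)

        # ...then plug in the observed short-term response to constrain the projection.
        obs_short_term = 0.28                                   # hypothetical observation
        print(round(slope * obs_short_term + intercept, 2))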

  11. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

    The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model, denoted ShiftCP, has proven useful for the modeling of latency changes in trial-based neuroimaging data [17]. In order to facilitate component interpretation, we presently extend the ShiftCP model such that the extracted components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model, we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the ShiftCP model…

  12. Dynamic term structure models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Meldrum, Andrew

    This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four…

  13. A homogenized constrained mixture (and mechanical analog) model for growth and remodeling of soft tissue.

    Science.gov (United States)

    Cyron, C J; Aydin, R C; Humphrey, J D

    2016-12-01

    Most mathematical models of the growth and remodeling of load-bearing soft tissues are based on one of two major approaches: a kinematic theory that specifies an evolution equation for the stress-free configuration of the tissue as a whole or a constrained mixture theory that specifies rates of mass production and removal of individual constituents within stressed configurations. The former is popular because of its conceptual simplicity, but relies largely on heuristic definitions of growth; the latter is based on biologically motivated micromechanical models, but suffers from higher computational costs due to the need to track all past configurations. In this paper, we present a temporally homogenized constrained mixture model that combines advantages of both classical approaches, namely a biologically motivated micromechanical foundation, a simple computational implementation, and low computational cost. As illustrative examples, we show that this approach describes well both cell-mediated remodeling of tissue equivalents in vitro and the growth and remodeling of aneurysms in vivo. We also show that this homogenized constrained mixture model suggests an intimate relationship between models of growth and remodeling and viscoelasticity. That is, important aspects of tissue adaptation can be understood in terms of a simple mechanical analog model, a Maxwell fluid (i.e., spring and dashpot in series) in parallel with a "motor element" that represents cell-mediated mechanoregulation of extracellular matrix. This analogy allows a simple implementation of homogenized constrained mixture models within commercially available simulation codes by exploiting available models of viscoelasticity.
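
    The mechanical analog named above can be written down directly (a standard Maxwell-element relation with an added "motor" stress; notation assumed, not the paper's):

        \dot{\sigma} + \frac{E}{\eta}\,\sigma = E\,\dot{\varepsilon}, \qquad \sigma_{\mathrm{total}} = \sigma + \sigma_{\mathrm{motor}}

    where E and η are the stiffness and viscosity of the Maxwell branch, and σ_motor represents cell-mediated tension on the extracellular matrix.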

  14. Towards better constrained models of the solar magnetic cycle

    Science.gov (United States)

    Munoz-Jaramillo, Andres

    2010-12-01

    The best tools we have for understanding the origin of solar magnetic variability are kinematic dynamo models. During the last decade, this type of model has seen a continuous evolution and has become increasingly successful at reproducing solar cycle characteristics. The basic ingredients of these models are: the solar differential rotation -- which acts as the main source of energy for the system by shearing the magnetic field; the meridional circulation -- which plays a crucial role in magnetic field transport; the turbulent diffusivity -- which attempts to capture the effect of convective turbulence on the large scale magnetic field; and the poloidal field source -- which closes the cycle by regenerating the poloidal magnetic field. However, most of these ingredients remain poorly constrained, which allows one to obtain solar-like solutions by "tuning" the input parameters, leading to controversy regarding which parameter set is more appropriate. In this thesis we revisit each of those ingredients in an attempt to constrain them better by using observational data and theoretical considerations, reducing the number of free parameters in the model. For the meridional flow and differential rotation we use helioseismic data to constrain free parameters and find that the differential rotation is well determined, but the available data can only constrain the latitudinal dependence of the meridional flow. For the turbulent magnetic diffusivity we show that combining mixing-length theory estimates with magnetic quenching allows us to obtain viable magnetic cycles and that the commonly used diffusivity profiles can be understood as a spatiotemporal average of this process. For the poloidal source we introduce a more realistic way of modeling active region emergence and decay and find that this resolves existing discrepancies between kinematic dynamo models and surface flux transport simulations. We also study the physical mechanisms behind the unusually long minimum of

  15. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yu-Feng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models, including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy and the dispersive long wave (DLW) hierarchy as well as the TB hierarchy, are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. Especially, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and tn-constrained flows whose adjoint representations and the Lax pairs are given.

  16. Constraining interacting dark energy models with latest cosmological observations

    Science.gov (United States)

    Xia, Dong-Mei; Wang, Sai

    2016-11-01

    The local measurement of H0 is in tension with the prediction of the Λ cold dark matter model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations on the cosmic microwave background, the baryon acoustic oscillation, large-scale structure, supernovae, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for the interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not fully resolved.

  17. Constraining interacting dark energy models with latest cosmological observations

    CERN Document Server

    Xia, Dong-Mei

    2016-01-01

    The local measurement of $H_0$ is in tension with the prediction of the $\\Lambda$CDM model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations on CMB, BAO, LSS, SNe, $H(z)$ and $H_0$ to constrain several interacting dark energy models. Our results show no significant indications for the interaction between dark energy and dark matter. The $H_0$ tension can be moderately alleviated, but not fully resolved.

  18. Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling

    CERN Document Server

    Yaghoobi, Mehrdad; Gribonval, Remi; Davies, Mike E

    2012-01-01

    We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no final answer here for which constraint is the most relevant constraint, we investigate some conventional constraints in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical lear...

  19. The Distance Field Model and Distance Constrained MAP Adaptation Algorithm

    Institute of Scientific and Technical Information of China (English)

    YU Peng; WANG Zuoying

    2003-01-01

    Spatial structure information, i.e., the relative position information of phonetic states in the feature space, has yet to be carefully researched. In this paper, a new model named "Distance Field" is proposed to describe the spatial structure information. Based on this model, a modified MAP adaptation algorithm named distance constrained maximum a posteriori (DCMAP) is introduced. The distance field model gives a large penalty when the spatial structure is destroyed. As a result, DCMAP preserves the spatial structure information in the adaptation process. Experiments show the distance field model improves the performance of MAP adaptation. Further results show DCMAP has strong cross-state estimation ability, which is used to train a well-performed speaker-dependent model by data from only part of pho…

  20. Computational Data Modeling for Network-Constrained Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.

    2003-01-01

    Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user’s geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph…

  1. Neuroticism and conscientiousness respectively constrain and facilitate short-term plasticity within the working memory neural network.

    Science.gov (United States)

    Dima, Danai; Friston, Karl J; Stephan, Klaas E; Frangou, Sophia

    2015-10-01

    Individual differences in cognitive efficiency, particularly in relation to working memory (WM), have been associated both with personality dimensions that reflect enduring regularities in brain configuration, and with short-term neural plasticity, that reflects task-related changes in brain connectivity. To elucidate the relationship of these two divergent mechanisms, we tested the hypothesis that personality dimensions, which reflect enduring aspects of brain configuration, inform about the neurobiological framework within which short-term, task-related plasticity, as measured by effective connectivity, can be facilitated or constrained. As WM consistently engages the dorsolateral prefrontal (DLPFC), parietal (PAR), and anterior cingulate cortex (ACC), we specified a WM network model with bidirectional, ipsilateral, and contralateral connections between these regions from a functional magnetic resonance imaging dataset obtained from 40 healthy adults while performing the 3-back WM task. Task-related effective connectivity changes within this network were estimated using Dynamic Causal Modelling. Personality was evaluated along the major dimensions of Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. Only two dimensions were relevant to task-dependent effective connectivity. Neuroticism and Conscientiousness respectively constrained and facilitated neuroplastic responses within the WM network. These results suggest individual differences in cognitive efficiency arise from the interplay between enduring and short-term plasticity in brain configuration.

  2. A spatially constrained generative model and an EM algorithm for image segmentation.

    Science.gov (United States)

    Diplaros, Aristeidis; Vlassis, Nikos; Gevers, Theo

    2007-05-01

    In this paper, we present a novel spatially constrained generative model and an expectation-maximization (EM) algorithm for model-based image segmentation. The generative model assumes that the unobserved class labels of neighboring pixels in the image are generated by prior distributions with similar parameters, where similarity is defined by entropic quantities relating to the neighboring priors. In order to estimate model parameters from observations, we derive a spatially constrained EM algorithm that iteratively maximizes a lower bound on the data log-likelihood, where the penalty term is data-dependent. Our algorithm is very easy to implement and is similar to the standard EM algorithm for Gaussian mixtures, with the main difference being that the label posteriors are "smoothed" over pixels between each E- and M-step by a standard image filter. Experiments on synthetic and real images show that our algorithm achieves competitive segmentation results compared to other Markov-based methods, and is in general faster.
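
    Following the description above, a minimal toy implementation (assuming numpy and scipy; a two-component mixture on a synthetic image with plain Gaussian smoothing of the posteriors, not the paper's entropic-prior formulation) looks like this:

```python
# Sketch of EM for a 2-component Gaussian mixture over pixel intensities,
# with the label posteriors smoothed by an image filter between the E- and
# M-steps, as the abstract describes.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.3, (64, 64))
img[16:48, 16:48] += 1.0                      # bright square on dark background

mu = np.array([0.0, 1.0]); var = np.array([0.1, 0.1]); pi = np.array([0.5, 0.5])
for _ in range(20):
    # E-step: per-pixel class posteriors under the current Gaussians
    lik = np.stack([pi[k] / np.sqrt(2 * np.pi * var[k])
                    * np.exp(-(img - mu[k]) ** 2 / (2 * var[k])) for k in range(2)])
    post = lik / lik.sum(axis=0)
    # Spatial constraint: smooth each posterior map, then renormalise
    post = np.stack([gaussian_filter(post[k], sigma=1.5) for k in range(2)])
    post /= post.sum(axis=0)
    # M-step: weighted updates of the mixture parameters
    for k in range(2):
        w = post[k]
        mu[k] = (w * img).sum() / w.sum()
        var[k] = (w * (img - mu[k]) ** 2).sum() / w.sum()
        pi[k] = w.mean()

labels = post.argmax(axis=0)                  # smoothed segmentation
```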

  3. Constraining Galactic Magnetic Field Models with Starlight Polarimetry

    CERN Document Server

    Pavel, Michael D

    2011-01-01

    This paper provides testable predictions about starlight polarizations to constrain the geometry of the Galactic magnetic field, in particular the nature of the poloidal component. Galactic dynamo simulations and Galactic dust distributions from the literature are combined with a Stokes radiative transfer model to predict the observed polarizations and position angles of near-infrared starlight, assuming the light is polarized by aligned anisotropic dust grains. S0 and A0 magnetic field models and the role of magnetic pitch angle are all examined. All-sky predictions are made, and particular directions are identified as providing diagnostic power for discriminating among the models. Cumulative distribution functions of the normalized degree of polarization and plots of polarization position angle vs. Galactic latitude are proposed as tools for testing models against observations.

  4. Constrained motion model of mobile robots and its applications.

    Science.gov (United States)

    Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong

    2009-06-01

    Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
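
    The reduction to k + 1 regions can be illustrated with a toy discrete analogue (the paper's motion model is continuous and constrained; the grid expansion below only conveys the idea of growing reachable sets):

```python
# Toy analogue of k-step reachable regions: cells a robot can occupy after
# at most i steps, i = 0..k, computed by breadth-first expansion.
def reachable_regions(start, k, blocked=frozenset()):
    """Return [R_0, ..., R_k], where R_i is the set of cells reachable in <= i steps."""
    regions = [{start}]
    for _ in range(k):
        frontier = set()
        for x, y in regions[-1]:
            # Stay put or move to a 4-neighbour, avoiding blocked cells
            for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                cell = (x + dx, y + dy)
                if cell not in blocked:
                    frontier.add(cell)
        regions.append(frontier)
    return regions

print([len(r) for r in reachable_regions((0, 0), k=3)])  # [1, 5, 13, 25]
```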

  5. Using Coronal Hole Maps to Constrain MHD Models

    Science.gov (United States)

    Caplan, Ronald M.; Downs, Cooper; Linker, Jon A.; Mikic, Zoran

    2017-08-01

    In this presentation, we explore the use of coronal hole maps (CHMs) as a constraint for thermodynamic MHD models of the solar corona. Using our EUV2CHM software suite (predsci.com/chd), we construct CHMs from SDO/AIA 193Å and STEREO-A/EUVI 195Å images for multiple Carrington rotations leading up to the August 21st, 2017 total solar eclipse. We then construct synoptic CHMs from synthetic EUV images generated from global thermodynamic MHD simulations of the corona for each rotation. Comparisons of apparent coronal hole boundaries and estimates of the net open flux are used to benchmark and constrain our MHD model leading up to the eclipse. Specifically, the comparisons are used to find optimal parameterizations of our wave turbulence dissipation (WTD) coronal heating model.

  6. Constrained model predictive control, state estimation and coordination

    Science.gov (United States)

    Yan, Jun

    In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to
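
    As a concrete illustration of moving a state constraint out of the certainty-equivalence setting, here is a minimal sketch (assuming the cvxpy package; the scalar system, horizon, and numbers are invented): the probabilistic constraint is reformulated as a deterministic one, tightened by a Gaussian-quantile back-off on the state-estimate uncertainty, which is one common way to pose such chance constraints.

```python
# Minimal chance-constrained MPC step: the constraint x <= x_max is tightened
# by a back-off proportional to the state-estimate standard deviation, so the
# true state satisfies it with high probability despite estimation error.
import cvxpy as cp

a, b = 1.0, 1.0                 # scalar dynamics x+ = a x + b u
T = 10                          # horizon
x_max = 1.0
sigma_est = 0.1                 # std. dev. of the state estimate
beta = 1.645                    # ~95% one-sided Gaussian quantile

x = cp.Variable(T + 1)
u = cp.Variable(T)
x0_hat = 0.8                    # current (imperfect) state estimate

cons = [x[0] == x0_hat]
for t in range(T):
    cons += [x[t + 1] == a * x[t] + b * u[t],
             x[t + 1] <= x_max - beta * sigma_est,   # tightened constraint
             cp.abs(u[t]) <= 0.5]
prob = cp.Problem(cp.Minimize(cp.sum_squares(x) + 0.1 * cp.sum_squares(u)), cons)
prob.solve()
print("first input:", u.value[0])
```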

  7. Modeling Atmospheric CO2 Processes to Constrain the Missing Sink

    Science.gov (United States)

    Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.

    2005-01-01

    We report on a NASA-supported modeling effort to reduce uncertainty in carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. The major components of this effort are: 1) Continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real-time data in both forward and inverse modes; 2) An advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) Improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) Evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability; and 5) Testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and use of the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon-climate interactions.

  8. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A corresponding sequence of nested robustly positively invariant sets, either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is determined by a linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
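
    The on-line logic is compact enough to sketch. In the snippet below, the gains, ellipsoid shape matrices, and blending rule are illustrative placeholders, not a worked two-tank design: the controller finds the smallest stored ellipsoid containing the state and linearly blends the two neighbouring gains.

```python
# Sketch of the on-line gain-interpolation step. Off-line one stores gains
# K[i] and shape matrices P[i] of nested ellipsoids {x : x' P[i] x <= 1},
# ordered outermost to innermost.
import numpy as np

P = [np.eye(2) * c for c in (0.25, 1.0, 4.0)]   # larger c = smaller set
K = [np.array([[-0.5, -0.1]]),
     np.array([[-0.8, -0.2]]),
     np.array([[-1.2, -0.4]])]

def control(x):
    levels = [float(x @ Pi @ x) for Pi in P]     # value <= 1 means x is in set i
    inside = [i for i, v in enumerate(levels) if v <= 1.0]
    if not inside:
        raise ValueError("state outside the outermost invariant set")
    i = max(inside)                              # smallest set containing x
    if i == len(P) - 1:
        return K[i] @ x                          # innermost set: fixed gain
    # One simple continuous blending rule between the two neighbouring gains
    lam = (1.0 - levels[i + 1]) / (levels[i] - levels[i + 1])
    lam = float(np.clip(lam, 0.0, 1.0))
    return (lam * K[i] + (1.0 - lam) * K[i + 1]) @ x

print(control(np.array([1.5, 0.0])))
```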

  9. Modelling Thermal Emission to Constrain Io's Largest Eruptions

    Science.gov (United States)

    Davies, A. G.; De Pater, I.; de Kleer, K.; Head, J. W., III; Wilson, L.

    2016-12-01

    Massive, voluminous, low-silica content basalt lava flows played a major role in shaping the surfaces of the terrestrial planets and the Moon [1], but the mechanisms of eruption, including effusion rate profiles and flow regime, are often obscure. However, eruptions of large volumes of lava and the emplacement of thick, areally extensive silicate lava flows are extant on the volcanic jovian moon Io [2], thus providing a template for understanding how these processes behaved elsewhere in the Solar System. We have modelled data of the largest of these eruptions to constrain eruption processes from the evolution of the wavelength variation of the resulting thermal emission [3]. We continue to refine our models to further constrain eruption parameters. We focus on large "outburst" eruptions, large lava fountains that feed lava flows [4], which have been directly observed on Io from the Galileo spacecraft [5, 6]. Outburst data continue to be collected by large ground-based telescopes [7, 8]. These data have been fitted with a sophisticated thermal emission model to derive eruption parameters such as areal coverage and effusion rates. We have created a number of tools for investigating and constraining effusion rate for Io's largest eruptions. It remains for all of the components to be integrated into a single model with rheological properties dependent on flow regime and the effects of heat loss. The crucial advance on previous estimates of lava flow emplacement on Io [e.g., 5] is that, by keeping track of the temperature distribution on the surface of the lava flows (a function of flow regime and varying effusion rate), the integrated thermal emission spectrum can be synthesized. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA. We thank the NASA OPR Program (NNN13D466T) and NSF (Grant AST-1313485) for support. Refs: [1] Wilson, L. and J. W. Head (2016), Icarus, doi:10.1016/j.icarus.2015.12.039. [2
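
    The last step described above, synthesising an integrated spectrum from a surface temperature distribution, amounts to an area-weighted sum of Planck functions. A minimal sketch with invented areas and temperatures (not fitted Io data):

```python
# Disc-integrated thermal emission from a lava surface: each surface element
# radiates as a blackbody, and the total spectrum is the area-weighted sum
# of Planck functions over the temperature distribution.
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength, T):
    """Spectral radiance B_lambda(T) in W m-3 sr-1."""
    return (2 * h * c**2 / wavelength**5 /
            np.expm1(h * c / (wavelength * kB * T)))

# Hypothetical flow surface: small hot vents, larger cooling crust
areas = np.array([1e6, 5e7, 5e8])         # m^2
temps = np.array([1400.0, 700.0, 400.0])  # K

wl = np.linspace(1e-6, 10e-6, 200)        # 1-10 micron
spectrum = sum(A * planck(wl, T) for A, T in zip(areas, temps))
print(f"emission peaks near {wl[np.argmax(spectrum)] * 1e6:.1f} microns")
```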

  10. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints and revisit the hard square constraint given by forbidding neighboring 1s and provide novel results for the constraint that no uniform 2 × 2 squares contain all 0s or all 1s. The maximum values of the entropy for the constraints are estimated and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy…
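
    The entropy of such 2-D constraints is commonly estimated with strip transfer matrices, and the "no uniform 2 × 2 squares" constraint is small enough to try directly. The sketch below is a generic strip-based estimate, not the paper's PRF construction: each column of an n-row strip is encoded as a bit pattern, and log2 of the largest transfer-matrix eigenvalue, divided by n, approximates bits/symbol.

```python
# Strip transfer-matrix estimate: two columns may be adjacent iff no 2x2
# block formed between them is all-0s or all-1s.
import numpy as np

n = 8
states = range(1 << n)

def bit(s, i):
    return (s >> i) & 1

def compatible(u, v):
    for i in range(n - 1):
        block = {bit(u, i), bit(u, i + 1), bit(v, i), bit(v, i + 1)}
        if len(block) == 1:          # all four cells equal: uniform 2x2 square
            return False
    return True

T = np.array([[compatible(u, v) for v in states] for u in states], dtype=float)
lam = np.max(np.abs(np.linalg.eigvals(T)))
print(f"entropy estimate: {np.log2(lam) / n:.3f} bits/symbol (n={n})")
```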

  11. Computational Data Modeling for Network-Constrained Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.

    2003-01-01

    Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user’s geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph representation. These capture aspects of the problem domain that are required in order to support the querying that underlies the envisioned location-based services.
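
    As a sketch of the graph-representation side of such a framework (class and field names below are our own, not the paper's data structures), a network-constrained position can be given as an edge plus an offset along it, which is the key departure from free two-dimensional coordinates:

```python
# Minimal road-network data model: edges with lengths, and objects located
# by (edge, offset) rather than free 2-D coordinates.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    start_node: int
    end_node: int
    length_m: float

@dataclass
class NetworkPosition:
    edge: Edge
    offset_m: float          # distance travelled from edge.start_node

    def clamp(self):
        # A network-constrained object can never leave its edge
        self.offset_m = max(0.0, min(self.offset_m, self.edge.length_m))

e = Edge(start_node=1, end_node=2, length_m=500.0)
user = NetworkPosition(edge=e, offset_m=120.0)
user.offset_m += 600.0       # movement past the edge end
user.clamp()
print(user.offset_m)         # 500.0: constrained to the network
```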

  12. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full-time and part-time doctors and by optimally allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department). In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of the hired full-time and part-time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the decision of the appointment scheduling to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. Then we use the chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single-level programming, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  13. Constraining a halo model for cosmological neutral hydrogen

    CERN Document Server

    Padmanabhan, Hamsa

    2016-01-01

    We describe a combined halo model to constrain the distribution of neutral hydrogen (HI) in the post-reionization universe. We combine constraints from the various probes of HI at different redshifts: the low-redshift 21-cm emission line surveys, intensity mapping experiments at intermediate redshifts, and the Damped Lyman-Alpha (DLA) observations at higher redshifts. We use a Markov Chain Monte Carlo (MCMC) approach to combine the observations and place constraints on the free parameters in the model. Our best-fit model involves a relation between neutral hydrogen mass $M_{\\rm HI}$ and halo mass $M$ with a non-unit slope, and an upper and a lower cutoff. We find that the model fits all the observables but leads to an underprediction of the bias parameter of DLAs at $z \\sim 2.3$. We also find indications of a possible tension between the HI column density distribution and the mass function of HI-selected galaxies at $z\\sim 0$. We provide the central values of the parameters of the best-fit model so derived. W...

  14. Data Constrained Coronal Mass Ejections in A Global Magnetohydrodynamics Model

    CERN Document Server

    Jin, M; van der Holst, B; Sokolov, I; Toth, G; Mullinix, R E; Taktakishvili, A; Chulaki, A; Gombosi, T I

    2016-01-01

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator Gibson-Low (EEGGL), we present a method to derive Gibson-Low (GL) flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. Test results with CMEs launched using different Carrington rotation magnetograms are shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, wh...

  15. Constraining a halo model for cosmological neutral hydrogen

    Science.gov (United States)

    Padmanabhan, Hamsa; Refregier, Alexandre

    2017-02-01

    We describe a combined halo model to constrain the distribution of neutral hydrogen (H I) in the post-reionization universe. We combine constraints from the various probes of H I at different redshifts: the low-redshift 21-cm emission line surveys, intensity mapping experiments at intermediate redshifts, and the Damped Lyman-Alpha (DLA) observations at higher redshifts. We use a Markov Chain Monte Carlo approach to combine the observations and place constraints on the free parameters in the model. Our best-fitting model involves a relation between neutral hydrogen mass M_{H I} and halo mass M with a non-unit slope, and an upper and a lower cutoff. We find that the model fits all the observables but leads to an underprediction of the bias parameter of DLAs at z ˜ 2.3. We also find indications of a possible tension between the H I column density distribution and the mass function of H I-selected galaxies at z ˜ 0. We provide the central values of the parameters of the best-fitting model so derived. We also provide a fitting form for the derived evolution of the concentration parameter of H I in dark matter haloes, and discuss the implications for the redshift evolution of the H I-halo mass relation.
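
    The kind of relation described above has a simple closed form that is easy to sketch. The toy function below uses our own illustrative parameter values, not the paper's best-fitting ones, and only shows the generic shape: a power law in halo mass with a non-unit slope, suppressed below a lower cutoff and above an upper cutoff.

```python
# Illustrative HI-halo mass relation with non-unit slope and two cutoffs.
import numpy as np

def m_hi(M, alpha=0.1, slope=0.8, M_min=1e10, M_max=1e14):
    """HI mass as a function of halo mass M (solar masses); toy parameters."""
    return (alpha * M * (M / 1e11) ** (slope - 1.0)
            * np.exp(-M_min / M) * np.exp(-M / M_max))

M = np.logspace(9, 15, 7)
for Mi, Mh in zip(M, m_hi(M)):
    print(f"M = {Mi:.1e}  ->  M_HI = {Mh:.2e}")
```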

  16. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties, etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Upscaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of rejection sampling and the Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.

  17. Constraining Water Quality Models With Electrical Resistivity Tomography (ERT)

    Science.gov (United States)

    Bentley, L. R.; Gharibi, M.; Mrklas, O.; Lunn, S. D.

    2001-12-01

    Water quality models are difficult to constrain with piezometer data alone because the data are spatially sparse. Since the electrical conductivity (EC) of water is often correlated with water quality, geophysical measurements of electrical conductivity may provide densely sampled secondary data for constraining water quality models. We present a quantitative interpretation protocol for interpreting EC derived from surface ERT results. A standard temperature is selected that is in the range of the in situ field temperatures, and laboratory measurements establish a functional relationship between water EC and temperature. Total meq/l of charge are often strongly correlated with water EC at the standard temperature. Laboratory data is used to develop a correlation model between indicator parameters or water chemistry evolution and total meq/l of charge. Since the solid phase may contain a conductive clay fraction, a site specific calibrated Waxman-Smits rock physics model is used to estimate groundwater EC from bulk EC derived from ERT inversions. The groundwater EC at in situ temperature is converted to EC at the standard temperature, and the total meq/l is estimated using the laboratory-established correlation. The estimated meq/l can be used as soft information to map distribution of water quality or to estimate changes to water chemistry with time. We apply the analysis to a decommissioned sour gas plant undergoing remediation. Background bulk EC is high (50 to 100 mS/m) due to the clay content of tills. The highest values of groundwater EC are mainly due to acetic acid, which is a degradation product of amines and glycols. Acetic acid degrades readily under aerobic conditions, lowering the EC of pore waters. The calibrated Waxman-Smits model predicts that a reduction of groundwater EC from 1600 mS/m to 800mS/m will result in a reduction of bulk EC from 150 mS/m to 110 mS/m. Groundwater EC values both increase and decrease with time due to site heterogeneity, and

  18. Future sea level rise constrained by observations and long-term commitment.

    Science.gov (United States)

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-03-08

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28-56 cm, 37-77 cm, and 57-131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The "constrained extrapolation" approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections.

  19. Multi-asset Black-Scholes model as a variable second class constrained dynamical system

    Science.gov (United States)

    Bustamante, M.; Contreras, M.

    2016-09-01

    In this paper, we study the multi-asset Black-Scholes model from a structural point of view. For this, we interpret the multi-asset Black-Scholes equation as a multidimensional one-particle Schrödinger equation. The analysis of the classical Hamiltonian and Lagrangian mechanics associated with this quantum model implies that, in this system, the canonical momenta cannot always be written in terms of the velocities. This feature is a typical characteristic of the constrained systems that appear in high-energy physics. To study this model in the proper form, one must apply Dirac's method for constrained systems. The results of Dirac's analysis indicate that in the correlation parameter space of the multi-asset model, there exists a surface (called the Kummer surface ΣK, where the determinant of the correlation matrix is null) on which the number of constraints can vary. We study in detail the cases with N = 2 and N = 3 assets. For these cases, we calculate the propagator of the multi-asset Black-Scholes equation and show that inside the Kummer ΣK surface the propagator is well defined, but outside ΣK the propagator diverges and the option price is not well defined. On ΣK the propagator is obtained as a constrained path integral, and its form depends on the region of the Kummer surface in which the correlation parameters lie. Thus, the multi-asset Black-Scholes model is an example of a variable constrained dynamical system, a new and beautiful property that had not been previously observed.
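
    For the N = 3 case mentioned above, the constraint surface is easy to probe numerically: the determinant of the 3 × 3 correlation matrix, det C = 1 + 2ρ12ρ13ρ23 − ρ12² − ρ13² − ρ23², vanishes on the Kummer surface. A quick check (our own illustration, not taken from the paper):

```python
# Determinant of the 3x3 correlation matrix: zero on the Kummer surface,
# where the propagator must be treated as a constrained path integral.
import numpy as np

def corr_det(r12, r13, r23):
    C = np.array([[1.0, r12, r13],
                  [r12, 1.0, r23],
                  [r13, r23, 1.0]])
    return np.linalg.det(C)

print(corr_det(0.5, 0.5, 0.5))   # > 0: interior point, propagator regular
print(corr_det(1.0, 1.0, 1.0))   # = 0: on the Kummer surface
print(corr_det(0.9, 0.9, -0.9))  # < 0: not a valid correlation matrix
```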

  20. Siberian Arctic black carbon sources constrained by model and observation

    Science.gov (United States)

    Winiger, Patrik; Andersson, August; Eckhardt, Sabine; Stohl, Andreas; Semiletov, Igor P.; Dudarev, Oleg V.; Charkin, Alexander; Shakhova, Natalia; Klimont, Zbigniew; Heyes, Chris; Gustafsson, Örjan

    2017-02-01

    Black carbon (BC) in haze and deposited on snow and ice can have strong effects on the radiative balance of the Arctic. There is a geographic bias in Arctic BC studies toward the Atlantic sector, with a lack of observational constraints for the extensive Russian Siberian Arctic, spanning nearly half of the circum-Arctic. Here, 2 y of observations at Tiksi (East Siberian Arctic) establish a strong seasonality in both BC concentrations (8 ng m-3 to 302 ng m-3) and dual-isotope-constrained sources (19 to 73% contribution from biomass burning). Comparisons between observations and a dispersion model, coupled to an anthropogenic emissions inventory and a fire emissions inventory, give mixed results. In the European Arctic, this model has proven to simulate BC concentrations and source contributions well. However, the model is less successful in reproducing BC concentrations and sources for the Russian Arctic. Using a Bayesian approach, we show that, in contrast to earlier studies, contributions from gas flaring (6%), power plants (9%), and open fires (12%) are relatively small, with the major sources instead being domestic (35%) and transport (38%). The observation-based evaluation of reported emissions identifies errors in spatial allocation of BC sources in the inventory and highlights the importance of improving emission distribution and source attribution, to develop reliable mitigation strategies for efficient reduction of BC impact on the Russian Arctic, one of the fastest-warming regions on Earth.

  1. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving...

  2. Dark matter in a constrained E 6 inspired SUSY model

    Science.gov (United States)

    Athron, P.; Harries, D.; Nevzorov, R.; Williams, A. G.

    2016-12-01

    We investigate dark matter in a constrained E 6 inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of E 6 leads to an additional U(1) N symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time we also show that there is a huge volume of parameter space, with a predominantly Higgsino dark matter candidate that explains all the relic abundance, that will be discoverable with XENON1T. Finally we demonstrate that for the E 6 inspired model the exotic leptoquarks could still be light and within range of future LHC searches.

  3. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic model-based evidence for the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  4. Stochastic volatility models at ρ=±1 as second class constrained Hamiltonian systems

    Science.gov (United States)

    Contreras G., Mauricio

    2014-07-01

    The stochastic volatility models used in the financial world are characterized, in the continuous-time case, by a set of two coupled stochastic differential equations for the underlying asset price S and volatility σ. In addition, the correlations of the two Brownian movements that drive the stochastic dynamics are measured by the correlation parameter ρ (-1≤ρ≤1). This stochastic system is equivalent to the Fokker-Planck equation for the transition probability density of the random variables S and σ. Solutions for the transition probability density of the Heston stochastic volatility model (Heston, 1993) were explored in Dragulescu and Yakovenko (2002), where the fundamental quantities such as the transition density itself depend on ρ in such a manner that they are divergent for the extreme limit ρ=±1. The same divergent behavior appears in Hagan et al. (2002), where the probability density of the SABR model was analyzed. In an option pricing context, the propagator of the bi-dimensional Black-Scholes equation was obtained in Lemmens et al. (2008) in terms of path integrals, and in this case, the propagator diverges again for the extreme values ρ=±1. This paper shows that these similar divergent behaviors are due to a universal property of the stochastic volatility models in the continuum: all of them are second class constrained systems for the most extreme correlated limit ρ=±1. In this way, the stochastic dynamics of the ρ=±1 cases are different from those of the -1<ρ<1 case. The analysis of the underlying classical mechanics of the quantum model implies that stochastic volatility models at ρ=±1 correspond to a constrained system. To study the dynamics in an appropriate form, Dirac's method for constrained systems (Dirac, 1958, 1967) must be employed, and Dirac's analysis reveals that the constraints are second class. In order to obtain the transition probability density or the option price correctly, one must evaluate the propagator as a constrained Hamiltonian path-integral (Henneaux and

  5. Proton currents constrain structural models of voltage sensor activation

    Science.gov (United States)

    Randolph, Aaron L; Mokrab, Younes; Bennett, Ashley L; Sansom, Mark SP; Ramsey, Ian Scott

    2016-01-01

    The Hv1 proton channel is evidently unique among voltage sensor domain proteins in mediating an intrinsic ‘aqueous’ H+ conductance (GAQ). Mutation of a highly conserved ‘gating charge’ residue in the S4 helix (R1H) confers a resting-state H+ ‘shuttle’ conductance (GSH) in VGCs and Ci VSP, and we now report that R1H is sufficient to reconstitute GSH in Hv1 without abrogating GAQ. Second-site mutations in S3 (D185A/H) and S4 (N4R) experimentally separate GSH and GAQ gating, which report thermodynamically distinct initial and final steps, respectively, in the Hv1 activation pathway. The effects of Hv1 mutations on GSH and GAQ are used to constrain the positions of key side chains in resting- and activated-state VS model structures, providing new insights into the structural basis of VS activation and H+ transfer mechanisms in Hv1. DOI: http://dx.doi.org/10.7554/eLife.18017.001 PMID:27572256

  6. Dark Matter in a Constrained $E_6$ Inspired SUSY Model

    CERN Document Server

    Athron, P; Nevzorov, R; Williams, A G

    2016-01-01

    We investigate dark matter in a constrained $E_6$ inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of $E_6$ leads to an additional $U(1)_N$ symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time we also show that there is a huge volu...

  7. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
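
    The numerical point, that fixed point iteration cannot land on solutions near which it is unstable while a shooting-style root search can, is easy to reproduce on a toy scalar equation (our own illustration, unrelated to the actual renormalisation group equations):

```python
# Fixed-point iteration on x = g(x) only converges to stable fixed points
# (|g'(x*)| < 1); an unstable solution is invisible to it, while a bracketing
# root-finder applied to g(x) - x = 0 recovers it.
from scipy.optimize import brentq

g = lambda x: x ** 2            # fixed points: x* = 0 (stable), x* = 1 (unstable)

x = 0.99                        # start arbitrarily close to the unstable solution
for _ in range(100):
    x = g(x)
print("fixed-point iteration:", x)           # -> 0.0, the unstable root is lost

root = brentq(lambda x: g(x) - x, 0.5, 1.5)  # sign change brackets the root
print("root finding:", root)                 # -> 1.0
```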

  8. The affine constrained GNSS attitude model and its multivariate integer least-squares solution

    NARCIS (Netherlands)

    Teunissen, P.J.G.

    2012-01-01

    A new global navigation satellite system (GNSS) carrier-phase attitude model and its solution are introduced in this contribution. This affine-constrained GNSS attitude model has the advantage that it avoids the computational complexity of the orthonormality-constrained GNSS attitude model, while it

  9. Constraining hybrid inflation models with WMAP three-year results

    CERN Document Server

    Cardoso, A

    2006-01-01

    We reconsider the original model of quadratic hybrid inflation in light of the WMAP three-year results and study the possibility of obtaining a spectral index of primordial density perturbations, $n_s$, smaller than one from this model. The original hybrid inflation model naturally predicts $n_s\\geq1$ in the false vacuum dominated regime but it is also possible to have $n_s<1$ when the quadratic term dominates. We therefore investigate whether there is also an intermediate regime compatible with the latest constraints, where the scalar field value during the last 50 e-folds of inflation is less than the Planck scale.

  10. BIEMS : A Fortran 90 Program for Calculating Bayes Factors for Inequality and Equality Constrained Models

    Directory of Open Access Journals (Sweden)

    Joris Mulder

    2012-01-01

    This paper discusses a Fortran 90 program referred to as BIEMS (Bayesian inequality and equality constrained model selection) that can be used for calculating Bayes factors of multivariate normal linear models with equality and/or inequality constraints between the model parameters versus a model containing no constraints, which is referred to as the unconstrained model. The prior that is used under the unconstrained model is the conjugate expected-constrained posterior prior, and the prior under the constrained model is proportional to the unconstrained prior truncated in the constrained space. This results in Bayes factors that appropriately balance between model fit and complexity for a broad class of constrained models. When the set of equality and/or inequality constraints in the model represents a hypothesis that applied researchers have in, for instance, (M)AN(C)OVA, (multivariate) regression, or repeated measurements, the obtained Bayes factor can be used to determine how much evidence is provided by the data in favor of the hypothesis in comparison to the unconstrained model. If several hypotheses are under investigation, the Bayes factors between the constrained models can be calculated using the obtained Bayes factors from BIEMS. Furthermore, posterior model probabilities of constrained models are provided, which allows the user to compare the models directly with each other.
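
    For purely inequality-constrained hypotheses, the Bayes factor against the unconstrained (encompassing) model reduces to a ratio of posterior to prior mass in the constrained region, which is straightforward to estimate by Monte Carlo. A hedged sketch follows; the normal prior and posterior below are stand-ins, not BIEMS's expected-constrained posterior prior.

```python
# BF(constrained vs unconstrained) = posterior mass of the constrained
# region divided by its prior mass, estimated from draws.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Unconstrained prior and posterior draws for two group means (illustrative)
prior = rng.normal(0.0, 10.0, size=(n, 2))
post = rng.multivariate_normal([0.5, 0.0], [[0.01, 0.0], [0.0, 0.01]], size=n)

constraint = lambda s: s[:, 0] > s[:, 1]       # hypothesis: mu1 > mu2
bf = constraint(post).mean() / constraint(prior).mean()
print(f"Bayes factor (constrained vs unconstrained): {bf:.2f}")
```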

  11. Inference with Constrained Hidden Markov Models in PRISM

    CERN Document Server

    Christiansen, Henning; Lassen, Ole Torp; Petit, Matthieu

    2010-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints and present constraint solving techniques for efficient inference. Defining HMMs with side-constraints in Constraint Logic Programming has advantages in terms of more compact expression and pruning opportunities during inference. We present a PRISM-based framework for extending HMMs with side-constraints and show how well-known constraints such as cardinality and all different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.
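
    One standard way to realise such side-constraints, sketched here in Python rather than PRISM's Constraint Logic Programming, is to run Viterbi over a state space augmented with the constraint's status. The toy below uses our own two-state HMM and a "state 1 must be visited" cardinality-style constraint, so only constraint-satisfying paths survive:

```python
# Constrained Viterbi: dynamic program over (hmm_state, constraint_flag),
# where flag = 1 once state 1 has been visited; only flag = 1 paths count.
import numpy as np

A = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))   # transition probabilities
B = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))   # emission probabilities
pi = np.log(np.array([0.99, 0.01]))
obs = [0, 1, 1, 0, 1]

NEG = -np.inf
delta = np.full((2, 2), NEG)                     # delta[state, flag]
for s in range(2):
    delta[s, 1 if s == 1 else 0] = pi[s] + B[s, obs[0]]

for o in obs[1:]:
    new = np.full((2, 2), NEG)
    for s in range(2):
        for f in range(2):
            if delta[s, f] == NEG:
                continue
            for s2 in range(2):
                f2 = 1 if (f == 1 or s2 == 1) else 0
                cand = delta[s, f] + A[s, s2] + B[s2, o]
                new[s2, f2] = max(new[s2, f2], cand)
    delta = new

print("best constrained log-prob:", delta[:, 1].max())
```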

  12. The dynamics of shifting cultivation captured in an extended Constrained Cellular Automata land use model

    NARCIS (Netherlands)

    Wickramasuriya, R.C.; Bregt, A.K.; Delden, van H.; Hagen-Zanker, A.

    2009-01-01

    This paper presents an extension to the Constrained Cellular Automata (CCA) land use model of White et al. [White, R., Engelen, G., Uljee, I., 1997. The use of constrained cellular automata for high-resolution modelling of urban land-use dynamics. Environment and Planning B: Planning and Design

  13. Constraining RS Models by Future Flavor and Collider Measurements: A Snowmass Whitepaper

    Energy Technology Data Exchange (ETDEWEB)

    Agashe, Kaustubh [Maryland U.; Bauer, Martin [Chicago U., EFI; Goertz, Florian [Zurich, ETH; Lee, Seung J. [Korea Inst. Advanced Study, Seoul; Vecchi, Luca [Maryland U.; Wang, Lian-Tao [Chicago U., EFI; Yu, Felix [Fermilab

    2013-10-03

    Randall-Sundrum models are models of quark flavor, because they explain the hierarchies in the quark masses and mixings in terms of order one localization parameters of extra dimensional wavefunctions. The same small numbers which generate the light quark masses suppress contributions to flavor violating tree level amplitudes. In this note we update universal constraints from electroweak precision parameters and demonstrate how future measurements of flavor violation in ultra rare decay channels of Kaons and B mesons will constrain the parameter space of this type of models. We show how collider signatures are correlated with these flavor measurements and compute projected limits for direct searches at the 14 TeV LHC run, a 14 TeV LHC luminosity upgrade, a 33 TeV LHC energy upgrade, and a potential 100 TeV machine. We further discuss the effects of a warped model of leptons in future measurements of lepton flavor violation.

  14. BIEMS: A Fortran 90 Program for Calculating Bayes Factors for Inequality and Equality Constrained Models

    OpenAIRE

    Joris Mulder; Herbert Hoijtink; Christiaan de Leeuw

    2012-01-01

    This paper discusses a Fortran 90 program referred to as BIEMS (Bayesian inequality and equality constrained model selection) that can be used for calculating Bayes factors of multivariate normal linear models with equality and/or inequality constraints between the model parameters versus a model containing no constraints, which is referred to as the unconstrained model. The prior that is used under the unconstrained model is the conjugate expected-constrained posterior prior and the prior un...

  15. The long-term outcome of 755 consecutive constrained acetabular components in total hip arthroplasty examining the successes and failures.

    Science.gov (United States)

    Berend, Keith R; Lombardi, Adolph V; Mallory, Thomas H; Adams, Joanne B; Russell, Jackie H; Groseth, Kari L

    2005-10-01

    Constrained acetabular components can treat or prevent instability after total hip arthroplasty (THA). We examine long-term results of 755 consecutive constrained THA in 720 patients (1986-1993; 62 primary, 59 conversion, 565 revision, 60 reimplantation, and 9 total femur). Eighty-three patients (88 THAs) were lost before 10-year follow-up, leaving 639 patients (667 THAs) available for study. Dislocation occurred in 117 hips (17.5%), in 37 (28.9%) of 128 constrained for recurrent dislocation, and 46 (28.2%) of 163 with dislocation history. Other reoperations were for aseptic loosening (51, 7.6% acetabular; 28, 4.2% stem; 16, 2.4% combined), infection (40, 6.0%), periprosthetic fracture (19, 2.8%), stem breakage (2, 0.3%), cup malposition (1, 0.1%), dissociated insert (1, 0.1%), dissociated femoral head (1, 0.1%), and impingement of 1 broken (0.1%) and 4 (0.6%) dissociated constraining rings. Although constrained acetabular components prevented recurrent dislocation in 71.1%, they should be used cautiously, with a 42.1% long-term failure rate observed in this series. Dislocation was common despite constraint with previous history as a significant risk.

  16. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    CERN Document Server

    Chen, Ximing; Pan, Nana; Gong, Yungui

    2010-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.

  17. Constraining Source Terms, Regional Attenuation Models, and Site Effects (Postprint)

    Science.gov (United States)

    2012-03-22

    [Extraction residue from figures removed. Recoverable content: fitted Lg attenuation relations Q = 601 f^0.31 (WSP) and Q = 599 f^0.30 (MDF), from maps of Lg Q(1 Hz) across the regional station network.]

  18. Parameter sensitivity in satellite-gravity-constrained geothermal modelling

    Science.gov (United States)

    Pastorutti, Alberto; Braitenberg, Carla

    2017-04-01

    The use of satellite gravity data in thermal structure estimates requires identifying the factors that affect the gravity field and are related to the thermal characteristics of the lithosphere. We propose a set of forward-modelled synthetics, investigating the model response in terms of heat flow, temperature, and gravity effect at satellite altitude. The sensitivity analysis concerns the parameters involved, such as heat production, thermal conductivity, density, and their temperature dependence. We discuss the effect of the horizontal smoothing due to heat conduction, the superposition of the bulk thermal effect of near-surface processes (e.g. advection in ground-water and permeable faults, paleoclimatic effects, blanketing by sediments), and the out-of-equilibrium conditions due to tectonic transients. All of them have the potential to distort the gravity-derived estimates. We find that the temperature-conductivity relationship has a small effect, with respect to other parameter uncertainties, on the modelled temperature depth variation, surface heat flow, and thermal lithosphere thickness. We conclude that global gravity is useful for geothermal studies.

  19. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols. Some sites also implement additional protocols. There are currently 7 TEAM Sites, with plans to grow the network to 15 by June 30, 2009 and 50 TEAM Sites by the end of 2010. At each TEAM Site, data are gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data are organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System: it executes spatio-temporal queries and analytical functions performed on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types including types for observation objects (e.g. birds, butterflies and trees), sampling units, persons, roles, protocols and sites, and the relationships of these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read operations, insert operations and update operations. Following are some typical operations: the operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block
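
    The probed read operation described above can be pictured as a filter over versioned observation records. The snippet below is a hypothetical, heavily simplified rendering of such a get operation; all names and types are invented for illustration, and the actual TEAM schema is considerably richer.

    ```python
    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Observation:              # hypothetical, simplified record type
        site: str
        protocol: str               # versioned protocol identifier
        sampling_unit: str
        timestamp: datetime
        attributes: dict            # attribute values of the observed object

    def get(records: List[Observation], site: str, protocol: str,
            start: datetime, end: datetime,
            sampling_unit: Optional[str] = None) -> List[Observation]:
        """Return all records for a site/protocol collected within [start, end]."""
        return [r for r in records
                if r.site == site and r.protocol == protocol
                and start <= r.timestamp <= end
                and (sampling_unit is None or r.sampling_unit == sampling_unit)]
    ```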

  20. The added value of remote sensing products in constraining hydrological models

    Science.gov (United States)

    Hrachowitz, M.; Nijzink, R.; Savenije, H. H. G.

    2016-12-01

    A typical calibration of a hydrological model relies on the availability of discharge data, which is, however, not always present; moreover, discharge is not the largest outgoing flux in many parts of the world. At the same time, more remote sensing products are becoming available that can aid in deriving model parameters and model structures, but more traditional analytical approaches (e.g. the Budyko framework) can also still be of high value. In this research, models are constrained in a step-wise approach with different combinations of remote sensing data and/or analytical frameworks. For example, the temporal resolution can be a driving principle leading to the formulation of a set of constraints. More specifically, in a first step the Budyko framework can be used as a means to filter out solutions that cannot reproduce the long-term dynamics of the system. In the following steps, remote sensing data from GRACE (monthly resolution), NDII (16-day resolution) and LSA-SAF evaporation (daily) can lead to final parameterizations of a model. Nevertheless, the choice of these driving principles, the applied order of constraints and the strictness of the applied boundaries of the constraints will lead to varying solutions. Therefore, variations in these factors, and thus different combinations with different remote sensing products, should lead to an enhanced understanding of the strengths and weaknesses of the approaches with regard to finding optimal parameter sets for hydrological models.
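
    As a rough sketch of the first, Budyko-based filtering step: keep only those Monte Carlo parameter sets whose simulated long-term evaporative fraction E/P lies near the Budyko curve for the catchment's aridity index. The curve form (Budyko, 1974) is standard, but the tolerance and all numbers below are illustrative assumptions.

    ```python
    import numpy as np

    def budyko(aridity):
        """Budyko (1974) curve: long-term E/P as a function of PET/P."""
        return np.sqrt(aridity * np.tanh(1.0 / aridity) * (1.0 - np.exp(-aridity)))

    def keep_behavioural(param_sets, sim_evap_fraction, aridity, tol=0.05):
        """Retain parameter sets whose simulated long-term E/P lies near the curve."""
        target = budyko(aridity)
        ok = np.abs(sim_evap_fraction - target) <= tol
        return [p for p, good in zip(param_sets, ok) if good]

    # Example: catchment with PET/P = 1.8 (Budyko E/P ~ 0.87); three candidate
    # parameter sets whose simulated long-term E/P came out as shown.
    print(keep_behavioural(["set A", "set B", "set C"],
                           np.array([0.84, 0.88, 0.70]), aridity=1.8))
    ```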

  1. Top ten models constrained by b → sγ

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, J.L.

    1994-05-01

    The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some of the models.

  2. Increasing secondary and renewable material use: a chance constrained modeling approach to manage feedstock quality variation.

    Science.gov (United States)

    Olivetti, Elsa A; Gaustad, Gabrielle G; Field, Frank R; Kirchain, Randolph E

    2011-05-01

    The increased use of secondary (i.e., recycled) and renewable resources will likely be key toward achieving sustainable materials use. Unfortunately, these strategies share a common barrier to economical implementation - increased quality variation compared to their primary and synthetic counterparts. Current deterministic process-planning models overestimate the economic impact of this increased variation. This paper shows that for a range of industries from biomaterials to inorganics, managing variation through a chance-constrained (CC) model enables increased use of such variable raw materials, or heterogeneous feedstocks (hF), over conventional, deterministic models. An abstract, analytical model and a quantitative model applied to an industrial case of aluminum recycling were used to explore the limits and benefits of the CC formulation. The results indicate that the CC solution can reduce cost and increase potential hF use across a broad range of production conditions through raw materials diversification. These benefits increase where the hFs exhibit mean quality performance close to that of the more homogeneous feedstocks (often the primary and synthetic materials) or have large quality variability. In terms of operational context, the relative performance grows as intolerance for batch error increases and as the opportunity to diversify the raw material portfolio increases.
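
    The core of a chance-constrained blend formulation is to replace a hard composition limit with a probabilistic one, P(impurity of blend <= spec) >= 1 - alpha. For independent, normally distributed feedstock compositions this has a well-known deterministic equivalent using the normal quantile, sketched below; the feedstock numbers are invented and the sketch is an illustration of the CC idea, not the paper's full planning model.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Blend fractions of three feedstocks (two variable scrap streams + primary).
    x = np.array([0.5, 0.3, 0.2])

    # Mean impurity content (wt%) and standard deviation of each feedstock.
    mu = np.array([1.2, 0.9, 0.1])
    sigma = np.array([0.30, 0.20, 0.0])

    spec = 1.2          # maximum allowed impurity in the blend (wt%)
    alpha = 0.05        # allowed probability of violating the spec

    # Deterministic equivalent of P(c'x <= spec) >= 1 - alpha for independent,
    # normally distributed compositions:
    #   mu'x + z_{1-alpha} * sqrt(sum_i (sigma_i x_i)^2) <= spec
    z = norm.ppf(1.0 - alpha)
    lhs = mu @ x + z * np.sqrt(np.sum((sigma * x) ** 2))
    print(f"blend impurity at {100*(1-alpha):.0f}% confidence: {lhs:.3f} (spec {spec})")
    print("feasible" if lhs <= spec else "infeasible")
    ```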

  3. A discrete model for geometrically nonlinear transverse free constrained vibrations of beams with various end conditions

    Science.gov (United States)

    Rahmouni, A.; Beidouri, Z.; Benamar, R.

    2013-09-01

    previously developed models of geometrically nonlinear vibrations of Euler-Bernoulli continuous beams, and multi-dof system models made of N masses placed at the ends of elastic bars connected by linear spiral springs, representing the beam flexural rigidity. The validation of the new model proceeds via the analysis of the convergence conditions of the nonlinear frequencies obtained by the N-dof system, when N increases, towards those obtained in previous works using a continuous description of the beam. In addition to the above points, the models developed in the present work may constitute, in our opinion, a good illustration, from the didactic point of view, of the origin of the geometrical nonlinearity induced by large transverse vibration amplitudes of constrained continuous beams, which may appear as a Pythagorean theorem effect. The first step of the work presented here was the formulation of the problem of nonlinear vibrations of the discrete system shown in Fig. 1 in terms of the semi-analytical method, denoted SAA, developed in the early 1990s by Benamar and coauthors [3], and discussed for example in [6,7]. This method has been applied successfully to various types of geometrically nonlinear problems of structural dynamics [1-3,6-8,10-12], and the objective here was to use it in order to develop a flexible discrete nonlinear model which may be useful for presenting in further works geometrically nonlinear vibrations of real beams with discontinuities in the mass, the section, or the stiffness distributions. The purpose of the present work was restricted to developing and validating the model, via comparison of the obtained dependence of the resonance frequencies of such a system on the amplitude of vibration with the results obtained previously by continuous beam nonlinear models. In the SAA method, the dynamic system under consideration is described by the mass matrix [M], the rigidity matrix [K], and the nonlinear rigidity matrix [B], which depends on the amplitude of
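
    The amplitude dependence of the nonlinear resonance frequency that such discrete models reproduce can be illustrated with the classical first-order backbone curve of a Duffing-type oscillator. This is a generic single-mode stand-in, not a reproduction of the SAA formulation; the parameter values are arbitrary.

    ```python
    import numpy as np

    def backbone(a, omega0=1.0, eps=0.5):
        """First-order backbone curve of x'' + omega0^2 x + eps x^3 = 0:
        omega(a) ~= omega0 + 3 eps a^2 / (8 omega0)  (Lindstedt-Poincare)."""
        return omega0 + 3.0 * eps * a**2 / (8.0 * omega0)

    # Hardening behaviour: frequency rises with vibration amplitude, as for
    # geometrically nonlinear (stretching-dominated) constrained beams.
    for a in (0.1, 0.5, 1.0):
        print(f"amplitude {a:.1f}: omega/omega0 = {backbone(a):.3f}")
    ```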

  4. Are 't Hooft indices constrained in preon models with complementarity\\?

    Science.gov (United States)

    Okamoto, Yuko

    1989-03-01

    We present a counterexample to the conjecture that the 't Hooft indices for composite models satisfying complementarity are bounded in magnitude by 1. The model is based on the metacolor group SU(9)_MC with two preons in the representation 36 and two preons in the representation $\overline{126}$. We obtain the 't Hooft index 12 for this model.

  5. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

    Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.

  6. A brief survey of constrained mechanics and variational problems in terms of differential forms

    Science.gov (United States)

    Hermann, Robert

    1994-01-01

    There has been considerable interest recently in constrained mechanics and variational problems. This is in part due to applied interests (such as 'non-holonomic mechanics in robotics') and in part due to the fact that several schools of 'pure' mathematics have found that this classical subject is of importance for what they are trying to do. I have made various attempts at developing these subjects since my Lincoln Lab days of the late 1950's. In this Chapter, I will sketch a unified point of view, using Cartan's approach with differential forms. This has the advantage, from the C-O-R viewpoint being developed in this Volume, that the extension from 'smooth' to 'generalized' data is very systematic and algebraic. (I will only deal with the 'smooth' point of view in this Chapter; I will develop the 'generalized function' material at a later point.) The material presented briefly here about Variational Calculus and Constrained Mechanics can be found in more detail in my books, 'Differential Geometry and the Calculus of Variations', 'Lie Algebras and Quantum Mechanics', and 'Geometry, Physics and Systems'.

  7. Objective Bayesian Comparison of Constrained Analysis of Variance Models.

    Science.gov (United States)

    Consonni, Guido; Paroli, Roberta

    2016-10-04

    In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.

  8. A note on constrained M-estimation and its recursive analog in multivariate linear regression models

    Institute of Scientific and Technical Information of China (English)

    Rao, Calyampudi R.

    2009-01-01

    In this paper, the constrained M-estimation of the regression coefficients and scatter parameters in a general multivariate linear regression model is considered. Since the constrained M-estimation is not easy to compute, an updating recursion procedure is proposed to simplify the computation of the estimators when a new observation is obtained. We show that, under mild conditions, the recursion estimates are strongly consistent. In addition, the asymptotic normality of the recursive constrained M-estimators of regression coefficients is established. A Monte Carlo simulation study of the recursion estimates is also provided. Besides, robustness and asymptotic behavior of constrained M-estimators are briefly discussed.

  9. Constraining Inflationary Dark Matter in the Luminogenesis Model

    CERN Document Server

    Hung, Pham Q

    2014-01-01

    Using renormalization-group flow and cosmological constraints on inflation models, we exploit a unique connection between cosmological inflation and the dynamical mass of dark-matter particles in the luminogenesis model, a unification model with the gauge group $SU(3)_C \times SU(6) \times U(1)_Y$, which breaks to the Standard Model with an extra gauge group for dark matter when the inflaton rolls into the true vacuum. In this model, inflaton decay gives rise to dark matter, which in turn decays to luminous matter in the right proportion that agrees with cosmological data. Some attractive features of this model include self-interacting dark matter, which may resolve the problems of dwarf-galaxy structures and dark-matter cusps at the centers of galaxies.

  10. Constraining inflationary dark matter in the luminogenesis model

    Energy Technology Data Exchange (ETDEWEB)

    Hung, Pham Q.; Ludwick, Kevin J. [Department of Physics, University of Virginia,Charlottesville, VA, 22904-4714 (United States); Center for Theoretical and Computational Physics, Hue University College of Education,34 Le Loi Street, Hue (Viet Nam)

    2015-09-09

    Using renormalization-group flow and cosmological constraints on inflation models, we exploit a unique connection between cosmological inflation and the dynamical mass of dark matter particles in the luminogenesis model, a unification model with the gauge group SU(3)_C × SU(6) × U(1)_Y, which breaks to the Standard Model with an extra gauge group for dark matter when the inflaton rolls into the true vacuum. In this model, inflaton decay gives rise to dark matter, which in turn decays to luminous matter in the right proportion that agrees with cosmological data. Some attractive features of this model include self-interacting dark matter, which may resolve the problems of dwarf galaxy structures and dark matter cusps at the centers of galaxies.

  11. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    In this study, land surface related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150 000 km2) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte-Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation) which is easily moisture stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of

  12. Observational techniques for constraining hydraulic and hydrologic models for use in catchment scale flood impact assessment

    Science.gov (United States)

    Owen, Gareth; Wilkinson, Mark; Nicholson, Alex; Quinn, Paul; O'Donnell, Greg

    2015-04-01

    river stem and principal tributaries, it is possible to understand in detail how floods develop and propagate, both temporally and spatially. Traditional rainfall-runoff modelling involves the calibration of model parameters to achieve a best fit against an observed flow series, typically at a single location. The modelling approach adopted here is novel in that it directly uses the nested observed information to disaggregate the outlet hydrograph in terms of the source locations. Using a combination of local evidence and expert opinion, the model can be used to assess the impacts of distributed land use management changes and NFM on floods. These studies demonstrate the power of networks of observational instrumentation for constraining hydraulic and hydrologic models for use in prediction.

  13. Reservoir Stochastic Modeling Constrained by Quantitative Geological Conceptual Patterns

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper discusses the principles of geologic constraints on reservoir stochastic modeling. By using the system science theory, two kinds of uncertainties, including random uncertainty and fuzzy uncertainty, are recognized. In order to improve the precision of stochastic modeling and reduce the uncertainty in realization, the fuzzy uncertainty should be stressed, and the "geological genesis-controlled modeling" is conducted under the guidance of a quantitative geological pattern. An example of the Pingqiao horizontal-well division of the Ansai Oilfield in the Ordos Basin is taken to expound the method of stochastic modeling.

  14. On Modeling and Constrained Model Predictive Control of Open Irrigation Canals

    Directory of Open Access Journals (Sweden)

    Lihui Cen

    2017-01-01

    This paper proposes a model predictive control scheme for open irrigation canals with constraints. The Saint-Venant equations are widely used in hydraulics to model an open canal. As a set of hyperbolic partial differential equations, they cannot be solved explicitly, which makes it difficult to design optimal control algorithms. In this work, a prediction model of an open canal is developed by discretizing the Saint-Venant equations in both space and time. Based on the prediction model, a constrained model predictive control is first investigated for the case of a single-pool canal and then generalized to the case of a cascaded canal with multiple pools. The hydraulic software SICC was used to simulate the canal and test the algorithms, with application to a real-world irrigation canal in the Yehe irrigation area located in Hebei province.
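
    In its simplest single-pool form, a constrained MPC built on a discretized canal model reduces to a small constrained optimization solved at every time step. The sketch below replaces the discretized Saint-Venant dynamics with a crude integrator surrogate for one pool (an assumption made purely for brevity; the paper uses the full discretized equations), and all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Integrator model of one canal pool: level change = (inflow - outflow) / area.
    A_s, dt = 1.0e4, 60.0         # storage surface area [m^2], time step [s]
    N = 10                        # prediction horizon (steps)
    h_ref, h0 = 2.0, 1.7          # target and current water level [m]
    q_out = 5.0                   # known offtake demand [m^3/s]

    def cost(u):
        """Track h_ref over the horizon, with a small penalty on control effort."""
        h, J = h0, 0.0
        for k in range(N):
            h = h + dt / A_s * (u[k] - q_out)   # surrogate canal dynamics
            J += (h - h_ref) ** 2 + 1e-3 * u[k] ** 2
        return J

    res = minimize(cost, x0=np.full(N, q_out),
                   bounds=[(0.0, 12.0)] * N)    # gate capacity constraints
    # Receding horizon: apply only the first move, then re-solve next step.
    print("first control move (inflow, m^3/s):", round(res.x[0], 2))
    ```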

  15. Inference with constrained hidden Markov models in PRISM

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints and present constraint solving techniques for efficient inference. De...

  16. Modeling constrained sintering of bi-layered tubular structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei;

    2015-01-01

    Furthermore, the model is validated using densification results from sintering of a bi-layered tubular ceramic oxygen membrane based on porous MgO and Ce0.9Gd0.1O1.95-δ layers. Model input parameters, such as the shrinkage kinetics and viscous parameters, are obtained experimentally using optical dilatometry...

  17. A new model for solution of complex distributed constrained problems

    CERN Document Server

    Al-Maqtari, Sami; Babkin, Eduard

    2010-01-01

    In this paper we describe an original computational model for solving different types of Distributed Constraint Satisfaction Problems (DCSP). The proposed model is called Controller-Agents for Constraints Solving (CACS). It is intended for DCSP, a field that emerged from the integration of two paradigms of different nature: Multi-Agent Systems (MAS) and the Constraint Satisfaction Problem (CSP) paradigm, in which all constraints are treated centrally as a black box. The model allows grouping constraints to form a subset that is treated together as a local problem inside the controller. It also handles non-binary constraints easily and directly, so that no translation of constraints into binary ones is needed. This paper presents the implementation outline of a prototype DCSP solver, its usage methodology, and an overview of the CACS application to timetabling problems.

  18. Sands modeling constrained by high-resolution seismic data

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the field evaluation phase, the interwell reservoir may be poorly controlled if the geological model is built on well data alone, owing to the small number of existing wells. The uncertainty of interwell reservoir interpolation based only on well data can be decreased by comprehensive use of geological, logging and seismic data, especially by using seismic attributes from 3D seismic data that correlate strongly with reservoir properties, adjusted to well point data, to constrain the interpolation of geological properties. A 3D geological model which takes the sand body as the direct modeling object was built by stacking the structure, reservoir and water/oil/gas properties together in 3D space.

  19. Modelling, Transformations, and Scaling Decisions in Constrained Optimization Problems

    Science.gov (United States)

    1976-03-01

    [Garbled excerpt from the thesis. Recoverable content: the test problems are taken from applied nonlinear programming and are given in Appendix A of the thesis along with their original sources; they are referred to as Himmelblau problems. Himmelblau problem 16 has numerous cross-product terms that can be removed by a change of variables. Appendix A lists the test problems used with the GRG and SUMT codes (e.g. Himmelblau problem 16, source: J.D. Pearson).]

  20. A Sequence of Relaxations Constraining Hidden Variable Models

    CERN Document Server

    Steeg, Greg Ver

    2011-01-01

    Many widely studied graphical models with latent variables lead to nontrivial constraints on the distribution of the observed variables. Inspired by the Bell inequalities in quantum mechanics, we refer to any linear inequality whose violation rules out some latent variable model as a "hidden variable test" for that model. Our main contribution is to introduce a sequence of relaxations which provides progressively tighter hidden variable tests. We demonstrate applicability to mixtures of sequences of i.i.d. variables, Bell inequalities, and homophily models in social networks. For the last, we demonstrate that our method provides a test that is able to rule out latent homophily as the sole explanation for correlations on a real social network that are known to be due to influence.

  1. A marked correlation function for constraining modified gravity models

    CERN Document Server

    White, Martin

    2016-01-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbati...
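
    A density-marked correlation function is indeed cheap to estimate from a catalogue: weight each pair by the product of its marks and normalize by the unweighted pair count and the mean mark. The brute-force estimator below is a generic illustration of such a marked statistic (the specific mark definition and normalization in the paper may differ), with mock data standing in for a survey.

    ```python
    import numpy as np

    def marked_correlation(pos, marks, r_edges):
        """M(r) = <m_i m_j>_pairs(r) / mbar^2, a simple marked-statistic estimator."""
        mbar = marks.mean()
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        iu = np.triu_indices(len(pos), k=1)            # unique pairs only
        d, mm = d[iu], (marks[:, None] * marks[None, :])[iu]
        M = np.empty(len(r_edges) - 1)
        for i, (lo, hi) in enumerate(zip(r_edges[:-1], r_edges[1:])):
            sel = (d >= lo) & (d < hi)
            M[i] = mm[sel].mean() / mbar**2 if sel.any() else np.nan
        return M

    rng = np.random.default_rng(1)
    pos = rng.uniform(0, 100, size=(500, 3))           # mock galaxy positions
    marks = rng.lognormal(size=500)                    # e.g. local-density marks
    print(marked_correlation(pos, marks, np.array([0.0, 5.0, 10.0, 20.0])))
    ```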

  2. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  3. C(M)LESS-THAN-1 STRING THEORY AS A CONSTRAINED TOPOLOGICAL SIGMA-MODEL

    NARCIS (Netherlands)

    LLATAS, PM; ROY, S

    1995-01-01

    It has been argued by Ishikawa and Kato that by making use of a specific bosonization, c(M) = 1 string theory can be regarded as a constrained topological sigma model. We generalize their construction for any (p,q) minimal model coupled to two dimensional (2d) gravity and show that the energy-moment

  4. View-constrained latent variable model for multi-view facial expression classification

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2014-01-01

    We propose a view-constrained latent variable model for multi-view facial expression classification. In this model, we first learn a discriminative manifold shared by multiple views of facial expressions, followed by the expression classification in the shared manifold. For learning, we use the expr

  6. Modeling Power-Constrained Optimal Backlight Dimming for Color Displays

    DEFF Research Database (Denmark)

    Burini, Nino; Nadernejad, Ehsan; Korhonen, Jari

    2013-01-01

    In this paper, we present a framework for modeling color liquid crystal displays (LCDs) having local light-emitting diode (LED) backlight with dimming capability. The proposed framework includes critical aspects like leakage, clipping, light diffusion and human perception of luminance and allows...

  7. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Lee, Jinhyuk; Rust, John;

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  8. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel;

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  9. Models for Near-Ridge Seamounts Constrained by Gravity Observations

    Science.gov (United States)

    Kostlan, M.; McClain, J. S.

    2009-12-01

    In an analysis of the seamount chain centered at 105°20’W, 9°05’N, west of the East Pacific Rise and south of the Clipperton transform fault, we compared measured free air gravity anomaly values with modeled gravity anomaly values. The seamount chain contains approximately ten seamounts trending roughly east-west, perpendicular to the mid-ocean ridge axis. They lie on lithosphere between 1.5 and 2.7 Ma in age. Based on its position and age, the seamount chain appears to be associated with the 9°03’N overlapping spreading center (OSC). This OSC includes several associated seamount chains, aligned generally east-west, and of varying ages. The observed data include both free air gravity anomalies and bathymetry of the seamount chain, provided by the National Geophysical Data Center (NGDC), and were selected because the gravity measurements are relatively well covered. We used a series of different structural models of the oceanic crust and mantle to generate gravity anomalies associated with the seamounts. The models utilize Parker’s algorithm to generate these free air gravity anomalies. We compute a gravity residual by subtracting the calculated anomalies from the observed anomalies. The models include one with a crust of a constant thickness (6 km), while another introduces a constant-thickness Layer 2A. In contrast, a third model included a variable thickness crust, where the thickness is governed by Airy compensation. The calculations show that the seamounts must be partly compensated, because the constant-thickness models predict a high negative residual (or they produce an anomaly which is too high). In contrast, the Airy compensation model produces an anomaly that is too low at the longer wavelengths, indicating that the lithosphere must have some strength, and that flexure must be supporting part of the load of the seamount chain. This contrasts with earlier studies that indicate young, near-ridge seamounts do not result in flexure of the thin
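
    Parker's (1973) FFT method computes the gravity anomaly of a density interface h(x) through a series in F[h^n]; keeping only the first term gives the familiar linear relation used in such forward models. A minimal 1-D, first-order sketch follows; the profile geometry, density contrast and depth are illustrative, not the values used in this study.

    ```python
    import numpy as np

    G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]

    def parker_first_order(h, dx, drho, z0):
        """First term of Parker's series: F[dg] = 2 pi G drho e^{-|k| z0} F[h]."""
        k = 2 * np.pi * np.abs(np.fft.fftfreq(len(h), d=dx))   # wavenumber [rad/m]
        dg = np.fft.ifft(2 * np.pi * G * drho * np.exp(-k * z0) * np.fft.fft(h))
        return dg.real * 1e5     # convert m/s^2 to mGal

    x = np.linspace(0, 200e3, 512)                     # 200 km profile
    h = 2000.0 * np.exp(-((x - 100e3) / 10e3) ** 2)    # Gaussian seamount, 2 km high
    print(parker_first_order(h, x[1] - x[0], drho=1800.0, z0=4000.0).max(),
          "mGal (peak anomaly)")
    ```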

  10. Modeling frictional melt injection to constrain coseismic physical conditions

    Science.gov (United States)

    Sawyer, William J.; Resor, Phillip G.

    2017-07-01

    Pseudotachylyte, a fault rock formed through coseismic frictional melting, provides an important record of coseismic mechanics. In particular, injection veins formed at a high angle to the fault surface have been used to estimate rupture directivity, velocity, pulse length, stress drop, as well as slip weakening distance and wall rock stiffness. These studies have generally treated injection vein formation as a purely elastic process and have assumed that processes of melt generation, transport, and solidification have little influence on the final vein geometry. Using a pressurized crack model, an analytical approximation of injection vein formation based on dike intrusion, we find that the timescales of quenching and flow propagation may be similar for a subset of injection veins compiled from the Asbestos Mountain Fault, USA, Gole Larghe Fault Zone, Italy, and the Fort Foster Brittle Zone, USA under minimum melt temperature conditions. 34% of the veins are found to be flow limited, with a final geometry that may reflect cooling of the vein before it reaches an elastic equilibrium with the wall rock. Formation of these veins is a dynamic process whose behavior is not fully captured by the analytical approach. To assess the applicability of simplifying assumptions of the pressurized crack we employ a time-dependent finite-element model of injection vein formation that couples elastic deformation of the wall rock with the fluid dynamics and heat transfer of the frictional melt. This finite element model reveals that two basic assumptions of the pressurized crack model, self-similar growth and a uniform pressure gradient, are false. The pressurized crack model thus underestimates flow propagation time by 2-3 orders of magnitude. Flow limiting may therefore occur under a wider range of conditions than previously thought. Flow-limited veins may be recognizable in the field where veins have tapered profiles or smaller aspect ratios than expected. The occurrence and

  11. Rule-based spatial modeling with diffusing, geometrically constrained molecules

    Directory of Open Access Journals (Sweden)

    Lohel Maiko

    2010-06-01

    Background: We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Results: Our simulations show that the obtained dynamics can be fundamentally different from those of simulations that use classical reaction-diffusion approaches like partial differential equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a differential equation formalism is potentially unstable when geometry is considered, and vice versa. Conclusions: We conclude that our approach provides a new general framework filling a gap in between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial

  12. Rule-based spatial modeling with diffusing, geometrically constrained molecules

    OpenAIRE

    Lohel Maiko; Lenser Thorsten; Ibrahim Bashar; Gruenert Gerd; Hinze Thomas; Dittrich Peter

    2010-01-01

    Background: We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction net...

  13. Improved Modeling Approaches for Constrained Sintering of Bi-Layered Porous Structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Esposito, Vincenzo;

    2012-01-01

    Shape instabilities during constrained sintering experiments of bi-layer porous and dense cerium gadolinium oxide (CGO) structures have been analyzed. An analytical and a numerical model based on the continuum theory of sintering have been implemented to describe the evolution of bow and densificat...

  14. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    Science.gov (United States)

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
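
    The O(h), O(h²) and O(h³) residual patterns stem directly from the truncation order of the differentiation formula used in the discretization, which is easy to check numerically. The sketch below uses standard textbook backward-difference formulas of increasing order (generic formulas, not necessarily the specific ones proposed in the paper) and verifies the error scaling on a known function.

    ```python
    import numpy as np

    f, df = np.sin, np.cos          # test function and its exact derivative
    t = 1.0

    def euler(h):                   # (f(t) - f(t-h)) / h              -> O(h)
        return (f(t) - f(t - h)) / h

    def bd2(h):                     # (3f(t) - 4f(t-h) + f(t-2h))/(2h) -> O(h^2)
        return (3*f(t) - 4*f(t - h) + f(t - 2*h)) / (2*h)

    def bd3(h):                     # 4-point backward difference      -> O(h^3)
        return (11*f(t) - 18*f(t - h) + 9*f(t - 2*h) - 2*f(t - 3*h)) / (6*h)

    for h in (1e-1, 1e-2):
        errs = [abs(g(h) - df(t)) for g in (euler, bd2, bd3)]
        print(f"h={h:g}: " + ", ".join(f"{e:.1e}" for e in errs))
    # Reducing h tenfold cuts the three errors by ~10x, ~100x and ~1000x,
    # matching the O(h), O(h^2) and O(h^3) truncation orders.
    ```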

  15. Constrained prose recall and the assessment of long-term forgetting: the case of ageing and the Crimes Test.

    Science.gov (United States)

    Baddeley, Alan; Rawlings, Bruce; Hayes, Amie

    2014-01-01

    It has become increasingly clear that some patients with apparently normal memory may subsequently show accelerated long-term forgetting (ALF), with dramatic loss when retested. We describe a constrained prose recall task that attempts to lay the foundations for a test suitable for detecting ALF sensitively and economically. Instead of the usual narrative structure of prose recall tests, it employs a matrix structure involving four episodes, each describing a minor crime, with each crime involving the binding into a coherent episode of a specified range of features, involving the victim, the crime, the criminal and the location, allowing a total of 80 different probed recall questions to be generated. These are used to create four equivalent 20-item tests, three of which are used in the study. After a single verbal presentation, young and elderly participants were tested on three occasions, immediately, and by telephone after a delay of 6 weeks, and at one of a varied range of intermediate points. The groups were approximately matched on immediate test; both showed systematic forgetting which was particularly marked in the elderly. We suggest that constrained prose recall has considerable potential for the study of long-term forgetting.

  16. Constraining Galaxy Formation Models with Dwarf Ellipticals in Clusters

    CERN Document Server

    Conselice, C J

    2005-01-01

    Recent observations demonstrate that dwarf elliptical (dE) galaxies in clusters, despite their faintness, are likely a critical galaxy type for understanding the processes behind galaxy formation. Dwarf ellipticals are the most common galaxy type, and are particularly abundant in rich galaxy clusters. The dwarf to giant ratio is in fact highest in rich clusters of galaxies, suggesting that cluster dEs do not form in groups that later merge to form clusters. Dwarf ellipticals are potentially the only galaxy type whose formation is sensitive to global, rather than local, environment. The dominant idea for explaining the formation of these systems, through Cold Dark Matter models, is that dEs form early and within their present environments. Recent results suggest that some dwarfs appear in clusters after the bulk of massive galaxies form, a scenario not predicted in standard hierarchical structure formation models. Many dEs have younger and more metal rich stellar populations than dwarfs in lower density enviro...

  17. Constraining quantum collapse inflationary models with CMB data

    Science.gov (United States)

    Benetti, Micol; Landau, Susana J.; Alcaniz, Jailson S.

    2016-12-01

    The hypothesis of the self-induced collapse of the inflaton wave function was proposed as responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which led to a perfect scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows one to distinguish departures from the standard approach due to the inclusion of the collapse hypothesis. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence as the minimal standard cosmological model ΛCDM, while the other scenario is weakly disfavoured with respect to the standard cosmology.

  18. Slow Solar Wind: Observable Characteristics for Constraining Modelling

    Science.gov (United States)

    Ofman, L.; Abbo, L.; Antiochos, S. K.; Hansteen, V. H.; Harra, L.; Ko, Y. K.; Lapenta, G.; Li, B.; Riley, P.; Strachan, L.; von Steiger, R.; Wang, Y. M.

    2015-12-01

    The Slow Solar Wind (SSW) origin is an open issue in the post-SOHO era and forms a major objective for planned future missions such as the Solar Orbiter and Solar Probe Plus. Results from spacecraft data, combined with theoretical modeling, have helped to investigate many aspects of the SSW. Fundamental physical properties of the coronal plasma have been derived from spectroscopic and imaging remote-sensing data and in-situ data, and these results have provided crucial insights for a deeper understanding of the origin and acceleration of the SSW. Advanced models of the SSW in coronal streamers and other structures have been developed using 3D MHD and multi-fluid equations. Nevertheless, there are still debated questions such as: What are the source regions of SSW? What are their contributions to the SSW? What is the role of the magnetic topology in the corona for the origin, acceleration and energy deposition of SSW? What are the possible acceleration and heating mechanisms for the SSW? The aim of this study is to present the insights on the SSW origin and formation that arose during the discussions at the International Space Science Institute (ISSI) by the Team entitled ''Slow solar wind sources and acceleration mechanisms in the corona'' held in Bern (Switzerland) in March 2014-2015. The attached figure will be presented to summarize the different hypotheses of the SSW formation.

  19. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-08-01

    parameter clustering was found for forested model units. We hypothesize that this is due to the presence of 2 dominant forest types that differ substantially in their moisture regime. Therefore, this could indicate that the spatial discretization used in this study is oversimplified.

    This constraining step with remotely sensed data is useful for Bayesian updating in ungauged catchments. To this end, trapezoidal-shaped fuzzy membership functions were constructed that can be used to constrain parameter realizations in a second calibration step if more data become available. Especially in semi-arid areas such as the Luangwa basin, traditional rainfall-runoff calibration should be preceded by this step, because evaporation represents a much larger term in the water balance than discharge and because it imposes spatial variability on the water balance. It justifies distributing land surface related parameters. Furthermore, the analysis reveals where hydrological processes may be ill-defined in the model structure and how accurate our spatial discretization is.
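
    The trapezoidal membership functions mentioned above have a simple closed form: membership is 0 outside a support interval, 1 on a core interval, and linear in between. A generic sketch follows; the breakpoint values are placeholders, not those derived for the Luangwa basin.

    ```python
    import numpy as np

    def trapezoidal(theta, a, b, c, d):
        """Membership 0 outside [a, d], 1 on [b, c], linear in between (a<b<=c<d)."""
        theta = np.asarray(theta, dtype=float)
        left = np.clip((theta - a) / (b - a), 0.0, 1.0)
        right = np.clip((d - theta) / (d - c), 0.0, 1.0)
        return np.minimum(left, right)

    # Membership of candidate values of some land-surface parameter whose
    # behavioural range was found to be roughly [0.2, 0.6], core [0.3, 0.5].
    print(trapezoidal([0.1, 0.25, 0.4, 0.55, 0.7], a=0.2, b=0.3, c=0.5, d=0.6))
    ```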

  20. A nonlinear model for rotationally constrained convection with Ekman pumping

    CERN Document Server

    Julien, Keith; Calkins, Michael A; Knobloch, Edgar; Marti, Philippe; Stellmach, Stephan; Vasil, Geoffrey M

    2016-01-01

    It is a well established result of linear theory that the influence of differing mechanical boundary conditions, i.e., stress-free or no-slip, on the primary instability in rotating convection becomes asymptotically small in the limit of rapid rotation. This is accounted for by the diminishing impact of the viscous stresses exerted within Ekman boundary layers and the associated vertical momentum transport by Ekman pumping. By contrast, in the nonlinear regime recent experiments and supporting simulations are now providing evidence that the efficiency of heat transport remains strongly influenced by Ekman pumping in the rapidly rotating limit. In this paper, a reduced model is developed for the case of low Rossby number convection in a plane layer geometry with no-slip upper and lower boundaries held at fixed temperatures. A complete description of the dynamics requires the existence of three distinct regions within the fluid layer: a geostrophically balanced interior where fluid motions are predominately ali...

  1. Allometric functional response model: body masses constrain interaction strengths.

    Science.gov (United States)

    Vucic-Pestic, Olivera; Rall, Björn C; Kalinkat, Gregor; Brose, Ulrich

    2010-01-01

    1. Functional responses quantify the per capita consumption rates of predators depending on prey density. The parameters of these nonlinear interaction strength models were recently used as successful proxies for predicting population dynamics, food-web topology and stability. 2. This study addressed systematic effects of predator and prey body masses on the functional response parameters handling time, instantaneous search coefficient (attack coefficient) and a scaling exponent converting type II into type III functional responses. To fully explore the possible combinations of predator and prey body masses, we studied the functional responses of 13 predator species (ground beetles and wolf spiders) on one small and one large prey resulting in 26 functional responses. 3. We found (i) a power-law decrease of handling time with predator mass with an exponent of -0.94; (ii) an increase of handling time with prey mass (power-law with an exponent of 0.83, but only three prey sizes were included); (iii) a hump-shaped relationship between instantaneous search coefficients and predator-prey body-mass ratios; and (iv) low scaling exponents for low predator-prey body mass ratios in contrast to high scaling exponents for high predator-prey body-mass ratios. 4. These scaling relationships suggest that nonlinear interaction strengths can be predicted by knowledge of predator and prey body masses. Our results imply that predators of intermediate size impose stronger per capita top-down interaction strengths on a prey than smaller or larger predators. Moreover, the stability of population and food-web dynamics should increase with increasing body-mass ratios in consequence of increases in the scaling exponents. 5. Integrating these scaling relationships into population models will allow predicting energy fluxes, food-web structures and the distribution of interaction strengths across food web links based on knowledge of the species' body masses.
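
    Plugging the reported mass scalings into a generalized Holling functional response illustrates how body masses set per capita feeding rates. The two exponents below are the ones reported in the abstract; all other coefficients are invented for illustration.

    ```python
    import numpy as np

    def handling_time(m_pred, m_prey, h0=0.4):
        """Power-law handling time: decreases with predator mass (exponent -0.94),
        increases with prey mass (exponent 0.83); h0 is an illustrative constant."""
        return h0 * m_pred**-0.94 * m_prey**0.83

    def holling(N, a, h, q=0.0):
        """Generalized functional response F = a N^(1+q) / (1 + a h N^(1+q));
        q = 0 gives type II, q > 0 bends it towards type III."""
        return a * N**(1.0 + q) / (1.0 + a * h * N**(1.0 + q))

    N = np.array([1, 5, 25, 125])          # prey density
    h = handling_time(m_pred=100.0, m_prey=1.0)
    print("type II:     ", holling(N, a=0.1, h=h).round(3))
    print("type III-ish:", holling(N, a=0.1, h=h, q=0.5).round(3))
    ```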

  2. Constraining quantum collapse inflationary models with CMB data

    CERN Document Server

    Benetti, Micol; Alcaniz, Jailson S

    2016-01-01

    The hypothesis of the self-induced collapse of the inflaton wave function was proposed as responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which led to a perfect scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows one to distinguish departures from the standard approach due to the inclusion of the collapse hypothesis. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence as the minimal standard cosmolog...

  3. Context- and Template-Based Compression for Efficient Management of Data Models in Resource-Constrained Systems.

    Science.gov (United States)

    Macho, Jorge Berzosa; Montón, Luis Gardeazabal; Rodriguez, Roberto Cortiñas

    2017-08-01

    The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that may overload the resource-constrained devices typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted at resource-constrained devices, which reduces the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C) and is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices.
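
    The general idea behind template-based compression of structured messages, namely strip the shared structure once and transmit only the varying values, can be pictured with a toy JSON example. This is neither the CTC algorithm nor EXI, just the underlying intuition; all names and values are invented.

    ```python
    import json

    # Template agreed on by both endpoints once, e.g. at session setup.
    TEMPLATE = {"sensor": None, "ts": None, "temp": None}
    KEYS = list(TEMPLATE)        # fixed key order shared by both sides

    def compress(msg: dict) -> str:
        """Transmit only the values, in the agreed template order."""
        return json.dumps([msg[k] for k in KEYS])

    def decompress(payload: str) -> dict:
        """Rebuild the full message by zipping values back onto the template keys."""
        return dict(zip(KEYS, json.loads(payload)))

    msg = {"sensor": "node-7", "ts": 1700000000, "temp": 21.5}
    wire = compress(msg)
    print(wire, f"({len(wire)} vs {len(json.dumps(msg))} bytes)")
    assert decompress(wire) == msg
    ```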

  4. Building a Predictive Model of Galaxy Formation - I: Phenomenological Model Constrained to the $z=0$ Stellar Mass Function

    CERN Document Server

    Benson, A J

    2014-01-01

    We constrain a highly simplified semi-analytic model of galaxy formation using the $z\approx 0$ stellar mass function of galaxies. Particular attention is paid to assessing the role of random and systematic errors in the determination of stellar masses, to systematic uncertainties in the model, and to correlations between bins in the measured and modeled stellar mass functions, in order to construct a realistic likelihood function. We derive constraints on model parameters and explore which aspects of the observational data constrain particular parameter combinations. We find that our model, once constrained, provides a remarkable match to the measured evolution of the stellar mass function to $z=1$, although it fails dramatically to match the local galaxy HI mass function. Several "nuisance parameters" contribute significantly to uncertainties in model predictions. In particular, systematic errors in stellar mass estimates are the dominant source of uncertainty in model predictions at $z\approx 1$, with addition...

  5. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  6. Bayesian Exploratory and Confirmatory Factor Analysis: Perspectives on Constrained-Model Selection

    NARCIS (Netherlands)

    Peeters, C.F.W.

    2012-01-01

    The dissertation revolves around three aims. The first aim is the construction of a conceptually and computationally simple Bayes factor for Type I constrained-model selection (dimensionality determination) that is determinate under usage of improper priors and the subsequent utilization of this

  7. Constrained WZWN models on G/{S⊗U(1)^n} and exchange algebra of G-primaries

    Energy Technology Data Exchange (ETDEWEB)

    Aoyama, Shogo, E-mail: spsaoya@ipc.shizuoka.ac.jp; Ishii, Katsuyuki

    2013-11-11

    Consistently constrained WZWN models on G/{S⊗U(1)^n} are given by constraining currents of the WZWN models with G. Poisson brackets are set up on the light-like plane. Using them, we show the Virasoro algebra for the energy–momentum tensor of constrained WZWN models. We find a G-primary which satisfies a classical exchange algebra in an arbitrary representation of G. The G-primary and the constrained currents are also shown to obey the conformal transformation with respect to the energy–momentum tensor. It is checked that the conformal weight of the constrained currents is 0, as required for the consistency of our formulation of constrained WZWN models.

  8. Increased NR2A:NR2B ratio compresses long-term depression range and constrains long-term memory.

    Science.gov (United States)

    Cui, Zhenzhong; Feng, Ruiben; Jacobs, Stephanie; Duan, Yanhong; Wang, Huimin; Cao, Xiaohua; Tsien, Joe Z

    2013-01-01

    The NR2A:NR2B subunit ratio of the NMDA receptors is widely known to increase in the brain from postnatal development to sexual maturity and into aging, yet its impact on memory function remains speculative. We have generated forebrain-specific NR2A-overexpressing transgenic mice and show that these mice had normal basic behaviors and short-term memory, but exhibited broad long-term memory deficits as revealed by several behavioral paradigms. Surprisingly, increased NR2A expression did not affect 1-Hz-induced long-term depression (LTD) or 100-Hz-induced long-term potentiation (LTP) in the CA1 region of the hippocampus, but selectively abolished LTD responses in the 3-5 Hz frequency range. Our results demonstrate that an increased NR2A:NR2B ratio is a critical genetic factor in constraining long-term memory in the adult brain. We postulate that an LTD-like process underlies post-learning information sculpting, a novel and essential consolidation step in transforming new information into long-term memory.

  9. Modelli di crescita limitata dalla bilancia dei pagamenti: storia e panoramica (Balance of payments constrained growth models: history and overview)

    Directory of Open Access Journals (Sweden)

    Anthony P. Thirlwall

    2012-01-01

    Thirlwall’s 1979 balance of payments constrained growth model predicts that a country’s long-run growth of GDP can be approximated by the ratio of the growth of real exports to the income elasticity of demand for imports, assuming negligible effects from real exchange rate movements. The paper surveys developments of the model since then, allowing for capital flows, interest payments on debt, terms of trade movements, and disaggregation of the model by commodities and trading partners. Various tests of the model are discussed, and an extensive list of papers that have examined the model is presented. JEL codes: F32; F40; F43. Keywords: balance of payments; growth; Thirlwall’s Law; dynamic Harrod multiplier
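
    The growth rule summarized above is compactly expressed by Thirlwall’s Law (the dynamic Harrod trade multiplier); in its simplest form, with negligible real exchange rate effects:

    ```latex
    % Thirlwall's Law, simplest form:
    %   y_B : balance-of-payments constrained growth rate of GDP
    %   x   : growth rate of real exports
    %   \pi : income elasticity of demand for imports
    y_B = \frac{x}{\pi}
    ```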

  10. How to constrain multi-objective calibrations using water balance components for an improved realism of model results

    Science.gov (United States)

    Pfannerstill, Matthias; Bieger, Katrin; Guse, Björn; Bosch, David; Fohrer, Nicola; Arnold, Jeffrey G.

    2017-04-01

    Accurate discharge simulation is one of the most common objectives of hydrological modeling studies. However, a good simulation of discharge is not necessarily the result of a realistic simulation of hydrological processes within the catchment. To enhance the realism of model results, we propose an evaluation framework that considers both discharge and water balance components as evaluation criteria for hydrological models. In this study, we integrated easily available expert knowledge such as average annual values of surface runoff, groundwater flow, and evapotranspiration in the model evaluation procedure to constrain the selection of good model runs. For evaluating water balance and discharge dynamics, the Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) were used. In addition, the ratio of root mean square error and standard deviation of measured data (RSR) was calculated for individual segments of the flow duration curve to identify the best model runs in terms of discharge magnitude. Our results indicate that good statistics for discharge do not guarantee realistic simulations of individual water balance components. Therefore, we recommend constraining the ranges of water balance components to better capture internal and external fluxes of the hydrological system, even if trade-offs between good statistics for discharge simulations and reasonable amounts of the water balance components are unavoidable.
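
    The three evaluation statistics named above are simple to compute; a minimal sketch with placeholder observed and simulated discharge arrays (note that the sign convention for PBIAS varies between studies):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency; 1 indicates a perfect fit."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias; with this common convention, positive values
        indicate underestimation by the model."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    def rsr(obs, sim):
        """RMSE divided by the standard deviation of the observations;
        in the study this is applied per flow duration curve segment."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()
    ```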

  11. Constraining a matter-dominated cosmological model with bulk viscosity proportional to the Hubble parameter

    CERN Document Server

    Avelino, Arturo

    2008-01-01

    We present and constrain a cosmological model whose only component is a pressureless fluid with bulk viscosity, as an explanation for the present accelerated expansion of the universe. We study the particular model of a bulk viscosity coefficient proportional to the Hubble parameter. The model is constrained using the SNe Ia Gold 2006 sample, the Cosmic Microwave Background (CMB) shift parameter R, the Baryon Acoustic Oscillation (BAO) peak A, and the Second Law of Thermodynamics (SLT). It was found that this model is in agreement with the SLT using only the SNe Ia test. However, when the model is constrained using the three cosmological tests together (SNe+CMB+BAO), we found that: (1) the model violates the SLT; (2) it predicts a value of H_0 \approx 53 km sec^{-1} Mpc^{-1} for the Hubble constant; and (3) we obtain a bad fit to the data, with \chi^2_{min} \approx 532. These results indicate that this model is viable only if the bulk viscosity is triggered in recent times.

  12. 3D Geological Model of Nihe ore deposit Constrained by Gravity and Magnetic Modeling

    Science.gov (United States)

    Qi, Guang; Yan, Jiayong; Lv, Qingtan; Zhao, Jinhua

    2016-04-01

    observed data, and then adjust the model until a satisfactory error level is achieved. It is hoped that this work can provide a reference for similar work in other areas. This study shows that geologically constrained 3D gravity and magnetic modeling has potential value for deep mineral exploration and mineral reserve estimation.

  13. Min-max model predictive control for constrained nonlinear systems via multiple LPV embeddings

    Institute of Scientific and Technical Information of China (English)

    ZHAO Min; LI Ning; LI ShaoYuan

    2009-01-01

    A min-max model predictive control strategy is proposed for a class of constrained nonlinear systems whose trajectories can be embedded within those of a bank of linear parameter varying (LPV) models. The embedding LPV models can yield a much better approximation of the nonlinear system dynamics than a single LTV model. For each LPV model, a parameter-dependent Lyapunov function is introduced to obtain a poly-quadratically stable control law and to guarantee the feasibility and stability of the original nonlinear system. This approach can greatly reduce the computational burden of traditional nonlinear predictive control strategies. Finally, a simulation example illustrating the strategy is presented.

  14. A generalized network flow model for the multi-mode resource-constrained project scheduling problem with discounted cash flows

    Science.gov (United States)

    Chen, Miawjane; Yan, Shangyao; Wang, Sin-Siang; Liu, Chiu-Lan

    2015-02-01

    An effective project schedule is essential for enterprises to increase their efficiency of project execution, to maximize profit, and to minimize wastage of resources. Heuristic algorithms have been developed to efficiently solve the complicated multi-mode resource-constrained project scheduling problem with discounted cash flows (MRCPSPDCF) that characterizes real problems. However, the solutions obtained in past studies have been approximate and are difficult to evaluate in terms of optimality. In this study, a generalized network flow model, embedded in a time-precedence network, is proposed to formulate the MRCPSPDCF with payment at activity completion times. Mathematically, the model is formulated as an integer network flow problem with side constraints, which can be efficiently solved to optimality using existing mathematical programming software. To evaluate the model performance, numerical tests are performed. The test results indicate that the model could be a useful planning tool for project scheduling in the real world.

  15. Ways to constrain neutron star equation of state models using relativistic disc lines

    CERN Document Server

    Bhattacharyya, Sudip

    2011-01-01

    Relativistic spectral lines from the accretion disc of a neutron star low-mass X-ray binary can be modelled to infer the disc inner edge radius. A small value of this radius tentatively implies that the disc terminates either at the neutron star hard surface or at the innermost stable circular orbit (ISCO). Therefore, an inferred disc inner edge radius either provides the stellar radius or can directly constrain stellar equation of state (EoS) models using the theoretically computed ISCO radius for the spacetime of a rapidly spinning neutron star. However, this procedure requires numerical computation of stellar and ISCO radii for various EoS models and neutron star configurations using an appropriate rapidly spinning stellar spacetime. We have calculated about 16000 stable neutron star structures in full general relativity to explore and establish the above-mentioned procedure, and to show that the Kerr spacetime is inadequate for this purpose. Our work systematically studies the methods to constrain Eo...

  16. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product…, the crisp equivalent of the fuzzy constraints should be less relaxed than that purely based on the concept of soft constraints. Based on the application context we adopt a strengthened version of soft constraints to interpret fuzzy constraints and form a crisp model with consistent and compact constraints… for solution. Simulation results based on realistic data show that the failure risk can be managed by proper combination of aspiration levels and confidence factors for defining fuzzy numbers. There is a tradeoff between failure risk and material cost. The presented approach applies also for other scrap…
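
    For intuition on how such fuzzy chance constraints are typically made crisp via a possibility measure (a standard construction; the paper's strengthened soft-constraint variant may differ in detail), consider a triangular fuzzy coefficient:

    ```latex
    % Triangular fuzzy number \tilde{a} = (a_1, a_2, a_3), crisp bound b,
    % confidence level \alpha \in (0, 1]:
    \mathrm{Pos}\{\tilde{a} \le b\} \ge \alpha
    \quad\Longleftrightarrow\quad
    b \ge a_1 + \alpha\,(a_2 - a_1)
    ```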

  17. Constraining a semi-distributed, conceptual hydrological model on evaporation - a case study for the Kulpawn River Basin, Ghana

    Science.gov (United States)

    Nijzink, Remko C.; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-04-01

    Hydrological models are typically calibrated on stream flow observations. However, such data are frequently not available. In addition, in many parts of the world it is not stream flow but rather evaporation and transpiration that constitute the largest fluxes from hydrological systems. Nevertheless, models trained on evaporation data are rare and typically rely on evaporation estimates that were themselves also derived from models, thereby considerably reducing the robustness of such approaches. In this study, we test the power of alternative approaches to constrain semi-distributed, conceptual models with information on evaporation in the absence of stream flow data. By gradually increasing the constraining information, the analysis is designed in a stepwise way. Both the models and the relevance of the added information are evaluated at each step. As a first step, a large set of random parameter sets from uniform prior distributions is generated. Subsequently, parameter sets that cannot produce model outputs that satisfy the added constraints are discarded. The information content of these constraints will be gradually increased by making use of the Budyko framework: (1) the model has to reproduce the long-term average actual evaporation of the system, as indicated by the position in the Budyko framework, (2) the model, similarly, has to reproduce the long-term average seasonal variations of actual evaporation, (3) the model has to reproduce the temporal variations of evaporation, e.g. differences between 5-year mean evaporation of different periods, as indicated by different positions in the Budyko framework. As a last step, the model's temporal dynamics in the root zone moisture content are constrained by comparing it to time series of the NDII (Normalized Difference Infrared Index), which has recently been shown to be a close proxy for plant available water in the root zone and, thus, for transpiration rates (Sriwongsitanon et al., 2015). The value of the information
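
    As a rough sketch of this stepwise rejection procedure, under a hypothetical toy model and a single Budyko-type constraint (the model, parameter ranges and tolerance below are all illustrative placeholders):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def run_model(params):
        """Stand-in for the semi-distributed model: returns the simulated
        long-term mean annual actual evaporation (mm/yr)."""
        k, smax = params
        return 600.0 + 200.0 * k + 0.3 * smax  # toy response surface

    # Hypothetical Budyko-based constraint: target evaporation +/- tolerance.
    E_BUDYKO, TOL = 900.0, 90.0

    # Step 1: random parameter sets from uniform priors (ranges invented).
    prior = rng.uniform(low=[0.1, 1.0], high=[2.0, 500.0], size=(10_000, 2))

    # Step 2: discard sets that violate the evaporation constraint.
    sims = np.array([run_model(p) for p in prior])
    behavioural = prior[np.abs(sims - E_BUDYKO) <= TOL]
    print(f"{len(behavioural)} of {len(prior)} parameter sets retained")
    ```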

  18. Using qflux to constrain modeled Congo Basin rainfall in the CMIP5 ensemble

    Science.gov (United States)

    Creese, A.; Washington, R.

    2016-11-01

    Coupled models are the tools by which we diagnose and project future climate, yet in certain regions they are critically under-evaluated. The Congo Basin is one such region, having received limited scientific attention due to the severe scarcity of observational data. This study attempts to address this research gap by evaluating modeled rainfall magnitude and distribution amongst global coupled models in the Coupled Model Intercomparison Project 5 (CMIP5) ensemble. Mean monthly rainfall between models varies by up to a factor of 5 in some months, and models disagree on the location of maximum rainfall. The ensemble mean, which is usually considered a "best estimate" of coupled model output, does not agree with any single model, and as such is unlikely to represent a possible rainfall state. Moisture flux (qflux) convergence (which is assumed to be better constrained than parameterized rainfall) is found to have a strong relationship with rainfall; the strongest correlations occur at 700 hPa in March-May (r = 0.70) and at 850 hPa in June-August, September-November, and December-February (r = 0.66, r = 0.71, and r = 0.81, respectively). In the absence of observations, this relationship could be used to constrain the wide spectrum of modeled rainfall and give a better understanding of Congo rainfall climatology. Analysis of moisture transport pathways indicates that modeled rainfall is sensitive to the amount of moisture entering the basin. A targeted observation campaign at key Congo Basin boundaries could therefore help to constrain model rainfall.

  19. Constraining spatial variations of the fine-structure constant in symmetron models

    Science.gov (United States)

    Pinho, A. M. M.; Martinelli, M.; Martins, C. J. A. P.

    2017-06-01

    We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit on the strength of the symmetron coupling to gravity (on log β²); the data can also constrain the model when the symmetry breaking scale factor a_SSB is free to vary.

  20. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\prime/p$ and the level $l$.

  1. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we extend our model from an instantaneous calculation to include time-integrated ionization and recombination effects, and find that this leads to a more sudden reionization. It also yields larger H ii bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
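
    A generic sketch of this kind of MCMC constraint, with a toy three-parameter model and Gaussian likelihood standing in for the semi-numerical pipeline and the observational data (all names, numbers and the model below are invented purely to illustrate the machinery, not taken from the paper):

    ```python
    import numpy as np
    import emcee

    # Toy stand-in: parameters loosely inspired by the paper (ionizing
    # amplitude A, escape fraction f_esc, halo-mass slope C).
    obs = np.array([1.0, 0.8, 0.5])
    err = np.array([0.1, 0.1, 0.1])
    x = np.array([1.0, 2.0, 3.0])

    def model(theta):
        A, f_esc, C = theta
        return A * f_esc * x ** (-C)

    def log_prob(theta):
        if np.any(theta <= 0.0) or np.any(theta > 10.0):
            return -np.inf                    # flat box prior
        return -0.5 * np.sum(((model(theta) - obs) / err) ** 2)

    ndim, nwalkers = 3, 32
    p0 = 1.0 + 0.1 * np.abs(np.random.randn(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    posterior = sampler.get_chain(discard=500, flat=True)
    ```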

  2. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    To precisely track the reactor temperature over the entire working condition, a constrained Hammerstein-Wiener model is proposed to describe nonlinear chemical processes such as the continuous stirred tank reactor (CSTR). A predictive control algorithm based on the Kalman filter is designed for constrained Hammerstein-Wiener systems. An output feedback control law for the linear subsystem is derived by state observation. The size of the reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are calculated by the multistep predictive approach. Actual control variables are computed while considering the constraints of the optimal control problem over a finite horizon through the receding horizon. The simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.
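
    For reference, one predict/update cycle of a discrete-time linear Kalman filter of the kind used here to observe the unmeasured state of the linear subsystem (generic textbook form; the system matrices are model-specific and not those of the paper):

    ```python
    import numpy as np

    def kalman_step(x, P, u, y, A, B, C, Q, R):
        """One predict/update cycle of a discrete-time linear Kalman filter."""
        # Predict state and covariance
        x_pred = A @ x + B @ u
        P_pred = A @ P @ A.T + Q
        # Correct with the new measurement y
        S = C @ P_pred @ C.T + R                  # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
        return x_new, P_new
    ```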

  3. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows the reconstruction of non-yrast states, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  4. Short-term and long-term earthquake occurrence models for Italy: ETES, ERS and LTST

    Directory of Open Access Journals (Sweden)

    Maura Murru

    2010-11-01

    This study describes three earthquake occurrence models as applied to the whole Italian territory, to assess the occurrence probabilities of future (M ≥ 5.0) earthquakes: two short-term (24 hour) models, and one long-term (5 and 10 years) model. The first model, for short-term forecasts, is a purely stochastic epidemic-type earthquake sequence (ETES) model. The second short-term model is an epidemic rate-state (ERS) forecast based on a model that is physically constrained by applying the Dieterich rate-state constitutive law to earthquake clustering. The third forecast is based on a long-term stress transfer (LTST) model that considers the perturbations of earthquake probability for interacting faults by static Coulomb stress changes. These models have been submitted to the Collaboratory for the Study of Earthquake Predictability (CSEP) for forecast testing for Italy (ETH-Zurich), and they were locked down to test their validity on real data in a future setting starting from August 1, 2009.

  5. Constraining predictions of tundra permafrost and vegetation through model-data feedbacks and data-assimilation

    Science.gov (United States)

    Davidson, C. D.; Dietze, M.

    2011-12-01

    temperature at which photosynthesis could occur and the allocation of resources to different plant tissues. Based on these findings, a targeted field campaign was conducted at the Toolik LTER. Temperature response curves for the 12 most common species revealed significant carbon assimilation at leaf temperatures below 0 °C for most species. Individual-level plant harvests across six different vegetation classes were used to construct leaf, stem, and root allometries, while individual-level plant growth and plot-level biomass data were incorporated using Bayesian data-assimilation techniques. Model parameters were further constrained by also assimilating eddy-covariance data on carbon and moisture fluxes from the Atqasuk and Barrow Ameriflux towers. Finally, ED2 was validated against long-term plot measurements at Toolik and flux data from Ivotuk and Happy Valley. Projections at each site were made using an ensemble approach in order to propagate model uncertainties. Ongoing work is now focused on regional-scale validation against remotely sensed data and on coupling the ED2 model to the ALFRESCO landscape fire model.

  6. Constraining snow model choices in a transitional snow environment with intensive observations

    Science.gov (United States)

    Wayand, N. E.; Massmann, A.; Clark, M. P.; Lundquist, J. D.

    2014-12-01

    The performance of existing energy balance snow models exhibits a large spread in the simulated snow water equivalent, snow depth, albedo, and surface temperature. Identifying poor model representations of physical processes within intercomparison studies is difficult due to multiple differences between models as well as the non-orthogonal metrics used. Efforts to overcome these obstacles for model development have focused on a modeling framework that allows multiple representations of each physical process within one structure. However, there is still a need for snow study sites within complex terrain that observe enough model states and fluxes to constrain model choices. In this study we focus on an intensive snow observational site located in the maritime-transitional snow climate of Snoqualmie Pass, WA. The transitional zone has previously been identified as a difficult climate in which to simulate snow processes; it therefore represents an ideal model-vetting site. From two water years of intensive observational data, we have learned that an honest comparison with observations requires that modeled states or fluxes match the spatial and temporal domain of the instrument as closely as possible, even if it means changing the model to match what is being observed. For example, 24-hour snow board observations do not capture compaction of the underlying snow; therefore, a modeled "snow board" was created that only includes new snow accumulation and new snow compaction. We extend this method of selective model validation to all available Snoqualmie observations to constrain model choices within the Structure for Understanding Multiple Modeling Alternatives (SUMMA) framework. Our end goal is to provide a more rigorous and systematic method for diagnosing problems within snow models at a site given numerous snow observations.

  7. Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation

    Science.gov (United States)

    Du, Jiaoman; Yu, Lean; Li, Xiang

    2016-04-01

    Hazardous materials transportation is an important public safety issue. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time, and fuel consumption. First, we present the risk model, travel time model, and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.

  8. Constraining a bulk viscous matter-dominated cosmological model using SNe Ia, CMB and LSS

    CERN Document Server

    Avelino, Arturo; Guzmán, F S

    2008-01-01

    We present and constrain a cosmological model whose only component is a pressureless fluid with bulk viscosity, as an explanation for the present accelerated expansion of the universe. We study the particular model of a constant bulk viscosity coefficient \zeta_m. The possible values of \zeta_m are constrained using the cosmological tests of the SNe Ia Gold 2006 sample, the CMB shift parameter R from the three-year WMAP observations, the Baryon Acoustic Oscillation (BAO) peak A from the Sloan Digital Sky Survey (SDSS), and the Second Law of Thermodynamics (SLT). It was found that this model is in agreement with the SLT using only the SNe Ia test. However, when the model is subjected to the three cosmological tests together (SNe+CMB+BAO), the results are: (1) the model violates the SLT; (2) it predicts a value of H_0 \approx 53 km sec^{-1} Mpc^{-1} for the Hubble constant; and (3) we obtain a bad fit to the data with \chi^2_{min} \approx 400 (\chi^2_{d.o.f.} \approx 2.2). These results indicate that this model is ruled out by t...

  9. A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System

    Directory of Open Access Journals (Sweden)

    Yanzhe Hu

    2017-03-01

    As a type of renewable energy, wind energy is integrated into the power system at increasing penetration levels. It is challenging for the power system operators (PSOs) to cope with the uncertainty and variation of the wind power and its forecasts. A chance-constrained economic dispatch (ED) model for the wind-thermal-energy storage system (WTESS) is developed in this paper. An optimization model with the wind power and the energy storage system (ESS) is first established with consideration of both the economic benefits of the system and reduced wind curtailments. The original wind power generation is processed by the ESS to obtain the final wind power output generation (FWPG). A Gaussian mixture model (GMM) distribution is adopted to characterize the probability and cumulative distribution functions with an analytical expression. Then, a chance-constrained ED model integrated with the wind-energy storage system (W-ESS) is developed by considering both the overestimation costs and the underestimation costs of the system, and solved by the sequential linear programming method. Numerical simulations using the wind power data of four wind farms are performed on the developed ED model with the IEEE 30-bus system. It is verified that the developed ED model is effective in integrating the uncertain and variable wind power. The GMM distribution accurately fits the actual distribution of the final wind power output, and the ESS helps effectively decrease the operation costs.
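
    Fitting a GMM to wind power samples and evaluating its analytical CDF (as used in chance constraints of the form Pr{output ≤ p} ≥ α) can be sketched as follows; the synthetic data and component count below are illustrative, not the paper's fitted values:

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for the final wind power output samples (MW).
    rng = np.random.default_rng(1)
    wind_power = np.concatenate([rng.normal(20, 5, 500),
                                 rng.normal(55, 10, 300)])

    gmm = GaussianMixture(n_components=3, random_state=0)
    gmm.fit(wind_power.reshape(-1, 1))

    def gmm_cdf(p):
        """Analytical CDF of the fitted mixture at power level p."""
        return sum(w * norm.cdf(p, m[0], np.sqrt(c[0, 0]))
                   for w, m, c in zip(gmm.weights_, gmm.means_,
                                      gmm.covariances_))
    ```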

  10. Damage detection and model refinement using elemental stiffness perturbations with constrained connectivity

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, S.W.

    1996-04-01

    A new optimal update method for the correlation of dynamic structural finite element models with modal data is presented. The method computes a minimum-rank solution for the perturbations of the elemental stiffness parameters while constraining the connectivity of the global stiffness matrix. The resulting model contains a more accurate representation of the dynamics of the test structure. The changes between the original model and the updated model can be interpreted as modeling errors or as changes in the structure resulting from damage. The motivation for the method is presented in the context of existing optimal matrix update procedures. The method is demonstrated numerically on a spring-mass system and is also applied to experimental data from the NASA Langley 8-bay truss damage detection experiment. The results demonstrate that the proposed procedure may be useful for updating elemental stiffness parameters in the context of damage detection and model refinement.

  11. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by the enquiry for principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning, and memory. In this study, with the help of graph-theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes, and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering, and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs, and the observed number of driver neurons. The distance-constrained model suggests optimum long-distance synaptic connections as a key feature specifying control of the network.
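
    The small-world diagnostics mentioned above (clustering and characteristic path length under rewiring of a 1D ring) are easy to reproduce on a toy graph; a minimal sketch in which the node count, degree and rewiring probability are placeholders rather than parameters fitted to the C. elegans connectome:

    ```python
    import networkx as nx

    # Illustrative 1D ring ("Watts-Strogatz") model of a neuronal network.
    G = nx.connected_watts_strogatz_graph(n=279, k=14, p=0.05, seed=42)

    C = nx.average_clustering(G)             # neuronal clustering
    L = nx.average_shortest_path_length(G)   # characteristic path length
    print(f"clustering C = {C:.3f}, path length L = {L:.2f}")
    ```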

  12. Constrained solution scattering modelling of human antibodies and complement proteins reveals novel biological insights.

    Science.gov (United States)

    Perkins, Stephen J; Okemefuna, Azubuike I; Nan, Ruodan; Li, Keying; Bonner, Alexandra

    2009-10-06

    X-ray and neutron-scattering techniques characterize proteins in solution and complement high-resolution structural studies. They are useful when either a large protein cannot be crystallized, in which case scattering yields a solution structure, or a crystal structure has been determined and requires validation in solution. These solution structures are determined by the application of constrained modelling methods based on known subunit structures. First, an appropriate starting model is generated. Next, its conformation is randomized to generate thousands of models for trial-and-error fits. Comparison with the experimental data identifies a small family of best-fit models. Finally, their significance for biological function is assessed. We illustrate this in application to structure determinations for secretory immunoglobulin A, the most prevalent antibody in the human body and a first line of defence in mucosal immunity. We also discuss the applications to the large multi-domain proteins of the complement system, most notably its major regulator factor H, which is important in age-related macular degeneration and renal diseases. We discuss the importance of complementary data from analytical ultracentrifugation, and structural studies of protein-protein complexes. We conclude that constrained scattering modelling makes useful contributions to our understanding of antibody and complement structure and function.

  13. Maximum entropy production: can it be used to constrain conceptual hydrological models?

    Directory of Open Access Journals (Sweden)

    M. C. Westhoff

    2013-08-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in the literature, generally little guidance has been given on how to apply the principle. The aim of this paper is to use the maximum power principle – which is closely related to MEP – to constrain the parameters of a simple conceptual (bucket) model. Although we had to conclude that conceptual bucket models could not be constrained with respect to maximum power, this study sheds more light on how to use and how not to use the principle. Several of these issues have been correctly applied in other studies, but have not been explained or discussed as such. While other studies were based on resistance formulations, where the quantity to be optimized is a linear function of the resistance to be identified, our study shows that the approach also works for formulations that are only linear in the log-transformed space. Moreover, we showed that parameters describing process thresholds or influencing boundary conditions cannot be constrained. We furthermore conclude that, in order to apply the principle correctly, the model should be (1) physically based, i.e., fluxes should be defined as a gradient divided by a resistance; (2) the optimized flux should have a feedback on the gradient, i.e., the influence of boundary conditions on gradients should be minimal; (3) the temporal scale of the model should be chosen in such a way that the parameter that is optimized is constant over the modelling period; (4) only when the correct feedbacks are implemented can the fluxes be correctly optimized; and (5) there should be a trade-off between two or more fluxes. Although our application of the maximum power principle did
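
    The flux-gradient bookkeeping behind requirement (1) can be written compactly; in this standard form (a sketch of the general principle, not the paper's specific bucket formulation), the power associated with a flux is

    ```latex
    % Flux F driven by a gradient \Delta X across a resistance R,
    % and the associated power P to be maximized over R:
    F = \frac{\Delta X}{R}, \qquad
    P = F \,\Delta X = \frac{(\Delta X)^2}{R}
    ```

    The maximization over R is only meaningful when the gradient ΔX itself responds to the flux, which is requirement (2); with a fixed ΔX, P would grow without bound as R decreases.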

  14. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    Science.gov (United States)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or a constant fraction of it is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (a risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results concern the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, the passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of a surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  15. Using pi_2(1670) -> b_1(1235) pi to Constrain Hadronic Models

    CERN Document Server

    Page, P R; Page, Philip R.; Capstick, Simon

    2003-01-01

    We show that current analyses of experimental data indicate that the strong decay mode pi_2 -> b_1 pi is anomalously small. Non-relativistic quark models with spin-1 quark pair creation, such as ^3P_0, ^3S_1 and ^3D_1 models, as well as instanton and lowest order one-boson (in this case pi) emission models, can accommodate the analyses of experimental data, because of a quark-spin selection rule. Models and effects that violate this selection rule, such as higher order one-boson emission models, as well as mixing with other Fock states, may be constrained by the small pi_2 -> b_1 pi decay. This can provide a viability check on newly proposed decay mechanisms. We show that for mesons made up of a heavy quark and anti-quark, the selection rule is exact to all orders of Quantum Chromodynamics (QCD) perturbation theory.

  16. An inner-outer nonlinear programming approach for constrained quadratic matrix model updating

    Science.gov (United States)

    Andretta, M.; Birgin, E. G.; Raydan, M.

    2016-01-01

    The Quadratic Finite Element Model Updating Problem (QFEMUP) concerns updating a symmetric second-order finite element model so that it remains symmetric and the updated model reproduces a given set of desired eigenvalues and eigenvectors by replacing the corresponding ones from the original model. Taking advantage of the special structure of the constraint set, it is first shown that the QFEMUP can be formulated as a suitable constrained nonlinear programming problem. Using this formulation, a method based on successive optimizations is then proposed and analyzed. To avoid spurious modes (eigenvectors) appearing in the frequency range of interest (eigenvalues) after the model has been updated, additional constraints based on a quadratic Rayleigh quotient are dynamically included in the constraint set. A distinct practical feature of the proposed method is that it can be implemented by computing only a few eigenvalues and eigenvectors of the associated quadratic matrix pencil.

  17. Computational models for simulations of lithium-ion battery cells under constrained compression tests

    Science.gov (United States)

    Ali, Mohammed Yusuf; Lai, Wei-Jen; Pan, Jwo

    2013-11-01

    In this paper, computational models are developed for simulations of representative volume element (RVE) specimens of lithium-ion battery cells under in-plane constrained compression tests. For the cell components in the finite element analyses, the effective compressive moduli are obtained from in-plane constrained compression tests, the Poisson's ratios are based on the rule of mixtures, and the stress-plastic strain curves are obtained from the tensile tests and the rule of mixtures. Gurson's material model is adopted to account for the effect of porosity in the separator and electrode sheets. The computational results show that the models can be used to examine the micro buckling of the component sheets, the macro buckling of the cell RVE specimens, and the formation of the kinks and shear bands observed in experiments, and to simulate the load-displacement curves of the cell RVE specimens. The initial micro buckling mode of the cover sheets in general agrees with that of an approximate elastic buckling solution. Based on the computational models, the effects of friction on the deformation pattern and void compaction are identified. Finally, the effects of the initial clearance and biaxial compression on the deformation patterns of the cell RVE specimens are demonstrated.

  18. Constrained generalized predictive control of battery charging process based on a coupled thermoelectric model

    Science.gov (United States)

    Liu, Kailong; Li, Kang; Zhang, Cheng

    2017-04-01

    Battery temperature is a primary factor affecting battery performance, and suitable battery temperature control, in particular internal temperature control, can not only guarantee battery safety but also improve its efficiency. This is however challenging, as current controller designs for battery charging have no mechanisms to incorporate such information. This paper proposes a novel battery charging control strategy which applies constrained generalized predictive control (GPC) to charge a LiFePO4 battery based on a newly developed coupled thermoelectric model. The control target primarily aims to maintain the battery cell internal temperature within a desirable range while delivering fast charging. To achieve this, the coupled thermoelectric model is first introduced to capture the battery behaviours, in particular the SOC and internal temperature, which are not directly measurable in practice. Then a controlled auto-regressive integrated moving average (CARIMA) model, whose parameters are identified by the recursive least squares (RLS) algorithm, is developed as an online self-tuning predictive model for a GPC controller. The constrained generalized predictive controller is then developed to control the charging current. Experimental results confirm the effectiveness of the proposed control strategy. Further, the best region of heat dissipation rate and proper internal temperature set-points are also investigated and analysed.
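
    The RLS identification step mentioned above has a standard closed form; a minimal sketch in which the forgetting factor and the toy system are assumptions, not values from the paper:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.98):
        """One recursive least squares step for the predictor parameters;
        lam is a forgetting factor (value assumed here)."""
        phi = np.asarray(phi, float).reshape(-1, 1)
        K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
        theta = theta + (K * (y - phi.T @ theta)).ravel()
        P = (P - K @ phi.T @ P) / lam                # covariance update
        return theta, P

    # Toy usage: identify y_t = 0.5*y_{t-1} + 0.3*u_{t-1} online
    theta, P = np.zeros(2), 1000.0 * np.eye(2)
    y_prev, u_prev = 0.0, 1.0
    for t in range(200):
        y = 0.5 * y_prev + 0.3 * u_prev
        theta, P = rls_update(theta, P, [y_prev, u_prev], y)
        y_prev, u_prev = y, np.sin(0.1 * t)  # persistently exciting input
    ```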

  19. Constraining the MIT Bag Model of Quark Matter with Gravitational Wave Observations

    CERN Document Server

    Benhar, O; Gualtieri, L; Marassi, S; Benhar, Omar; Ferrari, Valeria; Gualtieri, Leonardo; Marassi, Stefania

    2006-01-01

    Most theoretical studies of strange stars are based on the MIT bag model of quark matter, whose main parameter, the bag constant B, is only loosely constrained by phenomenology. We discuss the possibility that detection of gravitational waves emitted by a compact star may provide information on both the nature of the source and the value of B. Our results show that the combined knowledge of the frequency of the emitted gravitational wave and of the mass or the radiation radius of the source allows one to discriminate between strange stars and neutron stars and to set stringent bounds on the bag constant.

  20. Bayesian Evaluation of inequality-constrained Hypotheses in SEM Models using Mplus.

    Science.gov (United States)

    van de Schoot, Rens; Hoijtink, Herbert; Hallquist, Michael N; Boelen, Paul A

    2012-10-01

    Researchers in the behavioural and social sciences often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model, resulting in an informative hypothesis. The question they would like answered is "Is the hypothesis correct?" or "Is the hypothesis incorrect?". We demonstrate a Bayesian approach to compare an inequality-constrained hypothesis with its complement in an SEM framework. The method is introduced and its utility is illustrated by means of an example. Furthermore, the influence of the specification of the prior distribution is examined. Finally, it is shown how the proposed approach can be implemented using Mplus.
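
    One common way to compute such a Bayes factor is the encompassing-prior approach, where the Bayes factor of the constrained hypothesis against the unconstrained model is the ratio of posterior to prior mass in agreement with the constraint; a toy Monte Carlo sketch (the Gaussian draws stand in for MCMC output from an actual SEM fit and this is not claimed to be the authors' exact implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy informative hypothesis H1: beta1 > beta2 for two SEM parameters.
    prior = rng.normal(0.0, 10.0, size=(100_000, 2))
    posterior = rng.normal([0.6, 0.2], [0.1, 0.1], size=(100_000, 2))

    c = np.mean(prior[:, 0] > prior[:, 1])          # prior mass agreeing
    f = np.mean(posterior[:, 0] > posterior[:, 1])  # posterior mass agreeing

    bf_h1_vs_unconstrained = f / c
    bf_h1_vs_complement = (f / (1.0 - f)) / (c / (1.0 - c))
    ```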

  1. Spinal 5-HT7 receptors and protein kinase A constrain intermittent hypoxia-induced phrenic long-term facilitation.

    Science.gov (United States)

    Hoffman, M S; Mitchell, G S

    2013-10-10

    Phrenic long-term facilitation (pLTF) is a form of serotonin-dependent respiratory plasticity induced by acute intermittent hypoxia (AIH). pLTF requires spinal Gq protein-coupled serotonin-2 receptor (5-HT2) activation, new synthesis of brain-derived neurotrophic factor (BDNF) and activation of its high-affinity receptor, TrkB. Intrathecal injections of selective agonists for Gs protein-coupled receptors (adenosine 2A and serotonin-7; 5-HT7) also induce long-lasting phrenic motor facilitation via TrkB "trans-activation." Since serotonin released near phrenic motor neurons may activate multiple serotonin receptor subtypes, we tested the hypothesis that 5-HT7 receptor activation contributes to AIH-induced pLTF. A selective 5-HT7 receptor antagonist (SB-269970, 5 mM, 12 μl) was administered intrathecally at C4 to anesthetized, vagotomized and ventilated rats prior to AIH (3, 5-min episodes, 11% O2). Contrary to predictions, pLTF was greater in SB-269970-treated versus control rats (80 ± 11% versus 45 ± 6% 60 min post-AIH; p < 0.05). Hypoglossal LTF was unaffected by spinal 5-HT7 receptor inhibition, suggesting that the drug effects were localized to the spinal cord. Since 5-HT7 receptors are coupled to protein kinase A (PKA), we tested the hypothesis that PKA inhibits AIH-induced pLTF. Similar to 5-HT7 receptor inhibition, spinal PKA inhibition (KT-5720, 100 μM, 15 μl) enhanced pLTF (99 ± 15% 60 min post-AIH; p < 0.05). We conclude that spinal 5-HT7 receptors constrain AIH-induced pLTF via PKA activity.

  2. Constraining the EoR model parameters with the 21cm bispectrum

    CERN Document Server

    Shimabukuro, Hayato; Takahashi, Keitaro; Yokoyama, Shuichiro; Ichiki, Kiyotomo

    2016-01-01

    We perform a Fisher analysis to estimate the expected constraints on the Epoch of Reionization (EoR) model parameters (minimum virial temperature, ionizing efficiency, mean free path of ionizing photons), taking into account the thermal noise of ongoing telescopes, MWA and LOFAR. We consider how the inclusion of the 21cm bispectrum improves the constraints compared with the power spectrum alone. Under the assumption of perfect foreground removal, we find that the bispectrum, which is calculated by 21cmFAST, can constrain the EoR model parameters more tightly than the power spectrum, since the bispectrum is more sensitive to the EoR model parameters. We also find that the degeneracy among the EoR model parameters can be broken by combining the bispectrum with the power spectrum.
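
    A generic Fisher forecast of this type reduces to derivatives of the observable with respect to the parameters, weighted by the noise; a minimal numerical sketch (the Gaussian, diagonal-noise form below is a common simplification, and the binning and noise model are placeholders):

    ```python
    import numpy as np

    def fisher_matrix(model, theta0, sigma, h=1e-3):
        """Gaussian Fisher matrix with diagonal noise covariance:
        F_ij = sum_k (dO_k/dtheta_i)(dO_k/dtheta_j) / sigma_k^2,
        using central finite differences for the derivatives."""
        theta0 = np.asarray(theta0, dtype=float)
        derivs = []
        for i in range(len(theta0)):
            dt = np.zeros_like(theta0)
            dt[i] = h * max(abs(theta0[i]), 1.0)
            derivs.append((model(theta0 + dt) - model(theta0 - dt)) / (2 * dt[i]))
        return np.array([[np.sum(di * dj / sigma ** 2) for dj in derivs]
                         for di in derivs])

    # Toy usage with a two-parameter power-law "spectrum"
    k = np.linspace(0.1, 1.0, 10)
    F = fisher_matrix(lambda th: th[0] * k ** th[1], [1.0, -2.0],
                      sigma=0.05 * np.ones_like(k))
    cov_forecast = np.linalg.inv(F)   # forecast parameter covariance
    ```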

  3. Constraining decaying dark energy density models with the CMB temperature-redshift relation

    CERN Document Server

    Jetzer, Philippe

    2012-01-01

    We discuss the thermodynamic and dynamical properties of a variable dark energy model with density scaling as $\rho_x \propto (1+z)^{m}$, with z being the redshift. These models lead to the creation/disruption of matter and radiation, which affects the cosmic evolution of both the matter and radiation components of the Universe. In particular, we have studied the temperature-redshift relation of the radiation, which has been constrained using a recent collection of cosmic microwave background (CMB) temperature measurements up to $z \sim 3$. We find that, within the uncertainties, the model is indistinguishable from a cosmological constant that does not exchange any particles with other components. Future observations, in particular measurements of the CMB temperature at large redshift, will allow firmer bounds to be placed on the effective equation of state parameter $w_{eff}$ for such types of dark energy models.

  4. Good initialization model with constrained body structure for scene text recognition

    Science.gov (United States)

    Zhu, Anna; Wang, Guoyou; Dong, Yangbo

    2016-09-01

    Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition are the premise of text recognition and affect the overall performance to a large extent. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters against various backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in the cropped text region. The evaluation results on benchmark datasets demonstrate that our proposed scheme outperforms state-of-the-art methods on both scene character recognition and word recognition.

  5. An Accurate Multimoment Constrained Finite Volume Transport Model on Yin-Yang Grids

    Institute of Scientific and Technical Information of China (English)

    LI Xingliang; SHEN Xueshun; PENG Xindong; XIAO Feng; ZHUANG Zhaorong; CHEN Chungang

    2013-01-01

    A global transport model is proposed in which a multimoment constrained finite volume (MCV) scheme is applied to a Yin-Yang overset grid. The MCV scheme defines 16 degrees of freedom (DOFs) within each element to build a 2D cubic reconstruction polynomial. The time evolution equations for the DOFs are derived from constraint conditions on moments of line-integrated averages (LIA), point values (PV), and values of first-order derivatives (DV). The Yin-Yang grid eliminates polar singularities and results in a quasi-uniform mesh. A limiting projection is designed to remove nonphysical oscillations around discontinuities. Our model was tested against widely used benchmarks; the competitive results reveal that the model is accurate and promising for developing general circulation models.

  6. Finite Element Modeling of a Fluid Filled Cylindrical Shell with Active Constrained Layer Damping

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yi; ZHANG Zhi-yi; TONG Zong-peng; HUA Hong-xing

    2005-01-01

    On the basis of piezoelectric theory, Mindlin plate theory, viscoelastic theory, and the ideal fluid equation, the finite element modeling of a fluid-filled cylindrical shell with active constrained layer damping (ACLD) is discussed. Energy methods and Lagrange's equation were used to obtain the dynamic equations of the cylindrical shell with ACLD treatments, which was also modeled with the finite element method. The GHM (Golla-Hughes-McTavish) method was applied to model the frequency-dependent damping of the viscoelastic material. An ideal and incompressible fluid was considered to establish the dynamic equations of the fluid-filled cylindrical shell with ACLD treatments. Numerical results obtained from the finite element analysis were compared with those from an experiment. The comparison shows that the proposed modeling method is accurate and reliable.

  7. Multi-variate spatial explicit constraining of a large scale hydrological model

    Science.gov (United States)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    The increased availability and quality of near real-time data should lead to a better understanding of the predictive skill of distributed hydrological models. Nevertheless, the prediction of regional-scale water fluxes and states remains a great challenge for the scientific community. Large-scale hydrological models are used for the prediction of soil moisture, evapotranspiration, and other related water states and fluxes. They are usually properly constrained against river discharge, which is an integral variable. Rakovec et al. (2016) recently demonstrated that constraining model parameters against river discharge is a necessary, but not a sufficient condition. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of hydrological processes. It is important to analyze how complementary datasets besides observed streamflow and related signature measures can improve the model skill for internal model variables during parameter estimation. Among the products suitable for further scrutiny are, for example, the GRACE satellite observations. Recent developments in using this dataset in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that the model performance for discharge remains unchanged when complemented by information from the GRACE product (at both daily and monthly time steps). Moreover, the complementary GRACE data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product. We also show that the choice of the objective function used to estimate

  8. Are we unnecessarily constraining the agility of complex process-based models?

    Science.gov (United States)

    Mendoza, Pablo A.; Clark, Martyn P.; Barlage, Michael; Rajagopalan, Balaji; Samaniego, Luis; Abramowitz, Gab; Gupta, Hoshin

    2015-01-01

    In this commentary we suggest that hydrologists and land-surface modelers may be unnecessarily constraining the behavioral agility of very complex physics-based models. We argue that the relatively poor performance of such models can occur due to restrictions on their ability to refine their portrayal of physical processes, in part because of strong a priori constraints in: (i) the representation of spatial variability and hydrologic connectivity, (ii) the choice of model parameterizations, and (iii) the choice of model parameter values. We provide a specific example of problems associated with strong a priori constraints on parameters in a land surface model. Moving forward, we assert that improving hydrological models requires integrating the strengths of the "physics-based" modeling philosophy (which relies on prior knowledge of hydrologic processes) with the strengths of the "conceptual" modeling philosophy (which relies on data driven inference). Such integration will accelerate progress on methods to define and discriminate among competing modeling options, which should be ideally incorporated in agile modeling frameworks and tested through a diagnostic evaluation approach.

  9. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  10. Constrained predictive control based on T-S fuzzy model for nonlinear systems

    Institute of Scientific and Technical Information of China (English)

    Su Baili; Chen Zengqiang; Yuan Zhuzhi

    2007-01-01

    A constrained generalized predictive control (GPC) algorithm based on the T-S fuzzy model is presented for nonlinear systems. First, a Takagi-Sugeno (T-S) fuzzy model based on the fuzzy clustering algorithm and the orthogonal least squares method is constructed to approximate the nonlinear system. Since its consequent is linear, it can divide the nonlinear system into a number of linear or nearly linear subsystems. For this T-S fuzzy model, a GPC algorithm with input constraints is presented. This strategy takes into account all the constraints on the control signal and its increment, and does not require the calculation of Diophantine equations. It therefore needs only a small amount of computer memory, and the computational speed is high. The simulation results show good performance for nonlinear systems.

  11. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    Science.gov (United States)

    Ding, Lu; Luís Deán-Ben, X.; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-09-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues, and imperfections of the forward model. These factors result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitative accuracy with respect to the unconstrained approach. The study validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency.
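
    As a minimal illustration of why a non-negativity constraint matters in this kind of linear model-based inversion, the sketch below compares an unconstrained least-squares solution with a non-negative one on a synthetic system; the forward matrix, noise level, and sizes are invented for illustration and are not taken from the paper.

```python
# Toy comparison: unconstrained vs. non-negative inversion of A @ x = b.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.random((100, 50))                             # toy forward model
x_true = np.clip(rng.normal(0.5, 0.5, 50), 0, None)   # non-negative absorption
b = A @ x_true + 0.01 * rng.normal(size=100)          # noisy measurements

x_uncon, *_ = np.linalg.lstsq(A, b, rcond=None)       # may contain negatives
x_nn, _ = nnls(A, b)                                  # non-negative solution

print("negative entries, unconstrained:", int((x_uncon < 0).sum()))
print("negative entries, constrained:  ", int((x_nn < 0).sum()))
```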

  12. A Method to Constrain Genome-Scale Models with 13C Labeling Data.

    Directory of Open Access Journals (Sweden)

    Héctor García Martín

    2015-09-01

    Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint-Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems.
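
    The sketch below shows, in the most generic FBA-style form, how a flux measured by 13C labeling can enter a genome-scale linear program as a bound; the five-reaction stoichiometry and all numbers are invented, and the paper's actual two-scale method differs from this plain LP.

```python
# Toy flux-balance LP with one 13C-constrained flux imposed as a bound.
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 -> A; A -> B (v2); A -> C (v3); B -> out (v4); C -> out (v5)
S = np.array([
    [1, -1, -1,  0,  0],   # metabolite A steady-state balance
    [0,  1,  0, -1,  0],   # metabolite B
    [0,  0,  1,  0, -1],   # metabolite C
], dtype=float)
bounds = [(0, 10)] * 5
bounds[1] = (2.9, 3.1)     # hypothetical 13C-measured flux for v2

c = np.zeros(5); c[3] = -1.0   # maximize output flux v4
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print("fluxes:", res.x.round(2))
```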

  13. A Bayesian Chance-Constrained Method for Hydraulic Barrier Design Under Model Structure Uncertainty

    Science.gov (United States)

    Chitsazan, N.; Pham, H. V.; Tsai, F. T. C.

    2014-12-01

    The groundwater community has widely recognized model structure uncertainty as the major source of model uncertainty in groundwater modeling. Previous studies of aquifer remediation design, however, rarely discuss the impact of model structure uncertainty. This study combines chance-constrained (CC) programming with Bayesian model averaging (BMA) in a BMA-CC framework to assess the effect of model structure uncertainty on remediation design. To investigate this impact, we compare the BMA-CC method with traditional CC programming, which considers only model parameter uncertainty. The BMA-CC method is employed to design a hydraulic barrier to protect the public supply wells of the Government St. pump station from saltwater intrusion in the "1,500-foot" sand and the "1,700-foot" sand of the Baton Rouge area, southeastern Louisiana. To address model structure uncertainty, we develop three conceptual groundwater models based on three different hydrostratigraphic structures. The results show that using traditional CC programming overestimates design reliability. The results also show that at least five additional connector wells are needed to achieve a design reliability above 90%. The total amount of injected water from the connector wells is higher than the total pumpage of the protected public supply wells. While the injection rate can be reduced by lowering the reliability level, the study finds that the hydraulic barrier design to protect the Government St. pump station is not economically attractive.
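
    A hedged sketch of the core BMA-CC idea follows: the reliability of a chance constraint is evaluated under a mixture of conceptual models rather than a single model. The model weights, predictions, and threshold are all invented for illustration.

```python
# Monte Carlo estimate of chance-constraint reliability under BMA.
import numpy as np

rng = np.random.default_rng(1)
model_weights = np.array([0.5, 0.3, 0.2])   # posterior model probabilities
model_means = np.array([1.8, 2.1, 2.4])     # predicted head per model (m)
model_stds = np.array([0.2, 0.3, 0.25])     # parameter uncertainty per model
threshold = 2.3                             # design limit at a control point

# BMA predictive distribution: mixture of the per-model predictions.
n = 100_000
component = rng.choice(3, size=n, p=model_weights)
samples = rng.normal(model_means[component], model_stds[component])

reliability = np.mean(samples <= threshold)
print(f"BMA design reliability: {reliability:.3f}")   # compare against 0.90
```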

  14. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed for implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
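
    The standard deterministic reformulation behind this kind of chance constraint is sketched below (generic textbook form, not the paper's exact model): with Gaussian demand, P(stock >= demand) >= 1 - alpha becomes a quantile bound. All numbers are invented.

```python
# Deterministic equivalent of a Gaussian chance constraint on stock.
import numpy as np
from scipy.stats import norm

mu, sigma = 120.0, 25.0    # forecast daily demand for one drug (units)
alpha = 0.05               # allowed stock-out probability

min_stock = mu + norm.ppf(1 - alpha) * sigma   # quantile reformulation
order = max(0.0, min_stock - 90.0)             # current stock of 90 (invented)
print(f"minimum stock: {min_stock:.1f} units, order quantity: {order:.1f}")
```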

  15. Using Gas Kinematics To Constrain 3D Models of Disks: IC 2531

    CERN Document Server

    Eigenbrot, Arthur

    2013-01-01

    We use deep, longslit spectra of the nearby edge-on galaxy IC 2531 to obtain gas kinematics out to 5 radial scale-lengths (40 kpc) and 4 vertical scale-heights (1.7 kpc). The large vertical range spanned by our data offers unique leverage to constrain three-dimensional models. The shapes of the observed emission-line profiles offer insights into line-of-sight density distributions in the disk, and we discuss the possibility that we are seeing disk flaring in the ionized gas. Finally, we begin to quantify measurements of line shape to allow model galaxies to be compared to data across all radii and heights simultaneously.

  16. Structural model of the Northern Latium volcanic area constrained by MT, gravity and aeromagnetic data

    Directory of Open Access Journals (Sweden)

    P. Gasparini

    1997-06-01

    The results of about 120 magnetotelluric soundings carried out in the Vulsini, Vico and Sabatini volcanic areas were modeled along with Bouguer and aeromagnetic anomalies to reconstruct a model of the structure of the shallow crust (less than 5 km depth). The interpretations were constrained by information gathered from the deep boreholes drilled for geothermal exploration. MT and aeromagnetic anomalies allow the depth to the top of the sedimentary basement and the thickness of the volcanic layer to be inferred. Gravity anomalies are strongly affected by variations in the morphology of the top of the sedimentary basement, consisting of a Tertiary flysch, and of the interface with the underlying Mesozoic carbonates. Gravity data have also been used to extrapolate the thickness of the Neogene unit indicated by some boreholes. There is no evidence for other important density and susceptibility heterogeneities or deeper sources of magnetic and/or gravity anomalies anywhere in the surveyed area.

  17. Angle- and distance-constrained matcher with parallel implementations for model-based vision

    Science.gov (United States)

    Anhalt, David J.; Raney, Steven; Severson, William E.

    1992-02-01

    The matching component of a model-based vision system hypothesizes one-to-one correspondences between 2D image features and locations on the 3D model. As part of Wright Laboratory's ARAGTAP program [a synthetic aperture radar (SAR) object recognition program], we developed a matcher that searches for feature matches based on the hypothesized object type and aspect angle. Search is constrained by the presumed accuracy of the hypothesized aspect angle and scale. These constraints reduce the search space for matches, thus improving match performance and quality. The algorithm is presented and compared with a matcher based on geometric hashing. Parallel implementations on commercially available shared memory MIMD machines, distributed memory MIMD machines, and SIMD machines are presented and contrasted.

  18. Constraining the top-Higgs sector of the Standard Model Effective Field Theory

    CERN Document Server

    Cirigliano, V; de Vries, J; Mereghetti, E

    2016-01-01

    Working in the framework of the Standard Model Effective Field Theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model Effective Field Theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing m...

  19. An Efficient Constrained Model Predictive Control Algorithm Based on Approximate Computation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The on-line computational burden related to model predictive control (MPC) of large-scale constrained systems hampers its real-time application and limits it to slow dynamic processes with a moderate number of inputs. To avoid this, an efficient and fast algorithm based on aggregation optimization is proposed in this paper. It optimizes only the current control action at time instant k, while the other future control moves in the optimization horizon are approximated off-line by a linear feedback control sequence, so the on-line optimization can be converted into a low-dimensional quadratic programming problem. Input constraints can be handled well in this scheme. Performance comparable to the existing standard model predictive control algorithm is achieved. Simulation results demonstrate its effectiveness.
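
    A minimal sketch of the aggregation idea on an invented scalar system follows: only the current input is optimized, while the tail of the horizon follows a fixed off-line feedback law, collapsing the MPC problem to a one-dimensional bounded optimization.

```python
# Aggregated MPC: optimize u_0 only; the tail uses u = -K x.
import numpy as np
from scipy.optimize import minimize_scalar

A, B = 0.9, 0.5            # toy scalar system x+ = A x + B u
K = 0.4                    # off-line feedback for the tail of the horizon
x0, N = 5.0, 10            # current state, prediction horizon
u_min, u_max = -1.0, 1.0   # input constraints

def cost(u0):
    x, J = x0, 0.0
    for k in range(N):
        u = u0 if k == 0 else -K * x   # tail follows the feedback law
        J += x * x + 0.1 * u * u
        x = A * x + B * u
    return J + x * x                   # terminal penalty

res = minimize_scalar(cost, bounds=(u_min, u_max), method="bounded")
print(f"optimal current input: {res.x:.3f}")
```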

  20. A Dynamic Economic Dispatch Model Incorporating Wind Power Based on Chance Constrained Programming

    Directory of Open Access Journals (Sweden)

    Wushan Cheng

    2014-12-01

    In order to maintain the stability and security of the power system, the uncertainty and intermittency of wind power must be taken into account in economic dispatch (ED) problems. In this paper, a dynamic economic dispatch (DED) model based on chance-constrained programming is presented and an improved particle swarm optimization (PSO) approach is proposed to solve the problem. Wind power is regarded as a random variable and is included in the chance constraint. New formulations of the up- and down-spinning reserve constraints are presented in terms of expectations. The improved PSO algorithm combines a feasible-region adjustment strategy with a hill-climbing search operation built on the basic PSO. Simulations are performed on three distinct test systems with different generators. The results show that both the proposed DED model and the improved PSO approach are effective.
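
    For orientation, a bare-bones textbook PSO is sketched below on a toy two-generator dispatch problem with a quadratic penalty for the power-balance constraint; it omits the paper's feasible-region adjustment and hill-climbing refinements, and all cost coefficients are invented.

```python
# Generic penalty-based PSO for a toy two-generator dispatch problem.
import numpy as np

rng = np.random.default_rng(2)
demand = 300.0

def cost(p):  # p has shape (n_particles, 2): outputs of two generators
    fuel = 0.01 * p[:, 0]**2 + 8 * p[:, 0] + 0.02 * p[:, 1]**2 + 6 * p[:, 1]
    penalty = 1e3 * (p.sum(axis=1) - demand)**2   # power-balance penalty
    return fuel + penalty

n, lo, hi = 30, 50.0, 250.0            # swarm size and generator limits
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), cost(pos)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = cost(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("dispatch:", gbest.round(1), "total:", gbest.sum().round(1))
```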

  1. Thermodynamically Constrained Averaging Theory (TCAT) Two-Phase Flow Model: Derivation, Closure, and Simulation Results

    Science.gov (United States)

    Weigand, T. M.; Miller, C. T.; Dye, A. L.; Gray, W. G.; McClure, J. E.; Rybak, I.

    2015-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we solve the model using a locally conservative numerical scheme and compare the TCAT model to the traditional model.

  2. Source apportionment of urban air pollutants using constrained receptor models with a priori profile information.

    Science.gov (United States)

    Liao, Ho-Tang; Yau, Yu-Chen; Huang, Chun-Sheng; Chen, Nathan; Chow, Judith C; Watson, John G; Tsai, Shih-Wei; Chou, Charles C-K; Wu, Chang-Fu

    2017-08-01

    Exposure to air pollutants such as volatile organic compounds (VOCs) and fine particulate matter (PM2.5) is associated with adverse health effects. This study applied multiple-time-resolution data of hourly VOCs and 24-h PM2.5 to a constrained Positive Matrix Factorization (PMF) model for source apportionment in Taipei, Taiwan. Ninety-two daily PM2.5 samples and 2208 hourly VOC measurements were collected during four seasons in 2014 and 2015. With some a priori information, we used different procedures to constrain retrieved factors toward realistic sources. A total of nine source factors were identified: natural gas/liquefied petroleum gas (LPG) leakage, solvent use/industrial processes, contaminated marine aerosol, secondary aerosol/long-range transport, oil combustion, traffic-related sources, evaporative gasoline emissions, gasoline exhaust, and soil dust. The results showed that solvent use/industrial processes were the largest contributor (19%) to VOCs, while the largest contributor to PM2.5 mass was secondary aerosol/long-range transport (57%). A robust regression analysis showed that secondary aerosol was mostly attributable to the regional-transport-related factor (25%). Copyright © 2017 Elsevier Ltd. All rights reserved.
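
    The constraining idea can be illustrated with plain non-negative matrix factorization, where one factor profile is pinned to an a priori source signature while the others are free; this is only a sketch of the concept, since PMF additionally weights residuals by measurement uncertainty, which is omitted here, and all data are synthetic.

```python
# Profile-constrained NMF: factor 0 is fixed to a known source profile.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_species, k = 200, 12, 3
X = rng.random((n_samples, n_species)) + 0.1   # toy concentration matrix

G = rng.random((n_samples, k)) + 0.1           # source contributions
F = rng.random((k, n_species)) + 0.1           # source profiles
F_known = rng.random(n_species) + 0.1          # a priori profile (invented)
F[0] = F_known                                 # constrain factor 0

for _ in range(500):
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)     # update contributions
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)     # update free profiles...
    F[0] = F_known                             # ...then re-impose constraint

print("residual norm:", np.linalg.norm(X - G @ F).round(3))
```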

  3. A model to constrain 21st Century sea level rise from tidewater glaciers

    Science.gov (United States)

    Ultee, E.; Bassis, J. N.

    2016-12-01

    Tidewater glaciers are large contributors to global mean sea level rise, both in their own right (e.g. Columbia Glacier, Alaska) and as outlets of the continental ice sheets. Tidewater glaciers are channeled through narrow fjords (~10⁰-10¹ km) that are difficult to resolve in continental-scale ice sheet models, hindering sea level rise projections. Moreover, tidewater glaciers respond to difficult-to-resolve local variables, such as precipitation rate and ocean forcing. Here we present a "flowline" model for networks of tidewater glaciers based on Nye's perfect plastic approximation, and we describe how it can be applied to generate constraints on the glaciological contribution to 21st Century sea level rise. The model can be forced with modeled or observed surface mass balance, or coupled with an ice sheet model upstream. Several test cases from Alaska and Greenland demonstrate our model's performance, and we illustrate how adjustments to the sole model parameter can constrain the decade- to century-scale ice flux to the ocean.
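
    The textbook form of Nye's perfect-plastic profile that underlies this kind of flowline model is easy to sketch (generic, not the authors' network code): with yield stress tau0, the surface obeys rho*g*h*dh/dx = -tau0, giving h(x) = sqrt(2*tau0*(L - x)/(rho*g)) for a terminus at x = L.

```python
# Perfect-plastic glacier surface profile along a flowline.
import numpy as np

rho, g, tau0 = 917.0, 9.81, 100e3   # ice density, gravity, 100 kPa yield stress
L = 40e3                            # flowline length to the terminus (m)

x = np.linspace(0.0, L, 200)
h = np.sqrt(2.0 * tau0 * (L - x) / (rho * g))

print(f"ice thickness at the divide: {h[0]:.0f} m")   # ~940 m for these values
```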

  4. A multidimensional item response model : Constrained latent class analysis using the Gibbs sampler and posterior predictive checks

    NARCIS (Netherlands)

    Hoijtink, H; Molenaar, IW

    1997-01-01

    In this paper it will be shown that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. The parameters of this latent class model will be estimated using an application of the Gibbs sampler. It will be illust
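
    A generic Gibbs sampler for a simple two-class latent class model with binary items and Beta(1,1) priors is sketched below; it illustrates the sampling machinery only and does not implement the paper's constrained model. Data are simulated.

```python
# Gibbs sampler for a two-class latent class model with binary items.
import numpy as np

rng = np.random.default_rng(4)
n, J = 300, 5
true_z = rng.random(n) < 0.4
true_p = np.where(true_z[:, None], 0.8, 0.2)
Y = (rng.random((n, J)) < true_p).astype(int)   # simulated item responses

pi = 0.5                                  # class-1 weight
theta = rng.uniform(0.2, 0.8, (2, J))     # item probabilities per class
for sweep in range(1000):
    # 1) sample class memberships z | theta, pi
    log_lik = Y @ np.log(theta.T) + (1 - Y) @ np.log(1 - theta.T)
    log_post = log_lik + np.log([1 - pi, pi])
    prob1 = 1.0 / (1.0 + np.exp(log_post[:, 0] - log_post[:, 1]))
    z = rng.random(n) < prob1
    # 2) sample theta, pi | z (conjugate Beta updates)
    for c in (0, 1):
        m = z == bool(c)
        theta[c] = rng.beta(1 + Y[m].sum(axis=0), 1 + (1 - Y[m]).sum(axis=0))
    pi = rng.beta(1 + z.sum(), 1 + (~z).sum())

print("estimated item probabilities:\n", theta.round(2))
```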

  5. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    Energy Technology Data Exchange (ETDEWEB)

    Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, the Pennsylvania State University, University Park, PA 16802, United States (United States); Mahdi, Adam, E-mail: amahdi@ncsu.edu [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Majda, Andrew J., E-mail: jonjon@cims.nyu.edu [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States)

    2014-01-15

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
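
    The core update used in this kind of filter can be sketched in a few lines: a single stochastic ensemble Kalman analysis step in which the state vector is augmented with an unknown model coefficient so that both are corrected by a noisy observation. The toy model and all numbers below are invented, and the paper's full algorithm (including noise-covariance estimation) is considerably richer.

```python
# One stochastic EnKF analysis step with state-parameter augmentation.
import numpy as np

rng = np.random.default_rng(5)
n_ens = 100
# ensemble rows: [state x, parameter a] for the toy model x+ = a*x + noise
ens = np.column_stack([rng.normal(1.0, 0.5, n_ens),
                       rng.normal(0.5, 0.2, n_ens)])
H = np.array([[1.0, 0.0]])          # we observe x only (partial observation)
R = np.array([[0.1**2]])            # observation noise covariance
y_obs = np.array([1.4])

# forecast each member, then update with perturbed observations
ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0, 0.05, n_ens)
A = ens - ens.mean(axis=0)
P = A.T @ A / (n_ens - 1)                        # ensemble covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
y_pert = y_obs + rng.normal(0, 0.1, (n_ens, 1))  # perturbed observations
ens += (y_pert - ens @ H.T) @ K.T

print("posterior mean [x, a]:", ens.mean(axis=0).round(3))
```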

  6. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation…
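
    The splitting pattern is the classic ADMM alternation between an equality-constrained quadratic solve and a projection onto the input bounds. The sketch below applies it to a generic box-constrained QP as a simplified stand-in; the paper's Riccati-based solver for the LQCP step is replaced here by a dense Cholesky solve, and H, g, and the bounds are invented.

```python
# ADMM for: minimize 0.5 z'Hz + g'z  subject to  lb <= z <= ub.
import numpy as np

rng = np.random.default_rng(6)
n, rho = 20, 1.0
M = rng.normal(size=(n, n))
H = M @ M.T + np.eye(n)              # positive definite Hessian
g = rng.normal(size=n)
lb, ub = -0.5 * np.ones(n), 0.5 * np.ones(n)

z = np.zeros(n); u = np.zeros(n)     # consensus variable, scaled dual
chol = np.linalg.cholesky(H + rho * np.eye(n))
for _ in range(200):
    rhs = -g + rho * (z - u)
    x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))  # quadratic solve
    z = np.clip(x + u, lb, ub)                               # projection step
    u += x - z                                               # dual update

print("max bound violation:", np.max(np.maximum(z - ub, lb - z)))
print("first entries:", z[:5].round(3))
```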

  7. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Science.gov (United States)

    Wang, Yin; Jiang, Han

    2014-01-01

    We present a nonparametric shape-constrained algorithm for the segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of a circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation corresponds to a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets show that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy. PMID:24803950

  8. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  9. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby exploiting spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of each pixel's posterior class probability can be obtained using loopy belief propagation. The experiments show that the proposed algorithm can effectively maintain the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.

  10. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    We present a nonparametric shape-constrained algorithm for the segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of a circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation corresponds to a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets show that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  11. GMIN: A computerized chemical equilibrium model using a constrained minimization of the Gibbs free energy

    Energy Technology Data Exchange (ETDEWEB)

    Felmy, A.R.

    1990-04-01

    This document is a user's manual and technical reference for the computerized chemical equilibrium model GMIN. GMIN calculates the chemical composition of systems composed of pure solid phases, solid-solution phases, gas phases, adsorbed phases, and the aqueous phase. In the aqueous phase model, the excess solution free energy is modeled using the equations developed by Pitzer and his coworkers, which are valid to high ionic strengths. The Davies equation can also be used. Activity coefficients for nonideal solid-solution phases are calculated using the parameters of a polynomial expansion, in mole fraction, of the excess free energy of mixing. The free energy of adsorbed-phase species is described by the triple-layer site-binding model. The mathematical algorithm incorporated into GMIN is based upon a constrained minimization of the Gibbs free energy. This algorithm is numerically stable and reliably converges to a free energy minimum. The database for GMIN contains all standard chemical potentials and Pitzer ion-interaction parameters necessary to model the system Na-K-Ca-Mg-H-Cl-SO{sub 4}-CO{sub 2}-B(OH){sub 4}-H{sub 2}O at 25{degrees}C.
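
    The core computation can be illustrated with an ideal-mixture toy version of the same idea (not GMIN's database or Pitzer activity model): minimize the Gibbs free energy of a gas phase subject to element-balance constraints, here for the water-gas shift species at a fixed temperature. The mu0 values below are rough placeholders.

```python
# Constrained Gibbs free energy minimization with element balances A n = b.
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1000.0
species = ["CO", "H2O", "CO2", "H2"]
mu0 = np.array([-200e3, -193e3, -396e3, 0.0])   # J/mol, illustrative only
A = np.array([[1, 0, 1, 0],                     # C balance
              [1, 1, 2, 0],                     # O balance
              [0, 2, 0, 2]], dtype=float)       # H balance
b = A @ np.array([1.0, 1.0, 0.0, 0.0])          # start: 1 mol CO + 1 mol H2O

def gibbs(n):
    n = np.maximum(n, 1e-12)                    # guard the logarithm
    return np.sum(n * (mu0 + R * T * np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(4, 0.5),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(1e-10, None)] * 4, method="SLSQP")
print(dict(zip(species, res.x.round(3))))
```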

  12. Data-constrained Coronal Mass Ejections in a Global Magnetohydrodynamics Model

    Science.gov (United States)

    Jin, M.; Manchester, W. B.; van der Holst, B.; Sokolov, I.; Tóth, G.; Mullinix, R. E.; Taktakishvili, A.; Chulaki, A.; Gombosi, T. I.

    2017-01-01

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  13. Data assimilation constrains new connections and components in a complex, eukaryotic circadian clock model

    Science.gov (United States)

    Pokhilko, Alexandra; Hodge, Sarah K; Stratford, Kevin; Knox, Kirsten; Edwards, Kieron D; Thomson, Adrian W; Mizuno, Takeshi; Millar, Andrew J

    2010-01-01

    Circadian clocks generate 24-h rhythms that are entrained by the day/night cycle. Clock circuits include several light inputs and interlocked feedback loops, with complex dynamics. Multiple biological components can contribute to each part of the circuit in higher organisms. Mechanistic models with morning, evening and central feedback loops have provided a heuristic framework for the clock in plants, but were based on transcriptional control. Here, we model observed, post-transcriptional and post-translational regulation and constrain many parameter values based on experimental data. The model's feedback circuit is revised and now includes PSEUDO-RESPONSE REGULATOR 7 (PRR7) and ZEITLUPE. The revised model matches data in varying environments and mutants, and gains robustness to parameter variation. Our results suggest that the activation of important morning-expressed genes follows their release from a night inhibitor (NI). Experiments inspired by the new model support the predicted NI function and show that the PRR5 gene contributes to the NI. The multiple PRR genes of Arabidopsis uncouple events in the late night from light-driven responses in the day, increasing the flexibility of rhythmic regulation. PMID:20865009

  14. Introducing COZIGAM: An R Package for Unconstrained and Constrained Zero-Inflated Generalized Additive Model Analysis

    Directory of Open Access Journals (Sweden)

    Hai Liu

    2010-10-01

    The zero-inflation problem is very common in ecological studies as well as in other areas. Nonparametric regression with zero-inflated data may be studied via the zero-inflated generalized additive model (ZIGAM), which assumes that the zero-inflated responses come from a probabilistic mixture of zero and a regular component whose distribution belongs to the 1-parameter exponential family. With the further assumption that the probability of non-zero-inflation is some monotonic function of the mean of the regular component, we propose the constrained zero-inflated generalized additive model (COZIGAM) for analyzing zero-inflated data. When the hypothesized constraint obtains, the new approach provides a unified framework for modeling zero-inflated data, which is more parsimonious and efficient than the unconstrained ZIGAM. We have developed an R package, COZIGAM, which contains functions that implement an iterative algorithm for fitting ZIGAMs and COZIGAMs to zero-inflated data based on the penalized likelihood approach. Other functions included in the package are useful for model prediction and model selection. We demonstrate the use of the COZIGAM package via some simulation studies and a real application.

  15. Long-term dynamics simulation: Modeling requirements

    Energy Technology Data Exchange (ETDEWEB)

    Morched, A.S.; Kar, P.K.; Rogers, G.J.; Morison, G.K. (Ontario Hydro, Toronto, ON (Canada))

    1989-12-01

    This report details the required performance and modelling capabilities of a computer program intended for the study of the long term dynamics of power systems. Following a general introduction which outlines the need for long term dynamic studies, the modelling requirements for the conduct of such studies is discussed in detail. Particular emphasis is placed on models for system elements not normally modelled in power system stability programs, which will have a significant impact in the long term time frame of minutes to hours following the initiating disturbance. The report concludes with a discussion of the special computational and programming requirements for a long term stability program. 43 refs., 36 figs.

  16. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. In particular

  17. Constraining Type Ia supernova models: SN 2011fe as a test case

    CERN Document Server

    Roepke, F K; Seitenzahl, I R; Pakmor, R; Sim, S A; Taubenberger, S; Ciaraldi-Schoolmann, F; Hillebrandt, W; Aldering, G; Antilogus, P; Baltay, C; Benitez-Herrera, S; Bongard, S; Buton, C; Canto, A; Cellier-Holzem, F; Childress, M; Chotard, N; Copin, Y; Fakhouri, H K; Fink, M; Fouchez, D; Gangler, E; Guy, J; Hachinger, S; Hsiao, E Y; Juncheng, C; Kerschhaggl, M; Kowalski, M; Nugent, P; Paech, K; Pain, R; Pecontal, E; Pereira, R; Perlmutter, S; Rabinowitz, D; Rigault, M; Runge, K; Saunders, C; Smadja, G; Suzuki, N; Tao, C; Thomas, R C; Tilquin, A; Wu, C

    2012-01-01

    The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs-realizations of explosion models appropriate for two of the most widely-discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is s...

  18. CONSTRAINING TYPE Ia SUPERNOVA MODELS: SN 2011fe AS A TEST CASE

    Energy Technology Data Exchange (ETDEWEB)

    Roepke, F. K.; Seitenzahl, I. R. [Institut fuer Theoretische Physik und Astrophysik, Universitaet Wuerzburg, Am Hubland, D-97074 Wuerzburg (Germany); Kromer, M.; Taubenberger, S.; Ciaraldi-Schoolmann, F.; Hillebrandt, W.; Benitez-Herrera, S. [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching (Germany); Pakmor, R. [Heidelberger Institut fuer Theoretische Studien, Schloss-Wolfsbrunnenweg 35, 69118 Heidelberg (Germany); Sim, S. A. [Research School of Astronomy and Astrophysics, Australian National University, Mount Stromlo Observatory, Cotter Road, Weston Creek, ACT 2611 (Australia); Aldering, G.; Childress, M.; Fakhouri, H. K. [Physics Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F. [Laboratoire de Physique Nucleaire et des Hautes Energies, Universite Pierre et Marie Curie Paris 6, Universite Paris Diderot Paris 7, CNRS-IN2P3, 4 place Jussieu, 75252 Paris Cedex 05 (France); Baltay, C. [Department of Physics, Yale University, New Haven, CT 06250-8121 (United States); Buton, C. [Physikalisches Institut, Universitaet Bonn, Nussallee 12, 53115 Bonn (Germany); Chotard, N.; Copin, Y. [Universite de Lyon, F-69622, Lyon (France); Universite de Lyon 1, Villeurbanne (France); CNRS/IN2P3, Institut de Physique Nucleaire de Lyon (France); and others

    2012-05-01

    The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs (WDs)-realizations of explosion models appropriate for two of the most widely discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is still not justified. Observations at late epochs, however, hold promise for discriminating the explosion scenarios in a straightforward way, as a nucleosynthesis effect leads to differences in the {sup 55}Co production. SN 2011fe is close enough to be followed sufficiently long to study this effect.

  19. A simulation-based fuzzy chance-constrained programming model for optimal groundwater remediation under uncertainty

    Science.gov (United States)

    He, L.; Huang, G. H.; Lu, H. W.

    2008-12-01

    In this study a simulation-based fuzzy chance-constrained programming (SFCCP) model is developed based on possibility theory. The model is solved through an indirect search approach which integrates fuzzy simulation, artificial neural network and simulated annealing techniques. This approach has the advantages of: (1) handling simulation and optimization problems under uncertainty associated with fuzzy parameters, (2) providing additional information (i.e. the possibility of constraint satisfaction) indicating how likely one can believe the decision results, (3) alleviating computational burdens in the optimization process, and (4) reducing the chances of being trapped in local optima. The model is applied to a petroleum-contaminated aquifer located in western Canada to support the optimal design of groundwater remediation systems. The model solutions provide optimal groundwater pumping rates for 3-, 5- and 10-year pumping schemes. It is observed that the uncertainty significantly affects the remediation strategies. To mitigate such impacts, additional cost is required either for an increased pumping rate or for reinforced site characterization.

  20. A multi-model approach to constrain emissions from an urban-industrial complex

    Science.gov (United States)

    Super, Ingrid; Denier van der Gon, Hugo; Visschedijk, Antoon; Moerman, Marcel; Chen, Huilin; van der Molen, Michiel; Peters, Wouter

    2016-04-01

    Greenhouse gas observations around cities can be used to independently quantify fossil fuel emissions and monitor the effectiveness of emission reduction policies. In this study we show that a relatively small network measuring CO2 and CO concentrations, in combination with high-resolution modelling, can constrain the emissions of a heterogeneous urban-industrial landscape. We apply a unique and promising combination of a plume model and a grid model. We use the WRF/Chem grid model to simulate concentrations at 1 km horizontal resolution and to quantify biogenic CO2 fluxes. A Gaussian plume model is used to better represent the concentrations downwind of industrial stacks. Our network of (semi-)urban and rural sites detects fossil fuel signals from distinct source regions in the urban port area of Rotterdam. The impact of biogenic fluxes on the observed CO2 concentrations is on the order of several ppm, due to the large fraction of grassland in the footprints of the measurement sites. We will also show that monitoring multiple combustion tracers helps to identify source regions, including the inner city, sea port, glasshouses and local biogenic activity.
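
    The standard textbook Gaussian plume formula of the kind used for stack emissions is sketched below, with a ground-reflection term; the dispersion coefficients are crude power-law stand-ins, not the study's actual parameterization, and all inputs are invented.

```python
# Gaussian plume concentration downwind of an elevated point source.
import numpy as np

def plume_concentration(Q, u, x, y, z, H):
    """Emission Q (g/s), wind u (m/s), receptor (x, y, z), stack height H (m)."""
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # rough neutral-class fits
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# concentration (g/m^3) 2 km downwind, on the centreline, at the surface
print(plume_concentration(Q=100.0, u=5.0, x=2000.0, y=0.0, z=0.0, H=50.0))
```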

  1. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    Science.gov (United States)

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate into route choice modeling the impacts of the route set and of the interdependency among alternative routes on route choice probability. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Considering the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation and a Metropolis-Hastings (M-H) sampling-based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188
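
    A minimal random-walk Metropolis-Hastings sampler of the kind used inside such MCMC estimation is sketched below on a tiny binary probit model with one coefficient and a standard normal prior; it is generic, not the paper's hierarchical Bayes sampler, and the data are simulated.

```python
# Random-walk M-H for the posterior of a probit coefficient beta.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(size=200)
y = (rng.random(200) < norm.cdf(1.5 * x)).astype(int)   # simulated choices

def log_post(beta):
    p = np.clip(norm.cdf(beta * x), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) - 0.5 * beta**2

beta, lp = 0.0, log_post(0.0)
samples = []
for _ in range(5000):
    prop = beta + 0.2 * rng.normal()       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop           # accept
    samples.append(beta)

print("posterior mean of beta:", np.mean(samples[1000:]).round(3))
```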

  2. Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial

    Directory of Open Access Journals (Sweden)

    A. Robinson

    2011-04-01

    Using a new approach to force an ice sheet model, we performed an ensemble of simulations of the Greenland Ice Sheet evolution during the last two glacial cycles, with emphasis on the Eemian Interglacial. This ensemble was generated by perturbing four key parameters in the coupled regional climate-ice sheet model and by introducing additional uncertainty in the prescribed "background" climate change. The sensitivity of the surface melt model to climate change was determined to be the dominant driver of ice sheet instability, as reflected by simulated ice sheet loss during the Eemian Interglacial period. To eliminate unrealistic parameter combinations, constraints from present-day and paleo information were applied. The constraints include (i) the diagnosed present-day surface mass balance partition between surface melting and ice discharge at the margin, (ii) the modeled present-day elevation at GRIP, and (iii) the modeled elevation reduction at GRIP during the Eemian. Using these three constraints, a total of 360 simulations with 90 different model realizations were filtered down to 46 simulations and 20 model realizations considered valid. The paleo constraint eliminated the more sensitive melt parameter values, in agreement with the surface mass balance partition assumption. The constrained simulations resulted in a range of Eemian ice loss of 0.4–4.4 m sea level equivalent, with a more likely range of about 3.7–4.4 m sea level equivalent if the GRIP δ18O isotope record can be considered an accurate proxy for precipitation-weighted annual mean temperatures.

  3. The post-orogenic evolution of the Northeast Greenland Caledonides constrained from apatite fission track analysis and inverse geodynamic modelling

    DEFF Research Database (Denmark)

    Pedersen, Vivi Kathrine; Nielsen, S.B.; Gallagher, Kerry

    2012-01-01

    …or characteristic trends relative to mean track length. Using these new data and inverse geodynamic modelling, we constrain the evolution in the area since the orogenic collapse of the Caledonides. Exhumation histories are inferred using a uniform stretching model, incorporating variable rates of erosion … or deposition, and thermal histories are found by solving the one-dimensional transient conduction–advection heat equation. These thermal histories are used with the observed fission track data to constrain acceptable strain rate histories and exhumation paths. The results suggest that rifting has been focused…

  4. The distribution of deep source rocks in the GS Sag: joint MT-gravity modeling and constrained inversion

    Science.gov (United States)

    Shi, Yan-Ling; Hu, Zu-Zhi; Huang, Wen-Hui; Wei, Qiang; Zhang, Sheng; Meng, Cui-Xian; Ji, Lian-Sheng

    2016-09-01

    The coal-bearing strata of the deep Upper Paleozoic in the GS Sag have high hydrocarbon potential. Because of the absence of seismic data, we use magnetotelluric (MT) and gravity data jointly to delineate the distribution of deep targets based on well logging and geological data. First, a preliminary geological model is established using three-dimensional (3D) MT inversion results. Second, using the formation density and gravity anomalies, the preliminary geological model is modified by interactive inversion of the gravity data. Then, we conduct MT-constrained inversion based on the modified model to obtain an optimal geological model, iterating until the deviations at all stations are minimized. Finally, the geological model and a seismic profile in the middle of the sag are analysed. We determine that the deep reflections of the seismic profile correspond to the Upper Paleozoic, which reaches a thickness of up to 800 m. The processing of field data suggests that joint MT-gravity modeling and constrained inversion can reduce the non-uniqueness inherent in a single type of geophysical data and thus improve the recognition of deep formations. The MT-constrained inversion is consistent with the geological features in the seismic section. This suggests that joint MT and gravity modeling and constrained inversion can be used to delineate deep targets in similar basins.

  5. Constraining a Martian general circulation model with the MAVEN/IUVS observations in the thermosphere

    Science.gov (United States)

    Moeckel, Chris; Medvedev, Alexander; Nakagawa, Hiromu; Evans, Scott; Kuroda, Takeshi; Hartogh, Paul; Yiğit, Erdal; Jain, Sonal; Lo, Daniel; Schneider, Nicholas M.; Jakosky, Bruce

    2016-10-01

    The recent measurements of the number density of atomic oxygen by Mars Atmosphere and Volatile EvolutioN/ Imaging UltraViolet Spectrograph (MAVEN/IUVS) have been implemented for the first time into a global circulation model to quantify the effect on the Martian thermosphere. The number density has been converted to 1D volume mixing ratio and this profile is compared to the atomic oxygen scenarios based on chemical models. Simulations were performed with the Max Planck Institute Martian General Circulation Model (MPI-MGCM). The simulations closely emulate the conditions at the time of observations. The results are compared to the IUVS-measured CO2 number density and temperature above 130 km to gain knowledge of the processes in the upper atmosphere and further constrain them in MGCMs. The presentation will discuss the role and importance in the thermosphere of the following aspects: (a) impact of the observed atomic oxygen, (b) 27-day solar cycle variations, (c) varying dust load in the lower atmosphere, and (d) gravity waves.

  6. Constrained Molecular Dynamics Modeling of Dielectric Response in Polar Polyethylene Analogs and Poly(vinylidene flouride)

    Science.gov (United States)

    Calame, Jeffrey

    2013-03-01

    A simplified molecular dynamics formalism for polymers, having united atoms with constrained bond lengths and bond angles along the backbone but allowing torsional motion, has been developed to model the dielectric response and ferroelectricity in polymers with permanent dipoles. Analytic relations existing on the backbone geometry and associated dihedral motion allow elimination of many dot and cross product evaluations. Also, constraint error correcting forces, symplectic integration with velocity prediction, random force excitation with damping and a momentum-conserving thermostat, and rapid neighbor list and long range force computation allow efficient computation and time steps as large as 20 fs to enable the study of relatively long time scale dielectric phenomena. Studies are performed on non-polar polyethylene for benchmarking, followed by a model system (polar polyethylene) which retains the molecular structure, dihedral potentials, and non-bonded interactions of polyethylene, except artificial partial charges are placed on the united atoms. The modeling is extended to poly(vinylidene fluoride) by changes to the molecular structure, potentials, and charges. Heterogeneous systems containing crystalline and amorphous arrangements of polymer chains are studied. Work supported by the U.S. Office of Naval Research.

  7. Neutrino fluxes from constrained minimal supersymmetric standard model lightest supersymmetric particle annihilations in the Sun

    CERN Document Server

    Ellis, John; Savage, Christopher; Spanos, Vassilis C

    2010-01-01

    We evaluate the neutrino fluxes to be expected from neutralino LSP annihilations inside the Sun, within the minimal supersymmetric extension of the Standard Model with supersymmetry-breaking scalar and gaugino masses constrained to be universal at the GUT scale (the CMSSM). We find that there are large regions of typical CMSSM $(m_{1/2}, m_0)$ planes where the LSP density inside the Sun is not in equilibrium, so that the annihilation rate may be far below the capture rate. We show that neutrino fluxes are dependent on the solar model at the 20% level, and adopt the AGSS09 model of Serenelli et al. for our detailed studies. We find that there are large regions of the CMSSM $(m_{1/2}, m_0)$ planes where the capture rate is not dominated by spin-dependent LSP-proton scattering, e.g., at large $m_{1/2}$ along the CMSSM coannihilation strip. We calculate neutrino fluxes above various threshold energies for points along the coannihilation/rapid-annihilation and focus-point strips where the CMSSM yields the correct ...

  8. Internet gaming disorder: Inadequate diagnostic criteria wrapped in a constraining conceptual model.

    Science.gov (United States)

    Starcevic, Vladan

    2017-03-17

    Background and aims The paper "Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field" by Kuss, Griffiths, and Pontes (in press) critically examines the DSM-5 diagnostic criteria for Internet gaming disorder (IGD) and addresses the issue of whether IGD should be reconceptualized as gaming disorder, regardless of whether video games are played online or offline. This commentary provides additional critical perspectives on the concept of IGD. Methods The focus of this commentary is on the addiction model on which the concept of IGD is based, the nature of the DSM-5 criteria for IGD, and the inclusion of withdrawal symptoms and tolerance as the diagnostic criteria for IGD. Results The addiction framework on which the DSM-5 concept of IGD is based is not without problems and represents only one of multiple theoretical approaches to problematic gaming. The polythetic, non-hierarchical DSM-5 diagnostic criteria for IGD make the concept of IGD unacceptably heterogeneous. There is no support for maintaining withdrawal symptoms and tolerance as the diagnostic criteria for IGD without their substantial revision. Conclusions The addiction model of IGD is constraining and does not contribute to a better understanding of the various patterns of problematic gaming. The corresponding diagnostic criteria need a thorough overhaul, which should be based on a model of problematic gaming that can accommodate its disparate aspects.

  9. Action potential generation in an anatomically constrained model of medial superior olive axons.

    Science.gov (United States)

    Lehnert, Simon; Ford, Marc C; Alexandrova, Olga; Hellmundt, Franziska; Felmy, Felix; Grothe, Benedikt; Leibold, Christian

    2014-04-09

    Neurons in the medial superior olive (MSO) encode interaural time differences (ITDs) with sustained firing rates of >100 Hz. They are able to generate such high firing rates for several hundred milliseconds despite their extremely low input resistances of only a few megaohms and high synaptic conductances in vivo. The biophysical mechanisms by which these leaky neurons maintain their excitability are not understood. Since action potentials (APs) are usually assumed to be generated in the axon initial segment (AIS), we analyzed anatomical data of proximal MSO axons in Mongolian gerbils and found that the axon diameter is <1 μm and the internode length is ∼100 μm. Using a morphologically constrained computational model of the MSO axon, we show that these thin axons facilitate the excitability of the AIS. However, for ongoing high rates of synaptic inputs the model generates a substantial fraction of APs in its nodes of Ranvier. These distally initiated APs are mediated by a spatial gradient of sodium channel inactivation and a strong somatic current sink. The model also predicts that distal AP initiation increases the dynamic range of the rate code for ITDs.

  10. How wild is your model fire? Constraining WRF-Chem wildfire smoke simulations with satellite observations

    Science.gov (United States)

    Fischer, E. V.; Ford, B.; Lassman, W.; Pierce, J. R.; Pfister, G.; Volckens, J.; Magzamen, S.; Gan, R.

    2015-12-01

    Exposure to high concentrations of particulate matter (PM) present during acute pollution events is associated with adverse health effects. While many anthropogenic pollution sources are regulated in the United States, emissions from wildfires are difficult to characterize and control. With wildfire frequency and intensity in the western U.S. projected to increase, it is important to more precisely determine the effect that wildfire emissions have on human health, and whether improved forecasts of these air pollution events can mitigate the health risks associated with wildfires. One of the challenges associated with determining health risks associated with wildfire emissions is that the low spatial resolution of surface monitors means that surface measurements may not be representative of a population's exposure, due to steep concentration gradients. To obtain better estimates of ambient exposure levels for health studies, a chemical transport model (CTM) can be used to simulate the evolution of a wildfire plume as it travels over populated regions downwind. Improving the performance of a CTM would allow the development of a new forecasting framework that could better help decision makers estimate and potentially mitigate future health impacts. We use the Weather Research and Forecasting model with online chemistry (WRF-Chem) to simulate wildfire plume evolution. By varying the model resolution, meteorology reanalysis initial conditions, and biomass burning inventories, we are able to explore the sensitivity of model simulations to these various parameters. Satellite observations are used first to evaluate model skill, and then to constrain the model results. These data are then used to estimate population-level exposure, with the aim of better characterizing the effects that wildfire emissions have on human health.

  11. An algorithmic calibration approach to identify globally optimal parameters for constraining the DayCent model

    Energy Technology Data Exchange (ETDEWEB)

    Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.

    2015-02-01

    The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software PEST to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as the basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive) and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set: residual differences decreased by 79% for N2O fluxes and 84% for crop yield, and the coefficient of determination increased by 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of DayCent.
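
    A minimal sketch of the sensitivity/identifiability idea behind such a calibration, using a toy stand-in for the simulator rather than DayCent or PEST themselves (all names and values below are illustrative): the finite-difference Jacobian of model outputs with respect to parameters is inspected via its singular values (near-zero values flag unidentifiable parameters), and a bounded least-squares fit then calibrates the rest.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 40)

    def forward_model(theta):
        # Toy stand-in for the simulator: two well-constrained parameters
        # (amplitude a, rate k) and one with almost no influence (eps).
        a, k, eps = theta
        return a * np.exp(-k * t) + 1e-3 * eps * t

    theta_true = np.array([2.0, 0.35, 1.0])
    obs = forward_model(theta_true) + rng.normal(0.0, 0.05, t.size)

    def jacobian_fd(theta, h=1e-6):
        # Finite-difference sensitivities of outputs to each parameter.
        f0 = forward_model(theta)
        J = np.empty((t.size, theta.size))
        for j in range(theta.size):
            tp = theta.copy()
            tp[j] += h
            J[:, j] = (forward_model(tp) - f0) / h
        return J

    # Identifiability: near-zero singular values flag parameter
    # combinations the data cannot constrain (here the third one).
    sv = np.linalg.svd(jacobian_fd(np.array([1.5, 0.3, 1.0])), compute_uv=False)
    print("singular values:", sv)

    res = least_squares(lambda th: forward_model(th) - obs,
                        x0=[1.0, 0.1, 0.0], bounds=([0, 0, -10], [10, 5, 10]))
    print("calibrated parameters:", res.x)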

  12. Sensitivity-based finite element model updating using constrained optimization with a trust region algorithm

    Science.gov (United States)

    Bakir, Pelin Gundes; Reynders, Edwin; De Roeck, Guido

    2007-08-01

    The use of changes in dynamic system characteristics to detect damage has received considerable attention in recent years. Within this context, the FE model updating technique, which belongs to the class of inverse problems in classical mechanics, is used to detect, locate and quantify damage. In this study, a sensitivity-based finite element (FE) model updating scheme using a trust region algorithm is developed and applied to a complex structure. A damage scenario is applied to the structure in which the stiffness values of the beam elements close to the beam-column joints are decreased by stiffness reduction factors. A worst-case, complex damage pattern is assumed such that the stiffnesses of adjacent elements are decreased by substantially different stiffness reduction factors. The objective of the model updating is to minimize the eigenfrequency and eigenmode residuals. The updating parameters of the structure are the stiffness reduction factors, and changes in these parameters are determined iteratively by solving a nonlinear constrained optimization problem. The FE model updating algorithm is also tested in the presence of two levels of noise in simulated measurements. In all three cases, the updated MAC values are above 99% and the relative eigenfrequency differences improve substantially after model updating. In the cases without noise and with moderate noise, detection, localization and quantification of damage are successfully accomplished. In the case with substantially noisy measurements, detection and localization of damage are successfully realized, and damage quantification remains promising: the algorithm can still predict 18 of the 24 damage parameters relatively accurately.
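
    The scheme can be illustrated with a hedged toy version: a two-degree-of-freedom spring-mass system whose element stiffnesses are reduced by unknown factors, recovered with SciPy's bound-constrained trust-region-reflective least-squares solver. Only eigenfrequency residuals are used here; the study also uses mode-shape (MAC) residuals.

    import numpy as np
    from scipy.optimize import least_squares

    m = np.diag([1.0, 1.0])                   # toy 2-DOF mass matrix

    def stiffness(alpha, k0=100.0):
        # Spring chain with element stiffnesses reduced by factors alpha.
        k1, k2 = k0 * (1 - alpha)
        return np.array([[k1 + k2, -k2], [-k2, k2]])

    def eigenfreqs(alpha):
        lam = np.linalg.eigvalsh(np.linalg.solve(m, stiffness(np.asarray(alpha))))
        return np.sqrt(lam)

    alpha_true = np.array([0.30, 0.05])       # assumed "damage" pattern
    f_meas = eigenfreqs(alpha_true)           # simulated measurement

    # Trust-region-reflective solver ('trf') with bound constraints,
    # analogous in spirit to the constrained updating scheme.
    res = least_squares(lambda a: eigenfreqs(a) - f_meas,
                        x0=[0.1, 0.1], bounds=(0.0, 0.99), method="trf")
    print("identified stiffness reduction factors:", res.x)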

  13. Constraining the dipolar magnetic field of M82 X-2 by the accretion model

    CERN Document Server

    Chen, Wen-Cong

    2016-01-01

    Recently, the ultraluminous X-ray source (ULX) M82 X-2 has been identified as an accreting neutron star with a $P=1.37$ s spin period, spinning up at a rate $\dot{P}=-2.0\times 10^{-10}~\rm s\,s^{-1}$. Interestingly, its isotropic X-ray luminosity $L_{\rm iso}=1.8\times 10^{40}~\rm erg\,s^{-1}$ during outbursts is 100 times the Eddington limit for a $1.4~\rm M_{\odot}$ neutron star. In this Letter, based on the standard accretion model, we attempt to constrain the dipolar magnetic field of the pulsar in ULX M82 X-2. Our calculations indicate that the accretion rate at the magnetospheric radius must be super-Eddington during outbursts. To support such super-Eddington accretion, a relatively high multipole field ($\ga 10^{13}$ G) near the surface of the accretor is invoked to produce an accreting gas column. However, our constraint shows that the surface dipolar magnetic field of the pulsar should be in the range of $1.0-3.5\times 10^{12}$ G. Therefore, our model supports that the neutron star in U...
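
    As a back-of-envelope illustration only (not the Letter's actual derivation, which also folds in the observed spin-up torque), one can bound the dipole field by requiring the magnetospheric radius to lie inside the corotation radius; the xi factor and the luminosity-to-accretion-rate conversion below are standard assumptions.

    import numpy as np

    # CGS constants and the quoted observables.
    G, Msun = 6.674e-8, 1.989e33
    M, R = 1.4 * Msun, 1.0e6        # neutron star mass [g] and radius [cm]
    P, L_iso = 1.37, 1.8e40         # spin period [s], isotropic luminosity [erg/s]

    Mdot = L_iso * R / (G * M)      # accretion rate if L_iso = G*M*Mdot/R
    r_co = (G * M * P**2 / (4 * np.pi**2)) ** (1.0 / 3.0)  # corotation radius

    # Dipole moment mu at which xi * (mu^4 / (2*G*M*Mdot^2))^(1/7) = r_co,
    # with xi ~ 0.5 assumed for disc accretion; surface field B = mu / R^3.
    xi = 0.5
    mu = ((r_co / xi) ** 7 * 2 * G * M * Mdot**2) ** 0.25
    print(f"Mdot ~ {Mdot:.2e} g/s, r_co ~ {r_co:.2e} cm, B_max ~ {mu / R**3:.2e} G")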

  14. Commitment Versus Persuasion in the Three-Party Constrained Voter Model

    Science.gov (United States)

    Mobilia, Mauro

    2013-04-01

    In the framework of the three-party constrained voter model, where voters of two radical parties (A and B) interact with "centrists" (C and Cζ), we study the competition between a persuasive majority and a committed minority. In this model, A's and B's are incompatible voters that can convince centrists or be swayed by them. Here, radical voters are more persuasive than centrists, whose sub-population comprises susceptible agents C and a fraction ζ of centrist zealots Cζ. Whereas C's may adopt the opinions A and B with respective rates 1+δA and 1+δB (with δA ≥ δB > 0), Cζ's are committed individuals that always remain centrists. Furthermore, A and B voters can become (susceptible) centrists C with a rate 1. The resulting competition between commitment and persuasion is studied in the mean field limit and for a finite population on a complete graph. At the mean field level, there is a continuous transition from a coexistence phase when ζ < Δc to a phase in which centrists prevail when ζ ≥ Δc. In a finite population, commitment wins over persuasion, but consensus is reached much more slowly when ζ < Δc: persuasive voters and centrists coexist when δA > δB, whereas all species coexist when δA = δB. When ζ ≥ Δc and the initial density of centrists is low, one finds τ ∼ ln N (when N ≫ 1). Our analytical findings are corroborated by stochastic simulations.
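
    A hedged sketch of the mean-field dynamics, under one plausible reading of the stated rates (the exact equations are not given in the abstract, so treat the signs and normalizations below as assumptions): radicals convert susceptible centrists c at rates 1+δA and 1+δB, while any centrist (susceptible or zealot, total c + ζ) converts a radical back at rate 1.

    import numpy as np
    from scipy.integrate import solve_ivp

    dA, dB, zeta = 0.3, 0.1, 0.05        # illustrative rate biases and zealot fraction

    def rhs(t, y):
        a, b = y
        c = 1.0 - a - b - zeta           # susceptible centrist density
        return [a * ((1 + dA) * c - (c + zeta)),
                b * ((1 + dB) * c - (c + zeta))]

    sol = solve_ivp(rhs, (0.0, 400.0), [0.3, 0.3])
    a_inf, b_inf = sol.y[:, -1]
    print(f"A -> {a_inf:.3f}, B -> {b_inf:.3f}, centrists -> {1 - a_inf - b_inf:.3f}")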

  15. Constraining the top-Higgs sector of the standard model effective field theory

    Science.gov (United States)

    Cirigliano, V.; Dekens, W.; de Vries, J.; Mereghetti, E.

    2016-08-01

    Working in the framework of the Standard Model effective field theory, we study chirality-flipping couplings of the top quark to the Higgs and gauge bosons. We discuss in detail the renormalization-group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model effective field theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings.

  16. Repeating Earthquakes Confirm and Constrain Long-Term Acceleration of Aseismic Slip Preceding the M9 Tohoku-Oki Earthquake

    Science.gov (United States)

    Mavrommatis, A. P.; Segall, P.; Uchida, N.; Johnson, K. M.

    2015-12-01

    Changes in the recurrence intervals of repeating earthquakes offshore northern Japan in the period 1996 to 2011 imply long-term acceleration of aseismic slip preceding the 2011 M9 Tohoku-oki earthquake, confirming a previous inference from completely independent GPS data (Mavrommatis et al., 2014, GRL). We test whether sequences of repeating earthquakes exhibit a statistically significant monotonic trend in recurrence interval by applying the nonparametric Mann-Kendall test. Offshore northern Tohoku, all sequences that pass the test exhibit decelerating recurrence, consistent with decaying afterslip following the 1994 M7.7 Sanriku earthquake. On the other hand, offshore south-central Tohoku, all sequences that pass the test exhibit accelerating recurrence, consistent with long-term accelerating creep prior to the 2011 M9 earthquake. Using a physical model of repeating earthquake recurrence, we produce time histories of cumulative slip on the plate interface. After correcting for afterslip following several M~7 earthquakes in the period 2003-2011, we find that all but one sequence exhibit statistically significant slip accelerations. Offshore south-central Tohoku, the estimated slip acceleration is on average 2.9 mm/yr², consistent with the range of 2.6-4.0 mm/yr² estimated from independent GPS data (Mavrommatis et al., 2014). From a joint inversion of GPS and seismicity data, we infer that a substantial portion of the plate interface experienced accelerating creep in the 15 years prior to the M9 Tohoku-oki earthquake. The large slip area of the Tohoku-oki earthquake appears to be partly bounded by accelerating creep, implying that most of the rupture area of the M9 earthquake was either locked or creeping at a constant rate during this time period. Accelerating creep would result in an increasing stressing rate on locked parts of the interface, thereby promoting nucleation of moderate to large earthquakes.
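
    The Mann-Kendall test named in the abstract is straightforward to reproduce; below is a minimal implementation without tie correction (the example interval data are invented for illustration).

    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        """Nonparametric Mann-Kendall trend test (no tie correction)."""
        x = np.asarray(x, dtype=float)
        n = x.size
        # S counts concordant minus discordant pairs over all i < j.
        s = np.sum([np.sign(x[j] - x[i]) for i in range(n - 1)
                    for j in range(i + 1, n)])
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return z, 2 * norm.sf(abs(z))   # z-score and two-sided p-value

    # Example: steadily shortening recurrence intervals (accelerating slip).
    intervals = [420, 400, 395, 370, 350, 340, 320, 300]
    z, p = mann_kendall(intervals)
    print(f"z = {z:.2f}, p = {p:.3f}  (negative z: accelerating recurrence)")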

  17. Constraining Early Cenozoic exhumation of the British Isles with vertical profile modelling

    Science.gov (United States)

    Doepke, Daniel; Cogné, Nathan; Chew, David

    2016-04-01

    Despite decades of research, the Early Cenozoic exhumation history of Ireland and Britain is still poorly understood and subject to contentious debate (e.g., Davis et al., 2012 and subsequent comments). One reason for this debate is the difficulty of constraining the evolution of the onshore parts of the British Isles in both time and space. The paucity of Mesozoic and Cenozoic onshore outcrops makes direct analysis of this time span difficult. Furthermore, Ireland and Britain are situated on a passive margin, where the amount of post-rift exhumation is generally very low. Classical thermochronological tools are therefore near the edge of their resolution, making precise dating of post-rift cooling events challenging. In this study we used the established apatite fission track and (U-Th-Sm)/He techniques, but took advantage of the vertical profile approach of Gallagher et al. (2005) implemented in the QTQt modelling package (Gallagher, 2012) to better constrain the thermal histories. This method allowed us to define the geographical extent of a Late Cretaceous - Early Tertiary cooling event and to show that it was centered on the Irish Sea. Thus, we argue that this cooling event is linked to the underplating of hot material below the crust centered on the Irish Sea (Jones et al., 2002; Al-Kindi et al., 2003), and demonstrate that such a conclusion would have been harder, if not impossible, to draw by modelling the samples individually without the vertical profile approach. References Al-Kindi, S., White, N., Sinha, M., England, R., and Tiley, R., 2003, Crustal trace of a hot convective sheet: Geology, v. 31, no. 3, p. 207-210. Davis, M.W., White, N.J., Priestley, K.F., Baptie, B.J., and Tilmann, F.J., 2012, Crustal structure of the British Isles and its epeirogenic consequences: Geophysical Journal International, v. 190, no. 2, p. 705-725. Jones, S.M., White, N., Clarke, B.J., Rowley, E., and Gallagher, K., 2002, Present and past influence of the Iceland

  18. Infants' and adults' looking behavior does not indicate perceptual distraction for constrained modelled actions - An eye-tracking study.

    Science.gov (United States)

    Buttelmann, David; Schieler, Andy; Wetzel, Nicole; Widmann, Andreas

    2017-05-01

    When observing a novel action, infants pay attention to the model's constraints when deciding whether to imitate the action or not. Gergely et al. (2002) found that more 14-month-olds copied a model's use of her head to operate a lamp when she used her head while her hands were free than when she had to use this means because her hands were occupied. The perceptual distraction account (Beisert et al., 2012) claims that differences between conditions in the amount of attention infants paid to the modeled action caused the differences in infants' performance. To investigate this assumption, we presented 14-month-olds (N=34) with an eye-tracking paradigm and analyzed their looking behavior when observing the head-touch demonstration in the two original conditions. Subsequently, they had the chance to operate the apparatus themselves, and we measured their imitative responses. To explore the perceptual processes taking place in this paradigm in adulthood, we also presented adults (N=31) with the same task. Apart from the fact that we did not replicate the original imitation findings with our participants, the eye-tracking results do not support the perceptual distraction account: infants did not differ statistically, not even at a trend level, in their amount of looking at the modeled action between conditions. Adults also did not differ statistically in their looking at the relevant action components. However, both groups predominantly observed the relevant head action. Consequently, infants and adults do not seem to attend differently to constrained and unconstrained modelled actions.

  19. The value of stream level observations to constrain low-parameter hydrologic models

    Science.gov (United States)

    Seibert, J.; Vis, M.; Pool, S.

    2014-12-01

    While conceptual runoff models with a small number of parameters are useful tools for capturing the hydrological functioning of a catchment, these models usually rely on calibration, which makes their use in ungauged basins challenging. One approach might be to take at least a few measurements. Recent studies demonstrated that a few streamflow measurements, representing data that could be collected with limited effort in an ungauged basin, can help constrain runoff models for simulations in ungauged basins. While these previous studies assumed that a few streamflow measurements were taken, it would also be reasonable to measure stream levels instead. Several approaches could be used in practice for such stream level observations: water level loggers have become less expensive and easier to install; stream levels will in the near future be increasingly available from satellite remote sensing, resulting in evenly spaced time series; and community-based approaches (e.g., crowdhydrology.org) can offer level observations at irregular time intervals. Here we present a study where a runoff model (the HBV model) was calibrated for 600+ gauged basins in the US assuming that only a subset of the data was available. We assumed that only stream level observations at different time intervals, representing the temporal resolution of the different observation approaches mentioned above, were available. The model, calibrated on these data subsets, was then evaluated against the full observed streamflow record. Our results indicate that stream level data alone can already provide surprisingly good model simulation results in humid catchments, whereas in arid catchments some form of quantitative information (a streamflow observation or a regional average value) is needed to obtain good results. These results are encouraging for hydrological observations in data-scarce regions, as level observations are much easier to obtain than streamflow observations.

  20. Baby Skyrme models without a potential term

    CERN Document Server

    Ashcroft, Jennifer; Krusch, Steffen

    2015-01-01

    We develop a one-parameter family of static baby Skyrme models that do not require a potential term to admit topological solitons. This is a novel property, as all currently known baby Skyrme models must contain a potential term in order to have stable soliton solutions, though the Skyrme model itself does not require one. Our new models satisfy an energy bound that is linear in the topological charge and can be saturated in an extreme limit. They also satisfy a virial theorem that is shared by the Skyrme model. We calculate the solitons of our new models numerically and observe that their form depends significantly on the choice of parameter. In one extreme we find compactons, while at the other there is a scale-invariant model in which solitons can be obtained exactly as solutions to a Bogomolny equation. We provide an initial investigation into these solitons and compare them with the baby Skyrmions of other models.
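
    For orientation, the standard static baby Skyrme energy that such models modify (the abstract does not give the new family's precise form, so this is the conventional baseline) is, for a field $\boldsymbol{\phi}:\mathbb{R}^2\to S^2$,

    E[\boldsymbol{\phi}] \;=\; \int_{\mathbb{R}^2}
    \Big( \tfrac{1}{2}\,\partial_i\boldsymbol{\phi}\cdot\partial_i\boldsymbol{\phi}
    \;+\; \tfrac{\kappa^2}{2}\,\big(\partial_1\boldsymbol{\phi}\times\partial_2\boldsymbol{\phi}\big)^2
    \;+\; \mu^2\, V(\boldsymbol{\phi}) \Big)\, d^2x .

    Derrick scaling $x \to \lambda x$ makes the role of $V$ plain: in two dimensions the Dirichlet term is scale invariant, the Skyrme term scales as $\lambda^{-2}$, and the potential as $\lambda^{2}$, so without a potential nothing fixes the soliton size; the new family evades this while keeping an energy bound linear in the topological charge.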

  1. Constraining soil C cycling with strategic, adaptive action for data and model reporting

    Science.gov (United States)

    Harden, J. W.; Swanston, C.; Hugelius, G.

    2015-12-01

    Regional to global carbon assessments include a variety of models, data sets, and conceptual structures. This includes strategies for representing the role and capacity of soils to sequester, release, and store carbon. Traditionally, many soil carbon data sets emerged from agricultural missions focused on mapping and classifying soils to enhance and protect production of food and fiber. More recently, soil carbon assessments have allowed for more strategic measurement to address the functional and spatially explicit role that soils play in land-atmosphere carbon exchange. While soil data sets are increasingly inter-comparable and increasingly sampled to accommodate global assessments, soils remain poorly constrained or understood with regard to their role in spatio-temporal variations in carbon exchange. A more deliberate approach to rapid improvement in our understanding involves a community-based activity that embraces both a nimble data repository and a dynamic structure for prioritization. Data input and output can be transparent and retrievable as data-derived products, while also being subjected to rigorous queries for merging and harmonization into a searchable, comprehensive, transparent database. Meanwhile, adaptive action groups can prioritize data and modeling needs that emerge through workshops, meta-data analyses or model testing. Our continual renewal of priorities should address soil processes, mechanisms, and feedbacks that significantly influence global C budgets and/or significantly impact the needs and services of regional soil resources impacted by C management. In order to refine the International Soil Carbon Network, we welcome suggestions for such groups to be led on topics such as, but not limited to, manipulation experiments, extreme climate events, post-disaster C management, past climate-soil interactions, or water-soil-carbon linkages. We also welcome ideas for a business model that can foster and promote idea and data sharing.

  2. Gravitational signature and apparent mass changes in Amundsen Embayment caused by low viscosity GIA model constrained by rapid bedrock displacement

    Science.gov (United States)

    Barletta, V. R.; Bevis, M.; Smith, B. E.; Wilson, T. J.; Willis, M. J.; Brown, A.; Bordoni, A.; Khan, S. A.; Smalley, R., Jr.; Kendrick, E. C.; Konfal, S. A.; Caccamise, D.; Aster, R.; Chaput, J. A.; Heeszel, D.; Wiens, D.; Lloyd, A. J.

    2015-12-01

    The Amundsen Embayment sector of West Antarctica is experiencing some of the fastest sustained bedrock uplift rates in the world. These motions, recorded by the Antarctic GPS Network (ANET), cannot be explained in terms of the earth's elastic response to contemporary ice loss, and the residuals are far too rapid to be explained using traditional GIA models. We use 13 years of very high resolution DEM-derived ice mass change fields over the Amundsen sector to compute the elastic signal and remove it from the observed geodetic time series. We obtain a very large residual, up to 5 times larger than the computed elastic response. Low or very low mantle viscosities are expected in this area based on existing heat flow estimates, seismic velocity anomalies, thin crust, and active volcanism, all of which are associated with geologically recent rifting. We hypothesize that the rapid crustal displacement manifests a low-viscosity short-time-scale response to post-Little Ice Age ice mass changes, including ice losses developed in the last decade or so. A plausible ice history for the last hundred years is constructed by using the actual measurements from 2002 to 2014, and 25% of the present-day melting rate before 2002. We then simulate and fit the bedrock displacement, both vertical and horizontal, with a spherical compressible viscoelastic Earth model having a low-viscosity shallow upper mantle. We show that we can constrain the shallow upper mantle viscosity very well and also explain most of the signal (amplitude and direction) by using 2 x 10^18 Pa s. However, we are not able to precisely constrain the thickness of the lithosphere (the preferred thickness is more than 50 km, quite thick for that region) or the ice history. Using our preferred setup (earth model + ice history), we compute the GIA gravitational signature and convert it into equivalent surface water density, which can be directly used to correct the mass changes observed by GRACE. For the Amundsen

  3. Constraining the rheology of the lithosphere and upper mantle with geodynamic inverse modelling

    Science.gov (United States)

    Kaus, Boris; Baumann, Tobias

    2016-04-01

    The rheology of the lithosphere is of key importance for its dynamics, yet it is probably the most uncertain parameter in geodynamics, as experimental rock rheologies have to be extrapolated to geological conditions and as existing geophysical methods such as EET estimation make simplifying assumptions about the structure of the lithosphere. In many geologically interesting regions, such as the Alps, Andes or Himalaya, we already have a significant amount of data, and as a result the geometry of the lithosphere is quite well constrained. Yet knowing the geometry is only one part of the story, as we also need accurate knowledge of the rheology and temperature structure of the lithosphere. Here we discuss a relatively new method that we have developed over the last few years, called geodynamic inversion. The basic principle of the method is simple: we compile available geophysical data into a realistic geometric model of the lithosphere and incorporate it into a thermo-mechanical numerical model of lithospheric deformation. In order to do so, we have to know the temperature structure, the density and the (nonlinear) rheological parameters for the various parts of the lithosphere (upper crust, upper mantle, etc.). Rather than fixing these parameters, we assume that they are all uncertain. This is used as a priori information to formulate a Bayesian inverse problem that employs topography, gravity, and horizontal and vertical surface velocities to invert for the unknown material parameters and temperature structure. To test the general methodology, we first perform a geodynamic inversion of a synthetic forward model of intra-oceanic subduction with known parameters. This requires solving an inverse problem with 14-16 parameters, depending on whether the temperature is assumed to be known or not. With the help of a massively parallel direct search combined with a Markov chain Monte Carlo method, solving the inverse problem
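
    A schematic of the Markov chain Monte Carlo step at the heart of such an inversion; the two-parameter forward model below is a stand-in for the thermo-mechanical code, and the prior box, noise level, and proposal scale are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    def forward(theta):
        # Stand-in for the forward model, returning "observables"
        # (e.g. topography and a surface velocity component).
        return np.array([theta[0] * np.exp(-theta[1]),
                         theta[0] + theta[1] ** 2])

    data = forward(np.array([2.0, 0.5])) + rng.normal(0.0, 0.05, 2)
    sigma = 0.05

    def log_post(theta):
        if np.any(theta < 0.0) or np.any(theta > 10.0):   # flat box prior
            return -np.inf
        r = (forward(theta) - data) / sigma
        return -0.5 * r @ r

    theta = np.array([1.0, 1.0])
    lp = log_post(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.1, 2)            # random-walk step
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis rule
            theta, lp = prop, lp_prop
        chain.append(theta)
    chain = np.array(chain[5000:])                        # drop burn-in
    print("posterior mean:", chain.mean(axis=0), "std:", chain.std(axis=0))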

  4. Using data-driven model-brain mappings to constrain formal models of cognition.

    Directory of Open Access Journals (Sweden)

    Jelmer P Borst

    Full Text Available In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings.

  5. Supporting the search for the CEP location with nonlocal PNJL models constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Contrera, Gustavo A. [IFLP, UNLP, CONICET, Facultad de Ciencias Exactas, La Plata (Argentina); Gravitation, Astrophysics and Cosmology Group, FCAyG, UNLP, La Plata (Argentina); CONICET, Buenos Aires (Argentina); Grunfeld, A.G. [CONICET, Buenos Aires (Argentina); Comision Nacional de Energia Atomica, Departamento de Fisica, Buenos Aires (Argentina); Blaschke, David [University of Wroclaw, Institute of Theoretical Physics, Wroclaw (Poland); Joint Institute for Nuclear Research, Moscow Region (Russian Federation); National Research Nuclear University (MEPhI), Moscow (Russian Federation)

    2016-08-15

    We investigate the possible location of the critical endpoint in the QCD phase diagram based on nonlocal covariant PNJL models including a vector interaction channel. The form factors of the covariant interaction are constrained by lattice QCD data for the quark propagator. The comparison of our results for the pressure including the pion contribution and the scaled pressure shift ΔP/T^4 vs. T/T_c with lattice QCD results shows a better agreement when Lorentzian form factors for the nonlocal interactions and the wave function renormalization are considered. The strength of the vector coupling is used as a free parameter which influences results at finite baryochemical potential. It is used to adjust the slope of the pseudocritical temperature of the chiral phase transition at low baryochemical potential and the scaled pressure shift accessible in lattice QCD simulations. Our study, albeit presently performed at the mean-field level, supports the very existence of a critical point and favors its location within a region that is accessible in experiments at the NICA accelerator complex.

  6. Image Retrieval Based on Multiview Constrained Nonnegative Matrix Factorization and Gaussian Mixture Model Spectral Clustering Method

    Directory of Open Access Journals (Sweden)

    Qunyi Xie

    2016-01-01

    Full Text Available Content-based image retrieval has recently become an important research topic and has been widely used for managing images from repositories. In this article, we address an efficient technique, called MNGS, which integrates multiview constrained nonnegative matrix factorization (NMF) and Gaussian mixture model (GMM)-based spectral clustering for image retrieval. In the proposed methodology, the multiview NMF scheme provides competitive sparse representations of underlying images through decomposition of a similarity-preserving matrix that is formed by fusing multiple features from different visual aspects. In particular, the proposed method merges manifold constraints into the standard NMF objective function to impose an orthogonality constraint on the basis matrix and satisfy the structure preservation requirement of the coefficient matrix. To manipulate the clustering method on sparse representations, this paper has developed a GMM-based spectral clustering method in which the Gaussian components are regrouped in spectral space, which significantly improves the retrieval effectiveness. In this way, image retrieval of the whole database translates to a nearest-neighbour search in the cluster containing the query image. Simultaneously, this study investigates the proof of convergence of the objective function and the analysis of the computational complexity. Experimental results on three standard image datasets reveal the advantages that can be achieved with the proposed retrieval scheme.
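
    A schematic of the retrieval pipeline using off-the-shelf scikit-learn pieces; MNGS's manifold/orthogonality constraints and multiview fusion are omitted here, and the data are random placeholders.

    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Stand-in for a fused, nonnegative similarity-preserving feature
    # matrix built from several visual descriptors (rows = images).
    X = np.abs(rng.normal(size=(200, 64)))

    # Sparse-ish codes from a plain NMF decomposition.
    coeffs = NMF(n_components=8, init="nndsvda", max_iter=500,
                 random_state=0).fit_transform(X)

    gmm = GaussianMixture(n_components=5, random_state=0).fit(coeffs)
    labels = gmm.predict(coeffs)

    # Retrieval: restrict the nearest-neighbour search to the query's cluster.
    q = coeffs[0]
    idx = np.flatnonzero(labels == gmm.predict(q[None])[0])
    dists = np.linalg.norm(coeffs[idx] - q, axis=1)
    print("top matches:", idx[np.argsort(dists)[:5]])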

  7. Constraining Intra-cluster Gas Models with AMiBA13

    CERN Document Server

    Molnar, Sandor M; Birkinshaw, Mark; Bryan, Greg; Haiman, Zoltan; Hearn, Nathan; Ho, Paul T P; Huang, Chih-Wei L; Koch, Patrick M; Liao, Yu-Wei V; Lin, Kai-Yang; Liu, Guo-Chin; Nishioka, Hiroaki; Wang, Fu-Cheng; Wu, Jiun-Huei P

    2010-01-01

    Clusters of galaxies have been used extensively to determine cosmological parameters. A major difficulty in making the best use of Sunyaev-Zel'dovich (SZ) and X-ray observations of clusters for cosmology is that, using X-ray observations, it is difficult to measure the temperature distribution and therefore determine the density distribution in individual clusters of galaxies out to the virial radius. Observations with the new generation of SZ instruments are a promising alternative approach. We use clusters of galaxies drawn from high-resolution adaptive mesh refinement (AMR) cosmological simulations to study how well we should be able to constrain the large-scale distribution of the intra-cluster gas (ICG) in individual massive relaxed clusters using AMiBA in its configuration with 13 1.2-m diameter dishes (AMiBA13) along with X-ray observations. We show that non-isothermal beta models provide a good description of the ICG in our simulated relaxed clusters. We use simulated X-ray observations to estimate the qua...
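
    For reference, the isothermal beta-model electron density profile that the non-isothermal variants generalize (by letting the temperature decline with radius) is

    n_e(r) = n_{e0} \left[ 1 + \left( r/r_c \right)^2 \right]^{-3\beta/2} ,

    with core radius $r_c$ and outer slope $\beta$ fitted to the X-ray surface brightness or SZ profile.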

  8. Searching for the CEP location with nonlocal PNJL models constrained by Lattice QCD

    CERN Document Server

    Contrera, Gustavo A; Blaschke, David

    2016-01-01

    We investigate the possible location of the critical end point in the QCD phase diagram based on nonlocal covariant PNJL models including a vector interaction channel. The form factors of the covariant interaction are constrained by lattice QCD data for the quark propagator. The comparison of our results for the pressure including the pion contribution and the scaled pressure shift $\Delta P / T^4$ vs $T/T_c$ with lattice QCD results shows a better agreement when Lorentzian form factors for the nonlocal interactions and the wave function renormalization are considered. The strength of the vector coupling is used as a free parameter which influences results at finite baryochemical potential. It is used to adjust the slope of the pseudocritical temperature of the chiral phase transition at low baryochemical potential and the scaled pressure shift accessible in lattice QCD simulations. Our study supports the existence of a critical point and favors for its location the region $69.9~{\rm MeV}\le T_{\rm CEP} \le...

  9. Constraining the properties of AGN host galaxies with Spectral Energy Distribution modeling

    CERN Document Server

    Ciesla, L; Georgakakis, A; Bernhard, E; Mitchell, P D; Buat, V; Elbaz, D; Floc'h, E Le; Lacey, C G; Magdis, G E; Xilouris, M

    2015-01-01

    [abridged] We use the latest release of CIGALE, a galaxy SED-fitting model relying on energy balance, to study the influence of an AGN on estimates of both the SFR and stellar mass in galaxies, as well as the contribution of the AGN to the power output of the host. Using the galaxy formation SAM GALFORM, we create mock galaxy SEDs using realistic star formation histories (SFHs) and add an AGN of Type 1, Type 2, or intermediate type whose contribution to the bolometric luminosity can vary. We perform an SED fitting of these catalogues with CIGALE assuming three different SFHs: a single- and a double-exponentially-decreasing SFH, and a delayed SFH. Constraining the contribution of an AGN to the LIR (fracAGN) is very challenging for fracAGN<20%, with uncertainties of ~5-30% for higher fractions depending on the AGN type, while FIR and sub-mm data are essential. The AGN power has an impact on the estimation of $M_*$ in Type 1 and intermediate-type AGNs but has no effect for galaxies hosting Type 2 AGNs. We find that i...

  10. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model

    Science.gov (United States)

    Daout, S.; Barbot, S.; Peltzer, G.; Doin, M.-P.; Liu, Z.; Jolivet, R.

    2016-11-01

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below the metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, with 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology to estimate the tectonic loading and seismic potential of buried fault networks.

  11. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
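
    A toy numpy sketch of the re-weighting idea (the climate covariance, ensemble, and sizes below are invented): each age model starts with equal prior weight, and its posterior weight is proportional to the likelihood of the proxy series, placed on that age model's time axis, under the inferred climate covariance.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(3)

    n_models, n_times = 50, 30
    clim_mean = np.zeros(n_times)
    # AR(1)-like space-time covariance standing in for the inferred structure.
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    clim_cov = 0.5 ** lags

    # Toy ensemble: the proxy series as placed by each candidate age model.
    proxy_on_age_model = rng.normal(size=(n_models, n_times))

    log_w = np.array([multivariate_normal.logpdf(p, clim_mean, clim_cov)
                      for p in proxy_on_age_model])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                   # a priori equal, a posteriori updated
    print("effective number of age models:", 1.0 / np.sum(w ** 2))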

  12. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2014-12-01

    Full Text Available Reconstructions of late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances may be achievable if time-uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches to accounting for time uncertainty are generally limited to repeating the reconstruction using each of an ensemble of age models, thereby inflating the final estimated uncertainty; in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space-time covariance structure of the climate to re-weight the possible age models. Here we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, while a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age-model probabilities decreases uncertainty in the climate reconstruction, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.

  13. Estimating the Properties of Hard X-Ray Solar Flares by Constraining Model Parameters

    Science.gov (United States)

    Ireland, J.; Tolbert, A. K.; Schwartz, R. A.; Holman, G. D.; Dennis, B. R.

    2013-01-01

    We wish to better constrain the properties of solar flares by exploring how parameterized models of solar flares interact with uncertainty estimation methods. We compare four different methods of calculating uncertainty estimates in fitting parameterized models to Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) X-ray spectra, considering only statistical sources of error. Three of the four methods are based on estimating the scale size of the minimum in a hypersurface formed by the weighted sum of squared differences between the model fit and the data as a function of the fit parameters, and are implemented as commonly practiced. The fourth method is also based on the difference between the data and the model, but instead uses Bayesian data analysis and Markov chain Monte Carlo (MCMC) techniques to calculate an uncertainty estimate. Two flare spectra are modeled: one from the Geostationary Operational Environmental Satellite (GOES) X1.3-class flare of 2005 January 19, and the other from the X4.8 flare of 2002 July 23. We find that the four methods give approximately the same uncertainty estimates for the 2005 January 19 spectral fit parameters, but lead to very different uncertainty estimates for the 2002 July 23 spectral fit. This is because each method implements a different analysis of the hypersurface, yielding method-dependent results that can differ greatly depending on the shape of the hypersurface. The hypersurface arising from the 2005 January 19 analysis is consistent with a normal distribution; therefore, the assumptions behind the three non-Bayesian uncertainty estimation methods are satisfied and similar estimates are found. The 2002 July 23 analysis shows that the hypersurface is not consistent with a normal distribution, indicating that the assumptions behind the three non-Bayesian uncertainty estimation methods are not satisfied, leading to differing estimates of the uncertainty. We find that the shape of the hypersurface is crucial in understanding

  14. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    Full Text Available The map of the complete Bouguer anomaly of Costa Rica shows an elongated NW-SE-trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central, which is built of geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly and characterization of the fluid pathways and reservoirs of arc magmatism, a constrained 3D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology, previously published seismic tomography and P-wave velocity models stemming from wide-angle refraction seismics, as well as results from methods of direct interpretation of the gravity field obtained for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.

  15. Optimisation of the Population Monte Carlo algorithm: Application to constraining isocurvature models with cosmic microwave background data

    CERN Document Server

    Moodley, Darell

    2015-01-01

    We optimise the parameters of the Population Monte Carlo algorithm using numerical simulations. The optimisation is based on an efficiency statistic related to the number of samples evaluated prior to convergence, and is applied to a D-dimensional Gaussian distribution to derive optimal scaling laws for the algorithm parameters. More complex distributions such as the banana and bimodal distributions are also studied. We apply these results to a cosmological parameter estimation problem that uses CMB anisotropy data from the WMAP nine-year release to constrain a six-parameter adiabatic model and a fifteen-parameter admixture model, consisting of correlated adiabatic and isocurvature perturbations. In the case of the adiabatic model and the admixture model we find respective degradation factors of three and twenty, relative to the optimal Gaussian case, due to degeneracies in the underlying parameter space. The WMAP nine-year data constrain the admixture model to have an isocurvature fraction of at most $36.3\%$...
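
    A minimal Population Monte Carlo loop conveys the algorithm being tuned: iterated importance sampling in which the proposal's moments are adapted from the weighted sample. The one-dimensional target and all tuning constants below are illustrative, not the paper's settings.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    def log_target(x):
        return norm.logpdf(x, loc=3.0, scale=0.7)   # toy posterior

    mu, sig, n = 0.0, 5.0, 2000
    for it in range(8):
        x = rng.normal(mu, sig, n)                  # sample the proposal
        logw = log_target(x) - norm.logpdf(x, mu, sig)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)                  # convergence diagnostic
        mu = np.sum(w * x)                          # adapt proposal moments
        sig = np.sqrt(np.sum(w * (x - mu) ** 2)) + 1e-6
        print(f"iter {it}: mu={mu:.3f}, sigma={sig:.3f}, ESS={ess:.0f}")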

  16. Virtual Models of Long-Term Care

    Science.gov (United States)

    Phenice, Lillian A.; Griffore, Robert J.

    2012-01-01

    Nursing homes, assisted living facilities, and home-care organizations use web sites to describe their services to potential consumers. This virtual ethnographic study developed models representing how potential consumers may understand this information, using data from the web sites of 69 long-term-care providers. The content of long-term-care web…

  17. Inhomogeneous Universe Models with Varying Cosmological Term

    CERN Document Server

    Chimento, Luis P.; Pavon, Diego

    1998-01-01

    The evolution of a class of inhomogeneous spherically symmetric universe models possessing a varying cosmological term and a material fluid, with an adiabatic index either constant or not, is studied.

  18. Using data-driven model-brain mappings to constrain formal models of cognition

    NARCIS (Netherlands)

    Borst, Jelmer P; Nijboer, Menno; Taatgen, Niels A; van Rijn, Hedderik; Anderson, John R

    2015-01-01

    In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping f

  19. Globally COnstrained Local Function Approximation via Hierarchical Modelling, a Framework for System Modelling under Partial Information

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman

    2000-01-01

    Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation. In this framework, constraints are introduced to ensure the conformity of the estimates to a given global structure. Hierarchical models are then utilized as a tool to accommodate global model uncertainties via parametric variabilities within the structure. The global parameters and their associated uncertainties are estimated simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under a prior linear time invariant structure, where local regression fails as a result of high dimensionality.

  20. Chance-Constrained Model for Real-Time Reservoir Operation Using Drought Duration Curve

    Science.gov (United States)

    Takeuchi, Kuniyoshi

    1986-04-01

    The seasonal drought duration curve (SDDC) fβ(m|τ) is defined as a deterministic equivalent of the average streamflow over an m-day period starting from date τ with probability of failure β. This curve provides an estimate of the sum of inflows over m days starting from date τ in a T (= 1/β)-year drought. The reservoir system considered is a single-purpose reservoir already in service. The demand pattern is predetermined, and the percentage of deficit in meeting the demand (supply cut) is left to the operators' judgement. A chance-constrained model was developed for such a system. The model determines the percentage of supply cut on date τ in such a way that the probability of exhaustion of reservoir storage Sτ+m at the beginning of date τ+m is maintained below a given constant βm for all 1 ≤ m ≤ M, i.e., Prob{Sτ+m ≤ 0} ≤ βm, m = 1, 2, …, M, where M is the number of days in the future to be considered in making a current decision on date τ, and βm are a given set of allowable exhaustion probabilities selected from an indifference curve between reservoir exhaustion probability β and the anticipated time to its occurrence, m. The reservoir operation rule thus developed was named the DDC rule curves and demonstrated to be operationally satisfactory through a simulation study of the 1978-1979 Fukuoka drought.
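
    A small numeric illustration of the constraint (all values invented): given the current storage, the demand pattern, and the SDDC inflow sums for the design drought, the largest uniform supply fraction keeping Sτ+m nonnegative for all m can be read off directly.

    import numpy as np

    M = 5
    S_tau = 10.0                                   # current storage
    demand = np.full(M, 12.0)                      # demand on days tau..tau+M-1
    f_beta = np.array([6.0, 13.0, 21.0, 30.0, 40.0])   # design-drought inflow sums

    # Largest uniform supply fraction r such that, in the design drought,
    # S_{tau+m} = S_tau + f_beta(m) - r * sum(demand[:m]) >= 0 for all m,
    # i.e. the chance constraint Prob{S_{tau+m} <= 0} <= beta_m is met
    # at the design quantile.
    cum_demand = np.cumsum(demand)
    r = min(1.0, np.min((S_tau + f_beta) / cum_demand))
    print(f"allowable supply fraction: {r:.2f}  (supply cut: {1 - r:.0%})")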

  1. Group-constrained sparse fMRI connectivity modeling for mild cognitive impairment identification.

    Science.gov (United States)

    Wee, Chong-Yaw; Yap, Pew-Thian; Zhang, Daoqiang; Wang, Lihong; Shen, Dinggang

    2014-03-01

    Emergence of advanced network analysis techniques utilizing resting-state functional magnetic resonance imaging (R-fMRI) has enabled a more comprehensive understanding of neurological disorders at a whole-brain level. However, inferring brain connectivity from R-fMRI is a challenging task, particularly when the ultimate goal is good control-patient classification performance, owing to perplexing noise effects, the curse of dimensionality, and inter-subject variability. Incorporating sparsity into connectivity modeling may partially remedy this problem, since most biological networks are intrinsically sparse. Nevertheless, a sparsity constraint, when applied at an individual level, will inevitably cause inter-subject variability and hence degrade classification performance. To this end, we formulate the R-fMRI time series of each region of interest (ROI) as a linear representation of the time series of other ROIs to infer sparse connectivity networks that are topologically identical across individuals. This formulation allows simultaneous selection of a common set of ROIs across subjects so that their linear combination best estimates the time series of the considered ROI. Specifically, an l1-norm penalty is imposed for each subject to filter out spurious or insignificant connections and produce sparse networks. A group constraint is then imposed via multi-task learning using an l2-norm to encourage consistent non-zero connections across subjects. This group constraint is crucial since the network topology is identical for all subjects while individual information is still preserved via different connectivity values. We validated the proposed modeling in mild cognitive impairment identification, and the promising results achieved demonstrate its superiority in disease characterization, particularly its greater sensitivity to early-stage brain pathologies. The inferred group-constrained sparse network is found to be biologically plausible and is highly
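
    A sketch of the group-constrained estimation (a generic proximal-gradient solver for the l1/l2,1 idea, not the authors' exact algorithm; the data are synthetic): each subject regresses a target ROI on the others, and rows of the coefficient matrix are group-soft-thresholded across subjects so the same connections are zeroed for everyone.

    import numpy as np

    rng = np.random.default_rng(4)

    S, T, R = 5, 120, 20                  # subjects, time points, predictor ROIs
    X = rng.normal(size=(S, T, R))
    w_true = np.zeros(R)
    w_true[[2, 7, 11]] = [0.8, -0.6, 0.5]  # shared "true" connections
    Y = np.einsum("str,r->st", X, w_true) + 0.1 * rng.normal(size=(S, T))

    lam = 8.0                             # group-sparsity strength (illustrative)
    eta = 1.0 / max(np.linalg.norm(X[s], 2) ** 2 for s in range(S))
    W = np.zeros((S, R))

    for _ in range(500):
        # Gradient step on each subject's least-squares data term.
        G = np.stack([X[s].T @ (X[s] @ W[s] - Y[s]) for s in range(S)])
        W -= eta * G
        # Group soft-threshold each ROI's coefficients across subjects,
        # zeroing the same connections for every subject (l2,1 penalty).
        norms = np.linalg.norm(W, axis=0)
        W *= np.maximum(0.0, 1.0 - eta * lam / np.maximum(norms, 1e-12))

    print("selected predictor ROIs:",
          np.flatnonzero(np.linalg.norm(W, axis=0) > 1e-6))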

  2. Bone architecture adaptations after spinal cord injury: impact of long-term vibration of a constrained lower limb

    Science.gov (United States)

    Dudley-Javoroski, S.; Petrie, M. A.; McHenry, C. L.; Amelon, R. E.; Saha, P. K.

    2015-01-01

    Summary: This study examined the effect of a controlled dose of vibration upon bone density and architecture in people with spinal cord injury (who eventually develop severe osteoporosis). Very sensitive computed tomography (CT) imaging revealed no effect of vibration after 12 months, but other doses of vibration may still be worth testing. Introduction: The purposes of this report were to determine the effect of a controlled dose of vibratory mechanical input upon individual trabecular bone regions in people with chronic spinal cord injury (SCI) and to examine the longitudinal bone architecture changes in both the acute and chronic state of SCI. Methods: Participants with SCI received unilateral vibration of the constrained lower limb segment while sitting in a wheelchair (0.6g, 30 Hz, 20 min, three times weekly). The opposite limb served as a control. Bone mineral density (BMD) and trabecular micro-architecture were measured with high-resolution multi-detector CT. For comparison, one participant was studied from the acute (0.14 year) to the chronic state (2.7 years). Results: Twelve months of vibration training did not yield adaptations of BMD or trabecular micro-architecture for the distal tibia or the distal femur. BMD and trabecular network length continued to decline at several distal femur sub-regions, contrary to previous reports suggesting a “steady state” of bone in chronic SCI. In the participant followed from acute to chronic SCI, the decline in BMD and architecture varied systematically across different anatomical segments of the tibia and femur. Conclusions: This study supports the conclusion that vibration training, at this study's dose parameters, is not an effective anti-osteoporosis intervention for people with chronic SCI. Using a high-spatial-resolution CT methodology and segmental analysis, we illustrate novel longitudinal changes in bone that occur after spinal cord injury. PMID:26395887

  3. Dynamically constrained uncertainty for the Kalman filter covariance in the presence of model error

    Science.gov (United States)

    Grudzien, Colin; Carrassi, Alberto; Bocquet, Marc

    2017-04-01

    The forecasting community has long understood the impact of dynamic instability on the uncertainty of predictions in physical systems, and this has led to innovative filter designs that take advantage of knowledge of the process models. The advantages of this combined approach to filtering, including both a dynamic and a statistical understanding, include dimensional reductions and robust feature selection in the observational design of filters. In the context of perfect models, we have shown that the uncertainty in prediction is damped along the directions of stability and the support of the uncertainty conforms to the dominant system instabilities. Our current work demonstrates the same constraint on the uncertainty for systems with model error. Specifically: we produce analytical upper bounds on the uncertainty in the stable, backwards orthogonal Lyapunov vectors in terms of the local Lyapunov exponents and the scale of the additive noise; we demonstrate that for systems with model noise, the least upper bound on the uncertainty depends on the inverse relationship between the leading Lyapunov exponent and the observational certainty; and we numerically compute the invariant scaling factor of the model error which determines the asymptotic uncertainty. This dynamic scaling of model error is identifiable independently of the noise and is computable directly in terms of the system's dynamic invariants; in this way the physical process itself may mollify the growth of modelling errors. For systems with strongly dissipative behaviour, we demonstrate that the growth of the uncertainty can be confined to the unstable-neutral modes independently of the filtering process, and we connect the observational design to take advantage of this dynamic characteristic of the filtering error.

  4. Constraining the GRB-Magnetar Model by Means of the Galactic Pulsar Population

    NARCIS (Netherlands)

    N. Rea; M. Gullón; J.A. Pons; R. Perna; M.G. Dainotti; J.A. Miralles; D.F. Torres

    2015-01-01

    A large fraction of Gamma-ray bursts (GRBs) displays an X-ray plateau phase within <10^5 s from the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GR

  5. Maximum likelihood estimation for constrained parameters of multinomial distributions - Application to Zipf-Mandelbrot models

    NARCIS (Netherlands)

    Izsak, F.

    2006-01-01

    A numerical maximum likelihood (ML) estimation procedure is developed for the constrained parameters of multinomial distributions. The main difficulty involved in computing the likelihood function is the precise and fast determination of the multinomial coefficients. For this the coefficients are
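
    A generic sketch of such a constrained ML fit for a Zipf-Mandelbrot law p_k ∝ (k+q)^(-s) (the counts are invented, and normalization is truncated to the observed ranks for simplicity).

    import numpy as np
    from scipy.optimize import minimize

    counts = np.array([512, 260, 171, 130, 101, 84, 72, 62, 55, 49])
    ranks = np.arange(1, counts.size + 1)

    def nll(theta):
        # Negative multinomial log-likelihood up to an additive constant
        # (the multinomial coefficient does not depend on the parameters).
        s, q = theta
        logp = -s * np.log(ranks + q)
        logp -= np.log(np.sum((ranks + q) ** (-s)))  # truncated normalization
        return -np.sum(counts * logp)

    res = minimize(nll, x0=[1.0, 1.0],
                   bounds=[(0.01, 10.0), (0.0, 50.0)])  # parameter constraints
    print("fitted s, q:", res.x)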

  6. An inexact double-sided chance-constrained model for air quality management in Nanshan District, Shenzhen, China

    Science.gov (United States)

    Shao, Liguo; Xu, Ye; Huang, Guohe

    2014-12-01

    In this study, an inexact double-sided fuzzy-random-chance-constrained programming (IDSFRCCP) model was developed for supporting air quality management of the Nanshan District of Shenzhen, China, under uncertainty. IDSFRCCP is an integrated model incorporating interval linear programming and double-sided fuzzy-random-chance-constrained programming models. It can express uncertain information as both fuzzy random variables and discrete intervals. The proposed model was solved based on the stochastic and fuzzy chance-constrained programming techniques and an interactive two-step algorithm. The air quality management system of Nanshan District, including one pollutant, six emission sources, six treatment technologies and four receptor sites, was used to demonstrate the applicability of the proposed method. The results indicated that the IDSFRCCP was capable of helping decision makers to analyse trade-offs between system cost and risk of constraint violation. The mid-range solutions tending to lower bounds with moderate αh and qi values were recommended as decision alternatives owing to their robust characteristics.

  7. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in

  8. Goldstino and sgoldstino in microscopic models and the constrained superfields formalism

    CERN Document Server

    Antoniadis, I; Ghilencea, D M

    2012-01-01

    We examine the exact relation between the superconformal symmetry breaking chiral superfield (X) and the goldstino superfield in microscopic models of an arbitrary Kahler potential (K) and in the presence of matter fields. We investigate the decoupling of the massive sgoldstino and scalar matter fields and the off-shell/on-shell SUSY expressions of their superfields in terms of fermion composites. For general K of two superfields, we study the properties of the superfield X after integrating out these scalar fields, to show that in the infrared it satisfies (off-shell) the condition $X^3=0$ and $X^2\

  9. 3D spherical models of Martian mantle convection constrained by melting history

    Science.gov (United States)

    Sekhar, Pavithra; King, Scott D.

    2014-02-01

    While most of Tharsis rise was in place by the end of the Noachian period, at least one volcano on Tharsis swell (Arsia Mons) has been active within the last 2 Ma. This places an important constraint on mantle convection and on the thermal evolution of Mars. The existence of recent volcanism on Mars implies that adiabatic decompression melting and, hence, upwelling convective flow in the mantle remain important on Mars at present. The thermal history of Mars can be constrained by the history of melt production, specifically the generation of sufficient melt in the first billion years of the planet's history to produce Tharsis rise, as well as present-day melt to explain recent volcanism. In this work, mantle convection simulations were performed using the finite element code CitcomS in a 3D sphere, starting from a uniformly hot mantle and integrating forward in time for the age of the solar system. We implement constant and decaying radioactive heat sources, vary the partitioning of heat sources between the crust and mantle, and consider a decreasing core-mantle boundary temperature and the latent heat of melting. The constant heat source calculations produce sufficient melt to create Tharsis early in Martian history and continue to produce significant melt to the present. Calculations with decaying radioactive heat sources generate excessive melt in the past, except when all the radiogenic elements are in the crust, and none produce melt after 2 Gyr. Producing a degree-1 or degree-2 structure may not be pivotal to explaining the Tharsis rise: we present multi-plume models where not every plume produces melt. The Rayleigh number controls the timing of the first peak of volcanism, while late-stage volcanism is controlled more by internal mantle heating. Decreasing the Rayleigh number increases the lithosphere thickness (i.e., depth), and increasing lithosphere thickness increases the mean mantle temperature. Increasing pressure reduces melt production while increasing temperature

  10. Spinal 5-HT7 Receptors and Protein Kinase A Constrain Intermittent Hypoxia-Induced Phrenic Long-term Facilitation

    OpenAIRE

    Hoffman, M. S.; Mitchell, G. S.

    2013-01-01

    Phrenic long-term facilitation (pLTF) is a form of serotonin-dependent respiratory plasticity induced by acute intermittent hypoxia (AIH). pLTF requires spinal Gq protein-coupled serotonin-2 receptor (5-HT2) activation, new synthesis of brain-derived neurotrophic factor (BDNF) and activation of its high-affinity receptor, TrkB. Intrathecal injections of selective agonists for Gs protein-coupled receptors (adenosine 2A and serotonin-7; 5-HT7) also induce long-lasting phrenic motor facilitation...

  11. Mercury's thermo-chemical evolution from numerical models constrained by Messenger observations

    Science.gov (United States)

    Tosi, N.; Breuer, D.; Plesa, A. C.; Wagner, F.; Laneuville, M.

    2012-04-01

    The Messenger spacecraft, in orbit around Mercury for almost one year, has been delivering a great deal of new information that is dramatically changing our understanding of the solar system's innermost planet. Tracking data from the Radio Science experiment yielded improved estimates of the first coefficients of the gravity field that permit determination of the normalized polar moment of inertia of the planet (C/MR2) and the ratio of the moment of inertia of the mantle to that of the whole planet (Cm/C). These two parameters provide a strong constraint on the internal mass distribution and, in particular, on the core mass fraction. With C/MR2 = 0.353 and Cm/C = 0.452 [1], interior structure models predict a core radius as large as 2000 km [2], leaving room for a silicate mantle shell with a thickness of only ~400 km, a value significantly smaller than the 600 km usually assumed in parametrized [3] as well as numerical models of Mercury's mantle dynamics and evolution [4]. Furthermore, the Gamma-Ray Spectrometer measured the surface abundance of radioactive elements, revealing, besides uranium and thorium, the presence of potassium. The latter, being moderately volatile, rules out traditional formation scenarios based on highly refractory materials, favoring instead a composition not too dissimilar from a chondritic model. Considering a 400 km thick mantle, we carry out a large series of 2D and 3D numerical simulations of the thermo-chemical evolution of Mercury's mantle. We model in a self-consistent way the formation of crust through partial melting, using Lagrangian tracers to account for the partitioning of radioactive heat sources between mantle and crust and variations of thermal conductivity. Assuming the relative surface abundance of radiogenic elements observed by Messenger to be representative of the bulk mantle composition, we attempt to constrain the degree to which uranium, thorium and potassium are concentrated in the silicate mantle through a broad

  12. Constraining the dynamics of the water budget at high spatial resolution in the world's water towers using models and remote sensing data; Snake River Basin, USA

    Science.gov (United States)

    Watson, K. A.; Masarik, M. T.; Flores, A. N.

    2016-12-01

    Mountainous, snow-dominated basins are often referred to as the water towers of the world because they store precipitation in seasonal snowpacks, which gradually melt and provide water supplies to downstream communities. Yet significant uncertainties remain in terms of quantifying the stores and fluxes of water in these regions as well as the associated energy exchanges. Constraining these stores and fluxes is crucial for advancing process understanding and managing these water resources in a changing climate. Remote sensing data are particularly important to these efforts due to the remoteness of these landscapes and high spatial variability in water budget components. We have developed a high resolution regional climate dataset extending from 1986 to the present for the Snake River Basin in the northwestern USA. The Snake River Basin is the largest tributary of the Columbia River by volume and a critically important basin for regional economies and communities. The core of the dataset was developed using a regional climate model, forced by reanalysis data. Specifically the Weather Research and Forecasting (WRF) model was used to dynamically downscale the North American Regional Reanalysis (NARR) over the region at 3 km horizontal resolution for the period of interest. A suite of satellite remote sensing products provide independent, albeit uncertain, constraint on a number of components of the water and energy budgets for the region across a range of spatial and temporal scales. For example, GRACE data are used to constrain basinwide terrestrial water storage and MODIS products are used to constrain the spatial and temporal evolution of evapotranspiration and snow cover. The joint use of both models and remote sensing products allows for both better understanding of water cycle dynamics and associated hydrometeorologic processes, and identification of limitations in both the remote sensing products and regional climate simulations.

  13. Integrating satellite retrieved leaf chlorophyll into land surface models for constraining simulations of water and carbon fluxes

    KAUST Repository

    Houborg, Rasmus

    2013-07-01

    In terrestrial biosphere models, key biochemical controls on carbon uptake by vegetation canopies are typically assigned fixed literature-based values for broad categories of vegetation types although in reality significant spatial and temporal variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite based retrievals of leaf chlorophyll for estimating leaf photosynthetic capacity and for constraining model simulations of water and carbon fluxes. © 2013 IEEE.

  14. Increased accuracy in mineral and hydrogeophysical modelling of HTEM data via detailed description of system transfer function and constrained inversion

    DEFF Research Database (Denmark)

    Viezzoli, Andrea; Christiansen, Anders Vest; Auken, Esben

    ... of the low pass filters present in any system, and of waveform repetition. Low pass filters affect the shallow to intermediate part of the model, whereas the waveform repetition affects the deeper part. Results show how filters and waveform are parameters, like frame altitude, Tx-Rx timing and so on, that need to be taken into account and modeled correctly during inversion of HTEM data. We then present an application of this approach on real VTEM data from an exploration survey. The results from constrained inversion of the VTEM data, compared with borehole information and with other modeling methodologies, show its...

  15. Constraining the Schwarzschild-de Sitter Solution in Models of Modified Gravity

    CERN Document Server

    Iorio, Lorenzo; Radicella, Ninfa; Saridakis, Emmanuel N

    2016-01-01

    The Schwarzschild-de Sitter (SdS) solution exists in the large majority of modified gravity theories, as expected, and in particular the effective cosmological constant is determined by the specific parameters of the given theory. We explore the possibility of using future extended radio-tracking data from the currently ongoing New Horizons mission in the outskirts of the Solar System, at about 40 au, in order to constrain this effective cosmological constant, and thus to impose constraints on each scenario's parameters. We investigate some of the recently most studied modified gravities, namely $f(R)$ and $f(T)$ theories, dRGT massive gravity, and Ho\v{r}ava-Lifshitz gravity, and we show that the New Horizons mission may bring an improvement of one to two orders of magnitude with respect to the present bounds from planetary orbital dynamics.

  16. Constraining mantle convection models with palaeomagnetic reversals record and numerical dynamos

    Science.gov (United States)

    Choblet, G.; Amit, H.; Husson, L.

    2016-11-01

    We present numerical models of mantle dynamics forced by plate velocity history over the last 450 Ma. The lower-mantle rheology and the thickness of a dense basal layer are systematically varied and several initial procedures are considered for each case. For some cases, the dependence on the mantle convection vigour is also examined. The resulting evolution of the CMB heat flux is analysed in terms of criteria to promote or inhibit reversals inferred from numerical dynamos. Most models present a rather dynamic lower mantle with the emergence of two thermochemical piles towards present-day. Only a small minority of models present two stationary piles over the last 450 Myr. At present-day, the composition field obtained in our models is found to correlate better with tomography than the temperature field. In addition, the temperature field immediately at the CMB (and thus the heat flux pattern) differs slightly from the average temperature field over the 100-km thick mantle layer above it. The evolution of the mean CMB heat flux or of the amplitude of heterogeneity seldom presents the expected correlation with the evolution of the palaeomagnetic reversal frequency, suggesting these effects cannot explain the observations. In contrast, our analysis favours `inertial control' on the geodynamo associated with polar cooling and, in some cases, the breaking of Taylor columns in the outer core as sources of increased reversal frequency. Overall, the most likely candidates among our mantle dynamics models involve a viscosity increase in the mantle equal to or smaller than 30: models with a discontinuous viscosity increase at the transition zone tend to agree better at present-day with observations of seismic tomography, but models with a gradual viscosity increase agree better with some of the criteria proposed to affect reversal frequency.

  17. Constraining mantle convection models with paleomagnetic reversals record and numerical dynamos

    Science.gov (United States)

    Choblet, G.; Amit, H.; Husson, L.

    2016-09-01

    We present numerical models of mantle dynamics forced by plate velocity history over the last 450 Ma. The lower mantle rheology and the thickness of a dense basal layer are systematically varied and several initial procedures are considered for each case. For some cases, the dependence on the mantle convection vigor is also examined. The resulting evolution of the CMB heat flux is analyzed in terms of criteria known to promote or inhibit reversals inferred from numerical dynamos. Most models present a rather dynamic lower mantle with the emergence of two thermochemical piles towards present-day. Only a small minority of models present two stationary piles over the last 450 Myr. At present-day, the composition field obtained in our models is found to correlate better with tomography than the temperature field. In addition, the temperature field immediately at the CMB (and thus the heat flux pattern) differs slightly from the average temperature field over the 100-km thick mantle layer above it. The evolution of the mean CMB heat flux or of the amplitude of heterogeneities seldom presents the expected correlation with the evolution of the paleomagnetic reversal frequency, suggesting these effects cannot explain the observations. In contrast, our analysis favors 'inertial control' on the geodynamo associated with polar cooling and, in some cases, the breaking of Taylor columns in the outer core as sources of increased reversal frequency. Overall, the most likely candidates among our mantle dynamics models involve a viscosity increase in the mantle equal to or smaller than 30: models with a discontinuous viscosity increase at the transition zone tend to agree better at present-day with observations of seismic tomography, but models with a gradual viscosity increase agree better with some of the criteria proposed to affect reversal frequency.

  18. Term structure modeling and asymptotic long rate

    NARCIS (Netherlands)

    Yao, Y.

    1999-01-01

    This paper examines the dynamics of the asymptotic long rate in three classes of term structure models. It shows that, in a frictionless and arbitrage-free market, the asymptotic long rate is a non-decreasing process. This gives an alternative proof of the same result of Dybvig et al. (Dybvig, P.H.,

  19. A Theory of Cramer-Rao Bounds for Constrained Parametric Models

    Science.gov (United States)

    2010-01-01

    ...multivariable calculus and, specifically, the use of the implicit function theorem. The reward for this approach will be a seamless presentation of statistical inference involving the constrained Cramér-Rao bound. From the perspective of multivariable calculus, the constraint f(θ) = 0 effectively restricts...
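
    For context, the constrained Cramér-Rao bound that the report develops has a well-known matrix form in the literature (e.g., work by Stoica and co-authors, cited in the report): with Fisher information J and U an orthonormal basis for the null space of the constraint Jacobian F, the bound is U(UᵀJU)⁻¹Uᵀ. A toy numerical sketch of that formula, with invented values:

    ```python
    # Hedged toy example of the constrained CRB: CRB = U (U^T J U)^{-1} U^T,
    # where U spans the null space of the constraint Jacobian; values invented.
    import numpy as np
    from scipy.linalg import null_space

    J = np.diag([4.0, 2.0, 1.0])          # toy Fisher information matrix
    F = np.array([[1.0, 1.0, 1.0]])       # Jacobian of constraint f(theta) = 0
    U = null_space(F)                      # orthonormal basis of null(F)
    crb = U @ np.linalg.inv(U.T @ J @ U) @ U.T
    print(np.diag(crb))                    # per-parameter variance lower bounds
    ```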

  20. Constraining local 3-D models of the saturated-zone, Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Barr, G.E.; Shannon, S.A.

    1994-04-01

    A qualitative three-dimensional analysis of the saturated zone flow system was performed for an 8 km × 8 km region including the potential Yucca Mountain repository site. Certain recognized geologic features of unknown hydraulic properties were introduced to assess the general response of the flow field to these features. Two of these features, the Solitario Canyon fault and the proposed fault in Drill Hole Wash, appear to constrain flow and allow calibration.

  1. Constraining local 3-D models of the saturated-zone, Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Barr, G.E.; Shannon, S.A. [Sandia National Labs., Albuquerque, NM (United States)

    1994-12-31

    A qualitative three-dimensional analysis of the saturated zone flow system was performed for an 8 km × 8 km region including the potential Yucca Mountain repository site. Certain recognized geologic features of unknown hydraulic properties were introduced to assess the general response of the flow field to these features. Two of these features, the Solitario Canyon fault and the proposed fault in Drill Hole Wash, appear to constrain flow and allow calibration.

  2. Using noble gas tracers to constrain a groundwater flow model with recharge elevations: A novel approach for mountainous terrain

    Science.gov (United States)

    Doyle, Jessica M.; Gleeson, Tom; Manning, Andrew H.; Mayer, K. Ulrich

    2015-10-01

    Environmental tracers provide information on groundwater age, recharge conditions, and flow processes which can be helpful for evaluating groundwater sustainability and vulnerability. Dissolved noble gas data have proven particularly useful in mountainous terrain because they can be used to determine recharge elevation. However, tracer-derived recharge elevations have not been utilized as calibration targets for numerical groundwater flow models. Herein, we constrain and calibrate a regional groundwater flow model with noble-gas-derived recharge elevations for the first time. Tritium and noble gas tracer results improved the site conceptual model by identifying a previously uncertain contribution of mountain block recharge from the Coast Mountains to an alluvial coastal aquifer in humid southwestern British Columbia. The revised conceptual model was integrated into a three-dimensional numerical groundwater flow model and calibrated to hydraulic head data in addition to recharge elevations estimated from noble gas recharge temperatures. Recharge elevations proved to be imperative for constraining hydraulic conductivity, recharge location, and bedrock geometry, and thus minimizing model nonuniqueness. Results indicate that 45% of recharge to the aquifer is mountain block recharge. A similar match between measured and modeled heads was achieved in a second numerical model that excludes the mountain block (no mountain block recharge), demonstrating that hydraulic head data alone are incapable of quantifying mountain block recharge. This result has significant implications for understanding and managing source water protection in recharge areas, potential effects of climate change, the overall water budget, and ultimately ensuring groundwater sustainability.

  3. Constraining stellar population models - I. Age, metallicity, and abundance pattern compilation for Galactic globular clusters

    CERN Document Server

    Roediger, Joel C; Graves, Genevieve; Schiavon, Ricardo

    2013-01-01

    We present an extensive literature compilation of age, metallicity, and chemical abundance pattern information for the 41 Galactic globular clusters (GGCs) studied by Schiavon et al. (2005). Our compilation constitutes a notable improvement over previous similar work, particularly in terms of chemical abundances. Its primary purpose is to enable detailed evaluations of and refinements to stellar population synthesis models designed to recover the above information for unresolved stellar systems based on their integrated spectra. However, since the Schiavon sample spans a wide range of the known GGC parameter space, our compilation may also benefit investigations related to a variety of astrophysical endeavours, such as the early formation of the Milky Way, the chemical evolution of GGCs, and stellar evolution and nucleosynthesis. For instance, we confirm with our compiled data that the GGC system has a bimodal metallicity distribution and is uniformly enhanced in the alpha-elements. When paired with the ages...

  4. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/-1.6)% of the IMF explodes as core-collapse supernovae (CC-SNe), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10^3 M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  5. The Mechanism of Microearthquakes Related to a Gas Storage Using Differently Constrained Source Models: A Case Study of the Háje Location, Czech Republic

    Science.gov (United States)

    Jechumtálová, Zuzana; Šílený, Jan; Málek, Jiří

    2016-09-01

    The resolution of a source mechanism is investigated in terms of three differently constrained source models: the moment tensor, the shear-tensile crack source model, and the double couple source model. The moment tensor (MT) is an unconstrained description of a general dipole source; the shear-tensile crack (STC) represents a slip along a fault with an off-plane component; and the double couple (DC) corresponds to a simple shear slip along a fault. The inversion of body wave amplitudes is applied to microseismic events located in the vicinity of the underground gas storage Háje (Czech Republic), where volume changes in the source can be expected. The orientation of the simple shear fracture component is almost always resolved well, independently of the source model used. On the other hand, the non-shear components differ largely among the source models considered, owing to both the model definition and the robustness of the inversion. A comparison of the inversion results for the three alternative source models permits an assessment of the reliability of the retrieved non-shear components. Application of the STC model to all events appears to be the most appropriate. The analysis confirms a shear slip for three events and tensile fracturing for another three events.

  6. Exploring the biological consequences of conformational changes in aspartame models containing constrained analogues of phenylalanine.

    Science.gov (United States)

    Mollica, Adriano; Mirzaie, Sako; Costante, Roberto; Carradori, Simone; Macedonio, Giorgia; Stefanucci, Azzurra; Dvoracsko, Szabolcs; Novellino, Ettore

    2016-12-01

    The dipeptide aspartame (Asp-Phe-OMe) is a sweetener widely used by the food industry as a replacement for sucrose. 2',6'-Dimethyltyrosine (DMT) and 2',6'-dimethylphenylalanine (DMP) are two synthetic constrained analogues of phenylalanine, with limited freedom in χ-space due to the presence of methyl groups at positions 2' and 6' of the aromatic ring. These residues have been shown to increase the activity of opioid peptides, such as endomorphins, by improving binding to the opioid receptors. In this work, DMT and DMP were synthesized following a diketopiperazine-mediated route, and the corresponding aspartame derivatives (Asp-DMT-OMe and Asp-DMP-OMe) were evaluated in vivo and in silico for their activity as synthetic sweeteners.

  7. Evolution in totally constrained models: Schr\\"odinger vs. Heisenberg pictures

    CERN Document Server

    Olmedo, Javier

    2016-01-01

    We study the relation between two evolution pictures that are currently considered for totally constrained theories. Both descriptions are based on Rovelli's evolving constants approach, where one identifies a (possibly local) degree of freedom of the system as an internal time. This method is well understood classically in several situations. The purpose of this manuscript is to further analyze this approach at the quantum level. Concretely, we will compare the (Schr\\"odinger-like) picture where the physical states evolve in time with the (Heisenberg-like) picture in which one defines parametrized observables (or evolving constants of the motion). We will show that in the particular situations considered in this manuscript (the parametrized relativistic particle and a spatially flat homogeneous and isotropic spacetime coupled to a massless scalar field) both descriptions are equivalent. We will finally comment on possible issues and on the genericness of the equivalence between both pictures.

  8. Constraining the Physics of AM Canum Venaticorum Systems with the Accretion Disk Instability Model

    Science.gov (United States)

    Cannizzo, John K.; Nelemans, Gijs

    2015-01-01

    Recent work by Levitan et al. has expanded the long-term photometric database for AM CVn stars. In particular, their outburst properties are well correlated with orbital period and allow constraints to be placed on the secular mass transfer rate between secondary and primary if one adopts the disk instability model for the outbursts. We use the observed range of outbursting behavior for AM CVn systems as a function of orbital period to place a constraint on mass transfer rate versus orbital period. We infer a rate of approximately 5 × 10^-9 M⊙ yr^-1 (P_orb/1000 s)^-5.2. We show that the functional form so obtained is consistent with the recurrence time-orbital period relation found by Levitan et al. using a simple theory for the recurrence time. Also, we predict that their steep dependence of outburst duration on orbital period will flatten considerably once the longer orbital period systems have more complete observations.
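
    The scaling quoted in this abstract is explicit enough to evaluate directly. The snippet below merely transcribes it; the sample orbital periods are invented but representative of AM CVn systems:

    ```python
    # Direct transcription of the quoted scaling:
    # Mdot ~ 5e-9 Msun/yr * (P_orb / 1000 s)^(-5.2).
    def mass_transfer_rate(p_orb_seconds):
        """Secular mass transfer rate in solar masses per year."""
        return 5e-9 * (p_orb_seconds / 1000.0) ** -5.2

    for p in (600, 1000, 2000):   # representative AM CVn orbital periods [s]
        print(p, mass_transfer_rate(p))
    ```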

  10. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  11. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    Science.gov (United States)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to the strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states than spatially heterogeneous error modeling, by up to a factor of ∼10 as quantified by the Earth Mover's Distance (EMD). DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces smaller mean absolute differences as well as higher correlation between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both WC and SC approaches, indicating possible model structural deficiency in describing low flows or
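
    Schematically, the weak-constraint approach augments the usual strong-constraint variational cost with an estimated-and-penalized model-error term. The sketch below is a generic textbook form, not the paper's exact control-variable setup (there the error is added to runoff components before routing):

    ```python
    # Generic strong- vs weak-constraint variational DA costs (illustrative only).
    import numpy as np

    def cost_strong(x, xb, y, H, Rinv, Binv):
        """Strong constraint: the model (folded into H here) is assumed perfect."""
        d = y - H(x)
        return d @ Rinv @ d + (x - xb) @ Binv @ (x - xb)

    def cost_weak(x, eta, xb, y, H, Rinv, Binv, Qinv):
        """Weak constraint: an additive model error eta is estimated and penalized."""
        d = y - H(x) - eta
        return d @ Rinv @ d + (x - xb) @ Binv @ (x - xb) + eta @ Qinv @ eta

    # Tiny usage with identity error covariances and a trivial linear H:
    H = lambda s: s[:1]
    print(cost_weak(np.array([1.0, 0.5]), np.array([0.1]), np.zeros(2),
                    np.array([1.3]), H, np.eye(1), np.eye(2), np.eye(1)))
    ```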

  12. Computing arbitrage-free yields in multi-factor Gaussian shadow-rate term structure models

    OpenAIRE

    Marcel A. Priebsch

    2013-01-01

    This paper develops a method to approximate arbitrage-free bond yields within a term structure model in which the short rate follows a Gaussian process censored at zero (a "shadow-rate model" as proposed by Black, 1995). The censoring ensures that model-implied yields are constrained to be positive, but it also introduces non-linearity that renders standard bond pricing formulas inapplicable. In particular, yields are not linear functions of the underlying state vector as they are in affine t...
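
    A minimal illustration of the censoring that defines a shadow-rate model: the observed short rate is a Gaussian (here Vasicek-type) shadow rate truncated at zero. All parameters are invented; pricing the resulting non-linear yields is the hard part the paper addresses and is not attempted here:

    ```python
    # Black's (1995) censoring: observed short rate = max(shadow rate, 0).
    import numpy as np

    rng = np.random.default_rng(0)
    kappa, theta, sigma, dt, n = 0.5, 0.01, 0.02, 1 / 252, 2520
    s = np.empty(n)
    s[0] = -0.005                                  # shadow rate may be negative
    for t in range(1, n):                          # Ornstein-Uhlenbeck dynamics
        s[t] = s[t-1] + kappa * (theta - s[t-1]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    r = np.maximum(s, 0.0)                         # censored short rate, >= 0
    print(r.min(), r.mean())
    ```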

  13. Long-lived halocarbon trends and budgets from atmospheric chemistry modelling constrained with measurements in polar firn

    Directory of Open Access Journals (Sweden)

    P. Martinerie

    2009-01-01

    The budgets of seven halogenated gases (CFC-11, CFC-12, CFC-113, CFC-114, CFC-115, CCl4 and SF6) are studied by comparing measurements in polar firn air from two Arctic and three Antarctic sites with simulation results of two numerical models: a 2-D atmospheric chemistry model and a 1-D firn diffusion model. The first is used to calculate atmospheric concentrations from emission trends based on industrial inventories; the calculated concentration trends are used by the second to produce depth concentration profiles in the firn. The 2-D atmospheric model is validated in the boundary layer by comparison with atmospheric station measurements, and vertically for CFC-12 by comparison with balloon and FTIR measurements. Firn air measurements provide constraints on historical atmospheric concentrations over the last century. Age distributions in the firn are discussed using a Green function approach. Finally, our results are used as input to a radiative model in order to evaluate the radiative forcing of our target gases. Multi-species and multi-site firn air studies allow us to better constrain atmospheric trends. The low concentrations of all studied gases at the bottom of the firn, and their consistency with our model results, confirm that their natural sources are insignificant. Our results indicate that the emissions, sinks and trends of CFC-11, CFC-12, CFC-113, CFC-115 and SF6 are well constrained, whereas this is not the case for CFC-114 and CCl4. Significant emission-dependent changes in the lifetimes of halocarbons destroyed in the stratosphere were obtained. These changes result from the time needed for transport from the surface, where the gases are emitted, to the stratosphere, where they are destroyed. Efforts should be made to update and reduce the large uncertainties on CFC lifetimes.

  14. Long-lived halocarbon trends and budgets from atmospheric chemistry modelling constrained with measurements in polar firn

    Directory of Open Access Journals (Sweden)

    P. Martinerie

    2009-06-01

    The budgets of seven halogenated gases (CFC-11, CFC-12, CFC-113, CFC-114, CFC-115, CCl4 and SF6) are studied by comparing measurements in polar firn air from two Arctic and three Antarctic sites with simulation results of two numerical models: a 2-D atmospheric chemistry model and a 1-D firn diffusion model. The first is used to calculate atmospheric concentrations from emission trends based on industrial inventories; the calculated concentration trends are used by the second to produce depth concentration profiles in the firn. The 2-D atmospheric model is validated in the boundary layer by comparison with atmospheric station measurements, and vertically for CFC-12 by comparison with balloon and FTIR measurements. Firn air measurements provide constraints on historical atmospheric concentrations over the last century. Age distributions in the firn are discussed using a Green function approach. Finally, our results are used as input to a radiative model in order to evaluate the radiative forcing of our target gases. Multi-species and multi-site firn air studies allow us to better constrain atmospheric trends. The low concentrations of all studied gases at the bottom of the firn, and their consistency with our model results, confirm that their natural sources are small. Our results indicate that the emissions, sinks and trends of CFC-11, CFC-12, CFC-113, CFC-115 and SF6 are well constrained, whereas this is not the case for CFC-114 and CCl4. Significant emission-dependent changes in the lifetimes of halocarbons destroyed in the stratosphere were obtained. These changes result from the time needed for transport from the surface, where the gases are emitted, to the stratosphere, where they are destroyed. Efforts should be made to update and reduce the large uncertainties on CFC lifetimes.

  15. Constraining the GRB-magnetar model by means of the Galactic pulsar population

    CERN Document Server

    Rea, Nanda; Pons, Jose' A; Perna, Rosalba; Dainotti, Maria G; Miralles, Juan A; Torres, Diego F

    2015-01-01

    A large fraction of Gamma Ray Bursts (GRBs) displays an X-ray plateau phase within <10^{5} s from the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GRB-magnetar scenario. We re-analyze the X-ray plateaus of all Swift GRBs with known redshift, between January 2005 and August 2014. From the derived initial magnetic field distribution for the possible magnetars left behind by the GRBs, we study the evolution and properties of a simulated GRB-magnetar population using numerical simulations of magnetic field evolution, coupled with Monte Carlo simulations of Pulsar Population Synthesis in our Galaxy. We find that if the GRB X-ray plateaus are powered by the rotational energy of a newly formed magnetar, the current observational properties of the Galactic magnetar population are not compatible with being formed within the GRB scenario (regardless of the GRB type...

  16. Combining observational techniques to constrain convection in evolved massive star models

    CERN Document Server

    Georgy, C; Meynet, G

    2014-01-01

    Recent stellar evolution computations indicate that massive stars in the range ~20-30 Msun are located in the blue supergiant (BSG) region of the Hertzsprung-Russell diagram at two different stages of their life: immediately after the main sequence (MS, group 1) and during a blueward evolution after the red supergiant phase (group 2). From the observed pulsational properties of a subgroup of variable BSGs (alpha Cyg variables), one can deduce that these stars belong to group 2. It is, however, difficult to simultaneously fit the observed surface abundances and gravity for these stars, and this provides a way to constrain the physical processes of chemical species transport in massive stars. We will show here that the surface abundances are extremely sensitive to the physics of convection, particularly the location of the intermediate convective shell that appears at the ignition of hydrogen shell burning after the MS. Our results show that the use of the Ledoux criterion to determine the convective r...

  17. PCLR: phase-constrained low-rank model for compressive diffusion-weighted MRI.

    Science.gov (United States)

    Gao, Hao; Li, Longchuan; Zhang, Kai; Zhou, Weifeng; Hu, Xiaoping

    2014-11-01

    This work develops a compressive sensing approach for diffusion-weighted (DW) MRI. A phase-constrained low-rank (PCLR) approach was developed using the image coherence across the DW directions for efficient compressive DW MRI, while accounting for drastic phase changes across the DW directions, possibly as a result of eddy current, and rigid and nonrigid motions. In PCLR, a low-resolution phase estimation was used for removing phase inconsistency between DW directions. In our implementation, GRAPPA (generalized autocalibrating partial parallel acquisition) was incorporated for better phase estimation while allowing higher undersampling factor. An efficient and easy-to-implement image reconstruction algorithm, consisting mainly of partial Fourier update and singular value decomposition, was developed for solving PCLR. The error measures based on diffusion-tensor-derived metrics and tractography indicated that PCLR, with its joint reconstruction of all DW images using the image coherence, outperformed the frame-independent reconstruction through zero-padding FFT. Furthermore, using GRAPPA for phase estimation, PCLR readily achieved a four-fold undersampling. The PCLR is developed and demonstrated for compressive DW MRI. A four-fold reduction in k-space sampling could be readily achieved without substantial degradation of reconstructed images and diffusion tensor measures, making it possible to significantly reduce the data acquisition in DW MRI and/or improve spatial and angular resolutions. Copyright © 2013 Wiley Periodicals, Inc.
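
    The abstract notes that the reconstruction consists mainly of a partial Fourier update and a singular value decomposition. In low-rank models of this kind the SVD step is typically a singular-value shrinkage; a toy sketch follows (not the actual PCLR implementation):

    ```python
    # Hedged sketch of singular-value soft-thresholding, the SVD building block
    # that low-rank reconstruction methods of this kind typically rely on.
    import numpy as np

    def svd_threshold(M, tau):
        """Shrink the singular values of M (e.g., voxels x DW directions);
        values below tau are zeroed, reducing the rank of the result."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    M = np.random.default_rng(1).standard_normal((256, 30))  # toy data matrix
    print(np.linalg.matrix_rank(svd_threshold(M, tau=15.0)))  # rank < 30
    ```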

  18. Quantizing Constrained Systems New Perspectives

    CERN Document Server

    Kaplan, L; Heller, E J

    1997-01-01

    We consider quantum mechanics on constrained surfaces which have non-Euclidean metrics and variable Gaussian curvature. The old controversy about the ambiguities involving terms in the Hamiltonian of order hbar^2 multiplying the Gaussian curvature is addressed. We set out to clarify the matter by considering constraints to be the limits of large restoring forces as the constraint coordinates deviate from their constrained values. We find additional ambiguous terms of order hbar^2 involving freedom in the constraining potentials, demonstrating that the classical constrained Hamiltonian or Lagrangian cannot uniquely specify the quantization: the ambiguity of directly quantizing a constrained system is inherently unresolvable. However, there is never any problem with a physical quantum system, which cannot have infinite constraint forces and always fluctuates around the mean constraint values. The issue is addressed from the perspectives of adiabatic approximations in quantum mechanics, Feynman path integrals, a...

  19. Hydrological modelling of alpine headwaters using centurial glacier evolution, snow and long-term discharge dynamics

    Science.gov (United States)

    Kohn, Irene; Vis, Marc; Freudiger, Daphné; Seibert, Jan; Weiler, Markus; Stahl, Kerstin

    2016-04-01

    The response of alpine streamflows to long-term climate variations is highly relevant for the supply of water to adjacent lowlands. A key challenge in modelling high-elevation catchments is the complexity and spatial variability of processes, whereas data availability is often poor, restricting options for model calibration and validation. Glaciers represent a long-term storage component that changes over long time-scales and thus introduces additional calibration parameters into the modelling challenge. The present study aimed to model daily streamflow as well as the contributions of ice and snow melt for all 49 of the River Rhine's glaciated headwater catchments over the long period from 1901 to 2006. To constrain the models we used multiple data sources and developed an adapted modelling framework based on an extended version of the HBV model that also includes a time-variable glacier change model and a conceptual representation of snow redistribution. Constraints were applied in several ways: a water balance approach was applied to correct the precipitation input in order to avoid calibrating precipitation; glacier area change from maps and satellite products and information on snow depth and snow-covered area were used for the calibration of each catchment model; and, finally, specific seasonal and dynamic aspects of discharge were used for calibration. Additional data such as glacier mass balances were used to evaluate the model in selected catchments. The modelling experiment showed that the long-term development of the coupled glacier and streamflow change was particularly important for constraining the model, through an objective function incorporating three benchmarks of glacier retreat during the 20th century. Modelling using only streamflow as the calibration criterion resulted in disproportionate under- and overestimation of glacier retreat, even though the simulated and observed streamflow agreed well. Also, even short discharge time

  20. Simulating the Range Expansion of Spartina alterniflora in Ecological Engineering through Constrained Cellular Automata Model and GIS

    Directory of Open Access Journals (Sweden)

    Zongsheng Zheng

    2015-01-01

    Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes. CA models focusing on the neighbor effect often fail to account for the influence of environmental factors. This paper proposes a CCA model that enhances the CA model by integrating the constraint factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in the Chongming Dongtan wetland, China. Meanwhile, a positive feedback loop between vegetation and sedimentation was also considered in the CCA model by altering the tidal accretion rate in different vegetation communities. After validation and calibration, the CCA model is more accurate than a CA model that accounts only for the neighbor effect. By overlaying the remote sensing classification on the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulations, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical approach for research on salt marsh species expansion and control strategies.
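
    A toy constrained cellular automaton in the spirit described above: a cell is colonized only if it has occupied neighbors and the local constraint layer (standing in for tidal elevation and the other factors) permits. Every value here is invented:

    ```python
    # Minimal constrained CA sketch: neighbor effect gated by a suitability layer.
    import numpy as np

    rng = np.random.default_rng(2)
    occ = rng.random((50, 50)) < 0.02            # initial scattered patches
    suit = rng.random((50, 50))                   # constraint layer in [0, 1]

    def step(occ, suit, p_spread=0.3, suit_min=0.4):
        # Count occupied Moore neighbors via periodic shifts of the grid.
        nbrs = sum(np.roll(np.roll(occ, i, 0), j, 1)
                   for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        colonize = (nbrs >= 1) & (suit > suit_min) & (rng.random(occ.shape) < p_spread)
        return occ | colonize

    for _ in range(20):
        occ = step(occ, suit)
    print(occ.mean())  # fraction of the grid colonized after 20 steps
    ```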

  1. CONSTRAINING THE GRB-MAGNETAR MODEL BY MEANS OF THE GALACTIC PULSAR POPULATION

    Energy Technology Data Exchange (ETDEWEB)

    Rea, N. [Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam (Netherlands); Gullón, M.; Pons, J. A.; Miralles, J. A. [Departament de Fisica Aplicada, Universitat d’Alacant, Ap. Correus 99, E-03080 Alacant (Spain); Perna, R. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Dainotti, M. G. [Physics Department, Stanford University, Via Pueblo Mall 382, Stanford, CA (United States); Torres, D. F. [Instituto de Ciencias de l’Espacio (ICE, CSIC-IEEC), Campus UAB, Carrer Can Magrans s/n, E-08193 Barcelona (Spain)

    2015-11-10

    A large fraction of Gamma-ray bursts (GRBs) displays an X-ray plateau phase within <10^5 s of the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning, newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GRB-magnetar scenario. We re-analyze the X-ray plateaus of all Swift GRBs with known redshift between 2005 January and 2014 August. From the derived initial magnetic field distribution for the possible magnetars left behind by the GRBs, we study the evolution and properties of a simulated GRB-magnetar population using numerical simulations of magnetic field evolution, coupled with Monte Carlo simulations of Pulsar Population Synthesis in our Galaxy. We find that if the GRB X-ray plateaus are powered by the rotational energy of a newly formed magnetar, the current observational properties of the Galactic magnetar population are not compatible with being formed within the GRB scenario (regardless of the GRB type or rate at z = 0). Direct consequences would be that we should allow the existence of magnetars and “super-magnetars” having different progenitors, and that Type Ib/c SNe related to long GRBs systematically form neutron stars with higher initial magnetic fields. We put an upper limit of ≤16 “super-magnetars” formed by a GRB in our Galaxy in the past Myr (at 99% c.l.). This limit is somewhat smaller than what is roughly expected from long GRB rates, although the very large uncertainties do not allow us to draw strong conclusions in this respect.

  2. Revising the retrieval technique of a long-term stratospheric HNO3 data set: from a constrained matrix inversion to the optimal estimation algorithm

    Directory of Open Access Journals (Sweden)

    R. L. de Zafra

    2011-07-01

    The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20 % from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15 % or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles, obtained employing the constrained matrix inversion method

  3. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  4. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    Science.gov (United States)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning linear time invariant (LTI) models in time-varying simulation is proposed that utilizes both quadratic inequality constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to the simulation of the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets that are propagated through time. The time invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents the problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions, as well as maintaining elastic motion from the prior state.
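
    As a rough sketch of the LSQI subproblem named above (minimize ||Ax − b|| subject to a quadratic inequality ||Bx|| ≤ α, in the Golub-Van Loan sense), one can hand it to a generic constrained solver. This is a generic illustration with invented data, not the flight-dynamics implementation:

    ```python
    # Generic LSQI sketch: min ||A x - b||^2 subject to ||B x||^2 <= alpha^2.
    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    rng = np.random.default_rng(3)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    B, alpha = np.eye(5), 1.0                     # invented constraint operator

    res = minimize(lambda x: np.sum((A @ x - b) ** 2), np.zeros(5), method="SLSQP",
                   constraints=[NonlinearConstraint(lambda x: x @ (B.T @ B) @ x,
                                                    -np.inf, alpha ** 2)])
    print(res.x, np.linalg.norm(B @ res.x))       # solution stays inside the ball
    ```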

  5. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    Science.gov (United States)

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  6. Improving prediction of hydraulic conductivity by constraining capillary bundle models to a maximum pore size

    Science.gov (United States)

    Iden, Sascha C.; Peters, Andre; Durner, Wolfgang

    2015-11-01

    The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e. continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g. of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
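
    For reference, the classical closed-form pair that the paper builds on: the van Genuchten retention function combined with Mualem's pore-bundle prediction. The paper's maximum-pore-radius modification is not reproduced here; parameter values are invented:

    ```python
    # Standard van Genuchten-Mualem relations (unmodified classical forms).
    import numpy as np

    def vg_se(h, alpha, n):
        """Effective saturation Se(h) = [1 + (alpha*h)^n]^(-m) for suction h > 0."""
        m = 1.0 - 1.0 / n
        return (1.0 + (alpha * h) ** n) ** -m

    def mualem_kr(se, n, l=0.5):
        """Relative conductivity Kr(Se) = Se^l [1 - (1 - Se^(1/m))^m]^2."""
        m = 1.0 - 1.0 / n
        return se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

    h = np.logspace(-2, 4, 5)                  # suction heads [cm], invented
    se = vg_se(h, alpha=0.02, n=1.4)
    print(mualem_kr(se, n=1.4))
    ```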

  7. Constrained creation of poetic forms during theme-driven exploration of a domain defined by an N-gram model

    Science.gov (United States)

    Gervás, Pablo

    2016-04-01

    Most poetry-generation systems apply opportunistic approaches where algorithmic procedures are applied to explore the conceptual space defined by a given knowledge resource in search of solutions that might be aesthetically valuable. Aesthetical value is assumed to arise from compliance to a given poetic form - such as rhyme or metrical regularity - or from evidence of semantic relations between the words in the resulting poems that can be interpreted as rhetorical tropes - such as similes, analogies, or metaphors. This approach tends to fix a priori the aesthetic parameters of the results, and imposes no constraints on the message to be conveyed. The present paper describes an attempt to initiate a shift in this balance, introducing means for constraining the output to certain topics and allowing a looser mechanism for constraining form. This goal arose as a result of the need to produce poems for a themed collection commissioned to be included in a book. The solution adopted explores an approach to creativity where the goals are not solely aesthetic and where the results may be surprising in their poetic form. An existing computer poet, originally developed to produce poems in a given form but with no specific constraints on their content, is put to the task of producing a set of poems with explicit restrictions on content, and allowing for an exploration of poetic form. Alternative generation methods are devised to overcome the difficulties, and the various insights arising from these new methods and the impact they have on the set of resulting poems are discussed in terms of their potential contribution to better poetry-generation systems.
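
    To make the mechanism concrete, here is a toy version of theme-constrained generation from a bigram model: candidate continuations are filtered against a theme lexicon before sampling, relaxing the filter when it over-constrains. The table and lexicon are invented; the actual system is far richer:

    ```python
    # Toy theme-constrained generation from a hypothetical bigram table.
    import random

    bigrams = {
        "the": ["sea", "moon", "market", "tide"],
        "sea": ["sings", "turns"], "moon": ["rises", "falls"],
        "tide": ["returns"], "market": ["opens"],
        "sings": ["the"], "turns": ["the"], "rises": ["the"],
        "falls": ["the"], "returns": ["the"], "opens": ["the"],
    }
    theme = {"sea", "moon", "tide", "sings", "turns", "rises", "falls", "returns"}

    random.seed(4)
    word, poem = "the", ["the"]
    for _ in range(7):
        options = [w for w in bigrams[word] if w in theme or w == "the"]
        word = random.choice(options or bigrams[word])  # relax if over-constrained
        poem.append(word)
    print(" ".join(poem))
    ```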

  8. Three-dimensional gravity modeling of Chicxulub Crater structure, constrained with marine seismic data and land boreholes

    Science.gov (United States)

    Batista-Rodríguez, J. A.; Pérez-Flores, M. A.; Urrutia-Fucugauchi, J.

    2013-09-01

    We present a three-dimensional multi-formation inversion model for the gravity anomaly over Chicxulub Crater, constrained with available marine seismic data and land boreholes. We used eight formations or rock units as initial model, corresponding to: sea water, Paleogene sediments, suevitic and bunte breccias, melt, Cretaceous carbonates and upper and lower crust. The model response fits 91.5% of the gravity data. Bottom topography and thickness plots for every formation are shown, as well as vertical cross-sections for the 3-D model. The resulting 3-D model shows slightly circular features at crater bottom topography, which are more prominent at the base of the breccias unit. These features are interpreted as normal faults oriented towards the crater center, revealing a circular graben-like structure, whose gravity response correlates with the rings observed in the horizontal gravity gradient. At the center of the model is the central uplift of upper and lower crust, with the top covered by an irregular melt layer. Top of the upper crust shows two protuberances that can be correlated with the two positive peaks of the gravity anomaly. Top of Cretaceous seems to influence most of the response to the gravity anomaly, associated with a high density contrast.

  9. Reconstructing the Last Glacial Maximum ice sheet in the Weddell Sea embayment, Antarctica, using numerical modelling constrained by field evidence

    Science.gov (United States)

    Le Brocq, A. M.; Bentley, M. J.; Hubbard, A.; Fogwill, C. J.; Sugden, D. E.; Whitehouse, P. L.

    2011-09-01

    The Weddell Sea Embayment (WSE) sector of the Antarctic ice sheet has been suggested as a potential source for a period of rapid sea-level rise - Meltwater Pulse 1a, a 20 m rise in ˜500 years. Previous modelling attempts have predicted an extensive grounding line advance in the WSE, to the continental shelf break, leading to a large equivalent sea-level contribution for the sector. A range of recent field evidence suggests that the ice sheet elevation change in the WSE at the Last Glacial Maximum (LGM) is less than previously thought. This paper describes and discusses an ice flow modelling derived reconstruction of the LGM ice sheet in the WSE, constrained by the recent field evidence. The ice flow model reconstructions suggest that an ice sheet consistent with the field evidence does not support grounding line advance to the continental shelf break. A range of modelled ice sheet surfaces are instead produced, with different grounding line locations derived from a novel grounding line advance scheme. The ice sheet reconstructions which best fit the field constraints lead to a range of equivalent eustatic sea-level estimates between approximately 1.4 and 3 m for this sector. This paper describes the modelling procedure in detail, considers the assumptions and limitations associated with the modelling approach, and how the uncertainty may impact on the eustatic sea-level equivalent results for the WSE.

  10. Constraining the $\\Lambda$CDM and Galileon models with recent cosmological data

    CERN Document Server

    Neveu, J; Astier, P; Besançon, M; Guy, J; Möller, A; Babichev, E

    2016-01-01

    The Galileon theory belongs to the class of modified gravity models that can explain the late-time accelerated expansion of the Universe. In previous works, cosmological constraints on the Galileon model were derived, both in the uncoupled case and with a disformal coupling of the Galileon field to matter. There, we showed that these models agree with the most recent cosmological data. In this work, we used updated cosmological data sets to derive new constraints on Galileon models, including the case of a constant conformal Galileon coupling to matter. We also explored the tracker solution of the uncoupled Galileon model. After updating our data sets, especially with the latest \\textit{Planck} data and BAO measurements, we fitted the cosmological parameters of the $\\Lambda$CDM and Galileon models. The same analysis framework as in our previous papers was used to derive cosmological constraints, using precise measurements of cosmological distances and of the cosmic structure growth rate. We showed that all te...

  11. A novel robust chance constrained possibilistic programming model for disaster relief logistics under uncertainty

    Directory of Open Access Journals (Sweden)

    Maryam Rahafrooz

    2016-09-01

    In this paper, a novel multi-objective robust possibilistic programming model is proposed, which simultaneously considers maximizing the distributive justice in relief distribution, minimizing the risk of relief distribution, and minimizing the total logistics costs. To effectively cope with the uncertainties of the after-disaster environment, the uncertain parameters of the proposed model are considered in the form of fuzzy trapezoidal numbers. The proposed model not only considers relief commodity priority and demand point priority in relief distribution, but also considers the difference between the pre-disaster and post-disaster supply abilities of the suppliers. The proposed model is first solved using the LP-metric and the improved augmented ε-constraint methods. Second, a set of test problems is designed to evaluate the effectiveness of the proposed robust model against its equivalent deterministic form, which reveals the capabilities of the robust model. Finally, to illustrate the performance of the proposed robust model, a seismic region of northwestern Iran (East Azerbaijan) is selected as a case study to model its relief logistics in the face of future earthquakes. This investigation indicates the usefulness of the proposed model in the field of crisis logistics.
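
    The ε-constraint idea the authors build on can be shown in miniature: optimize one objective while the others are turned into constraints whose right-hand sides are swept. The sketch below uses scipy's linprog on an invented bi-objective relief LP (cost versus a distributive-justice score); it is the plain ε-constraint method, not the improved augmented variant or the possibilistic model of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical relief data: three shipping routes to a disaster area
cost     = np.array([4.0, 6.0, 9.0])    # cost per unit on each route
justice  = np.array([1.0, 1.5, 2.5])    # distributive-justice score per unit
capacity = [(0, 40.0), (0, 30.0), (0, 20.0)]
demand   = 60.0

pareto = []
for eps in np.linspace(60, 140, 5):      # required total justice score
    res = linprog(
        c=cost,                          # minimise logistics cost
        A_ub=np.array([-justice, -np.ones(3)]),
        b_ub=np.array([-eps, -demand]),  # justice >= eps, shipped >= demand
        bounds=capacity, method="highs")
    if res.success:
        pareto.append((eps, res.fun))

for eps, f in pareto:
    print(f"justice >= {eps:5.1f}  ->  min cost = {f:7.1f}")
```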

  12. Aperiodic Robust Model Predictive Control for Constrained Continuous-Time Nonlinear Systems: An Event-Triggered Approach.

    Science.gov (United States)

    Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin

    2017-08-14

    Event-triggered control is a promising solution for cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, sufficient conditions for ensuring feasibility and closed-loop robust stability are developed. We show that robust stability can be ensured and communication load can be reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
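
    A minimal event-triggered MPC loop can be written in a few lines: solve a constrained finite-horizon problem, apply the open-loop plan, and re-optimize only when the measured state drifts too far from the prediction. The sketch below uses cvxpy on a hypothetical linear double-integrator plant with an invented trigger threshold; the paper's tightened constraints and dual-mode machinery are omitted.

```python
import numpy as np
import cvxpy as cp

# hypothetical double-integrator plant, 0.1 s sampling
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N, u_max = 15, 1.0

def solve_mpc(x0):
    """Constrained finite-horizon MPC; returns input plan and prediction."""
    x, u = cp.Variable((2, N + 1)), cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= u_max]          # input constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value, x.value

rng = np.random.default_rng(0)
x = np.array([2.0, 0.0])
u_plan, x_pred = solve_mpc(x)
k0, solves = 0, 1
for t in range(60):
    i = min(t - k0, N - 1)
    x = A @ x + B @ u_plan[:, i] + 0.01 * rng.standard_normal(2)  # disturbance
    # event trigger: re-optimise only when the state drifts off the prediction
    if np.linalg.norm(x - x_pred[:, min(t - k0 + 1, N)]) > 0.05:
        u_plan, x_pred = solve_mpc(x)
        k0, solves = t + 1, solves + 1
print(f"state after 60 steps: {np.round(x, 3)}, MPC solves: {solves}/60")
```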

  13. Constraining H{sub 0} in general dark energy models from Sunyaev-Zeldovich/X-ray technique and complementary probes

    Energy Technology Data Exchange (ETDEWEB)

    Holanda, R.F.L.; Lima, J.A.S. [Departamento de Astronomia (IAGUSP), Universidade de São Paulo, Rua do Matão 1226, 05508-900, São Paulo, SP (Brazil); Cunha, J.V. [Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, Rua Santa Adélia 166, 09210-170, Santo André, SP (Brazil); Marassi, L., E-mail: holanda@astro.iag.usp.br, E-mail: jvcunha@ufpa.br, E-mail: luciomarassi@ect.ufrn.br, E-mail: limajas@astro.iag.usp.br [Escola de Ciência e Tecnologia, UFRN, 59072-970, Natal, RN (Brazil)

    2012-02-01

    In accelerating dark energy models, the estimates of the Hubble constant, H{sub 0}, from Sunyaev-Zel'dovich effect (SZE) and X-ray surface brightness of galaxy clusters may depend on the matter content (Ω{sub M}), the curvature (Ω{sub K}) and the equation of state parameter (ω). In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical β model obtained through the SZE/X-ray technique, we constrain H{sub 0} in the framework of a general ΛCDM model (arbitrary curvature) and a flat XCDM model with a constant equation of state parameter ω = p{sub x}/ρ{sub x}. In order to avoid the use of priors in the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB Shift Parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique we obtain for nonflat ΛCDM model H{sub 0} = 74{sup +5.0}{sub −4.0} km s{sup −1} Mpc{sup −1}(1σ) whereas for a flat universe with constant equation of state parameter we find H{sub 0} = 72{sup +5.5}{sub −4.0} km s{sup −1} Mpc{sup −1}(1σ). By assuming that galaxy clusters are described by a spherical β model these results change to H{sub 0} = 62{sup +8.0}{sub −7.0} and H{sub 0} = 59{sup +9.0}{sub −6.0} km s{sup −1} Mpc{sup −1}(1σ), respectively. The results from elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat ΛCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very

  14. Use of remote-sensing reflectance to constrain a data assimilating marine biogeochemical model of the Great Barrier Reef

    Science.gov (United States)

    Jones, Emlyn M.; Baird, Mark E.; Mongin, Mathieu; Parslow, John; Skerratt, Jenny; Lovell, Jenny; Margvelashvili, Nugzar; Matear, Richard J.; Wild-Allen, Karen; Robson, Barbara; Rizwi, Farhan; Oke, Peter; King, Edward; Schroeder, Thomas; Steven, Andy; Taylor, John

    2016-12-01

    Skillful marine biogeochemical (BGC) models are required to understand a range of coastal and global phenomena such as changes in nitrogen and carbon cycles. The refinement of BGC models through the assimilation of variables calculated from observed in-water inherent optical properties (IOPs), such as phytoplankton absorption, is problematic. Empirically derived relationships between IOPs and variables such as chlorophyll-a concentration (Chl a), total suspended solids (TSS) and coloured dissolved organic matter (CDOM) have been shown to have errors that can exceed 100 % of the observed quantity. These errors are greatest in shallow coastal regions, such as the Great Barrier Reef (GBR), due to the additional signal from bottom reflectance. Rather than assimilate quantities calculated using IOP algorithms, this study demonstrates the advantages of assimilating quantities calculated directly from the less error-prone satellite remote-sensing reflectance (RSR). To assimilate the observed RSR, we use an in-water optical model to produce an equivalent simulated RSR and calculate the mismatch between the observed and simulated quantities to constrain the BGC model with a deterministic ensemble Kalman filter (DEnKF). The traditional assumption that simulated surface Chl a is equivalent to the remotely sensed OC3M estimate of Chl a resulted in a forecast error of approximately 75 %. We show this error can be halved by instead using simulated RSR to constrain the model via the assimilation system. When the analysis and forecast fields from the RSR-based assimilation system are compared with the non-assimilating model, a comparison against independent in situ observations of Chl a, TSS and dissolved inorganic nutrients (NO3, NH4 and DIP) showed that errors are reduced by up to 90 %. In all cases, the assimilation system improves the simulation compared to the non-assimilating model. Our approach allows for the incorporation of vast quantities of remote-sensing observations
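
    The DEnKF analysis step itself is compact: the ensemble mean is updated with the full Kalman gain and the anomalies with half of it. The sketch below shows the generic update (after Sakov and Oke, 2008) on an invented toy state; the paper's in-water optical model, which maps the BGC state to simulated remote-sensing reflectance, is replaced here by a simple linear observation operator.

```python
import numpy as np

def denkf_update(E, y, H, R):
    """Deterministic EnKF (DEnKF) analysis step (after Sakov & Oke 2008).
    E: n x m ensemble of model states, y: observations, H: obs operator,
    R: observation-error covariance."""
    n, m = E.shape
    xf = E.mean(axis=1, keepdims=True)
    A = E - xf                                   # forecast anomalies
    Pf = A @ A.T / (m - 1)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    xa = xf + K @ (y.reshape(-1, 1) - H @ xf)    # full-gain mean update
    Aa = A - 0.5 * K @ (H @ A)                   # half-gain anomaly update
    return xa + Aa

# Hypothetical mini-example: 3 BGC state variables, 5 members, 1 observation
rng = np.random.default_rng(0)
E = 1.0 + 0.1 * rng.standard_normal((3, 5))
H = np.array([[1.0, 0.0, 0.0]])                  # observe the first variable
R = np.array([[0.01]])
Ea = denkf_update(E, np.array([1.2]), H, R)
print(Ea.mean(axis=1))
```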

  15. A Metabolite-Sensitive, Thermodynamically Constrained Model of Cardiac Cross-Bridge Cycling: Implications for Force Development during Ischemia

    KAUST Repository

    Tran, Kenneth

    2010-01-01

    We present a metabolically regulated model of cardiac active force generation with which we investigate the effects of ischemia on maximum force production. Our model, based on a model of cross-bridge kinetics that was developed by others, reproduces many of the observed effects of MgATP, MgADP, Pi, and H(+) on force development while retaining the force/length/Ca(2+) properties of the original model. We introduce three new parameters to account for the competitive binding of H(+) to the Ca(2+) binding site on troponin C and the binding of MgADP within the cross-bridge cycle. These parameters, along with the Pi and H(+) regulatory steps within the cross-bridge cycle, were constrained using data from the literature and validated using a range of metabolic and sinusoidal length perturbation protocols. The placement of the MgADP binding step between two strongly-bound and force-generating states leads to the emergence of an unexpected effect on the force-MgADP curve, where the trend of the relationship (positive or negative) depends on the concentrations of the other metabolites and [H(+)]. The model is used to investigate the sensitivity of maximum force production to changes in metabolite concentrations during the development of ischemia.

  16. The Use of Combined MODIS and MISR AOD to Constrain Biomass Burning Aerosol Emissions in the GOCART Model

    Science.gov (United States)

    Petrenko, M. M.; Kahn, R. A.; Chin, M.

    2013-05-01

    Aerosol models rely heavily on external emission inventories to simulate the location and strength of biomass burning (BB) sources. These inventories, however, use different methods and assumptions to estimate aerosol emissions, and consequently their estimates differ, often by a factor of up to 8 globally and even more regionally. We have previously introduced a method of using snapshots of MODIS-measured aerosol optical depth (AOD) to constrain BB emissions in the GOCART model (M. M. Petrenko et al., 2012, JGR). This work builds on the developed method and aims to (1) address some of the previously discussed method limitations, and (2) apply previously suggested corrections to the BB emissions used in the GOCART model. For example, we increased the number of studied smoke cases, and use MODIS AOD in combination with MISR AOD, which is expected to improve the satellite AOD we use as a reference. We apply the previously developed quantitative relationship to correct the emission estimates and assess the performance of the corrected emissions in the model. We expect this method for correcting BB aerosol emissions to be useful to aerosol modelers as well as developers of emission inventories.

  17. Dust models post-Planck: constraining the far-infrared opacity of dust in the diffuse interstellar medium

    CERN Document Server

    Fanciullo, Lapo; Aniano, Gonzalo; Jones, Anthony P; Ysard, Nathalie; Miville-Deschênes, Marc-Antoine; Boulanger, François; Köhler, M

    2015-01-01

    We compare the performance of several dust models in reproducing the dust spectral energy distribution (SED) per unit extinction in the diffuse interstellar medium (ISM). We use our results to constrain the variability of the optical properties of big grains in the diffuse ISM, as published by the Planck collaboration. We use two different techniques to compare the predictions of dust models to data from the Planck HFI, IRAS and SDSS surveys. First, we fit the far-infrared emission spectrum to recover the dust extinction and the intensity of the interstellar radiation field (ISRF). Second, we infer the ISRF intensity from the total power emitted by dust per unit extinction, and then predict the emission spectrum. In both cases, we test the ability of the models to reproduce dust emission and extinction at the same time. We identify two issues. Not all models can reproduce the average dust emission per unit extinction: there are differences of up to a factor $\\sim2$ between models, and the best accord between ...

  18. Evaluation of unconstrained and constrained mathematical functions to model girth growth of rubber trees (Hevea brasiliensis) using young age measurements

    Institute of Scientific and Technical Information of China (English)

    T. R. Chandrasekhar

    2012-01-01

    No attempt has been made to date to model growth in girth of the rubber tree (Hevea brasiliensis). We evaluated the few widely used growth functions to identify the most parsimonious and biologically reasonable model for describing the girth growth of young rubber trees based on an incomplete set of young-age measurements. Monthly girth data for immature trees (age 2 to 12 years) from two locations were subjected to modelling. Re-parameterized, unconstrained and constrained growth functions of Richards (RM), Gompertz (GM) and the monomolecular model (MM) were fitted to the data. Duration of growth was the constraint introduced. In the first stage, we attempted a population average (PA) model to capture the trend in growth. The best PA model was then fitted as a subject-specific (SS) model. We used an appropriate error variance-covariance structure to account for correlation due to repeated measurements over time. Unconstrained functions underestimated the asymptotic maximum, which did not reflect the carrying capacity of the locations. Underestimations were attributed to the partial set of measurements made during the early growth phase of the trees. MM proved superior to RM and GM. In the random coefficient models, both Gf and G0 appeared to be influenced by tree-level effects. Inclusion of a positive-definite diagonal matrix removed the correlation between random effects. The results were similar at both locations. In the overall assessment, MM appeared to be the candidate model for studying girth-age relationships in Hevea trees. Based on the fitted model we conclude that, in Hevea trees, the growth rate is maintained at its maximum value at t0 and then decreases until the final state with dG/dt ≥ 0, resulting in a growth curve with no period of accelerating growth. One physiological explanation is that photosynthetic activity in Hevea trees decreases as girth increases and constructive metabolism is larger than destructive metabolism.
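
    For reference, the monomolecular (Mitscherlich) function favoured by the study is easy to fit with standard tools. The sketch below fits G(t) = A(1 - b e^(-kt)) to synthetic girth data with scipy; the mixed-effects (population-average and subject-specific) machinery of the paper is beyond this illustration, and all data and starting values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def monomolecular(t, A, b, k):
    # Mitscherlich / monomolecular growth: no inflection, dG/dt >= 0
    return A * (1.0 - b * np.exp(-k * t))

# Hypothetical monthly girth data (cm) for ages 2-12 years
t = np.linspace(2, 12, 121)
rng = np.random.default_rng(1)
g = monomolecular(t, 55.0, 0.9, 0.25) + rng.normal(0, 0.8, t.size)

p0 = [50.0, 0.8, 0.2]                       # rough starting values
(A, b, k), cov = curve_fit(monomolecular, t, g, p0=p0)
print(f"asymptote A={A:.1f} cm, b={b:.2f}, rate k={k:.3f} /yr")
```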

  19. Evaluating transit operator efficiency: An enhanced DEA model with constrained fuzzy-AHP cones

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-06-01

    This study addresses efforts to combine the Analytic Hierarchy Process (AHP) with Data Envelopment Analysis (DEA) to deliver a robust enhanced DEA model for transit operator efficiency assessment. The proposed model is designed to better capture inherent preference information over input and output indicators by adding constraint cones to the conventional DEA model. A revised fuzzy-AHP model is employed to generate the cones, and the proposed model features the integration of fuzzy logic with a hierarchical AHP structure to: (1) normalize the scales of different evaluation indicators, (2) construct the matrix of pair-wise comparisons with fuzzy sets, and (3) optimize the weight of each criterion with a non-linear programming model. With the introduction of cone-based constraints, the new system better accounts for the interaction among indicators when evaluating the performance of transit operators. To illustrate the applicability of the proposed approach, a real case in Nanjing City, the capital of China's Jiangsu Province, has been selected to assess the efficiencies of seven bus companies based on 2009 and 2010 datasets. A comparison between conventional DEA and enhanced DEA was also conducted to clarify the new system's superiority. Results reveal that the proposed model is more applicable in evaluating transit operator efficiency, thus encouraging a broader range of applications.
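
    The core of such a system is an ordinary DEA multiplier LP with extra linear rows for the weight cones. The sketch below solves an input-oriented CCR model for each unit with scipy's linprog and adds one hypothetical AHP-derived restriction (output 1 judged at least twice as important as output 2); the data and the cone are invented, and the fuzzy-AHP weight derivation is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [3.0], [4.0], [5.0]])        # one input per DMU (hypothetical)
Y = np.array([[3.0, 2.0], [5.0, 3.0], [6.0, 5.0], [8.0, 4.0]])  # two outputs

def ccr_efficiency(j0, cone=True):
    """Input-oriented CCR multiplier LP; variables z = [u1, u2, v1].
    cone=True adds a hypothetical AHP-derived restriction u1 >= 2*u2."""
    c = np.concatenate([-Y[j0], [0.0]])            # maximise u'y0
    A_ub = np.hstack([Y, -X])                      # u'yj - v'xj <= 0, all j
    b_ub = np.zeros(len(X))
    if cone:
        A_ub = np.vstack([A_ub, [-1.0, 2.0, 0.0]]) # -u1 + 2*u2 <= 0
        b_ub = np.append(b_ub, 0.0)
    A_eq = np.array([[0.0, 0.0, X[j0, 0]]])        # normalisation v'x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * 3, method="highs")
    return -res.fun

for j in range(4):
    print(f"DMU {j}: plain={ccr_efficiency(j, cone=False):.3f} "
          f"coned={ccr_efficiency(j, cone=True):.3f}")
```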

  20. Constrained parametric model for simultaneous inference of two cumulative incidence functions.

    Science.gov (United States)

    Shi, Haiwen; Cheng, Yu; Jeong, Jong-Hyeon

    2013-01-01

    We propose a parametric regression model for the cumulative incidence functions (CIFs) commonly used for competing risks data. The model adopts a modified logistic model as the baseline CIF and a generalized odds-rate model for covariate effects, and it explicitly takes into account the constraint that a subject with any given prognostic factors should eventually fail from one of the causes such that the asymptotes of the CIFs should add up to one. This constraint intrinsically holds in a nonparametric analysis without covariates, but is easily overlooked in a semiparametric or parametric regression setting. We hence model the CIF from the primary cause assuming the generalized odds-rate transformation and the modified logistic function as the baseline CIF. Under the additivity constraint, the covariate effects on the competing cause are modeled by a function of the asymptote of the baseline distribution and the covariate effects on the primary cause. The inference procedure is straightforward by using the standard maximum likelihood theory. We demonstrate desirable finite-sample performance of our model by simulation studies in comparison with existing methods. Its practical utility is illustrated in an analysis of a breast cancer dataset to assess the treatment effect of tamoxifen, adjusting for age and initial pathological tumor size, on breast cancer recurrence that is subject to dependent censoring by second primary cancers and deaths.

  1. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as the implementation of ...

  2. Thermo-magnetic effects in quark matter: Nambu-Jona-Lasinio model constrained by lattice QCD

    CERN Document Server

    Farias, R L S; Avancini, S S; Pinto, M B; Krein, G

    2016-01-01

    The phenomenon of inverse magnetic catalysis of chiral symmetry in QCD predicted by lattice simulations can be reproduced within the Nambu-Jona-Lasinio model if the coupling G of the model decreases with the strength B of the magnetic field and temperature T. The thermo-magnetic dependence of G(B,T) is obtained by fitting recent lattice QCD predictions for the chiral transition order parameter. Different thermodynamic quantities of magnetized quark matter evaluated with G(B,T) are compared with the ones obtained at constant coupling G. The model with G(B,T) predicts a more dramatic chiral transition as the field intensity increases. In addition, the pressure and magnetization always increase with B for a given temperature. Being parametrized by four magnetic-field-dependent coefficients and having a rather simple exponential thermal dependence, our accurate ansatz for the running coupling can be easily implemented to improve typical model applications to magnetized quark matter.

  3. Thermo-magnetic effects in quark matter: Nambu-Jona-Lasinio model constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Ricardo L.S. [Universidade Federal de Santa Maria, Departamento de Fisica, Santa Maria, RS (Brazil); Kent State University, Physics Department, Kent, OH (United States); Timoteo, Varese S. [Universidade Estadual de Campinas (UNICAMP), Grupo de Optica e Modelagem Numerica (GOMNI), Faculdade de Tecnologia, Limeira, SP (Brazil); Avancini, Sidney S.; Pinto, Marcus B. [Universidade Federal de Santa Catarina, Departamento de Fisica, Florianopolis, Santa Catarina (Brazil); Krein, Gastao [Universidade Estadual Paulista, Instituto de Fisica Teorica, Sao Paulo, SP (Brazil)

    2017-05-15

    The phenomenon of inverse magnetic catalysis of chiral symmetry in QCD predicted by lattice simulations can be reproduced within the Nambu-Jona-Lasinio model if the coupling G of the model decreases with the strength B of the magnetic field and temperature T. The thermo-magnetic dependence of G(B, T) is obtained by fitting recent lattice QCD predictions for the chiral transition order parameter. Different thermodynamic quantities of magnetized quark matter evaluated with G(B, T) are compared with the ones obtained at constant coupling, G. The model with G(B, T) predicts a more dramatic chiral transition as the field intensity increases. In addition, the pressure and magnetization always increase with B for a given temperature. Being parametrized by four magnetic-field-dependent coefficients and having a rather simple exponential thermal dependence, our accurate ansatz for the coupling constant can be easily implemented to improve typical model applications to magnetized quark matter. (orig.)

  4. Distance-constrained grid colouring

    Directory of Open Access Journals (Sweden)

    Aszalós László

    2016-06-01

    Distance-constrained colouring is a mathematical model of the frequency assignment problem. This colouring can be treated as an optimization problem, so we can use the toolbox of optimization to solve concrete problems. In this paper, we show the performance of two methods, which do well in map colouring, on distance-constrained grid colouring.
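
    A classic distance-constrained variant is L(2,1) labelling: adjacent vertices must receive labels differing by at least 2, and vertices at distance two must receive distinct labels. The greedy baseline below, on a small grid graph, is only an illustration of the problem; it is not one of the two methods compared in the paper.

```python
def l21_grid_labelling(rows, cols):
    """Greedy L(2,1) labelling of a grid graph: labels of adjacent cells
    differ by >= 2, labels of cells at distance two differ by >= 1."""
    def neighbours(r, c):
        return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols]

    label = {}
    for r in range(rows):
        for c in range(cols):
            dist1 = set(neighbours(r, c))
            dist2 = {v for u in dist1 for v in neighbours(*u)} - {(r, c)}
            k = 0  # smallest feasible label
            while any(abs(k - label[v]) < 2 for v in dist1 if v in label) or \
                  any(label.get(v) == k for v in dist2):
                k += 1
            label[(r, c)] = k
    return label

lab = l21_grid_labelling(4, 6)
for r in range(4):
    print(" ".join(f"{lab[(r, c)]:2d}" for c in range(6)))
```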

  5. Modelling the firn thickness evolution during the last deglaciation: constraints on sensitivity to temperature and impurities

    OpenAIRE

    2016-01-01

    The transformation of snow into ice is a complex phenomenon that is difficult to model. Depending on surface temperature and accumulation rate, it may take several decades to millennia for air to be entrapped in ice. The air is thus always younger than the surrounding ice. The resulting gas-ice age difference is essential to document the phasing between CO2 and temperature changes, especially during deglaciations. The air trapping depth can be inferred in the past using a firn densification model, or ...

  6. Improving prediction of hydraulic conductivity by constraining capillary bundle models to a maximum pore size

    Science.gov (United States)

    Iden, Sascha; Peters, Andre; Durner, Wolfgang

    2017-04-01

    Soil hydraulic properties are required to solve the Richards equation, the most widely applied model for variably-saturated flow. While the experimental determination of the water retention curve does not pose significant challenges, the measurement of unsaturated hydraulic conductivity is time consuming and costly. The prediction of the unsaturated hydraulic conductivity curve from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. A well-known problem of conductivity prediction for retention functions with wide pore-size distributions is the sharp drop in conductivity close to water saturation. This problematic behavior is well known for the van Genuchten model if the shape parameter n assumes values smaller than about 1.3. So far, the workaround for this artefact has been to introduce an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable and thus a discontinuous water capacity function. We present an improved parametrization of the hydraulic properties which uses the original capillary saturation function and introduces a maximum pore radius only in the pore-bundle model. Closed-form equations for the hydraulic conductivity function were derived for the unimodal and multimodal retention functions of van Genuchten and have been tested by sensitivity analysis and applied in curve fitting and inverse modeling of multistep outflow experiments. The resulting hydraulic conductivity function is smooth, increases monotonically close to saturation, and eliminates the sharp drop in conductivity close to saturation. Furthermore, the new model retains the smoothness and continuous differentiability of the water retention curve. We conclude that the resulting soil hydraulic functions are physically more reasonable than the ones predicted by previous approaches, and are thus ideally suited for numerical simulations

  7. Constraining snowmelt in a temperature-index model using simulated snow densities

    Science.gov (United States)

    Bormann, Kathryn J.; Evans, Jason P.; McCabe, Matthew F.

    2014-09-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve reducing biases from -0.027 g cm-3 to -0.004 g cm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE to depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the
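
    The degree-day core of such a scheme is tiny; the sketch below couples it to a prescribed daily density series through a density-modulated degree-day factor and uses the same density for the SWE-to-depth conversion. The modulation rule and all parameter values are invented for illustration and differ from the calibrated scheme in the study.

```python
import numpy as np

def density_adjusted_melt(temps, swe0, rho_series,
                          ddf_ref=4.0, t_melt=0.0, rho_ref=0.45):
    """Toy temperature-index snowmelt with a density-modulated degree-day
    factor (DDF). temps: daily mean air temperature (C), rho_series:
    simulated snow density (g cm^-3), swe0: initial SWE (mm)."""
    swe, swes, depths = swe0, [], []
    for t, rho in zip(temps, rho_series):
        ddf = ddf_ref * rho / rho_ref                 # assumed: denser snow melts faster
        melt = min(swe, max(t - t_melt, 0.0) * ddf)   # mm per day
        swe -= melt
        swes.append(swe)
        depths.append(swe / (rho * 10.0))             # SWE (mm) -> depth (cm)
    return np.array(swes), np.array(depths)

days = np.arange(30)
temps = 2.0 + 3.0 * np.sin(days / 5.0)                # hypothetical spring warming
rho = np.linspace(0.35, 0.50, days.size)              # densification over the month
swe, depth = density_adjusted_melt(temps, swe0=300.0, rho_series=rho)
print(f"final SWE = {swe[-1]:.0f} mm, final depth = {depth[-1]:.0f} cm")
```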

  8. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.

    2014-09-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve reducing biases from -0.027 g cm-3 to -0.004 g cm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE to depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the

  9. Bayesian Evaluation of Inequality and Equality Constrained Hypotheses for Contingency Tables

    Science.gov (United States)

    Klugkist, Irene; Laudy, Olav; Hoijtink, Herbert

    2010-01-01

    In this article, a Bayesian model selection approach is introduced that can select the best of a set of inequality and equality constrained hypotheses for contingency tables. The hypotheses are presented in terms of cell probabilities allowing researchers to test (in)equality constrained hypotheses in a format that is directly related to the data.…
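
    With an encompassing Dirichlet prior, the Bayes factor of an inequality-constrained hypothesis against the unconstrained alternative reduces to the ratio of posterior to prior mass in agreement with the constraints, which is straightforward to estimate by sampling. The sketch below does this for an invented 2x2 table and an illustrative hypothesis; it follows the general encompassing-prior recipe rather than the article's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2x2 table counts, flattened as [n11, n12, n21, n22]
counts = np.array([30, 10, 20, 25])
alpha_prior = np.ones(4)                       # encompassing Dirichlet(1,...,1)

def constraint(p):
    # Illustrative H1, stated on cell probabilities: p11 > p12 and p21 < p22
    return (p[:, 0] > p[:, 1]) & (p[:, 2] < p[:, 3])

prior = rng.dirichlet(alpha_prior, 200_000)
post  = rng.dirichlet(alpha_prior + counts, 200_000)

f = constraint(post).mean()                    # posterior mass in agreement
c = constraint(prior).mean()                   # prior mass in agreement
print(f"BF(H1 vs unconstrained) = f/c = {f:.3f}/{c:.3f} = {f / c:.2f}")
```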

  10. Constraining millennial scale dynamics of a Greenland tidewater glacier for the verification of a calving criterion based numerical model

    Science.gov (United States)

    Lea, J.; Mair, D.; Rea, B.; Nick, F.; Schofield, E.

    2012-04-01

    The ability to successfully model the behaviour of Greenland tidewater glaciers is pivotal to understanding the controls on their dynamics and potential impact on global sea level. However, to have confidence in the results of numerical models in this setting, the evidence required for robust verification must extend well beyond the existing instrumental record. Perhaps uniquely for a major Greenland outlet glacier, both the advance and retreat dynamics of Kangiata Nunata Sermia (KNS), Nuuk Fjord, SW Greenland over the last ~1000 years can be reasonably constrained through a combination of geomorphological, sedimentological and archaeological evidence. It is therefore an ideal location to test the ability of the latest generation of calving criterion based tidewater models to explain millennial scale dynamics. This poster presents geomorphological evidence recording the post-Little Ice Age maximum dynamics of KNS, derived from high-resolution satellite imagery. This includes evidence of annual retreat moraine complexes suggesting controlled rather than catastrophic retreat between pinning points, in addition to a series of ice dammed lake shorelines, allowing detailed interpretation of the dynamics of the glacier as it thinned and retreated. Pending ground truthing, this evidence will contribute towards the calibration of results obtained from a calving criterion numerical model (Nick et al, 2010), driven by an air temperature reconstruction for the KNS region determined from ice core data.

  11. Study on the Stochastic Chance-Constrained Fuzzy Programming Model and Algorithm for Wagon Flow Scheduling in Railway Bureau

    Directory of Open Access Journals (Sweden)

    Bin Liu

    2012-01-01

    Wagon flow scheduling plays a very important role in the transportation activities of a railway bureau. However, a wagon flow schedule compiled under the assumption of a certain (deterministic) environment is difficult to implement in the actual decision-making process because of interference from uncertain information, such as train arrival time, train classification time, train assembly time, and flexible train-size limitation. Based on existing research results, considering the stochasticity of all kinds of train operation times and the fuzziness of the train-size limitation of the departure train, and aiming at maximizing the satisfaction of the departure train-size limitation and minimizing the wagon residence time at the railway station, a stochastic chance-constrained fuzzy multiobjective model for the flexible wagon flow scheduling problem is established in this paper. Moreover, a hybrid intelligent algorithm based on ant colony optimization (ACO) and a genetic algorithm (GA) is provided to solve this model. Finally, the rationality and effectiveness of the model and algorithm are verified through a numerical example, and the results prove that the accuracy of the train work plan can be improved by the model and algorithm; consequently, it has good robustness and operability.

  12. Numerical optimization approach to modelling delamination and buckling of geometrically constrained structures.

    Science.gov (United States)

    Mullineux, G; Hicks, B J; Berry, C

    2012-04-28

    Understanding what happens in terms of delamination during buckling of laminate materials is of importance across a range of engineering sectors. Normally the concern is that the strength of the material is not significantly impaired. Carton-board is a material with a laminate structure and, in the initial creation of carton nets, the board is creased in order to weaken the structure. This means that when the carton is eventually folded into its three-dimensional form, correct folding occurs along the weakened crease lines. Understanding what happens during creasing and folding is made difficult by the nonlinear nature of the material properties. This paper considers a simplified approach which extends the idea of minimizing internal energy so that the effects of delamination can be handled. This allows a simulation which reproduces the form of buckling-delamination observed in practice and the form of the torque-rotation relation.

  13. Constraining the $SU(2)_R$ breaking scale in naturally R-parity conserving supersymmetric models

    CERN Document Server

    Huitu, K; Puolamäki, K

    1997-01-01

    We obtain an upper bound on the right-handed breaking scale in naturally R-parity conserving general left-right supersymmetric models. This translates into an upper bound on the right-handed gauge boson mass, $m_{W_R} \lesssim M_{SUSY}$, where $M_{SUSY}$ is the scale of SUSY breaking. This bound is independent of any assumptions for the couplings of the model, and follows from $SU(3)_c$ and $U(1)_{em}$ gauge invariance of the ground state of the theory.

  14. Procyon: Constraining Its Temperature Structure with High-Precision Interferometry and 3-D Model Atmospheres

    Science.gov (United States)

    Aufdenberg, J. P.; Ludwig, H.-G.; Kervella, P.

    2004-12-01

    We have fit synthetic visibilities from 3-D (CO5BOLD + PHOENIX) and 1-D (PHOENIX, ATLAS12) model stellar atmospheres for Procyon (F5 IV) to high-precision interferometric data from the VINCI instrument at the VLT Interferometer (K-band) and from the Mark III interferometer (500 nm, 800 nm). These data provide a test of theoretical wavelength-dependent limb-darkening predictions, and therefore Procyon's atmospheric temperature structure. Earlier work (Allende Prieto et al. 2002, ApJ 567, 544) has shown that the temperature structure from a spatially and temporally averaged 3-D hydrodynamical model produces significantly less limb darkening at 500 nm relative to the temperature structure from a 1-D MARCS model atmosphere which uses a mixing-length approximation for convective flux transport. Our direct fits to the interferometric data confirm this prediction; however, we find that not all 1-D models fail to reproduce the observations. The key to matching the interferometric data is a shallower temperature gradient than provided by the standard 1-D mixing-length approximation. We find that in addition to our best-fitting 3-D hydrodynamical model, a 1-D ATLAS12 model, with an additional free parameter for "approximate overshooting", provides the required temperature gradient. We estimate that an interferometric precision better than 0.1% will be required to distinguish between the 3-D model and the ATLAS12 model. This overshooting approximation has been shown to match Solar limb-darkening observations reasonably well (Castelli et al. 1997, A&A 324, 432); however, subsequent work using Strömgren photometry of solar-type stars has cast doubt on the importance of overshooting. We have also compared synthetic spectral energy distributions for Procyon to ultraviolet, optical and near-infrared spectrophotometry and find differences from comparisons to Strömgren photometry alone. This work was performed in part under contract with the Jet Propulsion Laboratory (JPL) funded by
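
    Fitting an angular diameter to visibilities is the backbone of such an analysis. The sketch below fits the simplest case, a uniform disk, whose squared visibility is (2 J1(x)/x)^2 with x = pi * theta * B / lambda, to invented K-band data; the paper's actual test involves wavelength-dependent limb darkening from 3-D and 1-D model atmospheres, which a uniform disk deliberately ignores.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

MAS = np.pi / 180.0 / 3600.0 / 1000.0          # milliarcsec -> radians

def v2_uniform_disk(baseline, theta_mas, wavelength=2.2e-6):
    # squared visibility of a uniform stellar disk (K band by default)
    x = np.pi * baseline * (theta_mas * MAS) / wavelength
    x = np.where(x == 0, 1e-12, x)
    return (2.0 * j1(x) / x) ** 2

# Hypothetical K-band data: baselines (m) and squared visibilities
b = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
v2_obs = v2_uniform_disk(b, 5.4) * (1 + 0.01 * np.random.default_rng(3).standard_normal(5))

(theta,), cov = curve_fit(v2_uniform_disk, b, v2_obs, p0=[5.0])
print(f"fitted uniform-disk diameter: {theta:.2f} mas")
```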

  15. Constraining dark photon model with dark matter from CMB spectral distortions

    Directory of Open Access Journals (Sweden)

    Ki-Young Choi

    2017-08-01

    Many extensions of the Standard Model (SM) include a dark sector which can interact with the SM sector via a light mediator. We explore the possibilities to probe such a dark sector by studying the distortion of the CMB spectrum from the blackbody shape due to elastic scatterings between the dark matter and baryons through a hidden light mediator. We focus in particular on the model where the dark sector gauge boson kinetically mixes with the SM, and present the future experimental prospects for a PIXIE-like experiment along with a comparison to the existing bounds from complementary terrestrial experiments.

  16. Assessing water resources in Azerbaijan using a local distributed model forced and constrained with global data

    Science.gov (United States)

    Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine

    2017-04-01

    In many countries, data is scarce, incomplete and often not easily shared. In these cases, global satellite and reanalysis data provide an alternative to assess water resources. To assess water resources in Azerbaijan, a completely distributed and physically based hydrological wflow-sbm model was set up for the entire Kura basin. We used SRTM elevation data, a locally available river map and one from OpenStreetMap to derive the drainage direction network at the model resolution of approximately 1x1 km. OpenStreetMap data was also used to derive the fraction of paved area per cell to account for the reduced infiltration capacity (cf. Schellekens et al., 2014). We used the results of a global study to derive root zone capacity based on climate data (Wang-Erlandsson et al., 2016). To account for the variation in vegetation cover over the year, monthly averages of Leaf Area Index, based on MODIS data, were used. For the soil-related parameters, we used global estimates as provided by Dai et al. (2013). This enabled the rapid derivation of a first estimate of parameter values for our hydrological model. Digitized local meteorological observations were scarce and available only for a limited time period. Therefore, several sources of global meteorological data were evaluated: (1) EU-WATCH global precipitation, temperature and derived potential evaporation for the period 1958-2001 (Harding et al., 2011), (2) WFDEI precipitation, temperature and derived potential evaporation for the period 1979-2014 (Weedon et al., 2014), (3) MSWEP precipitation (Beck et al., 2016) and (4) local precipitation data from more than 200 stations in the Kura basin, available from the NOAA website for a period up to 1991. The latter, together with data archives from Azerbaijan, were used as a benchmark to evaluate the global precipitation datasets for the overlapping period 1958-1991. By comparing the datasets, we found that monthly mean precipitation of EU-WATCH and WFDEI coincided well

  17. Dust models post-Planck: constraining the far-infrared opacity of dust in the diffuse interstellar medium

    Science.gov (United States)

    Fanciullo, L.; Guillet, V.; Aniano, G.; Jones, A. P.; Ysard, N.; Miville-Deschênes, M.-A.; Boulanger, F.; Köhler, M.

    2015-08-01

    Aims: We compare the performance of several dust models in reproducing the dust spectral energy distribution (SED) per unit extinction in the diffuse interstellar medium (ISM). We use our results to constrain the variability of the optical properties of big grains in the diffuse ISM, as published by the Planck collaboration. Methods: We use two different techniques to compare the predictions of dust models to data from the Planck HFI, IRAS, and SDSS surveys. First, we fit the far-infrared emission spectrum to recover the dust extinction and the intensity of the interstellar radiation field (ISRF). Second, we infer the ISRF intensity from the total power emitted by dust per unit extinction, and then predict the emission spectrum. In both cases, we test the ability of the models to reproduce dust emission and extinction at the same time. Results: We identify two issues. Not all models can reproduce the average dust emission per unit extinction: there are differences of up to a factor ~2 between models, and the best accord between model and observation is obtained with the more emissive grains derived from recent laboratory data on silicates and amorphous carbons. All models fail to reproduce the variations in the emission per unit extinction if the only variable parameter is the ISRF intensity: this confirms that the optical properties of dust are indeed variable in the diffuse ISM. Conclusions: Diffuse ISM observations are consistent with a scenario where both ISRF intensity and dust optical properties vary. The ratio of the far-infrared opacity to the V band extinction cross-section presents variations of the order of ~20% (40-50% in extreme cases), while ISRF intensity varies by ~30% (~60% in extreme cases). This must be accounted for in future modelling. Appendices are available in electronic form at http://www.aanda.org

  18. Broad range of 2050 warming from an observationally constrained large climate model ensemble

    Science.gov (United States)

    Rowlands, Daniel J.; Frame, David J.; Ackerley, Duncan; Aina, Tolu; Booth, Ben B. B.; Christensen, Carl; Collins, Matthew; Faull, Nicholas; Forest, Chris E.; Grandey, Benjamin S.; Gryspeerdt, Edward; Highwood, Eleanor J.; Ingram, William J.; Knight, Sylvia; Lopez, Ana; Massey, Neil; McNamara, Frances; Meinshausen, Nicolai; Piani, Claudio; Rosier, Suzanne M.; Sanderson, Benjamin M.; Smith, Leonard A.; Stone, Dáithí A.; Thurston, Milo; Yamazaki, Kuniko; Hiro Yamazaki, Y.; Allen, Myles R.

    2012-04-01

    Incomplete understanding of three aspects of the climate system (equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing) and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century. Explorations of these uncertainties have so far relied on scaling approaches, large ensembles of simplified climate models, or small ensembles of complex coupled atmosphere-ocean general circulation models which under-represent uncertainties in key climate system properties derived from independent sources. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere-ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4-3 K by 2050, relative to 1961-1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report, but extends towards larger warming than observed in the ensembles-of-opportunity typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range 'no mitigation' scenario for greenhouse-gas emissions.

  19. Gravitational wave observations may constrain gamma-ray burst models: the case of GW 150914 - GBM

    CERN Document Server

    Veres, P; Goldstein, A; Mészáros, P; Burns, E; Connaughton, V

    2016-01-01

    The possible short gamma-ray burst (GRB) observed by {\\it Fermi}/GBM in coincidence with the first gravitational wave (GW) detection, offers new ways to test GRB prompt emission models. Gravitational wave observations provide previously unaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW~150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density ($\\sim 10^{-3}$ cm$^{-3}$) and a high Lorentz factor ($\\sim 2000$). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Bl...

  20. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images

    Science.gov (United States)

    Erdt, Marius; Sakas, Georgios

    2010-03-01

    This work presents a novel approach for model-based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable model approach based on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. These local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: (1) user-guided positioning, and (2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies, the system also offers real-time mesh editing tools for a quick refinement of the segmentation result. Evaluation results based on 30 clinical cases using CT data sets show an average Dice coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are below 6 seconds, which makes the proposed system suitable for application in clinical practice.

  1. Joint modeling of constrained path enumeration and path choice behavior: a semi-compensatory approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2010-01-01

    A behavioural and a modelling framework are proposed for representing route choice from a path set that satisfies travellers’ spatiotemporal constraints. Within the proposed framework, travellers’ master sets are constructed by path generation, consideration sets are delimited according to spatio...... constraints are related to travellers’ socio-economic characteristics and that path choice is related to minimizing time and avoiding congestion....

  2. Constraining models of twin peak quasi-periodic oscillations with realistic neutron star equations of state

    CERN Document Server

    Török, Gabriel; Urbanec, Martin; Šrámková, Eva; Adámek, Karel; Urbancová, Gabriela; Pecháček, Tomáš; Bakala, Pavel; Stuchlík, Zdeněk; Horák, Jiří; Juryšek, Jakub

    2016-01-01

    Twin-peak quasi-periodic oscillations (QPOs) are observed in the X-ray power-density spectra of several accreting low-mass neutron star (NS) binaries. In our previous work we have considered several QPO models. We have identified and explored mass-angular-momentum relations implied by individual QPO models for the atoll source 4U 1636-53. In this paper we extend our study and confront QPO models with various NS equations of state (EoS). We start with simplified calculations assuming Kerr background geometry and then present results of detailed calculations considering the influence of NS quadrupole moment (related to rotationally induced NS oblateness) assuming Hartle-Thorne spacetimes. We show that the application of concrete EoS together with a particular QPO model yields a specific mass-angular-momentum relation. However, we demonstrate that the degeneracy in mass and angular momentum can be removed when the NS spin frequency inferred from the X-ray burst observations is considered. We inspect a large set ...

  3. Constraining the last 7 billion years of galaxy evolution in semi-analytic models

    CERN Document Server

    Mutch, Simon J; Croton, Darren J

    2012-01-01

    We investigate the ability of the Croton et al. (2006) semi-analytic model to reproduce the evolution of observed galaxies across the final 7 billion years of cosmic history. Using Monte-Carlo Markov Chain techniques we explore the available parameter space to produce a model which attempts to achieve a statistically accurate fit to the observed stellar mass function at z=0 and z~0.8, as well as the local black hole-bulge relation. We find that in order to be successful we are required to push supernova feedback efficiencies to extreme limits which are, in some cases, unjustified by current observations. This leads us to the conclusion that the current model may be incomplete. Using the posterior probability distributions provided by our fitting, as well as the qualitative details of our produced stellar mass functions, we suggest that any future model improvements must act to preferentially bolster star formation efficiency in the most massive halos at high redshift.

  4. Formulation, General Features and Global Calibration of a Bioenergetically-Constrained Fishery Model

    Science.gov (United States)

    Bianchi, Daniele; Galbraith, Eric D.

    2017-01-01

    Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change. PMID:28103280
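
    Stripped of the size spectrum, the biomass-effort coupling at the heart of such a model is the classic Gordon-Schaefer system: logistic biomass growth, harvest proportional to effort times biomass, and effort that chases profit. The sketch below integrates that caricature with invented parameters; BOATS itself resolves size classes, temperature and primary production, none of which appear here.

```python
import numpy as np

def open_access_fishery(T=200, dt=0.05, r=0.4, K=1.0, q=0.5,
                        price=10.0, cost=2.0, k=0.3):
    """Gordon-Schaefer sketch of the biomass-effort coupling: biomass B
    grows logistically, harvest = q*E*B, and effort E responds to profit."""
    B, E = K, 0.01
    out = []
    for _ in range(int(T / dt)):
        harvest = q * E * B
        profit = price * harvest - cost * E
        B += dt * (r * B * (1 - B / K) - harvest)
        E = max(E + dt * k * profit, 0.0)    # effort chases profit
        out.append((B, E, harvest))
    return np.array(out)

traj = open_access_fishery()
B, E, H = traj[-1]
print(f"equilibrium: B={B:.3f} (of K=1), E={E:.3f}, harvest={H:.3f}")
```

    At the open-access equilibrium, profit is zero, so biomass settles at B* = cost/(price*q) regardless of the biological parameters, which is why the economic forcings set the equilibrium biomass while the ecosystem parameters set the peak harvest.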

  5. Revising the retrieval technique of a long-term stratospheric HNO{sub 3} data set. From a constrained matrix inversion to the optimal estimation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy

    2011-07-01

    The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O{sub 3}, HNO{sub 3}, CO and N{sub 2}O at polar and mid-latitudes. Its HNO{sub 3} data set shed light on HNO{sub 3} annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5 N, 68.8 W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO{sub 3} data sets from 1993 South Pole observations to date, in order to produce HNO{sub 3} version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100±20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO{sub 3} v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO{sub 3} vertical profiles
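
    The linear optimal-estimation update and its averaging kernels can be stated in a few lines of linear algebra. The sketch below implements the generic Rodgers-style retrieval on an invented toy problem; the sensitivity printed is the per-level sum of averaging-kernel elements, the diagnostic quoted as 100±20% in the record.

```python
import numpy as np

def optimal_estimation(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation retrieval: returns the retrieved profile,
    the averaging-kernel matrix A and the per-level sensitivity."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    G = S_hat @ K.T @ S_e_inv                  # gain matrix
    x_hat = x_a + G @ (y - K @ x_a)
    A = G @ K                                  # averaging kernels
    return x_hat, A, A.sum(axis=1)

# Hypothetical toy: retrieve a 5-level HNO3 profile from 8 spectral channels
rng = np.random.default_rng(7)
K = rng.random((8, 5))
x_true = np.array([1.0, 2.0, 3.0, 2.5, 1.5])   # ppbv
y = K @ x_true + 0.05 * rng.standard_normal(8)
x_a = np.full(5, 2.0)                          # a priori profile
x_hat, A, sens = optimal_estimation(y, K, x_a, 0.5 * np.eye(5), 0.0025 * np.eye(8))
print("retrieved:", np.round(x_hat, 2), " sensitivity:", np.round(sens, 2))
```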

  6. Constrained Run-to-Run Optimization for Batch Process Based on Support Vector Regression Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An iterative (run-to-run) optimization method was presented for batch processes under input constraints. It is generally very difficult to acquire an accurate mechanistic model for a batch process. Because support vector machines perform well on problems characterized by small samples, nonlinearity, high dimension and local minima, support vector regression models were developed for the end-point optimization of batch processes. Since there is no analytical way to find the optimal trajectory, an iterative method was used that exploits the repetitive nature of batch processes to determine the optimal operating policy. The optimization algorithm is proven convergent. Numerical simulation shows that the method can improve the process performance through iterations.
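    To make the run-to-run idea concrete, here is a minimal sketch in which an SVR surrogate is refit after every batch and then optimized within the input constraints; the toy "plant" function, bounds and hyperparameters are all illustrative assumptions, not the paper's process model.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def plant(u):
    """Stand-in for one batch run: returns the measured end-point quality."""
    return -(u[0] - 0.6) ** 2 - 0.5 * (u[1] - 0.3) ** 2 + rng.normal(0, 0.01)

bounds = [(0.0, 1.0), (0.0, 1.0)]             # input constraints
U = rng.uniform(0, 1, size=(8, 2))            # a few initial batches
Y = np.array([plant(u) for u in U])

for run in range(10):
    model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(U, Y)  # refit surrogate
    # Optimize the surrogate within the input constraints
    res = minimize(lambda u: -model.predict(u.reshape(1, -1))[0],
                   x0=U[np.argmax(Y)], bounds=bounds)
    U = np.vstack([U, res.x])                 # run the next batch, add the data
    Y = np.append(Y, plant(res.x))

print("best inputs found:", U[np.argmax(Y)])
```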

  7. Fuzzy force control of constrained robot manipulators based on impedance model in an unknown environment

    Institute of Scientific and Technical Information of China (English)

    LIU Hongyi; WANG Lei; WANG Fei

    2007-01-01

    To precisely implement the force control of robot manipulators in an unknown environment, a control strategy based on fuzzy prediction of the reference trajectory in the impedance model is developed. Force tracking experiments are executed in an open-architecture control system with different tracking velocities, desired forces, contact stiffnesses and surface configurations. The corresponding force control results are compared and analyzed. The influences of unknown environment parameters on the contact force are analyzed from the experimental data, and the tuning of the predictive scale factors is illustrated. The experimental results show that the desired trajectory in the impedance model is predicted accurately and rapidly even when the contact surface is unknown or the contact stiffness changes, and that the fuzzy force control algorithm adapts well to the unknown environment.
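    The impedance-model force-tracking loop can be sketched in a few lines. In the fragment below a simple proportional update of the reference trajectory stands in for the paper's fuzzy predictor, and all gains and the environment stiffness are invented for illustration.

```python
# Discrete-time impedance-model force control, one Cartesian axis.
dt = 0.001                      # control period [s]
M, B, K = 1.0, 80.0, 500.0      # target impedance parameters
k_env, x_env = 2000.0, 0.10     # unknown environment stiffness / surface location
f_d = 10.0                      # desired contact force [N]
kp = 2e-5                       # reference-adaptation gain (a fuzzy predictor would tune this)

x, xd = 0.0, 0.0                # position and velocity
x_r = 0.12                      # reference trajectory (initially guessed past the surface)

for step in range(5000):
    f_e = max(0.0, k_env * (x - x_env))       # contact force from the environment
    # Impedance law: M*xdd + B*xd + K*(x - x_r) = f_d - f_e
    xdd = (f_d - f_e - B * xd - K * (x - x_r)) / M
    xd += xdd * dt
    x += xd * dt
    x_r += kp * (f_d - f_e)                   # adapt the reference using the force error

print(f"steady-state force ~ {max(0.0, k_env * (x - x_env)):.2f} N (target {f_d} N)")
```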

  8. von Bertalanffy 1.0: a COBRA toolbox extension to thermodynamically constrain metabolic models.

    Science.gov (United States)

    Fleming, Ronan M T; Thiele, Ines

    2011-01-01

    In flux balance analysis of genome-scale stoichiometric models of metabolism, the principal constraints are uptake or secretion rates, the steady-state mass conservation assumption, and reaction directionality. Here, we introduce an algorithmic pipeline for quantitative assignment of reaction directionality in multi-compartmental genome-scale models based on an application of the second law of thermodynamics to each reaction. Given experimentally or computationally estimated standard Gibbs energies of metabolite species and metabolite concentrations, the algorithm bounds each reaction's Gibbs energy, transformed to in vivo pH, temperature, ionic strength and electrical potential. This cross-platform MATLAB extension to the COnstraint-Based Reconstruction and Analysis (COBRA) toolbox is computationally efficient, extensively documented and open source. http://opencobra.sourceforge.net.
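    The core second-law test reduces to bounding the transformed reaction Gibbs energy over the allowed concentration ranges and reading directionality off the signs of the bounds. A minimal sketch of that step follows (generic Python, not the von Bertalanffy/COBRA code; the example reaction and bounds are invented):

```python
import numpy as np

R = 8.314462618e-3   # kJ mol^-1 K^-1

def reaction_directionality(dG0_transformed, stoich, conc_lb, conc_ub, T=310.15):
    """Assign directionality from bounds on the transformed reaction Gibbs energy.

    dG0_transformed : standard transformed reaction Gibbs energy [kJ/mol]
                      (already adjusted to in vivo pH, ionic strength, etc.)
    stoich          : stoichiometric coefficients (negative = substrate)
    conc_lb/ub      : metabolite concentration bounds [M]
    """
    s = np.asarray(stoich, float)
    lb, ub = np.log(np.asarray(conc_lb)), np.log(np.asarray(conc_ub))
    # Extreme log-concentrations bound dG = dG0' + RT * sum(s_i * ln c_i)
    dG_min = dG0_transformed + R * T * (s * np.where(s > 0, lb, ub)).sum()
    dG_max = dG0_transformed + R * T * (s * np.where(s > 0, ub, lb)).sum()
    if dG_max < 0:
        return "forward only"
    if dG_min > 0:
        return "reverse only"
    return "reversible"

# e.g. A -> B with both concentrations between 1 uM and 10 mM:
print(reaction_directionality(-15.0, [-1, 1], [1e-6, 1e-6], [1e-2, 1e-2]))
```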

  9. Model Data Fusion: developing Bayesian inversion to constrain equilibrium and mode structure

    CERN Document Server

    Hole, M J; Bertram, J; Svensson, J; Appel, L C; Blackwell, B D; Dewar, R L; Howard, J

    2010-01-01

    Recently, a new probabilistic "data fusion" framework based on Bayesian principles has been developed on JET and W7-AS. The Bayesian analysis framework folds in uncertainties and inter-dependencies in the diagnostic data and signal forward-models, together with prior knowledge of the state of the plasma, to yield predictions of internal magnetic structure. A feature of the framework, known as MINERVA (J. Svensson, A. Werner, Plasma Physics and Controlled Fusion 50, 085022, 2008), is the inference of magnetic flux surfaces without the use of a force balance model. We discuss results from a new project to develop Bayesian inversion tools that aim to (1) distinguish between competing equilibrium theories, which capture different physics, using the MAST spherical tokamak; and (2) test the predictions of MHD theory, particularly mode structure, using the H-1 Heliac.

  10. Modeling dark matter subhalos in a constrained galaxy: Global mass and boosted annihilation profiles

    CERN Document Server

    Stref, Martin

    2016-01-01

    The interaction properties of cold dark matter (CDM) particle candidates, such as those of weakly interacting massive particles (WIMPs), generically lead to the structuring of dark matter on scales much smaller than typical galaxies, potentially down to $\sim 10^{-10} M_\odot$. This clustering translates into a very large population of subhalos in galaxies and affects the predictions for direct and indirect dark matter searches (gamma rays and antimatter cosmic rays). In this paper, we elaborate on previous analytic work to model the Galactic subhalo population in a manner consistent with current observational dynamical constraints on the Milky Way. In particular, we propose a self-consistent method to account for tidal effects induced by both dark matter and baryons. Our model does not strongly rely on cosmological simulations, as they can hardly be fully matched to the real Milky Way, except for setting the initial subhalo mass fraction. Still, it allows us to recover the main qualitative features of simulated system...

  11. Numerical forecasts for lab experiments constraining modified gravity: the chameleon model

    CERN Document Server

    Schlogel, Sandrine; Fuzfa, Andre

    2015-01-01

    The current acceleration of the cosmic expansion leads to coincidence as well as fine-tuning issues in the framework of general relativity. Dynamical scalar fields have been introduced in response to these problems, some of them invoking screening mechanisms for passing local tests of gravity. Recent lab experiments based on atom interferometry in a vacuum chamber have been proposed for testing modified gravity models. So far, only analytical computations have been used to provide forecasts. We derive numerical solutions for chameleon models that take into account the effect of the vacuum chamber wall and its environment. With this realistic profile of the chameleon field in the chamber, we refine the forecasts that were derived analytically. We finally highlight specific effects due to the vacuum chamber that are potentially interesting for future experiments.

  12. An experimentally constrained MHD model for a collisional, rotating plasma column

    Science.gov (United States)

    Wright, A. M.; Qu, Z. S.; Caneses, J. F.; Hole, M. J.

    2017-02-01

    A steady-state single fluid MHD model which describes the equilibrium of plasma parameters in a collisional, rotating plasma column with temperature gradients and a non-uniform externally applied magnetic field is developed. Two novel methods of simplifying the governing equations are introduced. Specifically, a ‘radial transport constraint’ and an ordering argument are applied. The reduced system is subsequently solved to yield the equilibrium of macroscopic plasma parameters in the bulk region of the plasma. The model is benchmarked by comparing these solutions to experimental measurements of axial velocity and density for a hydrogen plasma in the converging-field experiment MAGPIE and overall a good agreement is observed. The plasma equilibrium is determined by the interaction of a density gradient, due to a temperature gradient, with an electric field. The magnetic field and temperature gradient are identified as key parameters in determining the flow profile, which may be important considerations in other applications.

  13. A jet model for Galactic black-hole X-ray sources: Some constraining correlations

    CERN Document Server

    Kylafis, N D; Reig, P; Giannios, D; Pooley, G G

    2008-01-01

    Some recent observational results impose significant constraints on all the models that have been proposed to explain the Galactic black-hole X-ray sources in the hard state. In particular, it has been found that during the hard state of Cyg X-1 the power-law photon number spectral index is correlated with the average time lag between hard and soft X-rays. Furthermore, the peak frequencies of the four Lorentzians that fit the observed power spectra are correlated with both the photon index and the time lag. We performed Monte Carlo simulations of Compton upscattering of soft, accretion-disk photons in the jet and computed the time lag between hard and soft photons and the power-law index of the resulting photon number spectra. We demonstrate that our jet model naturally explains the above correlations, with no additional requirements and no additional parameters.

  14. The replica symmetric solution for orthogonally constrained Heisenberg model on Bethe lattice

    Science.gov (United States)

    Concetti, Francesco

    2017-02-01

    In this paper, we study the thermodynamic properties of a system of D-component classical Heisenberg spins lying on the vertices of a random regular graph, with an unconventional non-random nearest-neighbor interaction $J(\mathbf{S}_i \cdot \mathbf{S}_k)^2$. We can consider this model as a continuum version of the anti-ferromagnetic D-state Potts model. We compute the paramagnetic free energy, using a new approach based on the replica method, presented in this paper for the first time. Through a linear stability analysis, we obtain an instability line on the temperature-connectivity plane that provides a bound on the appearance of a phase transition. We also discuss the character of the observed instability.

  15. Constrained optimisation of the parameters for a simple isostatic Moho model

    Science.gov (United States)

    Lane, R. J.

    2010-12-01

    In a regional-scale integrated 3D crustal mapping project for the offshore Capel-Faust region, approximately 800 km east of the Australian east coast, gravity data were being used by the Geoscience Australia Remote Eastern Frontiers team to evaluate the viability of an interpretation of the upper crustal sequence that had been derived from a network of 2D seismic lines. A preliminary gravity forward modelling calculation for this sequence using mass density values derived from limited well log and seismic velocity information indicated a long wavelength misfit between this response and the observed data. Rather than draw upon a mathematical function to account for this component of the model response (e.g., low order polynomial), a solution that would lack geological significance, I chose to first investigate whether the gravity response stemming from the density contrast across the crust-mantle boundary (i.e., the Moho) could account for this misfit. The available direct observations to build the Moho surface in the 3D geological map were extremely sparse, however. The 2D seismic data failed to provide any information on the Moho. The only constraints on the depth to this interface within the project area were from 2 seismic refraction soundings. These soundings were in the middle of a set of 11 soundings forming a profile across the Lord Howe Rise. The use of relatively high resolution bathymetry data coupled with an Airy-Heiskanen isostatic model assumption was investigated as a means of defining the form of the Moho surface. The suitability of this isostatic assumption and associated simple model were investigated through optimisation of the model parameters. The Moho depths interpreted from the seismic refraction profile were used as the observations in this exercise. The output parameters were the average depth to the Moho (Tavg), upper crust density (RHOzero), and density contrast across the lower crust and upper mantle (RHOone). The model inputs were a grid
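    As a sketch of the Airy-Heiskanen step, the Moho depth under an oceanic column can be written directly in terms of the three parameters the abstract optimises (Tavg, RHOzero, RHOone). The numerical values below are placeholders, not the optimised results, and the simple column balance stands in for the full workflow.

```python
import numpy as np

# Airy-Heiskanen Moho from bathymetry, using the three parameters named in
# the abstract (placeholder values, not the optimised results).
T_avg  = 25.0e3    # average Moho depth below sea level [m]  (Tavg)
rho_c  = 2750.0    # upper-crust density [kg/m^3]            (RHOzero)
drho_m = 400.0     # Moho density contrast [kg/m^3]          (RHOone)
rho_w  = 1030.0    # seawater density [kg/m^3]

def airy_moho(water_depth):
    """Moho depth below sea level for an oceanic column: deeper water is
    compensated by a thinner crust (an 'anti-root'), so the Moho shallows."""
    return T_avg - water_depth * (rho_c - rho_w) / drho_m

bathy = np.array([500.0, 1500.0, 3000.0])     # water depths [m]
print(airy_moho(bathy) / 1e3, "km")
```

    Depths predicted this way from the bathymetry grid would then be compared with the refraction-sounding Moho depths to optimise the three parameters.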

  16. Constraining Transient Climate Sensitivity Using Coupled Climate Model Simulations of Volcanic Eruptions

    KAUST Repository

    Merlis, Timothy M.

    2014-10-01

    Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1% yr^{-1} CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.

  17. Model-Constrained Optimization Methods for Reduction of Parameterized Large-Scale Systems

    Science.gov (United States)

    2007-05-01

    ...expensive to solve, e.g. for applications such as optimal design or probabilistic analyses. Model order reduction is a powerful tool that permits the...

  18. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    Science.gov (United States)

    Nienhuis, Jaap H.; Ashton, Andrew D.; Kettner, Albert J.; Giosan, Liviu

    2017-09-01

    The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  19. The Herschel Orion Protostar Survey: Constraining Protostellar Models with Mid-Infrared Spectroscopy

    Science.gov (United States)

    Furlan, Elise; HOPS Team

    2013-01-01

    During the protostellar stage of star formation, a young star is surrounded by a large infalling envelope of dust and gas; the material falls onto a circumstellar disk and is eventually accreted by the central star. The dust in the disk and envelope emits prominently at mid- to far-infrared wavelengths; at 10 micron, absorption by small silicate grains causes a broad absorption feature. By modeling the near- to far-IR spectral energy distributions (SEDs) of protostars, properties of their disks and envelopes can be derived; in particular, mid-IR spectroscopy reveals the detailed emission around the silicate absorption feature and thus provides additional constraints for the models. Here we present results from modeling a sample of protostars in the Orion star-forming region that were observed as part of the Herschel Orion Protostar Survey (HOPS). These protostars represent a subsample of HOPS; they have Spitzer/IRS spectra, which cover the mid-IR SED from 5 to 35 micron, and photometry in the near-IR (2MASS), mid-IR (Spitzer/IRAC and MIPS), and far-IR (Herschel/PACS). We show the importance of adding Spitzer/IRS spectra with appropriate weights in determining the best fit to the SED from a large grid of protostellar models. The 10 micron silicate absorption feature and the mid- to far-IR SED slope provide key constraints for the inclination angle of the object and its envelope density, with a deep absorption feature and steep SED slope for the most embedded and highly inclined objects. We show a few examples that illustrate our SED fitting method and present preliminary results from our fits.

  20. A Parameter Study of Classical Be Star Disk Models Constrained by Optical Interferometry

    Science.gov (United States)

    2008-11-01

    ...these line profiles are based on the hydrogen populations computed for the appropriate LTE, line-blanketed model atmosphere; the Stark broadening routines of Barklem & Piskunov (2003) were also used to compute the local H line profiles, allowing the physical conditions throughout the disk to be retained.

  1. Physics Constrained Stochastic-Statistical Models for Extended Range Environmental Prediction

    Science.gov (United States)

    2014-09-30

    ...Ocean through a low-dimensional family of spatiotemporal modes extracted from global circulation model (GCM) output and satellite observations, using the analysis in [1] to cover the whole of the Arctic, and to include both ocean and atmosphere variables [sea surface temperature (SST) and sea level...].

  2. Robust Penalty Adaptive Model Predictive Control (PAMPC) of constrained, underdamped, non-collocated systems

    OpenAIRE

    Dutta, Abhishek; Ionescu, Clara-Mihaela; Loccufier, Mia; De Keyser, Robain

    2016-01-01

    This paper investigates the control challenges posed by noncollocated mechatronic systems and motivates the need for a model-based control technique towards such systems. A novel way of online constraint handling by penalty adaptation (PAMPC) is proposed and shown to be of particular relevance towards robust control of underdamped, noncollocated systems by exploiting the structure of such systems. Further, a new tunneling approach is proposed for PAMPC to maintain feasibility under uncertaint...

  3. ICCLP: An Inexact Chance-Constrained Linear Programming Model for Land-Use Management of Lake Areas in Urban Fringes

    Science.gov (United States)

    Liu, Yong; Qin, Xiaosheng; Guo, Huaicheng; Zhou, Feng; Wang, Jinfeng; Lv, Xiaojian; Mao, Guozhu

    2007-12-01

    Lake areas in urban fringes are under increasing urbanization pressure. Consequently, the conflict between rapid urban development and the maintenance of water bodies in such areas urgently needs to be addressed. An inexact chance-constrained linear programming (ICCLP) model for optimal land-use management of lake areas in urban fringes was developed. The ICCLP model was based on land-use suitability assessment and land evaluation. The maximum net economic benefit (NEB) was selected as the objective of land-use allocation. The total environmental capacity (TEC) of water systems and the public financial investment (PFI) at different probability levels were considered key constraints. Other constraints included in the model were land-use suitability, governmental requirements on the ratios of various land-use types, and technical constraints. A case study implementing the system was performed for the lake area of Hanyang at the urban fringe of Wuhan, central China, based on our previous study on land-use suitability assessment. The Hanyang lake area is under significant urbanization pressure. A 15-year optimal model for land-use allocation is proposed for 2006 to 2020 to better protect the water system and to gain the maximum benefits of development. Sixteen constraints were set for the optimal model. The model results indicated that NEB was between 1.48 × 10^9 and 8.76 × 10^9 or between 3.98 × 10^9 and 16.7 × 10^9, depending on the different urban-expansion patterns and land demands. The changes in total developed area and the land-use structure were analyzed under different probabilities (q_i) of TEC. Changes in q_i resulted in different urban expansion patterns and demands on land, which were the direct result of the constraints imposed by TEC and PFI. The ICCLP model might help local authorities better understand and address complex land-use systems and develop optimal land-use management strategies that better balance urban expansion and grassland
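    For readers unfamiliar with chance-constrained programming, the sketch below shows the standard deterministic-equivalent device on a two-variable toy allocation: a constraint required to hold with probability q is tightened to its q-dependent quantile before an ordinary LP solve. The land types, coefficients and Gaussian capacity are invented for illustration and are not from the Hanyang case study.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

benefit = np.array([3.0, 5.0])        # net benefit per unit of land types 1, 2
a_tec   = np.array([2.0, 4.0])        # pollutant load per unit of land type
mu_b, sigma_b = 100.0, 10.0           # mean/std of total environmental capacity

for q in (0.50, 0.90, 0.99):
    # Pr(a.x <= b) >= q  <=>  a.x <= F_b^{-1}(1 - q)  for Gaussian b
    b_eff = mu_b + sigma_b * norm.ppf(1.0 - q)
    res = linprog(c=-benefit,                      # maximise benefit
                  A_ub=[a_tec], b_ub=[b_eff],
                  bounds=[(0, 30), (0, 20)])
    print(f"q={q:.2f}: allocation={res.x.round(2)}, NEB={-res.fun:.1f}")
```

    Raising q tightens the effective capacity and lowers the attainable NEB, which is the trade-off the abstract reports under different probabilities of TEC.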

  4. Dynamics of asteroid family halos constrained by spin/shape models

    Science.gov (United States)

    Broz, Miroslav

    2016-10-01

    A number of asteroid families cannot be identified solely on the basis of the Hierarchical Clustering Method (HCM), because they have additional 'former' members in the surroundings which constitute a so-called halo (e.g. Broz & Morbidelli 2013). They are usually mixed up with the background population, which has to be taken into account too. Luckily, new photometric observations allow us to derive new spin/shape models, which serve as independent constraints for dynamical models. For example, a recent census of the Eos family shows 43 core and 27 halo asteroids (including background) with known spin orientations. To this end, we present a complex spin-orbital model which includes full N-body dynamics and consequently accounts for all mean-motion, secular, or three-body gravitational resonances, the Yarkovsky drift, YORP effect, collisional reorientations and also spin-orbital interactions. These are especially important for the Koronis family. In this project, we make use of data from the DAMIT database and ProjectSoft Blue Eye 600 observatory.

  5. Constraining supernova models using the hot gas in clusters of galaxies

    CERN Document Server

    De Plaa, J; Bleeker, J A M; Vink, J; Kaastra, J S; Méndez, M; Vink, Jacco

    2007-01-01

    The hot Intra-Cluster Medium (ICM) in clusters of galaxies is a very large repository of metals produced by supernovae. We aim to accurately measure the abundances in the ICM of many clusters and compare these data with metal yields produced by supernovae. Using the data archive of the XMM-Newton X-ray observatory, we compile a sample of 22 clusters. We fit spectra extracted from the core regions and determine the abundances of silicon, sulfur, argon, calcium, iron, and nickel. The abundances from the spectral fits are subsequently fitted to supernova yields determined from several supernova type Ia and core-collapse supernova models. We find that the argon and calcium abundances cannot be fitted with currently favoured supernova type Ia models. We obtain a major improvement of the fit when we use an empirically modified delayed-detonation model that is calibrated on the Tycho supernova remnant. The two modified parameters are the density where the sound wave in the supernova turns into a shock and the ratio ...

  6. Estimating the properties of hard X-ray solar flares by constraining model parameters

    CERN Document Server

    Ireland, Jack; Schwartz, Richard A; Holman, Gordon D; Dennis, Brian R

    2013-01-01

    We compare four different methods of calculating uncertainty estimates in fitting parameterized models to RHESSI X-ray spectra, considering only statistical sources of error. Three of the four methods are based on estimating the scale-size of the minimum in a hypersurface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, and are implemented as commonly practiced. The fourth method uses Bayesian data analysis and Markov chain Monte Carlo (MCMC) techniques to calculate an uncertainty estimate. Two flare spectra are modeled: one from the GOES X1.3 class flare of 19 January 2005, and the other from the X4.8 flare of 23 July 2002. The four methods give approximately the same uncertainty estimates for the 19 January 2005 spectral fit parameters, but lead to very different uncertainty estimates for the 23 July 2002 spectral fit. This is because each method implements different analyses of the hypersurface, yielding method-dependent re...

  7. Constraining Cretaceous subduction polarity in eastern Pacific from seismic tomography and geodynamic modeling

    Science.gov (United States)

    Liu, Lijun

    2014-11-01

    Interpretation of recent mantle seismic images beneath the Americas ignited a debate on the Cretaceous subduction polarity in the eastern Pacific Ocean. The traditional view is that the massive vertical slab wall under eastern North America resulted from eastward Farallon subduction. An alternative interpretation attributes this prominent seismic structure to westward subduction of the North American Plate against a stationary intraoceanic trench. Here I design quantitative subduction models to test these two scenarios, using their implied plate kinematics as velocity boundary conditions. Modeling results suggest that the westward subduction scenario cannot produce as much slab volume as the seismic images reveal, owing to the overall slow subduction rate (~2.5 cm/yr). The results favor the continuous eastward Farallon subduction scenario, which, with an average convergence rate of >10 cm/yr prior to the Eocene, can properly generate both the volume and the geometry of the imaged lower mantle slab pile. The eastward subduction model is also consistent with most Cretaceous geological records along the west coast of North America.

  8. Coupling geophysical investigation with hydrothermal modeling to constrain the enthalpy classification of a potential geothermal resource.

    Science.gov (United States)

    White, Jeremy T.; Karakhanian, Arkadi; Connor, Chuck; Connor, Laura; Hughes, Joseph D.; Malservisi, Rocco; Wetmore, Paul

    2015-01-01

    An appreciable challenge in volcanology and geothermal resource development is to understand the relationships between volcanic systems and low-enthalpy geothermal resources. The enthalpy of an undeveloped geothermal resource in the Karckar region of Armenia is investigated by coupling geophysical and hydrothermal modeling. The results of 3-dimensional inversion of gravity data provide key inputs into a hydrothermal circulation model of the system and associated hot springs, which is used to evaluate possible geothermal system configurations. Hydraulic and thermal properties are specified using maximum a priori estimates. Limited constraints provided by temperature data collected from an existing down-gradient borehole indicate that the geothermal system can most likely be classified as low-enthalpy and liquid dominated. We find the heat source for the system is likely cooling quartz monzonite intrusions in the shallow subsurface and that meteoric recharge in the pull-apart basin circulates to depth, rises along basin-bounding faults and discharges at the hot springs. While other combinations of subsurface properties and geothermal system configurations may fit the temperature distribution equally well, we demonstrate that the low-enthalpy system is reasonably explained based largely on interpretation of surface geophysical data and relatively simple models.

  9. Constraining the Z' mass in 331 models using direct dark matter detection

    Energy Technology Data Exchange (ETDEWEB)

    Profumo, Stefano; Queiroz, Farinaldo S. [University of California, Department of Physics, Santa Cruz Institute for Particle Physics, Santa Cruz, CA (United States)

    2014-07-15

    We investigate a so-called 331 extension of the Standard Model gauge sector which accommodates neutrino masses and in which the lightest of the new neutral fermions in the theory is a viable particle dark matter candidate. In this model, processes mediated by the additional Z' gauge boson set both the dark matter relic abundance and the scattering cross section off of nuclei. We calculate with unprecedented accuracy the dark matter relic density, including the important effect of coannihilation across the heavy fermion sector, and show that the candidate particle indeed has the potential of having the observed dark matter density. We find that the recent LUX results put very stringent, TeV-scale lower bounds on the mass of the extra gauge boson M{sub Z'}, independently of the dark matter mass. We also comment on the regime where our bounds on the Z' mass may apply to generic 331-like models, and on implications for LHC phenomenology. (orig.)

  10. A Rough Set Bounded Spatially Constrained Asymmetric Gaussian Mixture Model for Image Segmentation.

    Science.gov (United States)

    Ji, Zexuan; Huang, Yubo; Sun, Quansen; Cao, Guo; Zheng, Yuhui

    2017-01-01

    Accurate image segmentation is an important issue in image processing, where Gaussian mixture models play an important part and have been proven effective. However, most Gaussian mixture model (GMM) based methods suffer from one or more limitations, such as limited noise robustness, over-smoothness for segmentations, and lack of flexibility to fit data. In order to address these issues, in this paper, we propose a rough set bounded asymmetric Gaussian mixture model with spatial constraint for image segmentation. First, based on our previous work where each cluster is characterized by three automatically determined rough-fuzzy regions, we partition the target image into three rough regions with two adaptively computed thresholds. Second, a new bounded indicator function is proposed to determine the bounded support regions of the observed data. The bounded indicator and posterior probability of a pixel that belongs to each sub-region is estimated with respect to the rough region where the pixel lies. Third, to further reduce over-smoothness for segmentations, two novel prior factors are proposed that incorporate the spatial information among neighborhood pixels, which are constructed based on the prior and posterior probabilities of the within- and between-clusters, and considers the spatial direction. We compare our algorithm to state-of-the-art segmentation approaches in both synthetic and real images to demonstrate the superior performance of the proposed algorithm.

  11. Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran

    Science.gov (United States)

    Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa

    2017-02-01

    The main purpose of this study is to introduce the geological controlling factors in improving an intelligence-based model to estimate shear wave velocity from seismic attributes. The proposed method includes three main steps in the framework of geological events in a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS) such as Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine previous predictions into an enhanced solution. In order to show the geological effect on improving the prediction, the main classes of predominate lithofacies in the reservoir of interest including shale, sand, and carbonate were selected and then the proposed algorithm was performed with and without lithofacies constraint. The results showed a good agreement between real and predicted shear wave velocity in the lithofacies-based model compared to the model without lithofacies especially in sand and carbonate.
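    A committee machine of the sort described combines the SFIS, ANFIS and OFIS predictions through weights tuned on data with known shear-wave velocity. The sketch below substitutes an ordinary constrained least-squares weight fit for the paper's bat-inspired optimisation, and all inputs are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
vs_true = rng.uniform(2.0, 3.5, 100)   # "measured" shear-wave velocity [km/s]
# Three noisy predictors standing in for SFIS / ANFIS / OFIS outputs:
preds = vs_true + rng.normal(0.0, [[0.15], [0.10], [0.20]], (3, 100))

def mse(w):
    """Mean-squared error of the weighted committee prediction."""
    return np.mean((w @ preds - vs_true) ** 2)

res = minimize(mse, x0=np.ones(3) / 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
               bounds=[(0, 1)] * 3)
print("committee weights:", res.x.round(3))   # more weight on the better predictor
```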

  12. Constraining central Neo-Tethys Ocean reconstructions with mantle convection models

    Science.gov (United States)

    Nerlich, Rainer; Colli, Lorenzo; Ghelichkhan, Siavash; Schuberth, Bernhard; Bunge, Hans-Peter

    2017-04-01

    A striking feature of the Indian Ocean is a distinct geoid low south of India, pointing to a regionally anomalous mantle density structure. Equally prominent are rapid plate convergence rate variations between India and SE Asia, particularly in Late Cretaceous/Paleocene times. Both observations are linked to the central Neo-Tethys Ocean subduction history, for which competing scenarios have been proposed. Here we evaluate three alternative reconstructions by assimilating their associated time-dependent velocity fields in global high-resolution geodynamic Earth models, allowing us to predict the resulting seismic mantle heterogeneity and geoid signal. Our analysis reveals that a geoid low similar to the one observed develops naturally when a long-lived back-arc basin south of Eurasia's paleomargin is assumed. A quantitative comparison to seismic tomography further supports this model. In contrast, reconstructions assuming a single northward dipping subduction zone along Eurasia's margin or models incorporating a temporary southward dipping intraoceanic subduction zone cannot sufficiently reproduce geoid and seismic observations.

  13. Constrained optimization of combustion in a simulated coal-fired boiler using artificial neural network model and information analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ji-Zheng Chu; Shyan-Shu Shieh; Shi-Shang Jang; Chuan-I Chien; Hou-Peng Wan; Hsu-Hsun Ko [Beijing University of Chemical Technology, Beijing (China). Department of Automation

    2003-04-01

    Combustion in a boiler is too complex to be described analytically with mathematical models. To meet the needs of operation optimization, on-site experiments guided by statistical optimization methods are often necessary to achieve the optimum operating conditions. This study proposes a new constrained optimization procedure using artificial neural networks as models for target processes. Information analysis based on random search, fuzzy c-mean clustering, and minimization of information free energy is performed iteratively in the procedure to suggest the location of future experiments, which can greatly reduce the number of experiments needed. The effectiveness of the proposed procedure in searching optima is demonstrated by three case studies: (1) a benchmark problem, namely minimization of the modified Himmelblau function under a circle constraint; (2) both minimization of NOx and CO emissions and maximization of thermal efficiency for a simulated combustion process of a boiler; (3) maximization of thermal efficiency within NOx and CO emission limits for the same combustion process. The simulated combustion process is based on a commercial software package CHEMKIN, where 78 chemical species and 467 chemical reactions related to the combustion mechanism are incorporated and a plug-flow model and a load-correlated temperature distribution for the combustion tunnel of a boiler are used. 22 refs., 6 figs., 4 tabs.

  14. Constraining H0 in General Dark Energy Models from Sunyaev-Zeldovich/X-ray Technique and Complementary Probes

    CERN Document Server

    Holanda, R F L; Marassi, L; Lima, J A S

    2010-01-01

    In accelerating dark energy models, the estimates of H0 from the Sunyaev-Zel'dovich effect (SZE) and the X-ray surface brightness of galaxy clusters may depend on the matter content (Omega_M), the curvature (Omega_K) and the equation of state parameter (w). In this article, using a sample of 25 angular diameter distances of galaxy clusters obtained through the SZE/X-ray technique, we constrain H_0 in the framework of general LCDM models (free curvature) and of a flat XCDM model with constant equation of state parameter w = p_x/\rho_x. In order to break the degeneracy among the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB Shift Parameter signature. Neglecting systematic uncertainties, for nonflat LCDM cosmologies we obtain $H_0=73.2^{+4.3}_{-3.7}$ km s$^{-1}$ Mpc$^{-1}$ (1$\sigma$), whereas for a flat universe with constant equation of state parameter we find $H_0=71.4^{+4.4}_{-3.4}$ km s$^{-1}$ Mpc$^{-1}$ (1$\sigma$). Such results are also in good agre...

  15. Constraining parameter space of the little Higgs model using data from tera-Z factory and ILC

    Science.gov (United States)

    Guo, Xing-Dao; Feng, Tai-Fu; Zhao, Shu-Min; Ke, Hong-Wei; Li, Xue-Qian

    2015-02-01

    The Standard Model (SM) prediction for the forward-backward asymmetry in bb̄ production (A^b_FB) is well consistent with the LEP I data at the Z-pole, but deviates from the data at √s = 89.55 and 92.95 GeV, slightly away from the pole. This deviation implies that there is still room for new physics. We calculate A^b_FB in the vicinity of the Z-pole in the little Higgs model, as well as other measurable parameters such as R_b and R_c, by which we may constrain the parameter space of the little Higgs model. This can be further tested at the newly proposed tera-Z factory. With the fitted parameters we further make predictions for A^b_FB and A^t_FB in tt̄ production at the International Linear Collider (ILC). Supported by National Natural Science Foundation of China (11275036, 11047002, 11375128), Fund of Natural Science Foundation of Hebei Province (A2011201118) and Natural Science Fund of Hebei University (2011JQ05, 2007113)

  16. Quantizing the Complexity of the Western United States Fault System with Geodetically and Geologically Constrained Block Models

    Science.gov (United States)

    Evans, E. L.; Meade, B. J.

    2014-12-01

    Geodetic observations of interseismic deformation provide constraints on microplate rotations, earthquake cycle processes, slip partitioning, and the geometric complexity of the Pacific-North America plate boundary. Paleoseismological observations in the western United States provide a complementary dataset of Quaternary fault slip rate estimates. These measurements may be integrated and interpreted using block models, in which the upper crust is divided into microplates bounded by mapped faults, with slip rates defined by the differential relative motions of adjacent microplates. The number and geometry of microplates are typically defined with boundaries representing a limited subset of the large number of potentially seismogenic faults. An alternative approach is to include a large number of potentially active faults in a dense array of microplates, and then deterministically estimate the boundaries at which strain is localized, while simultaneously satisfying interseismic geodetic and geologic observations. This approach is possible through the application of total variation regularization (TVR), which simultaneously minimizes the L2 norm of data residuals and the L1 norm of the variation in the estimated state vector. Applied to three-dimensional spherical block models, TVR reduces the total variation between estimated rotation vectors, creating groups of microplates that rotate together as larger blocks, and localizing fault slip on the boundaries of these larger blocks. Here we consider a suite of block models containing 3-137 microplates, where active block boundaries have been determined by TVR optimization constrained by both interseismic GPS velocities and geologic slip rate estimates.

  17. Estimates of spatially and temporally resolved constrained black carbon emission over the Indian region using a strategic integrated modelling approach

    Science.gov (United States)

    Verma, S.; Reddy, D. Manigopal; Ghosh, S.; Kumar, D. Bharath; Chowdhury, A. Kundu

    2017-10-01

    We estimated the latest spatially and temporally resolved gridded constrained black carbon (BC) emissions over the Indian region using a strategic integrated modelling approach. This was done by extracting information on initial bottom-up emissions and atmospheric BC concentration from a general circulation model (GCM) simulation in conjunction with the receptor modelling approach. Monthly BC emission (83-364 Gg) obtained from the present study exhibited spatial and temporal variability, being highest (lowest) during February (July). The monthly BC emission flux was considerably high (> 100 kg km^{-2}) over the entire Indo-Gangetic plain (IGP) and the east and west coasts during winter months, and was relatively higher over central and western India than over the IGP during summer months. The annual BC emission rate was 2534 Gg yr^{-1}, with the IGP and central India respectively comprising 50% and 40% of the total annual BC emissions over India. A high relative increase was observed in the modified BC emissions (more than five times the initial emissions) over most of the IGP, the east coast, and central/northwestern India. The relative predominance of monthly BC emission flux over a region (as depicted from z-score distribution maps) was inferred to be consistent with the prevalence of region- and season-specific anthropogenic activity.
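    As a cartoon of the constraint step, one can scale bottom-up gridded emissions by the ratio of observed to simulated concentrations at receptor locations; the arrays below are invented, and the paper's actual receptor modelling is considerably more elaborate.

```python
import numpy as np

e_prior = np.array([[1.0, 2.5], [0.8, 4.0]])     # bottom-up emissions [Gg/month]
c_model = np.array([[0.6, 1.5], [0.5, 2.2]])     # GCM-simulated surface BC [ug/m3]
c_obs   = np.array([[1.1, 2.9], [0.7, 4.5]])     # observed surface BC [ug/m3]

scale = c_obs / c_model                           # local top-down correction factor
e_post = e_prior * scale                          # constrained emissions
print("constrained emissions:\n", e_post.round(2))
```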

  18. Tetrahedral shapes of neutron-rich Zr isotopes from multidimensionally-constrained relativistic Hartree-Bogoliubov model

    CERN Document Server

    Zhao, Jie; Zhao, En-Guang; Zhou, Shan-Gui

    2016-01-01

    We develop a multidimensionally-constrained relativistic Hartree-Bogoliubov (MDC-RHB) model in which the pairing correlations are taken into account by making the Bogoliubov transformation. In this model, the nuclear shape is assumed to be invariant under reversal of the $x$ and $y$ axes, i.e., the intrinsic symmetry group is $V_4$, and all shape degrees of freedom $\beta_{\lambda\mu}$ with even $\mu$ are included self-consistently. The RHB equation is solved in an axially deformed harmonic oscillator basis. A separable pairing force of finite range is adopted in the MDC-RHB model. The potential energy curves of neutron-rich even-even Zr isotopes are calculated. The ground state shapes of $^{108-112}$Zr are predicted to be tetrahedral with both functionals DD-PC1 and PC-PK1, and $^{106}$Zr is also predicted to have a tetrahedral ground state with the functional PC-PK1. The tetrahedral ground states are caused by large energy gaps at $Z=40$ and $N=70$ when the $\beta_{32}$ deformation is included. Although the incl...

  19. Total variation regularization of geodetically and geologically constrained block models for the Western United States

    Science.gov (United States)

    Evans, Eileen L.; Loveless, John P.; Meade, Brendan J.

    2015-08-01

    Geodetic observations of interseismic deformation in the Western United States provide constraints on microplate rotations, earthquake cycle processes, and slip partitioning across the Pacific-North America Plate boundary. These measurements may be interpreted using block models, in which the upper crust is divided into microplates bounded by faults that accumulate strain in a first-order approximation of earthquake cycle processes. The number and geometry of microplates are typically defined with boundaries representing a limited subset of the large number of potentially seismogenic faults. An alternative approach is to include a large number of potentially active faults bounding a dense array of microplates, and then algorithmically estimate the boundaries at which strain is localized. This approach is possible through the application of a total variation regularization (TVR) optimization algorithm, which simultaneously minimizes the L2 norm of data residuals and the L1 norm of the variation in the differential block motions. Applied to 3-D spherical block models, the TVR algorithm can be used to reduce the total variation between estimated rotation vectors, effectively grouping microplates that rotate together as larger blocks, and localizing fault slip on the boundaries of these larger block clusters. Here we develop a block model composed of 137 microplates derived from published fault maps, and apply the TVR algorithm to identify the kinematically most important faults in the western United States. This approach reveals that of the 137 microplates considered, only 30 unique blocks are required to approximate deformation in the western United States at a residual level of <2 mm yr^{-1}.
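    The TVR objective named here, an L2 data misfit plus an L1 penalty on differential block motions, can be sketched on a 1-D analogue with an off-the-shelf convex solver. The sketch below assumes the third-party cvxpy package; the Green's-function matrix and "true" block motions are synthetic, and the real problem is posed on 3-D rotation vectors rather than scalars.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n_blocks, n_obs = 40, 200
G = rng.normal(size=(n_obs, n_blocks))           # velocities per unit block motion
m_true = np.repeat([5.0, 1.0, -3.0, -3.0], 10)   # only three distinct "blocks"
d = G @ m_true + rng.normal(0, 0.5, n_obs)

m = cp.Variable(n_blocks)
lam = 5.0                                        # regularization strength
# L2 data misfit + L1 norm of differences between adjacent microplates:
objective = cp.Minimize(cp.sum_squares(G @ m - d) + lam * cp.norm1(cp.diff(m)))
cp.Problem(objective).solve()
print("distinct block motions:", np.unique(np.round(m.value, 1)))
```

    The L1 term drives neighbouring motions to exactly equal values, which is what groups microplates into a small number of effective blocks.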

  20. The Herschel Orion Protostar Survey: Constraining Protostellar Models with Near- to Far-Infrared Observations

    Science.gov (United States)

    Furlan, Elise; Ali, Babar; Fischer, Will; Tobin, John; Stutz, Amy; Megeath, Tom; Allen, Lori; HOPS Team

    2013-07-01

    During the protostellar stage of star formation, a young star is surrounded by a large infalling envelope of dust and gas; the material falls onto a circumstellar disk and is eventually accreted by the central star. The dust in the disk and envelope emits prominently at mid- to far-infrared wavelengths; at 10 micron, absorption by small silicate grains typically causes a broad absorption feature. By modeling the near- to far-IR spectral energy distributions (SEDs) of protostars, properties of their disks and envelopes can be derived. As part of the Herschel Orion Protostar Survey (HOPS; PI: S. T. Megeath), we have observed a large sample of protostars in the Orion star-forming complex at 70 and 160 micron with the PACS instrument on the Herschel Space Observatory. For most objects, we also have photometry in the near-IR (2MASS), mid-IR (Spitzer/IRAC and MIPS), at 100 micron (PACS data from the Gould Belt Survey), sub-mm (APEX/SABOCA and LABOCA), and mid-infrared spectra (Spitzer/IRS). For the interpretation of the SEDs, we have constructed a large grid of protostellar models using a Monte Carlo radiative transfer code. Here we present our SED fitting techniques to determine the best-fit model for each object. We show the importance of including IRS spectra with appropriate weights, in addition to the constraints provided by the PACS measurements, which probe the peak of the SED. The 10 micron silicate absorption feature and the mid- to far-IR SED slope provide key constraints for the inclination angle of the object and its envelope density, with a deep absorption feature and steep SED slope for the most embedded and highly inclined objects. We show a few examples that illustrate our SED fitting method and present some preliminary results from our fits.

  1. Constraining Gamma-Ray Pulsar Gap Models with a Simulated Pulsar Population

    Science.gov (United States)

    Pierbattista, Marco; Grenier, I. A.; Harding, A. K.; Gonthier, P. L.

    2012-01-01

    With the large sample of young gamma-ray pulsars discovered by the Fermi Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four gamma-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying gamma-ray and radio visibility criteria, we normalise the simulation to the number of radio pulsars detected by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of Fermi detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of 10 of the predicted luminosity to produce a reasonable number of gamma-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power (Ė), we skew the initial magnetic field and period distributions in an attempt to account for the high-Ė Fermi pulsars. While we compromise the agreement between simulated and detected distributions of radio pulsars, the simulations fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high Ė, and they cannot explain the high probability of detecting both the radio and gamma-ray beams at high Ė. The beaming factor remains close to 1.0 over 4 decades in Ė evolution for the slot gap, whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with Ė is compatible with the large dispersion of gamma-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to the polar cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of gamma-ray pulsars

  2. Top squark and neutralino decays in an R-parity violating model constrained by neutrino oscillation data

    Indian Academy of Sciences (India)

    Sujoy Poddar

    2007-11-01

    In an R-parity violating (RPV) model of neutrino mass with three bilinear couplings and three trilinear couplings λ′_{i33}, where i is the lepton index, we find six generic scenarios, each with a distinctive pattern of the trilinear couplings consistent with the neutrino oscillation data. These patterns may be reflected in direct RPV decays of the lighter top squark or in the RPV decays of the lightest superparticle, assumed to be the lightest neutralino. Typical signal sizes at Tevatron Run II and the LHC have been estimated, and the results turn out to be encouraging.

  3. Constraining ecosystem model with adaptive Metropolis algorithm using boreal forest site eddy covariance measurements

    Science.gov (United States)

    Mäkelä, Jarmo; Susiluoto, Jouni; Markkanen, Tiina; Aurela, Mika; Järvinen, Heikki; Mammarella, Ivan; Hagemann, Stefan; Aalto, Tuula

    2016-12-01

    We examined parameter optimisation in the JSBACH (Kaminski et al., 2013; Knorr and Kattge, 2005; Reick et al., 2013) ecosystem model, applied to two boreal forest sites (Hyytiälä and Sodankylä) in Finland. We identified and tested key parameters in soil hydrology and forest water and carbon-exchange-related formulations, and optimised them using the adaptive Metropolis (AM) algorithm for Hyytiälä with a 5-year calibration period (2000-2004) followed by a 4-year validation period (2005-2008). Sodankylä acted as an independent validation site, where optimisations were not made. The tuning provided estimates for full distribution of possible parameters, along with information about correlation, sensitivity and identifiability. Some parameters were correlated with each other due to a phenomenological connection between carbon uptake and water stress or other connections due to the set-up of the model formulations. The latter holds especially for vegetation phenology parameters. The least identifiable parameters include phenology parameters, parameters connecting relative humidity and soil dryness, and the field capacity of the skin reservoir. These soil parameters were masked by the large contribution from vegetation transpiration. In addition to leaf area index and the maximum carboxylation rate, the most effective parameters adjusting the gross primary production (GPP) and evapotranspiration (ET) fluxes in seasonal tuning were related to soil wilting point, drainage and moisture stress imposed on vegetation. For daily and half-hourly tunings the most important parameters were the ratio of leaf internal CO2 concentration to external CO2 and the parameter connecting relative humidity and soil dryness. Effectively the seasonal tuning transferred water from soil moisture into ET, and daily and half-hourly tunings reversed this process. The seasonal tuning improved the month-to-month development of GPP and ET, and produced the most stable estimates of water use
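    The adaptive Metropolis algorithm named here is the Haario et al. (2001) scheme in which the Gaussian proposal covariance is periodically re-estimated from the chain itself. Below is a generic sketch of that algorithm, not the JSBACH calibration code; the target posterior and tuning constants are chosen purely for illustration.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=20000, adapt_start=1000,
                        s_d=None, eps=1e-8, rng=None):
    """Adaptive Metropolis (Haario et al., 2001): the Gaussian proposal
    covariance is tuned from the chain's own history."""
    rng = rng or np.random.default_rng()
    d = len(x0)
    s_d = s_d or 2.4 ** 2 / d                 # standard dimensional scaling
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, float), log_post(x0)
    cov = np.eye(d) * 0.1                     # initial proposal covariance
    for i in range(n_iter):
        if i > adapt_start:                   # adapt from accumulated samples
            cov = s_d * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# e.g. a correlated 2-D Gaussian posterior:
target = lambda x: -0.5 * (x[0] ** 2 + (x[1] - x[0]) ** 2 / 0.25)
samples = adaptive_metropolis(target, [0.0, 0.0])
print(samples[5000:].mean(axis=0), samples[5000:].std(axis=0))
```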

  4. Shape and origin of the East-Alpine slab constrained by the ALPASS teleseismic model

    Science.gov (United States)

    Mitterbauer, Ulrike; Behm, Michael; Brückl, Ewald; Lippitsch, Regina; Guterch, Alexander; Keller, G. Randy; Koslovskaya, Elena; Rumpfhuber, Eva-Maria; Šumanovac, Franjo

    2011-09-01

    During the last two decades teleseismic studies yielded valuable information on the structure of the upper mantle below the Alpine-Mediterranean area. Subducted oceanic lithosphere forms a broad anomaly resting on but not penetrating the 670 km discontinuity. More shallow slabs imaged below the Alpine arc are interpreted as subducted continental lower lithosphere. Substantial advances in our understanding of past and active tectonic processes have been achieved due to these results. However, important issues like the polarity of subduction under the Eastern Alps and the slab geometry at the transition to the Pannonian realm are still under debate. The ALPASS teleseismic experiment was designed to address these open questions. Teleseismic waveforms from 80 earthquakes recorded at 75 temporary and 79 permanent stations were collected during 2005 and 2006. From these data, a tomographic image of the upper mantle was generated between 60 km and 500 km depth. Crustal corrections, additional station terms, and ray bending caused by the velocity perturbations were considered. A steeply to vertically dipping "shallow slab" below the Eastern Alps is clearly resolved down to a depth of ~ 250 km. It is interpreted as European lower lithosphere detached from the crust and subducted during post-collision convergence between Adria and Europe. Below the Pannonian realm low velocities or high mantle temperatures prevail down to ~ 300 km depth, consistent with the concept of a Pannonian lithospheric fragment, which underwent strike-slip deformation relative to the European plate and extension during the post-collision phase of the Alpine orogeny. Between 350 km and 400 km depth, a "deep slab" extends from below the central Eastern Alps to under the Pannonian realm. It is interpreted as subducted lithosphere of the Alpine Tethys. At greater depth, there is a continuous transition to the high velocity anomaly above the 670 km discontinuity.

  5. Quantifying slip balance in the earthquake cycle: Coseismic slip model constrained by interseismic coupling

    KAUST Repository

    Wang, Lifeng

    2015-11-11

    The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.
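    The inversion idea, coseismic slip proportional to the interseismic slip deficit with the proportionality encoding accumulation time and release level, reduces in its simplest scalar form to a one-parameter least-squares problem. The sketch below uses entirely synthetic Green's functions and deficit rates to illustrate that reduction; it is not the paper's full prior-constrained inversion.

```python
import numpy as np

rng = np.random.default_rng(1)
n_patch, n_obs = 50, 120
G = rng.normal(size=(n_obs, n_patch))          # Green's functions (displacement per unit slip)
deficit_rate = rng.uniform(0, 0.05, n_patch)   # interseismic slip-deficit rate [m/yr]

# Assume coseismic slip = alpha * deficit_rate, where alpha combines the
# strain-accumulation time and the seismic release level.
alpha_true = 25.0                              # synthetic "true" accumulation time [yr]
d = G @ (alpha_true * deficit_rate) + rng.normal(0, 0.002, n_obs)

# One-parameter least squares for alpha:
g = G @ deficit_rate
alpha_hat = (g @ d) / (g @ g)
print(f"estimated accumulation time ~ {alpha_hat:.1f} yr")
```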

  6. Numerical modeling of the Mount Meager landslide constrained by its force history derived from seismic data

    Science.gov (United States)

    Moretti, L.; Allstadt, K.; Mangeney, A.; Capdeville, Y.; Stutzmann, E.; Bouchut, F.

    2015-04-01

    We focus on the 6 August 2010 Mount Meager landslide that occurred in southwest British Columbia, Canada. This 48.5 Mm3 rockslide, which rapidly changed into a debris flow, was recorded by over 25 broadband seismic stations. We show that the waveform inversion of the seismic signal, which makes it possible to calculate the time history of the force applied by the landslide to the ground, is very robust and stable, even when only data from a single station are used. By comparing this force with the force calculated through numerical modeling of the landslide, we are able to support the interpretation of seismic data made using a simple block model. However, our study gives different values for the friction coefficients involved and more detail about the volumes and orientation of the subevents and the flow trajectory and velocity. Our sensitivity analysis shows that the characteristics of the released mass and the friction coefficients all contribute to the amplitude and the phase of the force. Despite this complexity, our study makes it possible to discriminate the best values of all these parameters. Our results suggest that comparing simulated and inverted forces helps to identify appropriate rheological laws for natural flows. We also show that, except for the initial collapse, peaks in the low-frequency force related to bends and runup over topography changes are associated with high-frequency generation, possibly due to increased agitation of the granular material involved.
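
    The core of such a force-history inversion is linear: each seismogram is a convolution of a Green's function with the force time series, so stacked records give an overdetermined linear system. The toy sketch below illustrates this structure with entirely synthetic Green's functions and data (not the Mount Meager records).

    ```python
    import numpy as np

    # Toy illustration of the linear force-history inversion: seismograms are
    # convolutions of Green's functions with the force applied to the ground,
    # so stacking records gives d = G f, solvable by least squares.

    rng = np.random.default_rng(1)
    n_t = 200                                # force time samples
    f_true = np.sin(np.linspace(0, np.pi, n_t)) * np.hstack(
        [np.ones(n_t // 2), -np.ones(n_t - n_t // 2)])

    def convolution_matrix(g, n):
        """Toeplitz matrix C such that C @ f == np.convolve(g, f)."""
        C = np.zeros((len(g) + n - 1, n))
        for j in range(n):
            C[j:j + len(g), j] = g
        return C

    # five synthetic station Green's functions with decaying codas
    stations = [rng.normal(size=60) * np.exp(-np.arange(60) / 20.0) for _ in range(5)]
    G = np.vstack([convolution_matrix(g, n_t) for g in stations])
    d = G @ f_true + rng.normal(scale=0.05, size=G.shape[0])

    f_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
    print("relative misfit:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```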

  7. Constraining the GENIE model of neutrino-induced single pion production using reanalyzed bubble chamber data

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Philip; McFarland, Kevin [University of Rochester, Department of Physics and Astronomy, Rochester, NY (United States); Wilkinson, Callum [University of Bern, Laboratory for High Energy Physics (LHEP), Albert Einstein Center for Fundamental Physics, Bern (Switzerland)

    2016-08-15

    The longstanding discrepancy between bubble chamber measurements of ν_μ-induced single pion production channels has led to large uncertainties in pion production cross section parameters for many years. We extend the reanalysis of pion production data in deuterium bubble chambers where this discrepancy is solved (Wilkinson et al., Phys. Rev. D 90, 112017 (2014)) to include the ν_μ n → μ⁻ p π⁰ and ν_μ n → μ⁻ n π⁺ channels, and use the resulting data to fit the parameters of the GENIE pion production model. We find a set of parameters that can describe the bubble chamber data better than the GENIE default parameters, and provide updated central values and reduced uncertainties for use in neutrino oscillation and cross section analyses which use the GENIE model. We find that GENIE's non-resonant background prediction has to be significantly reduced to fit the data, which may help to explain the recent discrepancies between simulation and data observed by the MINERνA coherent pion and NOνA oscillation analyses. (orig.)
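
    The tuning step described above is, in essence, a chi-square fit of model parameters to binned measurements. The sketch below shows that generic structure only; the two-parameter toy model is a hypothetical stand-in for the GENIE reweighting machinery, and the data are pseudo-data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Generic chi-square tuning of pion-production parameters to binned data.
    # model_prediction and all arrays are illustrative placeholders.

    def model_prediction(theta, bins):
        ma_res, nonres_scale = theta            # resonant axial mass, non-resonant norm
        resonant = np.exp(-bins / ma_res)       # toy shapes, not GENIE's
        nonres = nonres_scale * 0.2 * np.ones_like(bins)
        return resonant + nonres

    bins = np.linspace(0.5, 3.0, 12)            # e.g. neutrino-energy bin centers (GeV)
    data = model_prediction([1.1, 0.5], bins)   # pseudo-data: reduced non-resonant bkg
    sigma = 0.05 * data + 0.01                  # assumed measurement errors

    def chi2(theta):
        r = (model_prediction(theta, bins) - data) / sigma
        return float(r @ r)

    fit = minimize(chi2, x0=[0.9, 1.0], method="Nelder-Mead")
    print("best-fit (M_A, non-res scale):", fit.x, " chi2:", fit.fun)
    ```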

  8. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    Science.gov (United States)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for SPP and SO missions as well as the supporting global heliospheric simulations will be discussed.

  9. DNA and dispersal models highlight constrained connectivity in a migratory marine megavertebrate

    Science.gov (United States)

    Naro-Maciel, Eugenia; Hart, Kristen M.; Cruciata, Rossana; Putman, Nathan F.

    2016-01-01

    Population structure and spatial distribution are fundamentally important fields within ecology, evolution, and conservation biology. To investigate pan-Atlantic connectivity of globally endangered green turtles (Chelonia mydas) from two National Parks in Florida, USA, we applied a multidisciplinary approach comparing genetic analysis and ocean circulation modeling. The Everglades (EP) is a juvenile feeding ground, whereas the Dry Tortugas (DT) is used for courtship, breeding, and feeding by adults and juveniles. We sequenced two mitochondrial segments from 138 turtles sampled there from 2006-2015, and simulated oceanic transport to estimate their origins. Genetic and ocean connectivity data revealed northwestern Atlantic rookeries as the major natal sources, while southern and eastern Atlantic contributions were negligible. However, specific rookery estimates differed between genetic and ocean transport models. The combined analyses suggest that post-hatchling drift via ocean currents poorly explains the distribution of neritic juveniles and adults, but juvenile natal homing and population history likely play important roles. DT and EP were genetically similar to feeding grounds along the southern US coast, but highly differentiated from most other Atlantic groups. Despite expanded mitogenomic analysis and correspondingly increased ability to detect genetic variation, no significant differentiation between DT and EP, or among years, sexes or stages was observed. This first genetic analysis of a North Atlantic green turtle courtship area provides rare data supporting local movements and male philopatry. The study highlights the applications of multidisciplinary approaches for ecological research and conservation.

  10. Garnet growth interruptions during high- and ultra high-pressure metamorphism constrained by thermodynamic forward models

    Science.gov (United States)

    Konrad-Schmolke, M.; Schildhauer, H.

    2013-12-01

    Growth and chemical composition of garnet in metamorphic rocks excellently reflect thermodynamic as well as kinetic properties of the host rock during garnet growth. This valuable information can be extracted from preserved compositional growth zoning patterns in garnet. However, metamorphic rocks often contain multiple garnet generations that commonly develop as corona textures with distinct compositional core-overgrowth features. This circumstance can lead to a misinterpretation of the information extracted from such grains if the age and metamorphic relations between different garnet generations are unclear. Especially garnets from high-pressure (HP) and ultra high-pressure (UHP) rocks often preserve textures that show multiple growth stages, reflected in core-overgrowth differences both in main and trace element composition and in the inclusion assemblage. Distinct growth zones often have sharp boundaries with strong compositional gradients and/or inclusion- and trace-element-enriched zones. Such growth patterns indicate episodic garnet growth as well as growth interruptions during the garnet evolution. A quantitative understanding of these distinct growth pulses enables the relationships between reaction path, age determinations in spatially controlled garnet domains, and temperature-time constraints to be fully characterised. In this study we apply thermodynamic forward models to simulate garnet growth along a series of HP and UHP P-T paths representative of subducted oceanic crust. We study garnet growth in different basaltic rock compositions and under different element fractionation scenarios in order to detect path-dependent P-T regions of limited or ceased garnet growth. Modeled data along P-T trajectories involving fractional crystallisation are assembled in P-T diagrams reflecting garnet growth in a changing bulk rock composition. Our models show that in all investigated rock compositions garnet growth along most P-T trajectories is discontinuous and pulsed

  11. Maximum Entropy Production vs. Kolmogorov-Sinai Entropy in a Constrained ASEP Model

    Directory of Open Access Journals (Sweden)

    Martin Mihelich

    2014-02-01

    The asymmetric simple exclusion process (ASEP) has become a paradigmatic toy model of a non-equilibrium system, and much effort has been made in the past decades to compute its statistics exactly for given dynamical rules. Here, a different approach is developed; analogously to the equilibrium situation, we consider that the dynamical rules are not exactly known. Allowing the transition rate to vary, we show that the dynamical rules that maximize the entropy production and those that maximize the rate of variation of the dynamical entropy, known as the Kolmogorov-Sinai entropy, coincide with good accuracy. We study the dependence of this agreement on the size of the system and the couplings with the reservoirs, for the original ASEP and a variant with Langmuir kinetics.
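
    Both quantities compared in this study are standard functionals of a Markov chain: for a discrete-time chain with transition matrix P and stationary distribution π, h_KS = -Σ_i π_i Σ_j P_ij ln P_ij and the entropy production is Σ_ij π_i P_ij ln(π_i P_ij / π_j P_ji). The sketch below computes them for a toy three-state ring while scanning one dynamical parameter; it illustrates the bookkeeping only and is not the constrained ASEP of the paper, so the two maximizers need not coincide here.

    ```python
    import numpy as np

    # Compute entropy production (EP) and Kolmogorov-Sinai entropy (h_KS) for
    # a discrete-time Markov chain, scanning one dynamical parameter.

    def stationary(P):
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        return pi / pi.sum()

    def ks_entropy(P, pi):
        with np.errstate(divide="ignore", invalid="ignore"):
            term = np.where(P > 0, P * np.log(P), 0.0)
        return -float(pi @ term.sum(axis=1))

    def entropy_production(P, pi):
        ep = 0.0
        for i in range(len(pi)):
            for j in range(len(pi)):
                if P[i, j] > 0 and P[j, i] > 0:
                    ep += pi[i] * P[i, j] * np.log((pi[i] * P[i, j]) / (pi[j] * P[j, i]))
        return ep

    def chain(p):
        # biased walk on a 3-site ring: hop right with prob p, left with 1-p
        q = 1.0 - p
        return np.array([[0, p, q], [q, 0, p], [p, q, 0]], float)

    for p in np.linspace(0.55, 0.95, 5):
        P = chain(p)
        pi = stationary(P)
        print(f"p={p:.2f}  EP={entropy_production(P, pi):.3f}  h_KS={ks_entropy(P, pi):.3f}")
    ```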

  12. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    Science.gov (United States)

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. To guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed among the local controllers. The communication load between each subsystem and its neighbors is low: only the current states before optimization and the optimized input variables after optimization are transferred. Each local controller adopts the quasi-infinite horizon MPC algorithm, and the global closed-loop system is proven to be exponentially stable.

  13. Existence of Dyons in Minimally Gauged Skyrme Model via Constrained Minimization

    CERN Document Server

    Gao, Zhifeng

    2011-01-01

    We prove the existence of electrically and magnetically charged particlelike static solutions, known as dyons, in the minimally gauged Skyrme model developed by Brihaye, Hartmann, and Tchrakian. The solutions are spherically symmetric, depend on two continuous parameters, and carry unit monopole and magnetic charges but continuous Skyrme charge and non-quantized electric charge induced from the 't Hooft electromagnetism. The problem amounts to obtaining a finite-energy critical point of an indefinite action functional, arising from the presence of electricity and the Minkowski spacetime signature. The difficulty with the absence of the Higgs field is overcome by achieving suitable strong convergence and obtaining uniform decay estimates at singular boundary points so that the negative sector of the action functional becomes tractable.

  14. FC-TLBO: fully constrained meta-heuristic algorithm for abundance estimation using linear mixing model

    Indian Academy of Sciences (India)

    OMPRAKASH TEMBHURNE; DEEPTI SHRIMANKAR

    2017-07-01

    A study of abundance estimation has vital importance in the spectral unmixing of hyperspectral images. Recently, various methods using an evolutionary approach have been proposed for spectral unmixing to achieve higher performance. However, these methods are based on unconstrained optimisation problems, and their performance also depends on properly tuned parameters. We propose a new non-parametric algorithm using the teaching-learning-based optimisation technique with an inbuilt constraint-maintenance mechanism, based on the linear mixing model. In this approach, the unmixing problem is transformed into a combinatorial optimisation problem by introducing the abundance sum-to-one and abundance non-negativity constraints. A comparative analysis of the proposed algorithm is conducted against two other state-of-the-art algorithms. Experimental results in known and unknown environments with varying signal-to-noise ratio on simulated and real hyperspectral data demonstrate that the proposed method outperforms the other methods.
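
    For reference, the two constraints the algorithm maintains are non-negativity (ANC) and sum-to-one (ASC) of the abundance vector. The sketch below illustrates that feasible set with a conventional constrained least-squares solver on synthetic endmembers; it is a baseline illustration of fully constrained unmixing, not the proposed TLBO algorithm.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Fully constrained abundance estimation for the linear mixing model:
    # minimize ||E a - x||^2  subject to  a >= 0  and  sum(a) = 1.
    # Endmembers E and the mixed pixel x are synthetic.

    rng = np.random.default_rng(2)
    n_bands, n_end = 50, 4
    E = rng.uniform(0.0, 1.0, size=(n_bands, n_end))        # endmember spectra
    a_true = np.array([0.5, 0.3, 0.2, 0.0])
    x = E @ a_true + rng.normal(scale=0.005, size=n_bands)  # observed mixed pixel

    res = minimize(
        lambda a: float(np.sum((E @ a - x) ** 2)),          # data misfit
        x0=np.full(n_end, 1.0 / n_end),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n_end,                        # non-negativity
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],  # sum-to-one
    )
    print("estimated abundances:", np.round(res.x, 3))
    ```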

  15. Modeling dark matter subhalos in a constrained galaxy: Global mass and boosted annihilation profiles

    Science.gov (United States)

    Stref, Martin; Lavalle, Julien

    2017-03-01

    The interaction properties of cold dark matter (CDM) particle candidates, such as those of weakly interacting massive particles (WIMPs), generically lead to the structuring of dark matter on scales much smaller than typical galaxies, potentially down to ∼10⁻¹⁰ M⊙. This clustering translates into a very large population of subhalos in galaxies and affects the predictions for direct and indirect dark matter searches (gamma rays and antimatter cosmic rays). In this paper, we elaborate on previous analytic works to model the Galactic subhalo population, while remaining consistent with current observational dynamical constraints on the Milky Way. In particular, we propose a self-consistent method to account for tidal effects induced by both dark matter and baryons. Our model does not strongly rely on cosmological simulations, as they can hardly be fully matched to the real Milky Way, apart from setting the initial subhalo mass fraction. Still, it allows us to recover the main qualitative features of simulated systems. It can further be easily adapted to any change in the dynamical constraints, and can be used to make predictions or derive constraints on dark matter candidates from indirect or direct searches. We compute the annihilation boost factor, including the subhalo-halo cross product. We confirm that tidal effects induced by the baryonic components of the Galaxy play a very important role, resulting in a local average subhalo mass density ≲1% of the total local dark matter mass density, while selecting the most concentrated objects and leading to interesting features in the overall annihilation profile in the case of a sharp subhalo mass function. Values of the global annihilation boost factor range from ∼2 to ∼20, while the local annihilation rate is boosted about half as much.

  16. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes this limitation: the results are compatible with the data and, at the same time, favor sharp transitions. The focusing strategy can also be used to constrain the 1D solutions laterally, guaranteeing that lateral sharp transitions are retrieved without losing resolution. By means of real and synthetic datasets, sharp ...
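
    The key ingredient is the minimum gradient support (MGS) stabilizer, φ(m) = Σ_k (Δm_k)² / ((Δm_k)² + β²), which saturates for large parameter jumps instead of penalizing them quadratically. A toy comparison against the usual smoothness penalty, with synthetic 1-D resistivity models and an illustrative β:

    ```python
    import numpy as np

    # Minimum gradient support stabilizer vs. a smoothness (L2-gradient)
    # penalty. MGS saturates for large jumps, so a few sharp boundaries cost
    # no more than many small oscillations.

    def mgs(m, beta=1e-3):
        dm = np.diff(m)
        return float(np.sum(dm**2 / (dm**2 + beta**2)))

    def smoothness(m):
        return float(np.sum(np.diff(m) ** 2))

    sharp = np.hstack([np.full(20, 10.0), np.full(20, 100.0)])  # one sharp boundary
    smooth = np.linspace(10.0, 100.0, 40)                       # gradual transition

    for name, m in [("sharp", sharp), ("smooth", smooth)]:
        print(f"{name:6s}  MGS={mgs(m):8.3f}   L2-gradient={smoothness(m):10.1f}")
    # MGS favors the blocky model (it counts ~1 active gradient), while the
    # L2 penalty strongly prefers the smooth ramp -- the opposite behavior.
    ```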

  17. Mrk 421 active state in 2008: the MAGIC view, simultaneous multi-wavelength observations and SSC model constrained

    Science.gov (United States)

    Aleksić, J.; Alvarez, E. A.; Antonelli, L. A.; Antoranz, P.; Asensio, M.; Backes, M.; Barrio, J. A.; Bastieri, D.; Becerra González, J.; Bednarek, W.; Berdyugin, A.; Berger, K.; Bernardini, E.; Biland, A.; Blanch, O.; Bock, R. K.; Boller, A.; Bonnoli, G.; Borla Tridon, D.; Braun, I.; Bretz, T.; Cañellas, A.; Carmona, E.; Carosi, A.; Colin, P.; Colombo, E.; Contreras, J. L.; Cortina, J.; Cossio, L.; Covino, S.; Dazzi, F.; De Angelis, A.; De Caneva, G.; De Cea del Pozo, E.; De Lotto, B.; Delgado Mendez, C.; Diago Ortega, A.; Doert, M.; Domínguez, A.; Dominis Prester, D.; Dorner, D.; Doro, M.; Elsaesser, D.; Ferenc, D.; Fonseca, M. V.; Font, L.; Fruck, C.; García López, R. J.; Garczarczyk, M.; Garrido, D.; Giavitto, G.; Godinović, N.; Hadasch, D.; Häfner, D.; Herrero, A.; Hildebrand, D.; Höhne-Mönch, D.; Hose, J.; Hrupec, D.; Huber, B.; Jogler, T.; Kellermann, H.; Klepser, S.; Krähenbühl, T.; Krause, J.; La Barbera, A.; Lelas, D.; Leonardo, E.; Lindfors, E.; Lombardi, S.; López, A.; López, M.; Lorenz, E.; Makariev, M.; Maneva, G.; Mankuzhiyil, N.; Mannheim, K.; Maraschi, L.; Mariotti, M.; Martínez, M.; Mazin, D.; Meucci, M.; Miranda, J. M.; Mirzoyan, R.; Miyamoto, H.; Moldón, J.; Moralejo, A.; Munar-Adrover, P.; Nieto, D.; Nilsson, K.; Orito, R.; Oya, I.; Paneque, D.; Paoletti, R.; Pardo, S.; Paredes, J. M.; Partini, S.; Pasanen, M.; Pauss, F.; Perez-Torres, M. A.; Persic, M.; Peruzzo, L.; Pilia, M.; Pochon, J.; Prada, F.; Prada Moroni, P. G.; Prandini, E.; Puljak, I.; Reichardt, I.; Reinthal, R.; Rhode, W.; Ribó, M.; Rico, J.; Rügamer, S.; Saggion, A.; Saito, K.; Saito, T. Y.; Salvati, M.; Satalecka, K.; Scalzotto, V.; Scapin, V.; Schultz, C.; Schweizer, T.; Shayduk, M.; Shore, S. N.; Sillanpää, A.; Sitarek, J.; Sobczynska, D.; Spanier, F.; Spiro, S.; Stamerra, A.; Steinke, B.; Storz, J.; Strah, N.; Surić, T.; Takalo, L.; Takami, H.; Tavecchio, F.; Temnikov, P.; Terzić, T.; Tescaro, D.; Teshima, M.; Tibolla, O.; Torres, D. F.; Treves, A.; Uellenbeck, M.; Vankov, H.; Vogler, P.; Wagner, R. M.; Weitzel, Q.; Zabalza, V.; Zandanel, F.; Zanin, R.

    2012-06-01

    Context. The blazar Markarian 421 is one of the brightest TeV gamma-ray sources of the northern sky. From December 2007 until June 2008 it was intensively observed in the very high energy (VHE, E > 100 GeV) band by the single-dish Major Atmospheric Gamma-ray Imaging Cherenkov telescope (MAGIC-I). Aims: We aimed to measure the physical parameters of the emitting region of the blazar jet during active states. Methods: We performed a dense monitoring of the source in VHE with MAGIC-I, and also collected complementary data in soft X-rays and optical-UV bands; then, we modeled the spectral energy distributions (SED) derived from simultaneous multi-wavelength data within the synchrotron self-Compton (SSC) framework. Results: The source showed intense and prolonged γ-ray activity during the whole period, with integral fluxes (E > 200 GeV) seldom below the level of the Crab Nebula, and up to 3.6 times this value. Eight datasets of simultaneous optical-UV (KVA, Swift/UVOT), soft X-ray (Swift/XRT) and MAGIC-I VHE data were obtained during different outburst phases. The data constrain the physical parameters of the jet, once the spectral energy distributions obtained are interpreted within the framework of a single-zone SSC leptonic model. Conclusions: The main outcome of the study is that within the homogeneous model high Doppler factors (40 ≤ δ ≤ 80) are needed to reproduce the observed SED; but this model cannot explain the observed short time-scale variability, while it can be argued that inhomogeneous models could allow for less extreme Doppler factors, more intense magnetic fields and shorter electron cooling times compatible with hour or sub-hour scale variability.

  18. 3Es System Optimization under Uncertainty Using Hybrid Intelligent Algorithm: A Fuzzy Chance-Constrained Programming Model

    Directory of Open Access Journals (Sweden)

    Jiekun Song

    2016-01-01

    Harmonious development of the 3Es (economy-energy-environment) system is the key to realizing regional sustainable development. The structure and components of the 3Es system are analyzed. Based on the analysis of a causality diagram, GDP and industrial structure are selected as the target parameters of the economy subsystem, energy consumption intensity is selected as the target parameter of the energy subsystem, and the emissions of COD, ammonia nitrogen, SO2, and NOx, together with CO2 emission intensity, are selected as the target parameters of the environment subsystem. Fixed-asset investment in the three industries, total energy consumption, and investment in environmental pollution control are selected as the decision variables. By regarding the parameters of 3Es system optimization as fuzzy numbers, a fuzzy chance-constrained goal programming (FCCGP) model is constructed, and a hybrid intelligent algorithm including fuzzy simulation and a genetic algorithm is proposed for solving it. The results of an empirical analysis of Shandong province, China, show that the FCCGP model can reflect the inherent relationships and evolution law of the 3Es system and provide effective decision-making support for its optimization.

  19. Constraining the fraction of Compton-thick AGN in the Universe by modelling the diffuse X-ray background spectrum

    CERN Document Server

    Akylas, A; Georgantopoulos, I; Brightman, M; Nandra, K

    2012-01-01

    This paper investigates what constraints can be placed on the fraction of Compton-thick (CT) AGN in the Universe from the modeling of the spectrum of the diffuse X-ray background (XRB). We present a model for the synthesis of the XRB that uses as input a library of AGN X-ray spectra generated by the Monte Carlo simulations described by Brightman & Nandra. This is essential to account for the Compton scattering of X-ray photons in a dense medium and the impact of that process on the spectra of obscured AGN. We identify a small number of input parameters to the XRB synthesis code which encapsulate the minimum level of uncertainty in reconstructing the XRB spectrum. These are the power-law index and high-energy cutoff of the intrinsic X-ray spectra of AGN, the level of the reflection component in AGN spectra, and the fraction of CT AGN in the Universe. We then map the volume of the space allowed for these parameters by current observations of the XRB spectrum in the range 3-100 keV. One of the least constrained ...

  20. Constraining the Absolute Orientation of Eta Carinae's Binary Orbit: A 3-D Dynamical Model for the Broad [Fe III] Emission

    CERN Document Server

    Madura, Thomas I; Owocki, Stanley P; Groh, Jose H; Okazaki, Atsuo T; Russell, Christopher M P

    2011-01-01

    We present a three-dimensional (3-D) dynamical model for the broad [Fe III] emission observed in Eta Carinae using the Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS). This model is based on full 3-D Smoothed Particle Hydrodynamics (SPH) simulations of Eta Car's binary colliding winds. Radiative transfer codes are used to generate synthetic spectro-images of [Fe III] emission line structures at various observed orbital phases and STIS slit position angles (PAs). Through a parameter study that varies the orbital inclination i, the PA θ that the orbital plane projection of the line-of-sight makes with the apastron side of the semi-major axis, and the PA on the sky of the orbital axis, we are able, for the first time, to tightly constrain the absolute 3-D orientation of the binary orbit. To simultaneously reproduce the blue-shifted emission arcs observed at orbital phase 0.976, STIS slit PA = +38 degrees, and the temporal variations in emission seen at negative slit PAs, the binary ...

  1. Constraining deflagration models of Type Ia supernovae through intermediate-mass elements

    CERN Document Server

    García-Senz, D; Cabezon, R M; Woosley, S E

    2006-01-01

    The physical structure of a nuclear flame is a basic ingredient of the theory of Type Ia supernovae (SNIa). Assuming an exponential density reduction with several characteristic times, we have followed the evolution of a planar nuclear flame in an expanding background from an initial density of 6.6×10^7 g/cm³ down to 2×10^6 g/cm³. The total amount of synthesized intermediate-mass elements (IME), from silicon to calcium, was monitored during the calculation. We have made use of the computed mass fractions, X_IME, of these elements to give an estimate of the total amount of IME synthesized during the deflagration of a massive white dwarf. Using X_IME and adopting the usual hypothesis that turbulence decouples the effective burning velocity from the laminar flame speed, so that the relevant flame speed is actually the turbulent speed on the integral length-scale, we have built a simple geometrical approach to model the region where IME are thought to be produced. It turns out that a healthy production of IME invol...

  2. Proton Decay and Cosmology Strongly Constrain the Minimal SU(5) Supergravity Model

    CERN Document Server

    Lopez, Jorge L.; Pois, H.

    1993-01-01

    We present the results of an extensive exploration of the five-dimensional parameter space of the minimal $SU(5)$ supergravity model, including the constraints of a long enough proton lifetime ($\tau_p > 1\times10^{32}\,\mathrm{yr}$) and a small enough neutralino cosmological relic density ($\Omega_\chi h^2_0 \le 1$). We find that the combined effect of these two constraints is quite severe, although still leaving a small region of parameter space with $m_{\tilde g,\tilde q} < 1\,\mathrm{TeV}$. The allowed values of the proton lifetime extend up to $\tau_p \approx 1\times10^{33}\,\mathrm{yr}$ and should be fully explored by the SuperKamiokande experiment. The proton lifetime cut also entails the following mass correlations and bounds: $m_h \lesssim 100\,\mathrm{GeV}$, $m_\chi \approx \frac{1}{2} m_{\chi^0_2} \approx 0.15\, m_{\tilde g}$, $m_{\chi^0_2} \approx m_{\chi^+_1}$, and $m_\chi < 85\,(115)\,\mathrm{GeV}$, $m_{\chi^0_2,\chi^+_1} < 165\,(225)\,\mathrm{GeV}$ for $\alpha_3 = 0.113\,(0.120)$. Finally, the combined proton decay and cosmology constraints predict that if $m_h \gtrsim 75\,(80)\,\mathrm{GeV}$ ...

  3. Water-Constrained Electric Sector Capacity Expansion Modeling Under Climate Change Scenarios

    Science.gov (United States)

    Cohen, S. M.; Macknick, J.; Miara, A.; Vorosmarty, C. J.; Averyt, K.; Meldrum, J.; Corsi, F.; Prousevitch, A.; Rangwala, I.

    2015-12-01

    Over 80% of U.S. electricity generation uses a thermoelectric process, which requires significant quantities of water for power plant cooling. This water requirement exposes the electric sector to vulnerabilities related to shifts in water availability driven by climate change as well as reductions in power plant efficiencies. Electricity demand is also sensitive to climate change, which in most of the United States leads to warming temperatures that increase total cooling-degree days. The resulting demand increase is typically greater for peak demand periods. This work examines the sensitivity of the development and operations of the U.S. electric sector to the impacts of climate change using an electric sector capacity expansion model that endogenously represents seasonal and local water resource availability as well as climate impacts on water availability, electricity demand, and electricity system performance. Capacity expansion portfolios and water resource implications from 2010 to 2050 are shown at high spatial resolution under a series of climate scenarios. Results demonstrate the importance of water availability for future electric sector capacity planning and operations, especially under more extreme hotter and drier climate scenarios. In addition, region-specific changes in electricity demand and water resources require region-specific responses that depend on local renewable resource availability and electricity market conditions. Climate change and the associated impacts on water availability and temperature can affect the types of power plants that are built, their location, and their impact on regional water resources.

  4. Modeling and stabilization results for a charge or current-actuated active constrained layer (ACL) beam model with the electrostatic assumption

    Science.gov (United States)

    Özer, Ahmet Özkan

    2016-04-01

    An infinite dimensional model for a three-layer active constrained layer (ACL) beam, consisting of a piezoelectric elastic layer at the top and an elastic host layer at the bottom constraining a viscoelastic layer in the middle, is obtained for clamped-free boundary conditions by using a thorough variational approach. The Rao-Nakra thin compliant layer approximation is adopted to model the sandwich structure, and the electrostatic approach (magnetic effects are ignored) is assumed for the piezoelectric layer. Instead of voltage actuation, the piezoelectric layer is proposed to be activated by a charge (or current) source. We show that the closed-loop system with all-mechanical feedback is uniformly exponentially stable. Our result is the outcome of a compact perturbation argument and a unique continuation result for the spectral problem which relies on the multipliers method. Finally, the modeling methodology of the paper is generalized to multilayer ACL beams, and the uniform exponential stabilizability result is established analogously.

  5. Constraining the Source of Curiosity's Methane Detections Using the Mars Regional Atmospheric Modeling System (MRAMS)

    Science.gov (United States)

    Pla-Garcia, Jorge; Rafkin, Scot C. R.; MSL Team, SAM Team, REMS Team

    2016-10-01

    The putative in situ detection of methane by the SAM instrument has garnered significant attention. There are several major unresolved questions regarding this detection: 1) Where is the release location? 2) How spatially extensive is the release? 3) For how long is CH4 released? To better address the potential mixing and the remaining questions, atmospheric circulation studies of Gale Crater were performed with the MRAMS mesoscale model, which is ideally suited for this investigation, using tracer fields to simulate the transport of CH4 and to understand the mixing of air inside and outside the crater throughout the Martian year. The simulated tracer abundances are compared to gas abundances measured by SAM. Ls270 was shown to be an anomalous season when air within and outside the crater is well mixed by strong, flushing, northerly flow and large-amplitude breaking mountain waves: air flowing downslope at night is cold enough to penetrate all the way to the surface. At other seasons, the air in the crater is more isolated from the surrounding environment: the air flowing down the crater rims does not easily make it to the crater floor. Instead, the air encounters very cold and stable air pooled in the bottom of the crater, which forces it to glide right over the colder, denser air below. Thus, the mixing of near-surface crater air with the external environment is potentially more limited at seasons other than around Ls270. The rise in CH4 concentration was reported to start around Ls336, peak shortly after Ls82, and then drop to background values prior to Ls103. Two scenarios are considered in the context of the circulations predicted by MRAMS. The first scenario is the release of CH4 from somewhere outside the crater. The second is a release of CH4 within the crater. In both cases, the release is assumed to take place near the season when the rise in concentration was first noted (Ls336). This is a transitional time at Gale, when the flushing winds are giving

  6. Constraining the heat flux between Enceladus’ tiger stripes: numerical modeling of funiscular plains formation

    Science.gov (United States)

    Bland, Michael; McKinnon, William B; Schenk, Paul M.

    2015-01-01

    The Cassini spacecraft’s Composite Infrared Spectrometer (CIRS) has observed at least 5 GW of thermal emission at Enceladus’ south pole. The vast majority of this emission is localized on the four long, parallel, evenly spaced fractures dubbed tiger stripes. However, the thermal emission from regions between the tiger stripes has not been determined. These spatially localized regions have a unique morphology consisting of short-wavelength (∼1 km) ridges and troughs with topographic amplitudes of ∼100 m, and a generally ropy appearance that has led to them being referred to as “funiscular terrain.” Previous analysis pursued the hypothesis that the funiscular terrain formed via thin-skinned folding, analogous to that occurring on a pahoehoe flow top (Barr, A.C., Preuss, L.J. [2010]. Icarus 208, 499–503). Here we use finite element modeling of lithospheric shortening to further explore this hypothesis. Our best-case simulations reproduce funiscular-like morphologies, although our simulated fold wavelengths after 10% shortening are 30% longer than those observed. Reproducing short-wavelength folds requires high effective surface temperatures (∼185 K), an ice lithosphere (or high-viscosity layer) with a low thermal conductivity (one-half to one-third that of intact ice or lower), and very high heat fluxes (perhaps as great as 400 mW m−2). These conditions are driven by the requirement that the high-viscosity layer remain extremely thin (≲200 m). Although the required conditions are extreme, they can be met if a layer of fine-grained plume material 1–10 m thick, or a highly fractured ice layer >50 m thick, insulates the surface, and the lithosphere is fractured throughout as well. The source of the necessary heat flux (a factor of two greater than previous estimates) is less obvious. We also present evidence for an unusual color/spectral character of the ropy terrain, possibly related to its unique surface texture. Our simulations demonstrate

  7. Geologic modeling constrained by seismic and dynamical data; Modelisation geologique contrainte par les donnees sismiques et dynamiques

    Energy Technology Data Exchange (ETDEWEB)

    Pianelo, L.

    2001-09-01

    Matching procedures are often used in reservoir production to improve geological models. In reservoir engineering, history matching leads to updating the petrophysical parameters in fluid flow simulators so that the results of the calculations fit the observed data. In the same vein, seismic parameters are inverted to allow the numerical recovery of seismic acquisitions. However, it is well known that these inverse problems are poorly constrained. The idea of this work is to simultaneously match both the permeability and the acoustic impedance of the reservoir, to enhance the resulting geological model. To do so, both parameters are linked using observed relations and/or the classic Wyllie (porosity-impedance) and Carman-Kozeny (porosity-permeability) relationships. Hence production data are added to the seismic match, and seismic observations are used for the permeability recovery. The work consists in developing numerical prototypes of a 3-D fluid flow simulator and a 3-D seismic acquisition simulator, and then in implementing the coupled inversion loop for the permeability and the acoustic impedance of the two models. We can hence test our theory on a realistic 3-D case. Comparison of the coupled matching with the two classical ones demonstrates the efficiency of our method: we significantly reduce the number of possible solutions, and hence the number of scenarios. In addition, the extra information leads to a natural improvement of the obtained models, especially in the spatial localization of the permeability contrasts. The improvement is significant, both in the distribution of the two inverted parameters and in the speed of the operation. This work is an important step toward data integration, and leads to a better reservoir characterization. This original algorithm could also be useful in reservoir monitoring, history matching and in optimization of production. This new and original method is patented and

  8. Introducing Variable-Step Topography (VST) coordinates within dynamically constrained Nonhydrostatic Modeling System (NMS). Part 1: VST formulation within NMS host model framework

    Science.gov (United States)

    Tripoli, Gregory J.; Smith, Eric A.

    2014-06-01

    A Variable-Step Topography (VST) surface coordinate system is introduced into a dynamically constrained, scalable, nonhydrostatic atmospheric model for reliable simulations of flows over both smooth and steep terrain without sacrificing dynamical integrity over either type of surface. Backgrounds of both terrain-following and step coordinate model developments are presented before justifying the turn to a VST approach within an appropriately configured host model. In this first part of a two-part sequence of papers, the full formulation of the VST model, prefaced by a description of the framework of its apposite host, a re-tooled Nonhydrostatic Modeling System (NMS), is presented. [The second part assesses the performance and benefits of the new VST coordinate system in conjunction with seven orthodox obstacle flow problems.] The NMS is a 3-dimensional, nonhydrostatic cloud-mesoscale model, designed for integrations from plume-cloud scales out to regional-global scales. The derivative properties of VST in conjunction with the NMS's newly designed dynamically constrained core are capable of accurately capturing the deformations of flows by any type of terrain variability. Numerical differencing schemes needed to satisfy critical integral constraints, while also effectively enabling the VST lower boundary, are described. The host model constraints include mass, momentum, energy, vorticity and enstrophy conservation. A quasi-compressible closure cast on multiple-nest rotated spherical grids is the underlying framework used to study the advantages of the VST coordinate system. The principal objective behind the VST formulation is to combine the advantages of both terrain-following and step coordinate systems without suffering either of their disadvantages, while at the same time creating a vertical surface coordinate setting suitable for a scalable, nonhydrostatic model, safeguarded with physically realistic dynamical constraints.

  9. A best-practice model for term planning

    OpenAIRE

    Bhreathnach, Úna

    2011-01-01

    This thesis presents a best-practice model for term planning for a language, based on the literature and on three qualitative case studies: TERMCAT (the term planning organisation for Catalan), Terminologicentrum TNC (the term planning organisation for Swedish) and the Irish-language term planning organisations, principally the Terminology Committee (Foras na Gaeilge) and Fiontar, DCU. Although the literature on the subject is underdeveloped, and a complete model cannot be derived from it,...

  10. Constraining Parameters in Pulsar Models of Repeating FRB 121102 with High-energy Follow-up Observations

    Science.gov (United States)

    Xiao, Di; Dai, Zi-Gao

    2017-09-01

    Recently, a precise (sub-arcsecond) localization of the repeating fast radio burst (FRB) 121102 led to the discovery of persistent radio and optical counterparts, the identification of a host dwarf galaxy at a redshift of z = 0.193, and several campaigns of searches for higher-frequency counterparts, which gave only upper limits on the emission flux. Although the origin of FRBs remains unknown, most of the existing theoretical models are associated with pulsars, or more specifically, magnetars. In this paper, we explore persistent high-energy emission from a rapidly rotating highly magnetized pulsar associated with FRB 121102 if internal gradual magnetic dissipation occurs in the pulsar wind. We find that the efficiency of converting the spin-down luminosity to the high-energy (e.g., X-ray) luminosity is generally much smaller than unity, even for a millisecond magnetar. This provides an explanation for the non-detection of high-energy counterparts to FRB 121102. We further constrain the spin period and surface magnetic field strength of the pulsar with the current high-energy observations. In addition, we compare our results with the constraints given by the other methods in previous works and expect to apply our new method to some other open issues in the future.
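
    The basic bookkeeping behind such constraints can be sketched with the vacuum magnetic-dipole spin-down luminosity, L_sd = B²R⁶Ω⁴/(6c³) in cgs units. In the sketch below, the conversion efficiency and the X-ray limit are illustrative placeholders, not the values derived in the paper:

    ```python
    import numpy as np

    # Order-of-magnitude sketch: vacuum-dipole spin-down luminosity of a
    # pulsar compared against a hypothetical X-ray upper limit L_X for an
    # assumed spin-down -> X-ray efficiency eta. All numbers illustrative.

    c, R = 3e10, 1e6                     # speed of light (cm/s), NS radius (cm)

    def spindown_luminosity(B, P):
        omega = 2.0 * np.pi / P
        return B**2 * R**6 * omega**4 / (6.0 * c**3)   # erg/s

    L_x_limit = 1e41                     # hypothetical X-ray luminosity limit, erg/s
    eta = 1e-3                           # assumed efficiency (placeholder)

    for B in [1e13, 1e14, 1e15]:         # surface field, G
        for P in [1e-3, 1e-2, 0.1]:      # spin period, s
            L = spindown_luminosity(B, P)
            ok = eta * L < L_x_limit     # consistent with the non-detection?
            print(f"B={B:.0e} G  P={P:.0e} s  L_sd={L:.2e} erg/s  allowed={ok}")
    ```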

  11. Establishing a regulatory value chain model: An innovative approach to strengthening medicines regulatory systems in resource-constrained settings.

    Science.gov (United States)

    Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin

    2016-05-01

    Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.

  12. Constraining the Absolute Orientation of eta Carinae's Binary Orbit: A 3-D Dynamical Model for the Broad [Fe III] Emission

    Science.gov (United States)

    Madura, T. I.; Gull, T. R.; Owocki, S. P.; Groh, J. H.; Okazaki, A. T.; Russell, C. M. P.

    2011-01-01

    We present a three-dimensional (3-D) dynamical model for the broad [Fe III] emission observed in Eta Carinae using the Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS). This model is based on full 3-D Smoothed Particle Hydrodynamics (SPH) simulations of Eta Car's binary colliding winds. Radiative transfer codes are used to generate synthetic spectro-images of [Fe III] emission line structures at various observed orbital phases and STIS slit position angles (PAs). Through a parameter study that varies the orbital inclination i, the PA θ that the orbital plane projection of the line-of-sight makes with the apastron side of the semi-major axis, and the PA on the sky of the orbital axis, we are able, for the first time, to tightly constrain the absolute 3-D orientation of the binary orbit. To simultaneously reproduce the blue-shifted emission arcs observed at orbital phase 0.976, STIS slit PA = +38°, and the temporal variations in emission seen at negative slit PAs, the binary needs to have i ≈ 130° to 145°, θ ≈ −15° to +30°, and an orbital axis projected on the sky at a PA ≈ 302° to 327° east of north. This represents a system with an orbital axis that is closely aligned with the inferred polar axis of the Homunculus nebula, in 3-D. The companion star, Eta_B, thus orbits clockwise on the sky and is on the observer's side of the system at apastron. This orientation has important implications for theories for the formation of the Homunculus and helps lay the groundwork for orbital modeling to determine the stellar masses.

  13. Chance-constrained/stochastic linear programming model for acid rain abatement—I. Complete colinearity and noncolinearity

    Science.gov (United States)

    Ellis, J. H.; McBean, E. A.; Farquhar, G. J.

    A Linear Programming model is presented for the development of acid rain abatement strategies in eastern North America. For a system comprised of 235 large controllable point sources and 83 uncontrolled area sources, it determines the least-cost method of reducing SO2 emissions to satisfy maximum wet sulfur deposition limits at 20 sensitive receptor locations. In this paper, the purely deterministic model is extended to a probabilistic form by incorporating the effects of meteorologic variability on the long-range pollutant transport processes. These processes are represented by source-receptor-specific transfer coefficients. Experiments for quantifying the spatial variability of transfer coefficients showed their distributions to be approximately lognormal with logarithmic standard deviations consistently about unity. Three methods of incorporating second-moment random variable uncertainty into the deterministic LP framework are described: Two-Stage Linear Programming Under Uncertainty (LPUU), Chance-Constrained Programming (CCP) and Stochastic Linear Programming (SLP). A composite CCP-SLP model is developed which embodies the two-dimensional characteristics of transfer coefficient uncertainty. Two probabilistic formulations are described, involving complete colinearity and complete noncolinearity of the transfer coefficient covariance-correlation structure. Complete colinearity assumes complete dependence between transfer coefficients; complete noncolinearity assumes complete independence. The completely colinear and noncolinear formulations are considered extreme bounds in a meteorologic sense and yield abatement strategies of largely didactic value. Such strategies can be characterized as having excessive costs and undesirable deposition results in the completely colinear case, and an absence of a clearly defined system risk level (other than expected-value) in the noncolinear formulation.
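
    Under the completely colinear (comonotonic) formulation, the p-quantile of a sum of fully dependent lognormal terms is the sum of the individual p-quantiles, so the chance constraint P(Σ_j t_ij e_j ≤ L_i) ≥ p reduces to a deterministic linear constraint. The sketch below illustrates that reduction with made-up costs, medians and deposition limits, not the 235-source system of the paper:

    ```python
    import numpy as np
    from scipy.optimize import linprog
    from scipy.stats import lognorm

    # Chance-constrained deterministic equivalent under complete colinearity:
    # comonotonic lognormal transfer coefficients make the p-quantile of the
    # summed deposition equal to the sum of elementwise p-quantiles, so
    # P(sum_j t_ij e_j <= L_i) >= p  becomes  sum_j Q_ij(p) e_j <= L_i.

    rng = np.random.default_rng(3)
    n_src, n_rec = 6, 3
    median_t = rng.uniform(0.05, 0.3, size=(n_rec, n_src))  # median transfer coeffs
    slog = 1.0                                              # log-std ~ unity (as observed)
    p = 0.95                                                # reliability level

    Q = lognorm.ppf(p, s=slog, scale=median_t)              # elementwise p-quantiles
    cost = rng.uniform(1.0, 5.0, n_src)                     # abatement cost per unit emission
    e_max = np.full(n_src, 10.0)                            # uncontrolled emission levels
    L = np.full(n_rec, 4.0)                                 # deposition limits

    # minimizing abatement cost cost@(e_max - e) is maximizing cost@e
    res = linprog(c=-cost, A_ub=Q, b_ub=L,
                  bounds=list(zip(np.zeros(n_src), e_max)))
    print("optimal emissions:", np.round(res.x, 2))
    print("abatement cost:", float(cost @ (e_max - res.x)))
    ```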

  14. Resolving electrolayers from VES: A contribution from modeling the electrical response of a tightly constrained alluvial stratigraphy

    Science.gov (United States)

    Mele, M.; Ceresa, N.; Bersezio, R.; Giudici, M.; Inzoli, S.; Cavalli, E.

    2015-08-01

    The reliability of the hydrostratigraphic interpretation of electrostratigraphy derived from ground-based Direct Current resistivity methods is analyzed through forward modeling of synthetically derived electrostratigraphic layering in a tightly constrained alluvial framework. To this purpose, a high-resolution stratigraphic model of the horizontally stratified alluvial aquifers hosted by the Quaternary regressive cycle of the Po plain in Lombardy was elaborated for a small area (1 ha) by correlation of borehole lithostratigraphic data down to 160 m below the ground surface. The stratigraphic model was used to compute a 1-D synthetic electrostratigraphy based on the petrophysical relationship linking the bulk electrical resistivity of porous sediments to the coarse-to-fine litho-textural ratio and to the average pore-water electrical conductivity. A synthetic apparent resistivity curve was computed for the 1-D synthetic electrostratigraphy and for a traditional Vertical Electrical Sounding with Schlumberger array and a maximum dipole separation of 300 m. Good agreement was observed with the experimental apparent resistivity curve obtained from a Vertical Electrical Sounding collected in the study area. The comparison of the 1-D synthetic electrostratigraphy with the results obtained by inversion of the experimental data with the linear-digital filter method, under the assumption of electrically homogeneous layers and no lateral resistivity transition, was used to estimate the hydrostratigraphic resolving power of ground-based resistivity data at various depths. Stratigraphic units of different hierarchic orders can be resolved by Direct Current methods at different depths and at different sites. In this specific case study, Vertical Electrical Sounding resolution was comparable to the hierarchy of the genetic depositional systems, corresponding to the rank of the hydrostratigraphic systems.

  15. ICCLP: an inexact chance-constrained linear programming model for land-use management of lake areas in urban fringes.

    Science.gov (United States)

    Liu, Yong; Qin, Xiaosheng; Guo, Huaicheng; Zhou, Feng; Wang, Jinfeng; Lv, Xiaojian; Mao, Guozhu

    2007-12-01

    Lake areas in urban fringes are under increasing urbanization pressure. Consequently, the conflict between rapid urban development and the maintenance of water bodies in such areas urgently needs to be addressed. An inexact chance-constrained linear programming (ICCLP) model for optimal land-use management of lake areas in urban fringes was developed. The ICCLP model was based on land-use suitability assessment and land evaluation. The maximum net economic benefit (NEB) was selected as the objective of land-use allocation. The total environmental capacity (TEC) of water systems and the public financial investment (PFI) at different probability levels were considered key constraints. Other constraints included in the model were land-use suitability, governmental requirements on the ratios of various land-use types, and technical constraints. A case study implementing the system was performed for the lake area of Hanyang at the urban fringe of Wuhan, central China, based on our previous study on land-use suitability assessment. The Hanyang lake area is under significant urbanization pressure. A 15-year optimal model for land-use allocation is proposed during 2006 to 2020 to better protect the water system and to gain the maximum benefits of development. Sixteen constraints were set for the optimal model. The model results indicated that NEB was between $1.48×10^9 and $8.76×10^9 or between $3.98×10^9 and $16.7×10^9, depending on the different urban-expansion patterns and land demands. The changes in total developed area and the land-use structure were analyzed under different probabilities (q_i) of TEC. Changes in q_i resulted in different urban expansion patterns and demands on land, which were the direct result of the constraints imposed by TEC and PFI. The ICCLP model might help local authorities better understand and address complex land-use systems and develop optimal land-use management strategies that better balance urban expansion and

  16. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model – Part 1: Assessing the influence of constrained multi-generational ageing

    Directory of Open Access Journals (Sweden)

    S. H. Jathar

    2015-09-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data, and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the Statistical Oxidation Model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional UCD/CIT air quality model and applied to air quality episodes in California and the eastern US. The mass, composition and properties of SOA predicted using SOM are compared to SOA predictions generated by a traditional "two-product" model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models capture the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. Consequently, the use of low and high NOx yields

  17. Utility Constrained Energy Minimization In Aloha Networks

    CERN Document Server

    Khodaian, Amir Mahdi; Talebi, Mohammad S

    2010-01-01

    In this paper we consider the issue of energy efficiency in random access networks and show that optimizing transmission probabilities of nodes can enhance network performance in terms of energy consumption and fairness. First, we propose a heuristic power control method that improves throughput, and then we model the Utility Constrained Energy Minimization (UCEM) problem in which the utility constraint takes into account single and multi node performance. UCEM is modeled as a convex optimization problem and Sequential Quadratic Programming (SQP) is used to find optimal transmission probabilities. Numerical results show that our method can achieve fairness, reduce energy consumption and enhance lifetime of such networks.
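
    A minimal version of the UCEM formulation for slotted Aloha can be written down directly: minimize the total transmission probability (an energy proxy) subject to a per-node throughput floor, with the classic collision-channel throughput s_i = q_i Π_{j≠i}(1 - q_j). The sketch below uses a standard SQP solver; the network size and utility floor are arbitrary placeholders, not the paper's formulation in detail:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Utility Constrained Energy Minimization sketch for slotted Aloha:
    # choose transmission probabilities q_i minimizing sum(q) subject to a
    # per-node throughput floor s_i >= s_min.

    n = 5
    s_min = 0.05                        # required per-node throughput

    def throughput(q):
        prod = np.prod(1.0 - q)
        return q * prod / (1.0 - q)     # q_i * prod_{j != i} (1 - q_j)

    res = minimize(
        lambda q: float(np.sum(q)),     # energy proxy: total transmission prob.
        x0=np.full(n, 0.2),
        method="SLSQP",
        bounds=[(1e-6, 0.999)] * n,
        constraints=[{"type": "ineq", "fun": lambda q: throughput(q) - s_min}],
    )
    print("transmission probabilities:", np.round(res.x, 4))
    print("per-node throughput:", np.round(throughput(res.x), 4))
    ```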

  18. Relative information contributions of model vs. data to short- and long-term forecasts of forest carbon dynamics.

    Science.gov (United States)

    Weng, Ensheng; Luo, Yiqi

    2011-07-01

    Biogeochemical models have been used to evaluate long-term ecosystem responses to global change on decadal and century time scales. Recently, data assimilation has been applied to improve these models for ecological forecasting. It is not clear what the relative information contributions of model (structure and parameters) vs. data are to constraints of short- and long-term forecasting. In this study, we assimilated eight sets of 10-year data (foliage, woody, and fine root biomass, litter fall, forest floor carbon [C], microbial C, soil C, and soil respiration) collected from Duke Forest into a Terrestrial Ecosystem model (TECO). The relative information contribution was measured by Shannon information index calculated from probability density functions (PDFs) of carbon pool sizes. The null knowledge without a model or data was defined by the uniform PDF within a prior range. The relative model contribution was information content in the PDF of modeled carbon pools minus that in the uniform PDF, while the relative data contribution was the information content in the PDF of modeled carbon pools after data was assimilated minus that before data assimilation. Our results showed that the information contribution of the model to constrain carbon dynamics increased with time whereas the data contribution declined. The eight data sets contributed more than the model to constrain C dynamics in foliage and fine root pools over the 100-year forecasts. The model, however, contributed more than the data sets to constrain the litter, fast soil organic matter (SOM), and passive SOM pools. For the two major C pools, woody biomass and slow SOM, the model contributed less information in the first few decades and then more in the following decades than the data. Knowledge of relative information contributions of model vs. data is useful for model development, uncertainty analysis, future data collection, and evaluation of ecological forecasting.
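
    The information-contribution bookkeeping can be reproduced in a few lines: relative to the uniform prior over the allowed range, the information content of a PDF is the reduction in Shannon entropy, and the model and data contributions are differences of such quantities. A toy sketch with Gaussian stand-ins for the modeled carbon-pool PDFs (not the TECO/Duke Forest results):

    ```python
    import numpy as np

    # Shannon information of a PDF relative to a uniform prior on [lo, hi]:
    # I = ln(hi - lo) - H(pdf). Model contribution = I(model-only PDF);
    # data contribution = I(after assimilation) - I(before assimilation).

    def information(samples, lo, hi, bins=60):
        p, edges = np.histogram(samples, bins=bins, range=(lo, hi), density=True)
        w = np.diff(edges)
        mask = p > 0
        h = -np.sum(p[mask] * np.log(p[mask]) * w[mask])  # differential entropy
        return np.log(hi - lo) - h                        # info gain vs uniform

    rng = np.random.default_rng(4)
    lo, hi = 0.0, 100.0                                   # prior range of a carbon pool
    prior_model = rng.normal(50, 15, 10000).clip(lo, hi)  # model-only spread
    posterior = rng.normal(48, 5, 10000).clip(lo, hi)     # after data assimilation

    I_model = information(prior_model, lo, hi)
    I_after = information(posterior, lo, hi)
    print(f"model contribution: {I_model:.2f} nats")
    print(f"data contribution:  {I_after - I_model:.2f} nats")
    ```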

  19. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    Directory of Open Access Journals (Sweden)

    E. Emili

    2013-08-01

    Accurate and temporally resolved fields of free-troposphere ozone are of major importance to quantify the intercontinental transport of pollution and the ozone radiative forcing. In this study we examine the impact of assimilating ozone observations from the Microwave Limb Sounder (MLS) and the Infrared Atmospheric Sounding Interferometer (IASI) in a global chemical transport model (MOdèle de Chimie Atmosphérique à Grande Échelle, MOCAGE). The assimilation of the two instruments is performed by means of a variational algorithm (4D-VAR) and makes it possible to constrain stratospheric and tropospheric ozone simultaneously. The analysis is first computed for the months of August and November 2008 and validated against ozonesonde measurements to verify the presence of observation and model biases. It is found that the IASI Tropospheric Ozone Column (TOC, 1000–225 hPa) should be bias-corrected prior to assimilation and the MLS lowermost level (215 hPa) excluded from the analysis. Furthermore, a longer analysis of 6 months (July–August 2008) showed that the combined assimilation of MLS and IASI is able to globally reduce the uncertainty (root mean square error, RMSE) of the modeled ozone columns from 30% to 15% in the Upper Troposphere/Lower Stratosphere (UTLS, 70–225 hPa) and from 25% to 20% in the free troposphere. The positive effect of assimilating IASI tropospheric observations is very significant at low latitudes (30° S–30° N), whereas it is not demonstrated at higher latitudes. Results are confirmed by a comparison with additional ozone datasets like the Measurements of OZone and wAter vapour by aIrbus in-service airCraft (MOZAIC) data, the Ozone Monitoring Instrument (OMI) total ozone columns and several high-altitude surface measurements. Finally, the analysis is found to be relatively insensitive to the assimilation parameters and the model chemical scheme, owing to the high frequency of satellite observations compared to the average lifetime of free…

  20. Modelled long term trends of surface ozone over South Africa

    CSIR Research Space (South Africa)

    Naidoo, M

    2011-10-01

    …timescale seeks to provide a spatially comprehensive view of trends while also creating a baseline for comparisons with future projections of air quality through the forcing of air quality models with model-predicted long-term meteorology. Previous…

  1. Small-angle scattering from phospholipid nanodiscs: derivation and refinement of a molecular constrained analytical model form factor.

    Science.gov (United States)

    Skar-Gislinge, Nicholas; Arleth, Lise

    2011-02-28

    Nanodiscs™ consist of small phospholipid bilayer discs surrounded and stabilized by amphiphilic protein belts. Nanodiscs and their confinement and stabilization of nanometer-sized pieces of phospholipid bilayer are highly interesting from a membrane physics point of view. We demonstrate how the detailed structure of Di-Lauroyl-Phosphatidyl Choline (DLPC) nanodiscs may be determined by simultaneous fitting of a structural model to small-angle scattering data from the nanodiscs investigated in three different contrast situations: two SANS contrasts and one SAXS contrast. The article gives a detailed account of the underlying structural model for the nanodiscs and describes how additional chemical and biophysical information can be incorporated into the model in terms of molecular constraints. We discuss and quantify the contributions from the different elements of the structural model and provide very strong experimental support for the nanodiscs as having an elliptical cross-section and with poly-histidine tags protruding out from the rim of the protein belt. The analysis also provides unprecedented information about the structural conformation of the phospholipids when these are localized in the nanodiscs. The model paves the first part of the way toward our long-term goal of using the nanodiscs as a platform for small-angle scattering based structural investigations of membrane proteins in solution.

  2. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, MA; Trimboli, MS

    2015-07-01

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models. (C) 2015 Elsevier B.V. All rights reserved.
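
    A hedged sketch of this kind of formulation (illustrative circuit parameters and a crude linear open-circuit voltage, not the paper's cell data): a 2nd-order RC equivalent-circuit model in which the ohmic resistance R0 supplies the direct feed-through term, wrapped in a convex MPC problem that drives the state of charge to a target subject to current and terminal-voltage limits. A receding-horizon controller would re-solve this problem at every sample; the sketch solves a single horizon.

    ```python
    # Constrained MPC over one horizon with a 2nd-order equivalent circuit
    # model (ECM). States: [SOC, v1, v2]; input: charge current. All values
    # below are illustrative assumptions.
    import cvxpy as cp
    import numpy as np

    dt, Q = 1.0, 3600.0                          # step [s], capacity [A s]
    R0, R1, C1, R2, C2 = 0.01, 0.015, 2e3, 0.02, 2e4
    A = np.diag([1.0, np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))])
    B = np.array([dt / Q,
                  R1 * (1 - np.exp(-dt / (R1 * C1))),
                  R2 * (1 - np.exp(-dt / (R2 * C2)))])

    N = 60                                       # prediction horizon
    x = cp.Variable((3, N + 1))                  # [SOC, v1, v2]
    i = cp.Variable(N)                           # charging current [A]

    ocv = lambda soc: 3.2 + 0.8 * soc            # crude linear OCV (assumption)
    cons = [x[:, 0] == np.array([0.2, 0.0, 0.0])]
    for k in range(N):
        cons += [x[:, k + 1] == A @ x[:, k] + B * i[k],
                 i[k] >= 0.0, i[k] <= 2.0,                             # input limits
                 ocv(x[0, k]) + x[1, k] + x[2, k] + R0 * i[k] <= 4.2]  # terminal voltage
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x[0, :] - 0.8)), cons)
    prob.solve()
    print(i.value[:5])                           # front of the fast-charge profile
    ```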

  3. Long Term Modelling of Permafrost Dynamics

    Science.gov (United States)

    1994-07-01

  4. Discrete choice models with multiplicative error terms

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    …differences. We develop some properties of this type of model and show that in several cases the change from an additive to a multiplicative formulation, maintaining a specification of V, may lead to a large improvement in fit, sometimes larger than that gained from introducing random coefficients in V…

  5. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    Science.gov (United States)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). The concept further helps to identify missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
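
    The Latin-hypercube step of such a model-data fusion scheme is simple to sketch (the toy model, parameter names and bounds below are placeholders, not LandscapeDNDC internals): sample the parameter space, score every run against observations with RMSE, and keep the behavioural subset.

    ```python
    # Latin-hypercube uncertainty analysis: sample parameters, score runs by
    # RMSE, retain the best-performing ("behavioural") parameter sets.
    import numpy as np
    from scipy.stats import qmc

    def rmse(sim, obs):
        return np.sqrt(np.mean((sim - obs) ** 2))

    def run_model(params, forcing):
        # Placeholder for a LandscapeDNDC-style simulation
        return params[0] * forcing + params[1]

    bounds = np.array([[0.5, 2.0],       # e.g. a rate constant (assumed)
                       [-1.0, 1.0]])     # e.g. an offset (assumed)
    sampler = qmc.LatinHypercube(d=2, seed=42)
    params = qmc.scale(sampler.random(n=1000), bounds[:, 0], bounds[:, 1])

    forcing = np.linspace(0.0, 10.0, 50)
    obs = 1.3 * forcing + 0.2 + np.random.default_rng(0).normal(0, 0.5, forcing.size)

    scores = np.array([rmse(run_model(p, forcing), obs) for p in params])
    behavioural = params[scores < np.quantile(scores, 0.05)]   # best 5 % of runs
    print(behavioural.mean(axis=0))
    ```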

  6. Constrained Sparse Galerkin Regression

    CERN Document Server

    Loiseau, Jean-Christophe

    2016-01-01

    In this work, we demonstrate the use of sparse regression techniques from machine learning to identify nonlinear low-order models of a fluid system purely from measurement data. In particular, we extend the sparse identification of nonlinear dynamics (SINDy) algorithm to enforce physical constraints in the regression, leading to energy conservation. The resulting models are closely related to Galerkin projection models, but the present method does not require the use of a full-order or high-fidelity Navier-Stokes solver to project onto basis modes. Instead, the most parsimonious nonlinear model is determined that is consistent with observed measurement data and satisfies necessary constraints. The constrained Galerkin regression algorithm is implemented on the fluid flow past a circular cylinder, demonstrating the ability to accurately construct models from data.
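
    A minimal sketch of the constrained-regression idea (not the authors' implementation): sequentially thresholded least squares in which linear equality constraints C ξ = d, standing in for the energy-conserving symmetries imposed on the quadratic coefficients, are re-enforced through a KKT system at each refit.

    ```python
    # Constrained SINDy-style regression: min ||Theta @ xi - dxdt||^2
    # subject to C @ xi = d, with iterative hard thresholding for sparsity.
    import numpy as np

    def constrained_lstsq(Theta, dxdt, C, d):
        # Solve the equality-constrained least squares via its KKT system
        p, m = Theta.shape[1], C.shape[0]
        kkt = np.block([[2.0 * Theta.T @ Theta, C.T],
                        [C, np.zeros((m, m))]])
        rhs = np.concatenate([2.0 * Theta.T @ dxdt, d])
        return np.linalg.lstsq(kkt, rhs, rcond=None)[0][:p]

    def constrained_sindy(Theta, dxdt, C, d, threshold=0.1, iters=10):
        xi = constrained_lstsq(Theta, dxdt, C, d)
        for _ in range(iters):
            small = np.abs(xi) < threshold       # prune weak library terms...
            xi[small] = 0.0
            keep = ~small                        # ...and refit the survivors,
            xi[keep] = constrained_lstsq(Theta[:, keep], dxdt,
                                         C[:, keep], d)  # constraints re-imposed
        return xi
    ```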

  7. A long-term/short-term model for daily electricity prices with dynamic volatility

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, Stephan

    2010-09-15

    In this paper we introduce a new stochastic long-term/short-term model for short-term electricity prices and apply it to four major European indices, namely the German, Dutch, UK and Nordic ones. We give evidence that all time series contain certain periodic (mostly annual) patterns and show how to use the wavelet transform, a tool of multiresolution analysis, for filtering purposes. The wavelet transform is also applied to separate the long-term trend from the short-term oscillation in the seasonally adjusted log-prices. In all time series we find evidence of dynamic volatility, which we incorporate by using a bivariate GARCH model with constant correlation. Finally, we fit various models from the existing literature to the data and conclude that our approach performs best. For the error distribution, the Normal Inverse Gaussian distribution shows the best fit. (author)
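
    The trend/oscillation split described above can be sketched with PyWavelets (the wavelet family, decomposition level and toy series are assumptions): reconstruct from the approximation coefficients alone for the long-term component and treat the residual as the short-term oscillation, to which the GARCH model would then be fitted.

    ```python
    # Wavelet separation of a seasonally adjusted log-price series into a
    # long-term trend and a short-term oscillation (illustrative data).
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    log_prices = (4.0 + 0.3 * np.sin(np.linspace(0.0, 8.0 * np.pi, 1024))
                  + rng.normal(0.0, 0.1, 1024))

    coeffs = pywt.wavedec(log_prices, "db4", level=6)
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    long_term = pywt.waverec(approx_only, "db4")[: len(log_prices)]
    short_term = log_prices - long_term          # input to the GARCH stage
    ```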

  8. Examining the Nelson-Siegel Class of Term Structure Models

    NARCIS (Netherlands)

    M.D. de Pooter (Michiel)

    2007-01-01

    In this paper I examine various extensions of the Nelson and Siegel (1987) model with the purpose of fitting and forecasting the term structure of interest rates. As expected, I find that using more flexible models leads to a better in-sample fit of the term structure. However, I show that…
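
    For reference, the baseline Nelson and Siegel (1987) curve that these extensions start from, with level, slope and curvature factors; the parameter values below are purely illustrative.

    ```python
    # Nelson-Siegel yield curve: level (beta0), slope (beta1), curvature
    # (beta2); lam sets the exponential decay of the loadings.
    import numpy as np

    def nelson_siegel(tau, beta0, beta1, beta2, lam):
        """Yield for maturity tau (in years)."""
        x = tau / lam
        loading = (1 - np.exp(-x)) / x
        return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

    maturities = np.array([0.25, 1, 2, 5, 10, 30])
    print(nelson_siegel(maturities, beta0=0.05, beta1=-0.02, beta2=0.01, lam=2.0))
    ```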

  9. Term Structure Models with Parallel and Proportional Shifts

    DEFF Research Database (Denmark)

    Armerin, Frederik; Björk, Tomas; Astrup Jensen, Bjarne

    …this general framework we show that there does indeed exist a large variety of nontrivial parallel shift term structure models, and we also describe these in detail. We also show that there exists no nontrivial flat term structure model. The same analysis is repeated for the similar case, where the yield curve…

  10. Caribbean sclerosponge radiocarbon measurements re-interpreted in terms of U/Th age models

    Energy Technology Data Exchange (ETDEWEB)

    Rosenheim, Brad E. [Woods Hole Oceanographic Institution, Department of Geology and Geophysics, MS 8, Woods Hole, MA 02546 (United States)]. E-mail: brosenheim@whoi.edu; Swart, Peter K. [University of Miami, Rosenstiel School of Marine and Atmospheric Science, Division of Marine Geology and Geophysics, Miami, FL (United States)

    2007-06-15

    Previously unpublished AMS radiocarbon measurements of a sclerosponge from Tongue of the Ocean (TOTO), Bahamas, as well as preliminary data from an investigation of the radiocarbon records of sclerosponges living at different depths in the adjacent Bahamas basin, Exuma Sound, are interpreted in terms of U-series age models. The data are compared to an existing Caribbean sclerosponge radiocarbon bomb curve measured using standard gas proportional beta counting and used to interpret a ²¹⁰Pb age model. The δ¹⁴C records from the sclerosponges illustrate a potential for use of radiocarbon both as a tracer of subsurface water masses and as an additional age constraint on recently sampled sclerosponges. By using an independent age model, this study lays the framework for utilizing sclerosponges from different locations in the tropics and subtropics and different depths within their wide depth range (0-250 m) to constrain changes in the production of subtropical underwater in the Atlantic Ocean. This framework is significant because the proxy approach is necessary to supplement the short and coarse time series being used to constrain variability in the formation of Caribbean subtropical underwater, the return flow of a shallow circulation cell responsible for nearly 10% of the heat transported poleward in the N. Atlantic.

  11. Source Term Model for an Array of Vortex Generator Vanes

    Science.gov (United States)

    Buning, P. G. (Technical Monitor); Waithe, Kenrick A.

    2003-01-01

    A source term model was developed for numerical simulations of an array of vortex generators. The source term models the side force created by the vortex generator being modeled. The model is obtained by introducing a side force into the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane, which is only a fraction of the boundary layer thickness, over a flat plate. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator, without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the streamwise vorticity and velocity contours very well when compared with both numerical simulations and experimental data.

  12. Improving model prediction reliability through enhanced representation of wetland soil processes and constrained model auto calibration - A paired watershed study

    Science.gov (United States)

    Sharifi, Amirreza; Lang, Megan W.; McCarty, Gregory W.; Sadeghi, Ali M.; Lee, Sangchul; Yen, Haw; Rabenhorst, Martin C.; Jeong, Jaehak; Yeo, In-Young

    2016-10-01

    Process-based, distributed watershed models possess a large number of parameters that are not directly measured in the field and need to be calibrated, in most cases by matching modeled in-stream fluxes with monitored data. Recently, concern has been raised regarding the reliability of this common calibration practice, because models that are deemed to be adequately calibrated based on commonly used metrics (e.g., Nash-Sutcliffe efficiency) may not realistically represent intra-watershed responses or fluxes. Such shortcomings stem from the use of evaluation criteria that concern only the global in-stream responses of the model without investigating intra-watershed responses. In this study, we introduce a modification to the Soil and Water Assessment Tool (SWAT) model and a new calibration technique that collectively reduce the chance of misrepresenting intra-watershed responses. The SWAT model was modified to better represent NO3 cycling in soils with various degrees of water holding capacity. The new calibration tool has the capacity to calibrate paired watersheds simultaneously within a single framework. It was found that when both proposed methodologies were applied jointly to two paired watersheds on the Delmarva Peninsula, the performance of the models as judged by conventional metrics suffered; however, the intra-watershed responses (e.g., mass of NO3 lost to denitrification) in the two models automatically converged to realistic sums. The approach also demonstrates the capacity to spatially distinguish areas of high denitrification potential, an ability that has implications for improved management of prior converted wetlands under crop production and for identifying prominent areas for wetland restoration.

  13. Constraining entropic cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Koivisto, Tomi S. [Institute for Theoretical Physics and the Spinoza Institute, Utrecht University, Leuvenlaan 4, Postbus 80.195, 3508 TD Utrecht (Netherlands); Mota, David F. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zumalacárregui, Miguel, E-mail: t.s.koivisto@uu.nl, E-mail: d.f.mota@astro.uio.no, E-mail: miguelzuma@icc.ub.edu [Institute of Cosmos Sciences (ICC-IEEC), University of Barcelona, Marti i Franques 1, E-08028 Barcelona (Spain)

    2011-02-01

    It has recently been proposed that the interpretation of gravity as an emergent, entropic phenomenon might have nontrivial implications for cosmology. Here several such approaches are investigated and the underlying assumptions that must be made in order to constrain them by the BBN, SNe Ia, BAO and CMB data are clarified. Present models of inflation or dark energy are ruled out by the data. Constraints are derived on phenomenological parameterizations of modified Friedmann equations, and some features of entropic scenarios regarding the growth of perturbations, the no-go theorem for entropic inflation and the possible violation of the Bekenstein bound for the entropy of the Universe are discussed and clarified.

  14. Constraining U.S. ammonia emissions using TES remote sensing observations and the GEOS-Chem adjoint model

    Science.gov (United States)

    Ammonia (NH3) has significant impacts on biodiversity, eutrophication, and acidification. Widespread uncertainty in the magnitude and seasonality of NH3 emissions hinders efforts to address these issues. In this work, we constrain U.S. NH3 sources using obse…

  15. The Derivation of Fault Volumetric Properties from 3D Trace Maps Using Outcrop Constrained Discrete Fracture Network Models

    Science.gov (United States)

    Hodgetts, David; Seers, Thomas

    2015-04-01

    …-deterministic, outcrop constrained discrete fracture network modeling code to derive volumetric fault intensity measures (fault area per unit volume / fault volume per unit volume). Producing per-vertex measures of volumetric intensity, our method captures the spatial variability in 3D fault density across a surveyed outcrop, enabling first order controls to be probed. We demonstrate our approach on pervasively faulted exposures of a Permian-aged reservoir analogue from the Vale of Eden Basin, UK.

  16. Geothermal Conceptual Model in Earthquake Swarm Area: Constrains from Physical Properties of Supercritical Fluids and Dissipative Theory

    Science.gov (United States)

    Wang, S. C.; Lee, C. S.

    2016-12-01

    In the past five years, geothermal energy has become one of the most rapidly developing renewable energy sources in the world, yet it produces only 0.5% of global electricity. Why can this great potential source of green energy not replace fossil and nuclear energy? The complicated exploration procedures and scarce expertise required in the geothermal field are similar to those of the oil and gas industry. The Yilan Plain (NE Taiwan) is one of the key areas for geothermal development and research in the second phase of the National Energy Program (NEP-II). Geological and geophysical studies of the area indicate that the Yilan Plain is an extension of the Okinawa Trough back-arc rifting, which provides the geothermal resource. Based on new constraints from the properties of supercritical fluids and dissipative structure theory, the geophysical evidence gives confident clues on how the geothermal system evolved at depth. The geothermal conceptual model in NEP-II indicates that a volcanic intrusion under the complicated fault system possibly lies beneath the Yilan Plain. However, the bottom temperature of the first deep drilling and geochemical evidence in NEP-II imply no volcanic intrusion. In contrast, our results show that seismic activity in the geothermal field exhibits self-organization and is consistent with the brittle-ductile / brittle-plastic transition, which indicates that supercritical fluids triggered the earthquake swarms. The geothermal gradient and geochemical anomalies in the Yilan Plain indicate an open system far from equilibrium. Mantle and crust exchange energy and materials through supercritical fluids to generate a dissipative structure in geothermal fields and promote water-rock interactions and fracturing. Our initial studies suggest a dissipative structure of the geothermal system that can be identified from geochemical and geophysical data. The key factor is the tectonic setting that triggered supercritical fluids upwelling from depth (possibly from the mantle or the upper crust). Our…

  17. Modeling electron density distributions from X-ray diffraction to derive optical properties: Constrained wavefunction versus multipole refinement

    Science.gov (United States)

    Hickstein, Daniel D.; Cole, Jacqueline M.; Turner, Michael J.; Jayatilaka, Dylan

    2013-08-01

    The rational design of next-generation optical materials requires an understanding of the connection between molecular structure and the solid-state optical properties of a material. A fundamental challenge is to utilize the accurate structural information provided by X-ray diffraction to explain the properties of a crystal. For years, the multipole refinement has been the workhorse technique for transforming high-resolution X-ray diffraction datasets into the detailed electron density distribution of crystalline materials. However, the electron density alone is not sufficient for a reliable calculation of the nonlinear optical properties of a material. Recently, the X-ray constrained wavefunction refinement has emerged as a viable alternative to the multipole refinement, offering several potential advantages, including the calculation of a wide range of physical properties and seeding the refinement process with a physically reasonable starting point. In this study, we apply both the multipole refinement and the X-ray constrained wavefunction technique to four molecules with promising nonlinear optical properties and diverse structural motifs. In general, both techniques obtain comparable figures of merit and generate largely similar electron densities, demonstrating the wide applicability of the X-ray constrained wavefunction method. However, there are some systematic differences between the electron densities generated by each technique. Importantly, we find that the electron density generated using the X-ray constrained wavefunction method is dependent on the exact location of the nuclei. The X-ray constrained wavefunction refinement makes smaller changes to the wavefunction when coordinates from the Hartree-Fock-based Hirshfeld atom refinement are employed rather than coordinates from the multipole refinement, suggesting that coordinates from the Hirshfeld atom refinement allow the X-ray constrained wavefunction method to produce more accurate wavefunctions. We…

  18. AQM router design for TCP network via input constrained fuzzy control of time-delay affine Takagi-Sugeno fuzzy models

    Science.gov (United States)

    Chang, Wen-Jer; Meng, Yu-Teh; Tsai, Kuo-Hui

    2012-12-01

    In this article, Takagi-Sugeno (T-S) fuzzy control theory is proposed as a key tool to design an effective active queue management (AQM) router for transmission control protocol (TCP) networks. The probability control of packet marking in TCP networks is characterised as an input-constrained control problem. By modelling the TCP network as a time-delay affine T-S fuzzy model, an input-constrained fuzzy control methodology is developed to serve the AQM router design. The proposed fuzzy control approach, which is developed based on the parallel distributed compensation technique, can provide a smaller packet-dropping probability than previous AQM design schemes. Lastly, a numerical simulation is provided to illustrate the usefulness and effectiveness of the proposed design approach.

  19. Constraining a variable dark energy model from the redshift-luminosity distance relations of gamma-ray bursts and type Ia supernovae

    CERN Document Server

    Ichimasa, R; Hashimoto, M

    2016-01-01

    There are many kinds of models which describe the dynamics of dark energy (DE). Among them, we adopt an equation of state (EoS) which varies as a function of time. We adopt the Markov Chain Monte Carlo method to constrain the five parameters of our models. As a consequence, we can show the characteristic behavior of DE during the evolution of the universe. We constrain the EoS of DE using the available data on gamma-ray bursts and type Ia supernovae (SNe Ia) concerning the redshift-luminosity distance relations. As a result, we find that DE is quintessence-like at early times and phantom-like in the present epoch or near future, where the change occurs rather rapidly at $z\sim0.3$.
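
    The five-parameter EoS itself is not given in the abstract, so the sketch below substitutes a two-parameter CPL form w(z) = w0 + wa z/(1+z) and a fixed flat background cosmology purely for illustration; the Metropolis-Hastings loop is the generic ingredient of the MCMC fit described above, run here on synthetic distance moduli rather than the GRB/SN Ia compilation.

    ```python
    # Generic Metropolis-Hastings fit of an EoS parameterization to
    # redshift-distance-modulus data. H0 and Omega_m are assumed values.
    import numpy as np
    from scipy.integrate import quad

    C_KM_S, H0, OM = 299792.458, 70.0, 0.3

    def E(z, w0, wa):
        # CPL dark-energy density evolution has this closed form
        fde = (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
        return np.sqrt(OM * (1 + z) ** 3 + (1 - OM) * fde)

    def distance_modulus(zs, w0, wa):
        dc = np.array([quad(lambda x: 1.0 / E(x, w0, wa), 0.0, z)[0] for z in zs])
        dl = (1 + zs) * (C_KM_S / H0) * dc        # luminosity distance [Mpc], flat
        return 5 * np.log10(dl) + 25

    def metropolis(logl, theta0, steps=2000, scale=0.05, seed=0):
        rng = np.random.default_rng(seed)
        chain, lp = [np.asarray(theta0, float)], logl(theta0)
        for _ in range(steps):
            prop = chain[-1] + rng.normal(0.0, scale, size=len(theta0))
            lp_prop = logl(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept w.p. min(1, ratio)
                lp = lp_prop
                chain.append(prop)
            else:
                chain.append(chain[-1])
        return np.array(chain)

    z = np.linspace(0.05, 1.5, 30)                    # synthetic "observations"
    mu_obs = distance_modulus(z, -1.0, 0.0) + np.random.default_rng(1).normal(0, 0.1, z.size)
    logl = lambda t: -0.5 * np.sum(((mu_obs - distance_modulus(z, *t)) / 0.1) ** 2)
    chain = metropolis(logl, theta0=[-0.9, 0.1])
    print(chain[len(chain) // 2:].mean(axis=0))       # posterior mean of (w0, wa)
    ```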

  20. Space Constrained Dynamic Covering

    CERN Document Server

    Antonellis, Ioannis; Dughmi, Shaddin

    2009-01-01

    In this paper, we identify a fundamental algorithmic problem that we term space-constrained dynamic covering (SCDC), arising in many modern-day web applications, including ad-serving and online recommendation systems in eBay and Netflix. Roughly speaking, SCDC applies two restrictions to the well-studied Max-Coverage problem: Given an integer k, X={1,2,...,n} and I={S_1, ..., S_m}, S_i a subset of X, find a subset J of I, such that |J| <= k and the union of S in J is as large as possible. The two restrictions applied by SCDC are: (1) Dynamic: At query-time, we are given a query Q, a subset of X, and our goal is to find J such that the intersection of Q with the union of S in J is as large as possible; (2) Space-constrained: We don't have enough space to store (and process) the entire input; specifically, we have o(mn), sometimes even as little as O((m+n)polylog(mn)) space. The goal of SCDC is to maintain a small data structure so as to answer most dynamic queries with high accuracy. We present algorithms a...
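
    The query-time step reduces to classic max coverage restricted to Q, for which the greedy rule gives the usual (1 - 1/e) approximation; a sketch follows (the paper's space-constrained sketching of the input sets is not reproduced here).

    ```python
    # Greedy max coverage restricted to the query Q: pick at most k sets
    # maximizing the size of Q intersected with their union.
    def cover_query(sets, query, k):
        remaining = set(query)
        chosen = []
        for _ in range(k):
            best = max(range(len(sets)), key=lambda j: len(remaining & sets[j]))
            if not remaining & sets[best]:
                break                            # no further gain possible
            chosen.append(best)
            remaining -= sets[best]
        return chosen

    S = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
    print(cover_query(S, query={1, 2, 4, 6}, k=2))   # -> [0, 2]
    ```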

  1. A phenomenological memristor model for short-term/long-term memory

    Science.gov (United States)

    Chen, Ling; Li, Chuandong; Huang, Tingwen; Ahmad, Hafiz Gulfam; Chen, Yiran

    2014-08-01

    The memristor is considered to be a natural electrical synapse because of its distinct memory property and nanoscale size. In recent years, more and more similar behaviors have been observed between memristors and biological synapses, e.g., short-term memory (STM) and long-term memory (LTM). Traditional mathematical models are unable to capture these newly emerging behaviors. In this article, an updated phenomenological model based on the model of the Hewlett-Packard (HP) Labs is proposed to capture such behaviors. The new dynamical memristor model, with an improved ion diffusion term, can emulate synapse behavior with a forgetting effect and exhibit the transformation between STM and LTM. Further, this model can be used to build new types of neural networks with forgetting ability, like biological systems, as verified by our experiment with a Hopfield neural network.
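
    A toy version of such a model (the rate constants and the decay form are assumptions, not the authors' equations): an HP-style state variable driven by the input current, plus an ion-diffusion term that relaxes the state back toward its resting value, so that sparse stimulation fades (STM) while repeated stimulation pushes the state far enough to persist (LTM).

    ```python
    # HP-style memristor with an added ion-diffusion ("forgetting") term.
    import numpy as np

    def simulate(i_of_t, t, k=1e4, tau=0.5, x0=0.1, r_on=100.0, r_off=16e3):
        x = np.empty_like(t)
        x[0] = x0
        for n in range(len(t) - 1):
            dt = t[n + 1] - t[n]
            drift = k * i_of_t(t[n])             # ion drift driven by current
            diffusion = -(x[n] - x0) / tau       # relaxation toward rest: STM decay
            x[n + 1] = np.clip(x[n] + dt * (drift + diffusion), 0.0, 1.0)
        r = r_on * x + r_off * (1 - x)           # memristance between R_on and R_off
        return x, r

    t = np.linspace(0.0, 2.0, 2000)
    pulse = lambda s: 1e-4 * ((s % 0.2) < 0.02)  # repeated stimulation pulses
    x, r = simulate(pulse, t)                    # repeated pulses -> longer retention
    ```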

  2. Using archaeomagnetic field models to constrain the physics of the core: robustness and preferred locations of reversed flux patches

    Science.gov (United States)

    Terra-Nova, Filipe; Amit, Hagay; Hartmann, Gelvam A.; Trindade, Ricardo I. F.

    2016-09-01

    Archaeomagnetic field models cover longer timescales than historical models and may therefore resolve the motion of geomagnetic features on the core-mantle boundary (CMB) in a more meaningful statistical sense. Here we perform a detailed appraisal of archaeomagnetic field models to infer some aspects of the physics of the outer core. We characterize and compare the identification and tracking of reversed flux patches (RFPs) in order to assess the robustness of the RFPs. We find similar behaviour within a family of models but differences among different families, suggesting that modelling strategy is more influential than data set. Similarities involve recurrent positions of RFPs, but no preferred direction of motion is found. The tracking of normal flux patches shows similar qualitative behaviour, confirming that RFP identification and tracking is not strongly biased by their relative weakness. We also compare the tracking of RFPs with that of the historical field model gufm1 and with seismic anomalies of the lowermost mantle to explore the possibility that RFPs have preferred locations prescribed by lower mantle lateral heterogeneity. The archaeomagnetic field model that most resembles the historical field is interpreted in terms of core dynamics and core-mantle thermal interactions. This model exhibits correlation between RFPs and low seismic shear velocity in co-latitude and a shift in longitude. These results shed light on core processes; in particular we infer toroidal field lines with azimuthal orientation below the CMB and large fluid upwelling structures with a width of about 80° (Africa) and 110° (Pacific) at the top of the core. Finally, similar preferred locations of RFPs in the past 9 and 3 kyr of the same archaeomagnetic field model suggest that a 3 kyr period is sufficiently long to reliably detect mantle control on core dynamics. This allows us to estimate an upper bound of 220-310 km for the magnetic boundary layer thickness below the CMB.

  3. Modeling and Event-Driven Simulation of Coordinated Multi-Point in LTE-Advanced with Constrained Backhaul

    DEFF Research Database (Denmark)

    Artuso, Matteo; Christiansen, Henrik Lehrmann

    2014-01-01

    Inter-cell interference (ICI) is considered the most critical bottleneck to ubiquitous 4th generation cellular access in mobile Long Term Evolution (LTE). To address the problem, several solutions are under evaluation as part of LTE-Advanced (LTE-A), the most promising one being coordinated multi-point joint transmission (CoMP JT). Field tests are generally considered impractical and costly for CoMP JT, hence the need for a comprehensive, high-fidelity computer model to understand the impact of different design attributes and the applicable use cases. This paper presents…

  4. A note on constrained M-estimation and its recursive analog in multivariate linear regression models Dedicated to Professor Zhidong Bai on the occasion of his 65th birthday

    Institute of Scientific and Technical Information of China (English)

    RAO Calyampudi R; WU YueHua

    2009-01-01

    In this paper, the constrained M-estimation of the regression coefficients and scatter parameters in a general multivariate linear regression model is considered. Since the constrained M-estimation is not easy to compute, an updating recursion procedure is proposed to simplify the computation of the estimators when a new observation is obtained. We show that, under mild conditions, the recursion estimates are strongly consistent. In addition, the asymptotic normality of the recursive constrained M-estimators of the regression coefficients is established. A Monte Carlo simulation study of the recursion estimates is also provided. Finally, the robustness and asymptotic behavior of constrained M-estimators are briefly discussed.

  5. Modelling the flooding capacity of a Polish Carpathian river: A comparison of constrained and free channel conditions

    Science.gov (United States)

    Czech, Wiktoria; Radecki-Pawlik, Artur; Wyżga, Bartłomiej; Hajdukiewicz, Hanna

    2016-11-01

    The gravel-bed Biała River, Polish Carpathians, was heavily affected by channelization and channel incision in the twentieth century. Not only were these impacts detrimental to the ecological state of the river, but they also adversely modified the conditions of floodwater retention and flood wave passage. Therefore, a few years ago an erodible corridor was delimited in two sections of the Biała to enable restoration of the river. In these sections, short, channelized reaches located in the vicinity of bridges alternate with longer, unmanaged channel reaches, which either avoided channelization or in which the channel has widened after the channelization scheme ceased to be maintained. Effects of these alternating channel morphologies on the conditions for flood flows were investigated in a study of 10 pairs of neighbouring river cross sections with constrained and freely developed morphology. Discharges of particular recurrence intervals were determined for each cross section using an empirical formula. The morphology of the cross sections, together with data about channel slope and the roughness of particular parts of the cross sections, were used as input data to the hydraulic modelling performed with the one-dimensional steady-flow HEC-RAS software. The results indicated that freely developed cross sections, usually with multithread morphology, are typified by significantly lower water depth but larger width and cross-sectional flow area at particular discharges than single-thread, channelized cross sections. They also exhibit significantly lower average flow velocity, unit stream power, and bed shear stress. The pattern of differences in the hydraulic parameters of flood flows between the two types of river cross sections varies with discharges of different frequencies, and the contrasts in hydraulic parameters between unmanaged and channelized cross sections are most pronounced at low-frequency, high-magnitude floods. However, because of the deep…
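
    The hydraulic quantities compared above are standard 1-D relations; a sketch with textbook formulas and made-up channel numbers (not Biała River survey data) shows why a wide, shallow multithread section yields lower velocity, bed shear stress and unit stream power than a narrow channelized one at the same discharge.

    ```python
    # Textbook 1-D hydraulics for a rectangular cross-section (SI units).
    RHO, G = 1000.0, 9.81          # water density [kg/m^3], gravity [m/s^2]

    def hydraulics(discharge, width, depth, slope):
        area = width * depth                          # flow area [m^2]
        velocity = discharge / area                   # mean velocity [m/s]
        radius = area / (width + 2 * depth)           # hydraulic radius [m]
        shear = RHO * G * radius * slope              # bed shear stress [N/m^2]
        power = RHO * G * discharge * slope / width   # unit stream power [W/m^2]
        return velocity, shear, power

    # wide, shallow multithread section vs. narrow, deep channelized section
    print(hydraulics(100.0, width=60.0, depth=1.0, slope=0.004))
    print(hydraulics(100.0, width=15.0, depth=2.5, slope=0.004))
    ```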

  6. Western Lake Erie Basin: Soft-data-constrained, NHDPlus resolution watershed modeling and exploration of applicable conservation scenarios.

    Science.gov (United States)

    Yen, Haw; White, Michael J; Arnold, Jeffrey G; Keitzer, S Conor; Johnson, Mari-Vaughn V; Atwood, Jay D; Daggupati, Prasad; Herbert, Matthew E; Sowa, Scott P; Ludsin, Stuart A; Robertson, Dale M; Srinivasan, Raghavan; Rewa, Charles A

    2016-11-01

    Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relationships between land use and water, nutrient, and sediment dynamics. This manuscript evaluated the capacity of the current Soil and Water Assessment Tool (SWAT) to predict hydrological and water quality processes within WLEB at the finest resolution watershed boundary unit (NHDPlus) along with the current conditions and conservation scenarios. The process-based SWAT model was capable of the fine-scale computation and complex routing used in this project, as indicated by measured data at five gaging stations. The level of detail required for fine-scale spatial simulation made the use of both hard and soft data necessary in model calibration, alongside other model adaptations. Limitations to the model's predictive capacity were due to a paucity of data in the region at the NHDPlus scale rather than due to SWAT functionality. Results of treatment scenarios demonstrate variable effects of structural practices and nutrient management on sediment and nutrient loss dynamics. Targeting treatment to acres with critical outstanding conservation needs provides the largest return on investment in terms of nutrient loss reduction per dollar spent, relative to treating acres with lower inherent nutrient loss vulnerabilities. Importantly, this research raises considerations about use of models to guide land management decisions at very fine spatial scales. Decision makers using these results should be aware of data limitations that hinder fine-scale model interpretation. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. A province-scale block model of Walker Lane and western Basin and Range crustal deformation constrained by GPS observations (Invited)

    Science.gov (United States)

    Hammond, W. C.; Bormann, J.; Blewitt, G.; Kreemer, C.

    2013-12-01

    …improves our ability to compare results to geologic fault slip rates. Modeling the kinematics on this scale has the advantages of 1) reducing the impact of poorly constrained boundaries on small, geographically limited models, 2) consistent modeling of rotations across major structural step-overs near the Mina deflection and Carson domain, 3) tracking the kinematics of the south-to-north varying budget of Walker Lane deformation by solving for extension in the Basin and Range to the east, and 4) using a contiguous SNGV as a uniform western kinematic boundary condition. We compare contemporary deformation to geologic slip rates and longer-term rotation rates estimated from rock paleomagnetism. GPS-derived block rotation rates are somewhat dependent on model regularization, but are generally within 1° per million years, and tend to be slower than published paleomagnetic rotation rates. GPS data, together with neotectonic and rock paleomagnetism studies, provide evidence that the relative importance of Walker Lane block rotations and fault slip continues to evolve, giving way to a more through-going system with slower rotation rates and higher slip rates on individual faults.

  9. A model of competition between employed, short-term and long-term unemployed job searchers

    NARCIS (Netherlands)

    Broersma, Lourens

    1995-01-01

    This paper presents a model in which not only employed job search is endogenized, but also the phenomenon that the long-term unemployed may become discouraged and stop searching for a job. When this model is applied to Dutch flow data, we find that this discouragement particularly took place in the early…

  10. Boolean Queries and Term Dependencies in Probabilistic Retrieval Models.

    Science.gov (United States)

    Croft, W. Bruce

    1986-01-01

    Proposes approach to integrating Boolean and statistical systems where Boolean queries are interpreted as a means of specifying term dependencies in relevant set of documents. Highlights include series of retrieval experiments designed to test retrieval strategy based on term dependence model and relation of results to other work. (18 references)…

  11. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in research…

  12. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t)= Z(t) + Y(t), where Z(t) belongs to a large class...

  13. Ultra long-term simulation by the integrated model. 1. Framework and energy system module; Togo model ni yoru tanchoki simulation. 1. Flame work to energy system module

    Energy Technology Data Exchange (ETDEWEB)

    Kurosawa, A.; Yagita, H.; Yanagisawa, Y. [Research Inst. of Innovative Technology for the Earth, Kyoto (Japan)

    1997-01-30

    This paper introduces a study of the ultra-long-term energy model 'GRAPE', which takes the global environment into consideration, and the results of a trial calculation. The GRAPE model consists of modules for the energy system, climate change, land use change, food demand/supply, the macro economy, and environmental impact. The model divides the world into ten regions, takes 1990 as the base year, and enables ultra-long-term simulation. Here, carbon emissions are calculated as a trial. In the case of a constrained quantity of carbon emissions, the energy supply in the latter half of the 21st century is projected to comprise photovoltaic energy, methanol from coal gasification, and biomass energy. In addition, the share of nuclear energy is projected to increase remarkably. In the composition of power generation, IGCC power generation with carbon recovery, wind power generation, photovoltaic power generation, and nuclear power generation are projected to extend their shares. In the case of a constrained concentration of carbon emissions, the structural change in power generation options is delayed compared with the case of a constrained quantity of carbon emissions. 6 refs., 4 figs.

  15. Long-term observations of black carbon mass concentrations at Fukue Island, western Japan, during 2009-2015: constraining wet removal rates and emission strengths from East Asia

    Science.gov (United States)

    Kanaya, Yugo; Pan, Xiaole; Miyakawa, Takuma; Komazaki, Yuichi; Taketani, Fumikazu; Uno, Itsushi; Kondo, Yutaka

    2016-08-01

    …fitted reasonably well by a stretched exponential decay curve against APT; a single set of fitting parameters was sufficient to represent the results for air masses originating from different areas. An accumulated precipitation of 25.5 ± 6.1 mm reduced the TE to 1/e. BC-containing particles traveling to Fukue must have already been converted from hydrophobic to hydrophilic particles, because the behavior of TE against APT was similar to that of PM2.5, the major components of which are hydrophilic. Wet loss of BC greatly influenced interannual variations in the ΔBC / ΔCO ratios and BC mass concentrations. This long-term data set will provide a benchmark for testing chemical transport/climate model simulations covering East Asia.
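
    The fit described above is straightforward to reproduce in form (the exact parameterization used by the authors is not given here, so the expression below is a hedged guess fitted to synthetic data): transport efficiency TE as a stretched exponential in accumulated precipitation along trajectories (APT), with the scale parameter playing the role of the ~25 mm that reduces TE to 1/e.

    ```python
    # Fit TE(APT) = exp(-(APT/a)^h) with nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exp(apt, a, h):
        return np.exp(-((apt / a) ** h))      # TE = 1 at APT = 0, 1/e at APT = a

    apt = np.linspace(0.0, 80.0, 40)
    rng = np.random.default_rng(1)
    te_obs = stretched_exp(apt, 25.5, 0.9) + rng.normal(0.0, 0.03, apt.size)

    (a_fit, h_fit), _ = curve_fit(stretched_exp, apt, te_obs, p0=(20.0, 1.0))
    print(a_fit, h_fit)   # a_fit ~ precipitation that reduces TE to 1/e (~25 mm)
    ```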

  16. Constraining the process-based land surface model ORCHIDEE by nutrient enrichment and forest management experiments in Sweden

    Science.gov (United States)

    Sofie Lansø, Anne; Resovsky, Alex; Guenet, Bertrand; Peylin, Philippe; Vuichard, Nicolas; Messina, Palmira; Smith, Benjamin; Ryder, James; Naudts, Kim; Chen, Yiying; Otto, Juliane; McGrath, Matthew; Valade, Aude; Luyssaert, Sebastiaan

    2017-04-01

    Understanding the coupling between carbon (C) and nitrogen (N) cycling in terrestrial ecosystems is key to predicting global change. While numerous experimental studies have demonstrated the positive response of stand-level photosynthesis and net primary production (NPP) to atmospheric CO2 enrichment, N availability has been shown to exert an important control on the timing and magnitude of such responses. Forest management is also a key driver of C storage in such ecosystems, but interactions between forest management and the N cycle as a C storage driver are not well known. In this study, we use data from N-fertilization experiments at two long-term forest manipulation sites in Sweden to inform and improve the representation of C and N interaction in the ORCHIDEE land surface model. Our version of the model represents the union of two ORCHIDEE branches: 1) ORCHIDEE-CN, which resolves processes related to terrestrial C and N cycling, and 2) ORCHIDEE-CAN, which integrates a multi-layer canopy structure and includes representation of forest management practices. Using this new model branch, referred to as ORCHIDEE-CN-CAN, we simulate the growth patterns of managed forests both with and without N limitations. Combining our simulated results with measurements of various ecosystem parameters (such as soil N) will aid in ecosystem model development, reducing structural uncertainty and optimizing parameter settings in global change simulations.

  17. Magnetometer Data Tests Models for the Origin of the Martian Crustal Dichotomy; Dichotomy Models Constrain Timing of Martian Magnetic Field

    Science.gov (United States)

    Gilmore, M. S.

    1999-01-01

    Measurements recently supplied by the Magnetometer/Electron Reflectometer (MAG/ER) on MGS can be applied to test theories of the origin of the martian crustal dichotomy. Strong (±1500 nT) magnetic anomalies are observed in the Martian crust. The observations can be summarized as follows: 1) strong crustal magnetic sources are generally confined to the southern highlands, although weaker (approx. 40 nT) anomalies were observed during close periapsis; 2) strong magnetic anomalies are absent in the vicinity of Hellas and Argyre; 3) the anomalies in the region 0 deg to 90 deg S, 120 deg to 240 deg west have a linear geometry, strike generally east-west for 1000s of km, and show several reversals. This latter point has led to the suggestion that some form of lateral plate tectonics may have been operative in the southern highlands of Mars. These observations have led previous workers to hypothesize that the magnetic anomalies were present prior to and were destroyed by the formation of Hellas and Argyre. As such large impacts are confined to the era of heavy bombardment, this places the time of formation of large magnetic anomalies prior to approx. 3.9 Ga. One obvious extension of this is that the northern lowlands lack significant anomalies because they were erased by impacts and/or the northern lowlands represent crust completely reheated above the Curie temperature. Preliminary observations of the distributions of the large crustal magnetic anomalies show that many of them extend continuously over the highland-lowland boundary. This occurs particularly north of the boundary between 30 deg W and 270 deg W, corresponding to northern Arabia, but also occurs in southern Elysium (approx. 10 deg S, 200 deg) and the SW portion of Tharsis (approx. 15 deg S, 140 deg). This suggests that, in these areas, Noachian crust containing the greater than 3.9 Ga magnetic signature lies beneath the northern lowlands. This geometry can be used to test models for the formation of…

  18. Evaluation of HOx sources and cycling using measurement-constrained model calculations in a 2-methyl-3-butene-2-ol (MBO and monoterpene (MT dominated ecosystem

    Directory of Open Access Journals (Sweden)

    S. B. Henry

    2013-02-01

    We present a detailed analysis of OH observations from the BEACHON (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen)-ROCS (Rocky Mountain Organic Carbon Study) 2010 field campaign at the Manitou Forest Observatory (MFO), which is a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated forest environment. A comprehensive suite of measurements was used to constrain primary production of OH via ozone photolysis, OH recycling from HO2, and OH chemical loss rates, in order to estimate the steady-state concentration of OH. In addition, the University of Washington Chemical Model (UWCM) was used to evaluate the performance of a near-explicit chemical mechanism. The diurnal cycle in OH from the steady-state calculations is in good agreement with measurement. A comparison between the photolytic production rates and the recycling rates from the HO2 + NO reaction shows that the recycling rates are ~20 times faster than the photolytic OH production rates from ozone. Thus, we find that direct measurement of the recycling rates and the OH loss rates can provide accurate predictions of OH concentrations. More importantly, we also conclude that a conventional OH recycling pathway (HO2 + NO) can explain the observed OH levels in this non-isoprene environment. This is in contrast to observations in isoprene-dominated regions, where investigators have observed significant underestimation of OH and have speculated that unknown sources of OH are responsible. The highly constrained UWCM calculation under-predicts observed HO2 by as much as a factor of 8. As HO2 maintains oxidation capacity by recycling to OH, UWCM underestimates observed OH by as much as a factor of 4. When the UWCM calculation is constrained by measured HO2, model-calculated OH is in better agreement with the observed OH levels. Conversely, constraining the model to observed OH only slightly reduces the model-measurement HO2 discrepancy, implying unknown HO2…
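
    The steady-state budget described above amounts to one line of arithmetic; the sketch below uses illustrative placeholder concentrations (not BEACHON-ROCS values) chosen so that HO2 + NO recycling is roughly 20 times the photolytic source, as reported.

    ```python
    # Steady-state OH: [OH]_ss = (primary production + HO2+NO recycling) / loss.
    K_HO2_NO = 8.1e-12     # HO2 + NO -> OH + NO2, approx. room-temperature
                           # rate coefficient [cm^3 molec^-1 s^-1]

    def oh_steady_state(p_o3, ho2, no, k_loss):
        """p_o3: primary OH production from O3 photolysis [molec cm^-3 s^-1];
        ho2, no: concentrations [molec cm^-3]; k_loss: OH loss frequency [s^-1]."""
        recycling = K_HO2_NO * ho2 * no
        return (p_o3 + recycling) / k_loss

    # Placeholder values give recycling ~20x the photolytic source:
    print(oh_steady_state(p_o3=5e5, ho2=3e8, no=4e9, k_loss=10.0))
    ```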

  19. Development of Solar Wind Model Driven by Empirical Heat Flux and Pressure Terms

    Science.gov (United States)

    Sittler, Edward C., Jr.; Ofman, L.; Selwa, M.; Kramar, M.

    2008-01-01

    We are developing a time-stationary, self-consistent 2D MHD model of the solar corona and solar wind, as suggested by Sittler et al. (2003). Sittler & Guhathakurta (1999) developed a semiempirical steady-state model (SG model) of the solar wind in a multipole 3-streamer structure, with the model constrained by Skylab observations. Guhathakurta et al. (2006) presented a more recent version of their initial work. Sittler et al. (2003) modified the SG model by investigating time-dependent MHD, an ad hoc heating term with heat conduction, and empirical heating solutions. The next step in the development of 2D MHD models was taken by Sittler & Ofman (2006). They derived the effective temperature and effective heat flux from the data-driven SG model and fit smooth analytical functions to be used in MHD calculations. Improvements on the Sittler & Ofman (2006) results now show a convergence of the 3-streamer topology into a single equatorial streamer at altitudes > 2 R_S. This is a new result and shows we are now able to reproduce observations of an equatorially confined streamer belt. To allow our solutions to be applied more generally, we extend the model by using magnetogram data and the PFSS model as a boundary condition. Initial results were presented by Selwa et al. (2008). We choose solar-minimum magnetogram data since during solar maximum the boundary conditions are more complex and the coronal magnetic field may not be described correctly by the PFSS model. As a first step we studied the simplest 2D MHD case with variable heat conduction, and with empirical heat input combined with empirical momentum addition for the fast solar wind. We use realistic magnetic field data based on NSO/GONG data, and plan to extend the study to 3D. This study represents the first attempt at a fully self-consistent, realistic model based on real data that includes semi-empirical heat flux and semi-empirical effective pressure terms.

  20. Tectonic history of continental crustal wedge constrained by EBSD measurements of garnet inclusion trails and thermodynamic modeling

    Science.gov (United States)

    Skrzypek, E.; Schulmann, K.; Lexa, O.; Haloda, J.

    2009-04-01

    Inclusion trails in garnets represent an important but underused tool of structural geology to examine non-coaxial or polyphase coaxial deformation histories of orogens. Garnet growth with respect to deformation during the prograde and retrograde orogenic evolution of a continental crustal wedge was constrained by EBSD measurements of internal garnet fabrics and the petrological record from mid-crustal rocks of the Śnieżnik Massif (Western Sudetes). The textural position of metamorphic minerals and thermodynamic modeling document three main stages in the tectonic evolution. A few garnet cores show prograde MnO zoning and growth coeval with the formation of the earliest metamorphic foliation, which is only rarely observed in the field. The major garnet growth occurs synchronously with the second, steep S2 fabric under still-prograde conditions, as shown by garnet zoning and the appearance of staurolite and kyanite (peak at 6.5 kbar/600 °C). Conversely, garnet retrogression associated with the development of sillimanite and later andalusite indicates a pressure decrease of ca. 3 kbar for the late, flat and pervasive S3 fabric associated with macroscopic recumbent folding of the steep S2 foliation. Electron back-scatter diffraction measurements on ilmenite platelets included in garnets help determine their crystallographic preferred orientation. Ilmenite a[100] axes define planar structures that are interpreted as included foliations. Consequently, microscopic observations and foliation intersection axes (FIA) allow us to distinguish between two different records. Only a few (prograde) garnet cores yield information on the orientation of the presumed first metamorphic fabric, whereas most of the internal garnet foliations are straight, steep and correspond to relics of the originally steep S2 fabric. Importantly, this steep attitude of internal garnet foliations persists in both F3 fold hinge and limb zones as well as in zones of complete transposition of S2 into flat S3. Therefore, these…

  1. Fitting of constrained local model based on manifold

    Institute of Scientific and Technical Information of China (English)

    刘大琨; 谭晓阳

    2016-01-01

    To embed the manifold structure of the set of face shape vectors into face alignment models, this study builds on the constrained local model, a typical parametric model for face alignment. Combining local coordinate coding theory with a sparsity constraint, the set of non-rigid deformations in the point distribution model is replaced by a set of neighbouring face shape vectors drawn from the shape manifold. This fuses the local tangent space alignment method from manifold learning with the point distribution model, yielding a manifold-embedded constrained local model. Experiments on synthetic data and on two annotated public face databases (Labeled Face Parts in the Wild and Labeled Faces in the Wild) show that, compared with constrained local model fitting based on linear reconstruction, the manifold-embedded constrained local model achieves better accuracy.

  2. A Team Building Model for Software Engineering Courses Term Projects

    Science.gov (United States)

    Sahin, Yasar Guneri

    2011-01-01

    This paper proposes a new model for team building, which enables teachers to build coherent teams rapidly and fairly for the term projects of software engineering courses. Moreover, the model can also be used to build teams for any type of project, if the team member candidates are students, or if they are inexperienced on a certain subject. The…

  4. Exploring Term Dependences in Probabilistic Information Retrieval Model.

    Science.gov (United States)

    Cho, Bong-Hyun; Lee, Changki; Lee, Gary Geunbae

    2003-01-01

    Describes a theoretic process to apply Bahadur-Lazarsfeld expansion (BLE) to general probabilistic models and the state-of-the-art 2-Poisson model. Through experiments on two standard document collections, one in Korean and one in English, it is demonstrated that incorporation of term dependences using BLE significantly contributes to performance…

  5. A Polynomial Term Structure Model with Macroeconomic Variables

    Directory of Open Access Journals (Sweden)

    José Valentim Vicente

    2007-06-01

    Full Text Available Recently, a myriad of factor models including macroeconomic variables have been proposed to analyze the yield curve. We present an alternative factor model where term structure movements are captured by Legendre polynomials mimicking the statistical factor movements identified by Litterman and Scheinkman (1991). We estimate the model with Brazilian Foreign Exchange Coupon data, adopting a Kalman filter, under two versions: the first uses only latent factors and the second includes macroeconomic variables. We study its ability to predict out-of-sample term structure movements, when compared to a random walk. We also discuss results on the impulse response function of macroeconomic variables.
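
    To make the factor structure concrete, here is a minimal sketch (not the authors' code) of how the first three Legendre polynomials can act as level, slope, and curvature loadings across maturities; the maturity grid and factor values are hypothetical, and the least-squares step stands in for the cross-sectional part of the Kalman-filter measurement update.

```python
import numpy as np
from numpy.polynomial import legendre

# Maturities mapped to [-1, 1], the interval on which Legendre polynomials
# are orthogonal (hypothetical grid, in years).
maturities = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0])
x = 2 * (maturities - maturities.min()) / (maturities.max() - maturities.min()) - 1

# First three Legendre loadings: P0 (level), P1 (slope), P2 (curvature).
loadings = np.column_stack([legendre.legval(x, [1]),
                            legendre.legval(x, [0, 1]),
                            legendre.legval(x, [0, 0, 1])])

# Synthetic curve: 10% level, -1% slope, 0.5% curvature (made-up factors).
factors = np.array([0.10, -0.01, 0.005])
yields = loadings @ factors

# Recover the factors from the observed yields by least squares -- the
# static analogue of the filter's cross-sectional measurement update.
recovered, *_ = np.linalg.lstsq(loadings, yields, rcond=None)
print(recovered)  # ~ [0.10, -0.01, 0.005]
```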

  6. Trajectory piecewise quadratic reduced-order model for subsurface flow, with application to PDE-constrained optimization

    Science.gov (United States)

    Trehan, Sumeet; Durlofsky, Louis J.

    2016-12-01

    A new reduced-order model based on trajectory piecewise quadratic (TPWQ) approximations and proper orthogonal decomposition (POD) is introduced and applied for subsurface oil-water flow simulation. The method extends existing techniques based on trajectory piecewise linear (TPWL) approximations by incorporating second-derivative terms into the reduced-order treatment. Both the linear and quadratic reduced-order methods, referred to as POD-TPWL and POD-TPWQ, entail the representation of new solutions as expansions around previously simulated high-fidelity (full-order) training solutions, along with POD-based projection into a low-dimensional space. POD-TPWQ entails significantly more offline preprocessing than POD-TPWL as it requires generating and projecting several third-order (Hessian-type) terms. The POD-TPWQ method is implemented for two-dimensional systems. Extensive numerical results demonstrate that it provides consistently better accuracy than POD-TPWL, with speedups of about two orders of magnitude relative to high-fidelity simulations for the problems considered. We demonstrate that POD-TPWQ can be used as an error estimator for POD-TPWL, which motivates the development of a trust-region-based optimization framework. This procedure uses POD-TPWL for fast function evaluations and a POD-TPWQ error estimator to determine when retraining, which entails a high-fidelity simulation, is required. Optimization results for an oil-water problem demonstrate the substantial speedups that can be achieved relative to optimizations based on high-fidelity simulation.
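
    As a rough illustration of the POD side of such reduced-order models, the sketch below (assumed details, not the paper's implementation) builds a POD basis from synthetic snapshots via the SVD; the closing comment indicates where the TPWL/TPWQ expansion around a stored training state would enter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix from hypothetical high-fidelity training runs: each
# column is one full-order state (e.g., cell pressures/saturations).
n_cells, n_snaps = 500, 40
snapshots = rng.standard_normal((n_cells, n_snaps)).cumsum(axis=1)

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
ell = int(np.searchsorted(energy, 0.999)) + 1   # keep 99.9% of the energy
Phi = U[:, :ell]                                # n_cells x ell basis

# Reduced coordinates z = Phi^T x, reconstruction x ~= Phi z.
x = snapshots[:, -1]
z = Phi.T @ x
print(ell, np.linalg.norm(x - Phi @ z) / np.linalg.norm(x))

# A TPWL step would then expand the full-order update function f around the
# closest stored training state x_i, entirely in the reduced space:
#   z_new ~= Phi^T f(x_i) + (Phi^T J_i Phi) (z - Phi^T x_i),
# and POD-TPWQ appends a quadratic (Hessian-type) term to this expansion.
```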

  7. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    Energy Technology Data Exchange (ETDEWEB)

    Wels, Michael; Hornegger, Joachim [Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander University Erlangen-Nuremberg, Martensstr. 3, 91058 Erlangen (Germany); Zheng Yefeng; Comaniciu, Dorin [Corporate Research and Technologies, Siemens Corporate Technology, 755 College Road East, Princeton, NJ 08540 (United States); Huber, Martin, E-mail: michael.wels@informatik.uni-erlangen.de [Corporate Research and Technologies, Siemens Corporate Technology, Guenther-Scharowsky-Str. 1, 91058 Erlangen (Germany)

    2011-06-07

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average
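
    Stripped of the MRF prior and the discriminative PBT terms that distinguish the method above, the EM core of such a tissue classifier is an ordinary Gaussian-mixture fit to voxel intensities. The following minimal sketch uses synthetic 1D intensities and illustrative class parameters; it is not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic intensities standing in for MRI voxels of three tissue classes
# (CSF, GM, WM) -- purely illustrative values.
data = np.concatenate([rng.normal(30, 5, 2000),    # "CSF"
                       rng.normal(80, 8, 5000),    # "GM"
                       rng.normal(120, 6, 4000)])  # "WM"

K = 3
mu = np.array([20.0, 70.0, 130.0])   # initial class means
var = np.full(K, 100.0)
pi = np.full(K, 1.0 / K)

for _ in range(50):
    # E-step: posterior responsibility of each class for each voxel.
    dens = (pi / np.sqrt(2 * np.pi * var) *
            np.exp(-(data[:, None] - mu) ** 2 / (2 * var)))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update the mixture parameters. (The paper replaces the flat
    # prior pi with MRF clique potentials and PBT-based unary terms.)
    Nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / Nk
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / data.size

labels = resp.argmax(axis=1)   # hard tissue classification
print(mu.round(1), np.sqrt(var).round(1))
```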

  8. Towards better-constrained assessments of the carbon balance of North America in the 21st Century: a comparison of recent model and inventory-based estimates

    Science.gov (United States)

    Hayes, D. J.; McGuire, D.; Post, W. M.; Heath, L. S.; Kurz, W.; Stinson, G.; Thornton, M.; Wei, Y.; West, T. O.

    2009-12-01

    The North American C sink is generally considered to account for a large, but highly uncertain, portion of the northern extra-tropical land-based sink, with estimates ranging from 15% to 100%. This uncertainty is due to a number of sources, including the limitations of the methodologies used to develop estimates of C stocks and flux, the lack of comprehensive and accurate data on key driving forces (particularly disturbance, land management and land-use change), and incomplete knowledge of long-term ecosystem responses to these driving forces and their interactions. Here, we examine the ability of various modeling approaches to identify sources and sinks of carbon across the North American continent by comparing model estimates with those based on analysis of available national forest and agricultural inventories for Canada, the U.S. and Mexico. For North America, inventory-based estimates of C stocks and flux in the early 21st Century (2000-2006) are being collected by political state units in the case of the United States and Mexico, or by the Kyoto Protocol reporting units for Canada. Flux estimates from more than 20 forward- and inverse-based models have been collected for the Regional/Continental Interim Synthesis activity under the North American Carbon Program, and these estimates have been processed to allow comparison at the spatial and temporal scales of the inventories. Preliminary analysis of the inventory data suggests that Canada’s Managed Forest Area acted as a net sink of atmospheric CO2 on the order of 46 TgC yr-1 from 2000 to 2006. This estimate includes the release of 26 TgC yr-1 from forest fires, while an additional 50 TgC yr-1 was removed from the forest as harvested products over this time period. In the U.S., inventory data indicate net C stock gains of 167 TgC yr-1 in the forest sector and 17 TgC yr-1 in croplands from 2000 to 2005. Model estimates of net ecosystem exchange (NEE) for the continent range from -78 to -645 Tg

  9. Modeling Maintenance of Long-Term Potentiation in Clustered Synapses: Long-Term Memory without Bistability

    Directory of Open Access Journals (Sweden)

    Paul Smolen

    2015-01-01

    Full Text Available Memories are stored, at least partly, as patterns of strong synapses. Given molecular turnover, how can synapses remain strong for the years that memories can persist? Some models postulate that biochemical bistability maintains strong synapses. However, bistability should give a bimodal distribution of synaptic strength or weight, whereas current data show unimodal distributions for weights and for a correlated variable, dendritic spine volume. Thus it is important for models to simulate both unimodal distributions and long-term memory persistence. Here a model is developed that connects ongoing, competing processes of synaptic growth and weakening to stochastic processes of receptor insertion and removal in dendritic spines. The model simulates long-term (>1 yr) persistence of groups of strong synapses. A unimodal weight distribution results. For the stability of this distribution it proved essential to incorporate resource competition between synapses organized into small clusters. With competition, these clusters are stable for years. These simulations concur with recent data to support the “clustered plasticity hypothesis”, which suggests that clusters, rather than single synaptic contacts, may be a fundamental unit for the storage of long-term memory. The model makes empirical predictions and may provide a framework to investigate mechanisms maintaining the balance between synaptic plasticity and stability of memory.
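
    A toy flavor of the modeled mechanism, with assumed rates and a hypothetical shared resource pool standing in for the paper's cluster-level competition, can be simulated as a stochastic insertion/removal process per spine:

```python
import numpy as np

rng = np.random.default_rng(2)

n_syn, steps, dt = 20, 10000, 0.01
r = np.full(n_syn, 50.0)           # receptor count per spine (weight proxy)

# Hypothetical rates: insertion draws on a shared resource pool within the
# cluster (competition), removal is proportional to the receptor count.
pool = 2000.0
k_ins, k_rem = 0.8, 0.01

for _ in range(steps):
    free = max(pool - r.sum(), 0.0)
    ins = rng.poisson(k_ins * free / pool * dt, n_syn)       # insertions
    rem = rng.binomial(r.astype(int), min(k_rem * dt, 1.0))  # removals
    r = r + ins - rem

# Despite continual turnover, the weights fluctuate around a stable mean.
print(r.mean(), r.std())
```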

  10. Nonlinear Kalman Filtering in Affine Term Structure Models

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Dorion, Christian; Jacobs, Kris;

    When the relationship between security prices and state variables in dynamic term structure models is nonlinear, existing studies usually linearize this relationship because nonlinear filtering is computationally demanding. We conduct an extensive investigation of this linearization and analyze... Monte Carlo experiment demonstrates that the unscented Kalman filter is much more accurate than its extended counterpart in filtering the states and forecasting swap rates and caps. Our findings suggest that the unscented Kalman filter may prove to be a good approach for a number of other problems... in fixed income pricing with nonlinear relationships between the state vector and the observations, such as the estimation of term structure models using coupon bonds and the estimation of quadratic term structure models.
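
    The advantage of the unscented filter comes from how it pushes a distribution through a nonlinearity. The sketch below, with a hypothetical bond-price-like map and made-up moments, compares the unscented transform's mean estimate against the EKF-style linearized one and the exact lognormal mean:

```python
import numpy as np

def unscented_mean_cov(f, mu, P, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate N(mu, P) through f with the standard unscented transform."""
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([mu, mu + S.T, mu - S.T])        # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wm[0] = lam / (n + lam)
    wc = wm.copy()
    wc[0] += 1 - alpha**2 + beta
    y = np.array([f(s) for s in sigma])
    mean = wm @ y
    cov = (wc[:, None] * (y - mean)).T @ (y - mean)
    return mean, cov

# A bond-price-like nonlinearity: exp(-maturity * short_rate).
f = lambda x: np.exp(-5.0 * x)

mu, P = np.array([0.05]), np.array([[0.0004]])   # rate ~ N(5%, 2%^2)
ut_mean, _ = unscented_mean_cov(f, mu, P)
lin_mean = f(mu)                                 # linearization: f(mean)
exact = np.exp(-5 * 0.05 + 0.5 * 25 * 0.0004)    # exact lognormal mean
print(ut_mean, lin_mean, exact)   # the unscented mean lands far closer
```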

  11. Short-Termed Integrated Forecasting System: 1993 Model documentation report

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    The purpose of this report is to define the Short-Term Integrated Forecasting System (STIFS) and describe its basic properties. The Energy Information Administration (EIA) of the US Department of Energy (DOE) developed the STIFS model to generate short-term (up to 8 quarters), monthly forecasts of US supplies, demands, imports, exports, stocks, and prices of various forms of energy. The models that constitute STIFS generate forecasts for a wide range of possible scenarios, including the following ones done routinely on a quarterly basis: a base (mid) world oil price with medium economic growth; a low world oil price with high economic growth; and a high world oil price with low economic growth. This report is written for persons who want to know how short-term energy market forecasts are produced by EIA. The report is intended as a reference document for model analysts, users, and the public.

  13. Improving and Testing Regional Attenuation and Spreading Models Using Well-Constrained Source Terms, Multiple Methods and Datasets

    Science.gov (United States)

    2013-07-03

    [Abstract garbled in the source record; only fragments survive: a figure caption comparing Q corrections at station ABKT for event 13117, derived from fitting source-corrected spectra (labeled MDF) and from tomography by LANL and LLNL, and the opening of Section 2.2 (Data Quality Control), noting that data quality directly impacts the analysis.]

  14. Aerosol optical depth assimilation for a size-resolved sectional model: impacts of observationally constrained, multi-wavelength and fine mode retrievals on regional scale analyses and forecasts

    Science.gov (United States)

    Saide, P. E.; Carmichael, G. R.; Liu, Z.; Schwartz, C. S.; Lin, H. C.; da Silva, A. M.; Hyer, E.

    2013-10-01

    An aerosol optical depth (AOD) three-dimensional variational data assimilation technique is developed for the Gridpoint Statistical Interpolation (GSI) system, for which WRF-Chem forecasts are performed with a detailed sectional model, the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC). Within GSI, forward AOD and adjoint sensitivities are computed using Mie computations from the WRF-Chem optical properties module, providing consistency with the forecast. GSI tools such as recursive filters and weak constraints are used to provide correlation within aerosol size bins and upper and lower bounds for the optimization. The system is used to perform assimilation experiments with fine vertical structure and no data thinning or re-gridding on a 12 km horizontal grid over the region of California, USA, where improvements in analyses and forecasts are demonstrated. A first set of simulations compared the assimilation impacts of using the operational MODIS (Moderate Resolution Imaging Spectroradiometer) dark target retrievals to those using observationally constrained ones, i.e., calibrated with AERONET (Aerosol RObotic NETwork) data. It was found that using the observationally constrained retrievals produced the best results when evaluated against ground-based monitors, with the error in PM2.5 predictions reduced at over 90% of the stations and AOD errors reduced at 100% of the monitors, along with larger overall error reductions when grouping all sites. A second set of experiments reveals that the use of fine-mode-fraction AOD and ocean multi-wavelength retrievals can improve the representation of the aerosol size distribution, while assimilating only 550 nm AOD retrievals produces no impact or at times a degraded one. While assimilation of multi-wavelength AOD shows positive impacts on all analyses performed, future work is needed to generate observationally constrained multi-wavelength retrievals, which when assimilated will generate size
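
    At its core, such a 3D-Var analysis minimizes a background-plus-observation cost function. The toy sketch below uses a hypothetical one-column, size-binned state and a linear stand-in for the Mie-based AOD operator (in the real system the operator and its adjoint come from the WRF-Chem optics module):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

n = 8                                  # aerosol size bins in one column (toy)
x_b = rng.uniform(0.5, 2.0, n)         # background mixing ratios (made up)
B = np.diag(np.full(n, 0.04))          # background error covariance
ext = np.linspace(0.05, 0.4, n)        # per-bin extinction efficiencies (toy)
H = ext[None, :]                       # linear stand-in for the AOD operator
R = np.array([[0.01]])                 # AOD observation error variance
y = H @ (x_b * 1.3)                    # one synthetic AOD observation

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost_and_grad(x):
    dxb = x - x_b
    d = H @ x - y
    J = 0.5 * dxb @ Binv @ dxb + 0.5 * d @ Rinv @ d
    g = Binv @ dxb + H.T @ (Rinv @ d)  # adjoint sensitivity H^T R^-1 d
    return J, g

# Lower bounds play the role of the weak positivity constraint.
res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * n)
print(res.x / x_b)                     # analysis increment factors
```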

  15. Constraining a land-surface model with multiple observations by application of the MPI-Carbon Cycle Data Assimilation System V1.0

    Science.gov (United States)

    Schürmann, Gregor J.; Kaminski, Thomas; Köstler, Christoph; Carvalhais, Nuno; Voßbeck, Michael; Kattge, Jens; Giering, Ralf; Rödenbeck, Christian; Heimann, Martin; Zaehle, Sönke

    2016-09-01

    We describe the Max Planck Institute Carbon Cycle Data Assimilation System (MPI-CCDAS) built around the tangent-linear version of the JSBACH land-surface scheme, which is part of the MPI-Earth System Model v1. The simulated phenology and net land carbon balance were constrained by globally distributed observations of the fraction of absorbed photosynthetically active radiation (FAPAR, using the TIP-FAPAR product) and atmospheric CO2 at a global set of monitoring stations for the years 2005 to 2009. When constrained by FAPAR observations alone, the system successfully, and computationally efficiently, improved simulated growing-season average FAPAR, as well as its seasonality in the northern extra-tropics. When constrained by atmospheric CO2 observations alone, global net and gross carbon fluxes were improved, despite a tendency of the system to underestimate tropical productivity. Assimilating both data streams jointly allowed the MPI-CCDAS to match both observations (TIP-FAPAR and atmospheric CO2) equally well as the single data stream assimilation cases, thereby increasing the overall appropriateness of the simulated biosphere dynamics and underlying parameter values. Our study thus demonstrates the value of multiple-data-stream assimilation for the simulation of terrestrial biosphere dynamics. It further highlights the potential role of remote sensing data, here the TIP-FAPAR product, in stabilising the strongly underdetermined atmospheric inversion problem posed by atmospheric transport and CO2 observations alone. Notwithstanding these advances, the constraint of the observations on regional gross and net CO2 flux patterns on the MPI-CCDAS is limited through the coarse-scale parametrisation of the biosphere model. We expect improvement through a refined initialisation strategy and inclusion of further biosphere observations as constraints.

  16. Long-term modeling of alteration-transport coupling: Application to a fractured Roman glass

    Science.gov (United States)

    Verney-Carron, Aurélie; Gin, Stéphane; Frugier, Pierre; Libourel, Guy

    2010-04-01

    To improve confidence in glass alteration models, as used in nuclear and natural applications, their long-term predictive capacity has to be validated. For this purpose, we develop a new model that couples geochemical reactions with transport and use a fractured archaeological glass block that has been altered for 1800 years under well-constrained conditions in order to test the capacity of the model. The chemical model considers three steps in the alteration process: (1) formation of a hydrated glass by interdiffusion, whose kinetics are controlled by a pH and temperature dependent diffusion coefficient; (2) the dissolution of the hydrated glass, whose kinetics are based on an affinity law; (3) the precipitation of secondary phases if thermodynamic saturation is reached. All kinetic parameters were determined from experiments. The model was initially tested on alteration experiments in different solutions (pure water, Tris, seawater). It was then coupled with diffusive transport in solution to simulate alteration in cracks within the glass. Results of the simulations run over 1800 years are in good agreement with archaeological glass block observations concerning the nature of alteration products (hydrated glass, smectites, and carbonates) and crack alteration thicknesses. External cracks in direct contact with renewed seawater were altered at the forward dissolution rate and are filled with smectites (400-500 μm). Internal cracks are less altered (by 1 or 2 orders of magnitude) because of the strong coupling between alteration chemistry and transport. The initial crack aperture, the distance to the surface, and sealing by secondary phases account for these low alteration thicknesses. The agreement between simulations and observations thus validates the predictive capacity of this coupled geochemical model and increases more generally the robustness and confidence in glass alteration models to predict long-term behavior of nuclear waste in geological disposal or

  17. From global fits of neutrino data to constrained sequential dominance

    CERN Document Server

    Björkeroth, Fredrik

    2014-01-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We perform a global analysis on a class of CSD($n$) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  18. Selection of models to calculate the LLW source term

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.M. (Brookhaven National Lab., Upton, NY (United States))

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.

  19. Warped Higgsless Models with IR-Brane Kinetic Terms

    CERN Document Server

    Davoudiasl, H; Lillie, Benjamin Huntington; Rizzo, T G

    2004-01-01

    We examine a warped Higgsless $SU(2)_L\\times SU(2)_R\\times U(1)_{B-L}$ model in 5--$d$ with IR(TeV)--brane kinetic terms. It is shown that adding a brane term for the $U(1)_{B-L}$ gauge field does not affect the scale ($\\sim 2-3$ TeV) where perturbative unitarity in $W_L^+ W_L^- \\to W_L^+ W_L^-$ is violated. This term could, however, enhance the agreement of the model with the precision electroweak data. In contrast, the inclusion of a kinetic term corresponding to the $SU(2)_D$ custodial symmetry of the theory delays the unitarity violation in $W_L^\\pm$ scattering to energy scales of $\\sim 6-7$ TeV for a significant fraction of the parameter space. This is about a factor of 4 improvement compared to the corresponding scale of unitarity violation in the Standard Model without a Higgs. We also show that null searches for extra gauge bosons at the Tevatron and for contact interactions at LEP II place non-trivial bounds on the size of the IR-brane terms.

  20. Early Cosmology Constrained

    CERN Document Server

    Verde, Licia; Pigozzo, Cassio; Heavens, Alan F; Jimenez, Raul

    2016-01-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the $\\Lambda$CDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95\\% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter $\\Omega_{\\rm MR} < 0.006$ and extra radiation parameterised as extra effective neutrino species $2.3 < N_{\\rm eff} < 3.2$ when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond $\\Lambda$CDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way ...

  1. The Starobinsky Model from Superconformal D-Term Inflation

    CERN Document Server

    Buchmuller, W; Kamada, K

    2013-01-01

    We point out that in the large field regime, the recently proposed superconformal D-term inflation model coincides with the Starobinsky model. In this regime, the inflaton field dominates over the Planck mass in the gravitational kinetic term in the Jordan frame. Slow-roll inflation is realized in the large field regime for sufficiently large gauge couplings. The Starobinsky model generally emerges as an effective description of slow-roll inflation if a Jordan frame exists where, for large inflaton field values, the action is scale invariant, and the ratio $\\hat{\\lambda}$ of the inflaton self-coupling and the nonminimal coupling to gravity is tiny. The interpretation of this effective coupling is different in different models. In hybrid inflation it is determined by the scale of grand unification, $\\hat{\\lambda} \\sim (\\Lambda_{\\rm GUT}/M_{\\rm P})^4$.

  2. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan;

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research on prediction models, it has been observed that different models have different capabilities and also that no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset. The presented results show that the prediction errors can be decreased, while the computation time is reduced.
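
    The ensemble idea can be illustrated with a toy combiner: a few simple sub-models (persistence, a moving average, a diurnal climatology, all hypothetical) are blended through a nonlinear feature expansion fitted by least squares, standing in for the unspecified ensemble system of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hourly wind speed with a daily cycle plus noise (illustrative).
t = np.arange(2000)
wind = 8 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.0, t.size)

# Three simple sub-models for a 1-hour-ahead forecast of wind[24:].
target = wind[24:]
persistence = wind[23:-1]                                       # last value
moving_avg = np.convolve(wind, np.ones(6) / 6, "valid")[18:-1]  # 6 h mean
climatology = 8 + 3 * np.sin(2 * np.pi * t[24:] / 24)           # cycle only

# Nonlinear ensemble: least squares on sub-model outputs plus interactions.
X = np.column_stack([persistence, moving_avg, climatology,
                     persistence * moving_avg, persistence ** 2,
                     np.ones_like(target)])
w, *_ = np.linalg.lstsq(X[:1500], target[:1500], rcond=None)
pred = X[1500:] @ w

rmse_ens = np.sqrt(np.mean((pred - target[1500:]) ** 2))
rmse_per = np.sqrt(np.mean((persistence[1500:] - target[1500:]) ** 2))
print(rmse_ens, rmse_per)   # the blend should beat bare persistence
```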

  4. Bayesian evaluation of inequality constrained hypotheses

    NARCIS (Netherlands)

    Gu, X.; Mulder, J.; Deković, M.; Hoijtink, H.

    2014-01-01

    Bayesian evaluation of inequality constrained hypotheses enables researchers to investigate their expectations with respect to the structure among model parameters. This article proposes an approximate Bayes procedure that can be used for the selection of the best of a set of inequality constrained hypotheses.

  5. Risk factors and prognostic models for perinatal asphyxia at term

    NARCIS (Netherlands)

    Ensing, S.

    2015-01-01

    This thesis will focus on the risk factors and prognostic models for adverse perinatal outcome at term, with a special focus on perinatal asphyxia and obstetric interventions during labor to reduce adverse pregnancy outcomes. For the majority of the studies in this thesis we were allowed to use data

  6. A data-constrained model for compatibility check of remotely sensed basal melting with the hydrography in front of Antarctic ice shelves

    Science.gov (United States)

    Olbers, D.; Hellmer, H. H.; Buck, F. F. J. H.

    2014-02-01

    The ice shelf caverns around Antarctica are sources of cold and fresh water which contribute to the formation of Antarctic bottom water and thus to the ventilation of the deep basins of the World Ocean. While a realistic simulation of the cavern circulation requires high resolution, because of the complicated bottom topography and ice shelf morphology, the physics of melting and freezing at the ice shelf base is relatively simple. We have developed an analytically solvable box model of the cavern thermohaline state, using the formulation of melting and freezing as in Olbers and Hellmer (2010). There is high resolution along the cavern's path of the overturning circulation, whereas the cross-path resolution is fairly coarse. The circulation in the cavern is prescribed and used as a tuning parameter to constrain the solution by attempting to match observed ranges for outflow temperature and salinity at the ice shelf front as well as the mean basal melt rate. The method, tested for six Antarctic ice shelves, can be used for a quick estimate of melt/freeze rates and the overturning rate in particular caverns, given the temperature and salinity of the inflow and the above-mentioned constraints for outflow and melting. In turn, the model can also be used to test the compatibility of remotely sensed basal mass loss with observed cavern inflow characteristics.

  7. Thermal-based modeling of coupled carbon, water, and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    KAUST Repository

    Schull, M. A.

    2015-03-11

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  8. Microscopic dynamical description of proton-induced fission with the Constrained Molecular Dynamics (CoMD) Model

    CERN Document Server

    Vonta, N; Veselsky, M; Bonasera, A

    2015-01-01

    The microscopic description of nuclear fission still remains a topic of intense basic research. Understanding nuclear fission, apart from a theoretical point of view, is of practical importance for energy production and the transmutation of nuclear waste. In nuclear astrophysics, fission sets the upper limit to the nucleosynthesis of heavy elements via the r-process. In this work we initiated a systematic study of intermediate energy proton-induced fission using the Constrained Molecular Dynamics (CoMD) code. The CoMD code implements an effective interaction with a nuclear matter compressibility of K=200 (soft EOS) with several forms of the density dependence of the nucleon-nucleon symmetry potential. Moreover, a constraint is imposed in the phase-space occupation for each nucleon, restoring the Pauli principle at each time step of the collision. A proper choice of the surface parameter of the effective interaction has been made to describe fission. In this work, we present results of fission calculation...

  9. A model for Long-term Industrial Energy Forecasting (LIEF)

    Energy Technology Data Exchange (ETDEWEB)

    Ross, M. [Lawrence Berkeley Lab., CA (United States)]|[Michigan Univ., Ann Arbor, MI (United States). Dept. of Physics]|[Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Hwang, R. [Lawrence Berkeley Lab., CA (United States)

    1992-02-01

    The purpose of this report is to establish the content and structural validity of the Long-term Industrial Energy Forecasting (LIEF) model, and to provide estimates for the model's parameters. The model is intended to provide decision makers with a relatively simple, yet credible tool to forecast the impacts of policies which affect long-term energy demand in the manufacturing sector. Particular strengths of this model are its relative simplicity, which facilitates both ease of use and understanding of results, and the inclusion of relevant causal relationships which provide useful policy handles. The modeling approach of LIEF is intermediate between top-down econometric modeling and bottom-up technology models. It relies on the simple concept that trends in aggregate energy demand are dependent upon three factors: (1) trends in total production; (2) sectoral or structural shift, that is, changes in the mix of industrial output from energy-intensive to energy non-intensive sectors; and (3) changes in real energy intensity due to technical change and energy-price effects, as measured by the amount of energy used per unit of manufacturing output (kBtu per constant $ of output). The manufacturing sector is first disaggregated into subsectors according to their historic output growth rates, energy intensities and recycling opportunities. Exogenous, macroeconomic forecasts of individual subsector growth rates and energy prices can then be combined with endogenous forecasts of real energy intensity trends to yield forecasts of overall energy demand. 75 refs.
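
    The three-factor logic lends itself to a simple worked example. The sketch below, with entirely made-up sector shares and intensities, computes aggregate demand as output x structure x intensity and compounds illustrative growth, structural-shift, and intensity-decline assumptions over a decade:

```python
# Toy LIEF-style decomposition: sector demand = output share * total
# production * energy intensity. All numbers are invented for illustration.
sectors = {
    #            (output share, intensity in kBtu per constant $)
    "steel":     (0.10, 20.0),
    "chemicals": (0.15, 12.0),
    "assembly":  (0.75, 3.0),
}
total_output = 1000.0   # constant $ of manufacturing output

def demand(total_output, sectors):
    return sum(share * total_output * intensity
               for share, intensity in sectors.values())

base = demand(total_output, sectors)

# Scenario: 2%/yr production growth, a structural shift away from steel,
# and a 1%/yr decline in real energy intensity, compounded over 10 years.
shifted = {"steel": (0.08, 20.0), "chemicals": (0.15, 12.0),
           "assembly": (0.77, 3.0)}
future = demand(total_output * 1.02 ** 10,
                {k: (s, i * 0.99 ** 10) for k, (s, i) in shifted.items()})
print(base, round(future, 1))
```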

  10. Constraining the strength of the terrestrial CO2 fertilization effect in the Canadian Earth system model version 4.2 (CanESM4.2)

    Science.gov (United States)

    Arora, Vivek K.; Scinocca, John F.

    2016-07-01

    Earth system models (ESMs) explicitly simulate the interactions between the physical climate system components and biogeochemical cycles. Physical and biogeochemical aspects of ESMs are routinely compared against their observation-based counterparts to assess model performance and to evaluate how this performance is affected by ongoing model development. Here, we assess the performance of version 4.2 of the Canadian Earth system model against four land carbon-cycle-focused, observation-based determinants of the global carbon cycle and the historical global carbon budget over the 1850-2005 period. Our objective is to constrain the strength of the terrestrial CO2 fertilization effect, which is known to be the most uncertain of all carbon-cycle feedbacks. The observation-based determinants include (1) globally averaged atmospheric CO2 concentration, (2) cumulative atmosphere-land CO2 flux, (3) atmosphere-land CO2 flux for the decades of 1960s, 1970s, 1980s, 1990s, and 2000s, and (4) the amplitude of the globally averaged annual CO2 cycle and its increase over the 1980 to 2005 period. The optimal simulation that satisfies constraints imposed by the first three determinants yields a net primary productivity (NPP) increase from ˜ 58 Pg C year-1 in 1850 to about ˜ 74 Pg C year-1 in 2005; an increase of ˜ 27 % over the 1850-2005 period. The simulated loss in the global soil carbon amount due to anthropogenic land use change (LUC) over the historical period is also broadly consistent with empirical estimates. Yet, it remains possible that these determinants of the global carbon cycle are insufficient to adequately constrain the historical carbon budget, and consequently the strength of terrestrial CO2 fertilization effect as it is represented in the model, given the large uncertainty associated with LUC emissions over the historical period.

  11. Modeling Wettability Variation during Long-Term Water Flooding

    Directory of Open Access Journals (Sweden)

    Renyi Cao

    2015-01-01

    Full Text Available The surface properties of rock affect oil recovery during water flooding. Oil-wet polar substances adsorbed on the surface of the rock are gradually desorbed during water flooding, and the original reservoir wettability changes towards water-wet; this change reduces the residual oil saturation and improves the oil displacement efficiency. However, an accurate description of wettability alteration during long-term water flooding is lacking, which leads to difficulties in history matching and unreliable forecasts using reservoir simulators. This paper summarizes the mechanism of wettability variation, characterizes the adsorption of polar substances during long-term flooding with injected water or aquifer water, and relates the residual oil saturation and relative permeability to the polar substances adsorbed on clay and to the pore volumes of flooding water. A mathematical model is presented to simulate long-term water flooding, and the model is validated with experimental results. The simulation results of long-term water flooding are also discussed.

  12. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Full Text Available Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers' perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers' perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows the sign of this error to be constrained. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.
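
    To see what the random perception-error term does to choice probabilities, the following toy sketch (hypothetical coefficients, and Monte Carlo integration in place of the paper's Gibbs samplers) evaluates a binary transit-versus-car logit in which perceived transit time carries a sign-constrained, half-normal error:

```python
import numpy as np

rng = np.random.default_rng(5)

def transit_prob(t_transit, t_car, beta_t=-0.15, asc=0.5,
                 sigma=5.0, n_draws=5000):
    """P(choose transit) when perceived transit time = t_transit + eps.
    Here eps >= 0 is half-normal, mirroring the sign-constrained extension;
    all coefficient values are invented for illustration."""
    eps = np.abs(rng.normal(0.0, sigma, n_draws))
    v_transit = asc + beta_t * (t_transit + eps)   # utility of transit
    v_car = beta_t * t_car                         # utility of car
    return np.mean(1.0 / (1.0 + np.exp(-(v_transit - v_car))))

for t in (20, 30, 40):
    print(t, round(transit_prob(t_transit=t, t_car=30.0), 3))
```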

  14. Power-constrained supercomputing

    Science.gov (United States)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound
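
    The flavor of the LP formulation can be sketched in a few lines: choose time fractions of candidate operating points (DVFS state x thread count) for a code section so as to maximize average performance under an average power bound. The numbers are invented and the model is far simpler than the per-section schedules of the dissertation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical operating points for one code section:
# performance (work/s) and power (W) per configuration.
perf = np.array([1.0, 1.6, 2.0, 2.3])
power = np.array([40.0, 60.0, 85.0, 110.0])
power_bound = 70.0

# Maximize average performance (linprog minimizes, so negate) subject to
# the time-averaged power staying under the bound, fractions summing to 1.
res = linprog(c=-perf,
              A_ub=[power], b_ub=[power_bound],
              A_eq=[np.ones_like(perf)], b_eq=[1.0],
              bounds=[(0, 1)] * perf.size)
print(res.x.round(3), round(-res.fun, 3))
```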

  15. Liquid-vapor phase relations in the Si-O system: A calorically constrained van der Waals-type model

    Science.gov (United States)

    Connolly, James A. D.

    2016-09-01

    This work explores the use of several van der Waals (vW)-type equations of state (EoS) for predicting vaporous phase relations and speciation in the Si-O system, with emphasis on the azeotropic boiling curve of SiO2-rich liquid. Comparison with the observed Rb and Hg boiling curves demonstrates that prediction accuracy is improved if the a-parameter of the EoS, which characterizes vW forces, is constrained by ambient pressure heat capacities. All EoS considered accurately reproduce metal boiling curve trajectories, but absent knowledge of the true critical compressibility factor, critical temperatures remain uncertain by ~500 K. The EoS plausibly represent the termination of the azeotropic boiling curve of silica-rich liquid by a critical point across which the dominant Si oxidation state changes abruptly from the tetravalent state characteristic of the liquid to the divalent state characteristic of the vapor. The azeotropic composition diverges from silica toward metal-rich compositions with increasing temperature. Consequently, silica boiling is divariant and atmospheric loss after a giant impact would enrich residual silicate liquids in reduced silicon. Two major sources of uncertainty in the boiling curve prediction are the heat capacity of silica liquid, which may decay during depolymerization from the near-Dulong-Petit limit heat capacity of the ionic liquid to value characteristic of the molecular liquid, and the unknown liquid affinity of silicon monoxide. Extremal scenarios for these uncertainties yield critical temperatures and compositions of 5200-6200 K and Si1.1O2-Si1.4O2. The lowest critical temperatures are marginally consistent with shock experiments and are therefore considered more probable.

  16. Experimentally constrained CA1 fast-firing parvalbumin-positive interneuron network models exhibit sharp transitions into coherent high frequency rhythms

    Directory of Open Access Journals (Sweden)

    Katie A Ferguson

    2013-10-01

    Full Text Available The coupling of high frequency oscillations (HFOs; >100 Hz) and theta oscillations (3-12 Hz) in the CA1 region of rats increases during REM sleep, indicating that it may play a role in memory processing. However, it is unclear whether the CA1 region itself is capable of providing major contributions to the generation of HFOs, or if they are strictly driven through input projections. Parvalbumin-positive (PV+) interneurons may play an essential role in these oscillations due to their extensive connections with neighbouring pyramidal cells, and their characteristic fast-spiking. Thus, we created mathematical network models to investigate the conditions under which networks of CA1 fast-spiking PV+ interneurons are capable of producing high frequency population rhythms. We used whole-cell patch clamp recordings of fast-spiking, PV+ cells in the CA1 region of an intact hippocampal preparation in vitro to derive cellular properties, from which we constrained an Izhikevich-type model. Novel, biologically constrained network models were constructed with these individual cell models, and we investigated networks across a range of experimentally determined excitatory inputs and inhibitory synaptic strengths. For each network, we determined network frequency and coherence. Network simulations produce coherent firing at high frequencies (>90 Hz) for parameter ranges in which PV-PV inhibitory synaptic conductances are necessarily small and external excitatory inputs are relatively large. Interestingly, our networks produce sharp transitions between random and coherent firing, and this sharpness is lost when connectivity is increased beyond biological estimates. Our work suggests that CA1 networks may be designed with mechanisms for quickly gating in and out of high frequency coherent population rhythms, which may be essential in the generation of nested theta/high frequency rhythms.
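
    A minimal sketch of the ingredients, using the textbook Izhikevich fast-spiking parameters rather than the fitted PV+ values from the paper, and a crude all-to-all inhibitory coupling (every value here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Izhikevich fast-spiking cells with all-to-all inhibitory coupling.
N, T, dt = 50, 1000.0, 0.5              # cells, duration (ms), step (ms)
a, b, c, d = 0.1, 0.2, -65.0, 2.0       # classic fast-spiking parameters
g_inh, tau_s = 0.1, 5.0                 # coupling strength, synaptic decay
I_ext = 8.0 + rng.normal(0, 0.5, N)     # heterogeneous tonic drive

v = np.full(N, -65.0)
u = b * v
s = np.zeros(N)                         # synaptic gating per cell
spikes = []

for step in range(int(T / dt)):
    fired = v >= 30.0
    spikes += [(step * dt, i) for i in np.where(fired)[0]]
    v[fired] = c
    u[fired] += d
    s += -s / tau_s * dt
    s[fired] += 1.0
    # Crude GABA-A-like term: drive from all *other* cells' gating.
    I_syn = -g_inh * (s.sum() - s) * (v + 75.0) / 75.0
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I_ext + I_syn)
    u += dt * a * (b * v - u)

print(len(spikes) / N / (T / 1000.0), "Hz mean rate")
```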

  17. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jing-Cheng [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Huang, Guohe, E-mail: huang@iseis.org [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Huang, Yuefei [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Zhang, Hua [College of Science and Engineering, Texas A& M University — Corpus Christi, Corpus Christi, TX 78412-5797 (United States); Li, Zhong [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Chen, Qiuwen [Center for Eco-Environmental Research, Nanjing Hydraulics Research Institute, Nanjing 210029 (China)

    2015-08-15

    Lack of hydrologic process representation at the short time scale can lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by overland flow generation, which is closely related to the rainfall characteristics at a sub-daily time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated on a watershed of the China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, rainfall thresholds were crucial for reflecting the scaling effect of rainfall intensity; the optimal level of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although slight differences in overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and would be expected to have a potential for wide applications. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A

  18. Optimising a two-echelon capacity-constrained material requirement manufacturing system using a linear programming model

    Directory of Open Access Journals (Sweden)

    Liliana Delgado Hidalgo

    2010-07-01

    Full Text Available A mixed integer linear programming model representing a two-echelon manufacturing system was implemented. Optimal decisions could be made about raw material/component provisioning by using the model. The model was programmed using an algebraic modeller, which was then integrated into a computational tool from which the defining parameters could be managed and the results consulted once the model had been executed. The model was validated on a real manufacturing system; besides providing a good representation of the system, optimal provisioning decisions were also reached. The article emphasises that such decisions cannot be made by using usual MRP reasoning.
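
    A toy instance of such a model, with invented data and the open-source PuLP modeller standing in for the algebraic modeller used in the article, might look like this:

```python
import pulp

# Toy two-echelon, capacity-constrained lot-sizing model: a component
# (echelon 2) is purchased and consumed to build a product (echelon 1)
# that must meet demand over three periods. All data are hypothetical.
periods = [0, 1, 2]
demand = {0: 40, 1: 60, 2: 50}
capacity = 55          # units of product per period
use = 2                # components consumed per unit of product

prob = pulp.LpProblem("two_echelon_mrp", pulp.LpMinimize)
x = pulp.LpVariable.dicts("produce", periods, lowBound=0)    # product made
y = pulp.LpVariable.dicts("purchase", periods, lowBound=0)   # components
z = pulp.LpVariable.dicts("order", periods, cat="Binary")    # order placed?
Ip = pulp.LpVariable.dicts("inv_product", periods, lowBound=0)
Ic = pulp.LpVariable.dicts("inv_component", periods, lowBound=0)

# Cost: fixed ordering, purchasing, and holding at both echelons.
prob += pulp.lpSum(30 * z[t] + 3 * y[t] + 1.0 * Ip[t] + 0.5 * Ic[t]
                   for t in periods)

for t in periods:
    prev_p = Ip[t - 1] if t > 0 else 0
    prev_c = Ic[t - 1] if t > 0 else 0
    prob += prev_p + x[t] - demand[t] == Ip[t]    # product balance
    prob += prev_c + y[t] - use * x[t] == Ic[t]   # component balance
    prob += x[t] <= capacity                      # capacity constraint
    prob += y[t] <= 1000 * z[t]                   # big-M order linkage

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: (x[t].value(), y[t].value()) for t in periods})
```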

  19. Multivariate Term Structure Models with Level and Heteroskedasticity Effects

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    The paper introduces and estimates a multivariate level-GARCH model for the long rate and the term-structure spread where the conditional volatility is proportional to the γth power of the variable itself (level effects) and the conditional covariance matrix evolves according to a multivariate GARCH process (heteroskedasticity effects). The long-rate variance exhibits heteroskedasticity effects and level effects in accordance with the square-root model. The spread variance exhibits heteroskedasticity effects but no level effects. The level-GARCH model is preferred above the GARCH model and the level model. GARCH effects are more important than level effects. The results are robust to the maturity of the interest rates. Udgivelsesdato: MAY...

  20. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...