WorldWideScience

Sample records for constrained model predictive

  1. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    To precisely track the reactor temperature over the entire working range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes such as the continuous stirred tank reactor (CSTR) is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The amount of reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are computed by a multistep predictive approach. The actual control variables are computed subject to the constraints of the finite-horizon optimal control problem in a receding-horizon fashion. A simulation example on the CSTR shows the effectiveness and feasibility of the proposed algorithm.
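
    As a hedged illustration of the general pattern described above (not the paper's Hammerstein-Wiener algorithm), the sketch below feeds a Kalman-filter state estimate into a finite-horizon constrained quadratic program and applies only the first computed input. The plant matrices, horizon, weights and input bound are invented for the example.

```python
# Minimal sketch: Kalman-filter state estimation feeding a constrained
# receding-horizon (MPC) quadratic program. Illustrative only; the plant
# matrices, horizon and weights below are made up for the example.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 0.9]])        # hypothetical linear subsystem
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q_kf, R_kf = 1e-3 * np.eye(2), 1e-2 * np.eye(1)   # process/measurement noise covariances

def kalman_update(x_hat, P, u, y):
    """One predict/correct step of the standard Kalman filter."""
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q_kf
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R_kf)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

def mpc_move(x_hat, N=10, u_max=1.0):
    """Solve a finite-horizon constrained QP and return the first input."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x_hat]
    for k in range(N):
        cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= u_max]          # input constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]                            # receding horizon: apply only the first move
```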

  2. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains within a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC are demonstrated by theoretical analysis and by applications to a cart-damper-spring system and a one-link robot manipulator.

  3. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed for implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
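
    The toy calculation below (not the paper's hospital model) shows the standard reformulation such approaches rely on: under an assumed Gaussian demand, a chance constraint on avoiding stock-outs becomes a deterministic, tightened stock bound. All numbers are invented.

```python
# Toy illustration: a chance constraint P(stock >= demand) >= 1 - eps with
# Gaussian demand reduces to the deterministic bound stock >= mu + z_(1-eps) * sigma.
from scipy.stats import norm

mu, sigma = 120.0, 15.0       # assumed daily demand statistics (made up)
eps = 0.05                    # allowed probability of a stock-out
z = norm.ppf(1 - eps)         # ~1.645 for eps = 0.05

min_stock = mu + z * sigma    # tightened deterministic bound used inside the MPC
print(f"keep at least {min_stock:.1f} units on hand")
```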

  4. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a relevant triggering rule, which allows a considerable reduction of the computational load. Robust feasibility and closed-loop stability are then proved, and it is shown that every subsystem state is driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.

  5. Adaptively Constrained Stochastic Model Predictive Control for the Optimal Dispatch of Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaogang Guo

    2018-01-01

    In this paper, an adaptively constrained stochastic model predictive control (MPC) is proposed to achieve less-conservative coordination between energy storage units and uncertain renewable energy sources (RESs) in a microgrid (MG). Besides the economic objective of MG operation, the limits on state of charge (SOC) and discharging/charging power of the energy storage unit are formulated as chance constraints when accommodating uncertainties of RESs, since mild violations of these constraints are allowed during long-term operation. A closed-loop online update strategy adaptively tightens or relaxes the constraints according to the deviation of the actual violation probability from the desired level, as well as the current rate of change of that deviation. Numerical studies show that the proposed adaptively constrained stochastic MPC for MG optimal operation is much less conservative than scenario-optimization-based robust MPC, and it also converges better to the desired constraint violation level than other online update strategies.
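
    A minimal stand-in for the closed-loop update idea described above: tighten or relax a constraint margin according to the empirically observed violation rate versus the allowed level. The update gain, bounds and numbers are illustrative assumptions, not the paper's rule.

```python
# Sketch of adaptive constraint tightening: if violations occur more often than
# the target probability, widen the safety margin; otherwise shrink it.
def update_margin(margin, violations, steps, target=0.05, gain=0.5,
                  lo=0.0, hi=1.0):
    observed = violations / max(steps, 1)          # empirical violation probability
    margin += gain * (observed - target)           # violating too often -> tighten
    return min(max(margin, lo), hi)

# e.g. after 200 steps with 18 SOC-limit violations and a 5 % target:
print(update_margin(0.2, violations=18, steps=200))   # margin grows to ~0.22
```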

  6. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
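
    The following minimal sketch (not the paper's Riccati-based LQCP implementation) shows the alternation that ADMM performs on a box-constrained quadratic program: an unconstrained linear solve followed by a simple projection and a dual update. The matrices and bounds are made up.

```python
# Minimal ADMM sketch for a box-constrained QP:
#   minimize 0.5 * u'Hu + g'u   subject to   lb <= u <= ub
import numpy as np

def admm_box_qp(H, g, lb, ub, rho=1.0, iters=200):
    n = len(g)
    u = z = lam = np.zeros(n)
    M = np.linalg.inv(H + rho * np.eye(n))        # factor once, reuse every iteration
    for _ in range(iters):
        u = M @ (rho * (z - lam) - g)             # equality-constrained (unconstrained) solve
        z = np.clip(u + lam, lb, ub)              # projection onto the box constraints
        lam = lam + u - z                         # scaled dual update
    return z

H = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([-1.0, -1.0])
print(admm_box_qp(H, g, lb=np.array([0.0, 0.0]), ub=np.array([0.3, 0.3])))
```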

  7. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg, Emil-Hilb-Weg 22, 97074 Würzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg, Emil-Hilb-Weg 22, 97074 Würzburg (Germany); Staub, Florian [Theory Division, CERN, 1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  8. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) from Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources plays an especially important role in determining the expected secondary photon spectrum. Pure-proton UHECR models are already strongly constrained, primarily by the highest-energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  9. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. Team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, but the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  10. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, and it correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmatic access to the CCTOP server is also available, and an example client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Robust stability in constrained predictive control through the Youla parameterisations

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2011-01-01

    In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in face of norm-bounded uncertainties. Under special conditions guarantees are also given...... for hard input constraints. In more detail, we parameterise the MPC predictions in terms of the primary Youla parameter and use this parameter as the on-line optimisation variable. The uncertainty is parameterised in terms of the dual Youla parameter. Stability can then be guaranteed through small gain...

  12. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  13. Remaining useful life prediction based on noisy condition monitoring signals using constrained Kalman filter

    International Nuclear Information System (INIS)

    Son, Junbo; Zhou, Shiyu; Sankavaram, Chaitanya; Du, Xinyu; Zhang, Yilu

    2016-01-01

    In this paper, a statistical prognostic method to predict the remaining useful life (RUL) of individual units based on noisy condition monitoring signals is proposed. The prediction accuracy of existing data-driven prognostic methods depends on the capability of accurately modeling the evolution of condition monitoring (CM) signals. Therefore, it is inevitable that the RUL prediction accuracy depends on the amount of random noise in the CM signals. When signals are contaminated by a large amount of random noise, RUL prediction even becomes infeasible in some cases. To mitigate this issue, a robust RUL prediction method based on a constrained Kalman filter is proposed. The proposed method models the CM signals subject to a set of inequality constraints so that satisfactory prediction accuracy can be achieved regardless of the noise level of the signal evolution. The advantageous features of the proposed RUL prediction method are demonstrated by both a numerical study and a case study with real-world data from automotive lead-acid batteries. - Highlights: • A computationally efficient constrained Kalman filter is proposed. • The proposed filter is integrated into an online failure prognosis framework. • A set of proper constraints significantly improves the failure prediction accuracy. • Promising results are reported in the application to battery failure prognosis.
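
    One common way to realize a constrained Kalman filter, sketched below under simple bound constraints, is to run the ordinary predict/correct step and then project the estimate onto the constraint set. The paper's specific constraint set and signal model are not reproduced here; the clipping projection is an illustrative simplification.

```python
# Sketch: standard Kalman update followed by projection of the state estimate
# onto simple bounds (e.g. a degradation signal that cannot drop below 0).
import numpy as np

def constrained_kf_step(x_hat, P, y, A, C, Q, R, lb, ub):
    # standard predict / correct
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_upd = x_pred + K @ (y - C @ x_pred)
    P_upd = (np.eye(len(x_hat)) - K @ C) @ P_pred
    # projection step: clip the estimate into [lb, ub]
    x_con = np.clip(x_upd, lb, ub)
    return x_con, P_upd
```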

  14. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    International Nuclear Information System (INIS)

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5–4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data

  15. Constrained model predictive control for load-following operation of APR reactors

    International Nuclear Information System (INIS)

    Kim, Jae Hwan; Lee, Sim Won; Kim, Ju Hyun; Na, Man Gyun; Yu, Keuk Jong; Kim, Han Gon

    2012-01-01

    Load-following operation of the APR+ reactor is needed to control the power effectively using the control rods and to limit the use of boric acid for reactivity control, for flexibility of plant operation. The axial flux distribution usually becomes unbalanced during load-following operation because of xenon-induced oscillation. Xenon has a very high absorption cross-section, and its impact on the reactor is delayed by the iodine precursor. Power maneuvering using automatic load-following operation has advantages in terms of safety and economic operation of the reactor, so the controller has to be designed efficiently. Therefore, an advanced control method that satisfies conditions such as automatic control, flexibility, safety, and convenience is necessary for load-following operation of the APR+ reactor. In this paper, the constrained model predictive control (MPC) method is applied to design the APR+ reactor's automatic load-following controller for integrated control of the thermal power level and the axial shape index (ASI). Some controllers use only the current tracking command, but MPC considers future commands in addition to the current tracking command, so it can achieve better tracking performance. Furthermore, MPC is used in many industrial process control systems. The basic concept of MPC is to solve an optimization problem over a finite future time interval at the present time and to implement the first optimal control input as the current control input. The KISPAC-1D code, which models the APR+ nuclear power plants, is interfaced to the proposed controller to verify the tracking performance of the reactor power level and ASI. It is shown that the proposed controller exhibits very fast tracking responses.

  16. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall'Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Université catholique de Louvain

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
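
    A toy tracker in the prediction-correction spirit, shown for an unconstrained scalar problem rather than the preprint's constrained first-order method: the prediction step extrapolates using the time variation of the gradient (the Hessian here is 1, so no inverse is needed), and the correction step takes a gradient step on the newly sampled cost. Step sizes and the problem itself are illustrative assumptions.

```python
# Track the minimizer of f(x; t) = 0.5 * (x - r(t))**2 with r(t) = sin(t).
import numpy as np

dt, alpha = 0.1, 0.5
r = lambda t: np.sin(t)
grad = lambda x, t: x - r(t)          # gradient of f(.; t); the Hessian is 1

x, t = 0.0, 0.0
for _ in range(100):
    # prediction: move against the change of the gradient in time
    dgrad_dt = (grad(x, t + dt) - grad(x, t)) / dt
    x = x - dt * dgrad_dt             # Hessian = 1, so no inverse is required
    t += dt
    # correction: one gradient step on the newly revealed cost
    x = x - alpha * grad(x, t)

print(x, r(t))                         # x tracks r(t) with a small lag
```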

  17. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  18. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  19. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  20. Feasibility of a data-constrained prediction of hydrocarbon reservoir sandstone microstructures

    International Nuclear Information System (INIS)

    Yang, Y S; Gureyev, T E; Tulloh, A; Clennell, M B; Pervukhina, M

    2010-01-01

    Microstructures are critical for defining material characteristics such as permeability, mechanical, electrical and other physical properties. However, the available techniques for determining compositional microstructures through segmentation of x-ray computed tomography (CT) images are inadequate when there are finer structures than the CT spatial resolution, i.e. when there is more than one material in each voxel. This is the case for CT imaging of geomaterials characterized with submicron porosity and clay coating that control petrophysical properties of rock. This note outlines our data-constrained modelling (DCM) approach for prediction of compositional microstructures, and our investigation of the feasibility of determining sandstone microstructures using multiple CT data sets with different x-ray beam energies. In the DCM approach, each voxel is assumed to contain a mixture of multiple materials, optionally including voids. Our preliminary comparisons using model samples indicate that the DCM-predicted compositional microstructure is consistent with the known original microstructure under low noise conditions. The approach is quite generic and is applicable to predictions of microstructure of various materials. (technical design note)

  1. Fuzzy Constrained Predictive Optimal Control of High Speed Train with Actuator Dynamics

    Directory of Open Access Journals (Sweden)

    Xi Wang

    2016-01-01

    We investigate the problem of fuzzy constrained predictive optimal control of a high speed train considering the effect of actuator dynamics. The dynamics of the high speed train are modeled as a cascade of cars connected by flexible couplers, and the formulation is mathematically transformed into a Takagi-Sugeno (T-S) fuzzy model. The goal of this study is to design a state feedback control law at each decision step to enhance safety, comfort, and energy efficiency of the high speed train subject to safety constraints on the control input. Based on Lyapunov stability theory, the problem of optimizing an upper bound on the cruise control cost function subject to input constraints is reduced to a convex optimization problem involving linear matrix inequalities (LMIs). Furthermore, we analyze the influence of second-order actuator dynamics on the fuzzy constrained predictive controller, which shows a risk of deteriorating the overall system performance. Employing a backstepping method, an actuator compensator is proposed to compensate for the influence of the actuator dynamics. The experimental results show that with the proposed approach the high speed train can track the desired speed, the relative coupler displacement between neighbouring cars is stable at the equilibrium state, and the influence of actuator dynamics is reduced, which demonstrates the validity and effectiveness of the proposed approaches.
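
    The sketch below shows the kind of LMI feasibility problem such convex reformulations reduce to, for a plain linear system rather than the T-S fuzzy train model: find P ≻ 0 and Y with A P + P Aᵀ + B Y + Yᵀ Bᵀ ≺ 0, and recover the stabilizing state feedback K = Y P⁻¹. The matrices are invented for the example.

```python
# LMI-based state-feedback synthesis sketch (standard Lyapunov LMI, not the
# paper's fuzzy cost-bound formulation). Requires an SDP-capable solver.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # hypothetical unstable plant
B = np.array([[0.0], [1.0]])

P = cp.Variable((2, 2), symmetric=True)
Y = cp.Variable((1, 2))
lmi = A @ P + P @ A.T + B @ Y + Y.T @ B.T
constraints = [P >> 0.01 * np.eye(2), lmi << -0.01 * np.eye(2)]
cp.Problem(cp.Minimize(0), constraints).solve()

K = Y.value @ np.linalg.inv(P.value)       # stabilizing state-feedback gain
print(np.linalg.eigvals(A + B @ K))        # closed-loop eigenvalues in the left half-plane
```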

  2. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response...... to the stress field as well as the FE calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number

  3. Toward Cognitively Constrained Models of Language Processing: A Review

    Directory of Open Access Journals (Sweden)

    Margreet Vogelzang

    2017-09-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, such a model generates new predictions that can be tested in experiments, thus producing new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought—Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we

  4. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  5. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combining models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential for predicting future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by NEE or biometric data alone was probably attributable either to the lack of long-term C dynamic data or to measurement errors. Overall, our results suggest that flux- and biometric-based data, reflecting different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also

  6. Constrained Predictive Control and its application to a Coupled-tanks Apparatus

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad; Kouvaritakis, Basil; Cannon, Mark

    2001-01-01

    The focus of this paper is the development of fast constrained predictive control algorithms and their application to experimental equipment. A review of a QP-based algorithm and an alternative using interpolation and LP is presented. Despite its undemanding computational nature, the latter algorithm

  7. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
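
    A toy one-dimensional illustration of the reflection idea (not the paper's inference machinery): Euler-Maruyama steps of a drift-diffusion process, with the path reflected at a "shoreline" at x = 0 so that movement stays in the allowed region. Parameters are invented.

```python
# Reflected SDE sketch: dX = mu dt + sigma dW, reflected at x = 0.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, dt, n_steps = -0.1, 1.0, 0.01, 1000

x = np.empty(n_steps)
x[0] = 1.0
for k in range(1, n_steps):
    step = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x[k] = abs(x[k - 1] + step)        # reflection at the boundary x = 0
print(x.min(), x.mean())               # the path never leaves [0, inf)
```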

  8. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  9. Network-constrained Cournot models of liberalized electricity markets. The devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Department of Applied Economics, Sidgwick Ave., University of Cambridge, CB3 9DE (United Kingdom); Barquin, Julian; Vazquez, Miguel [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, c/Santa Cruz de Marcenado 26-28015 Madrid (Spain); Boots, Maroeska G. [Energy Research Centre of the Netherlands ECN, Badhuisweg 3, 1031 CM Amsterdam (Netherlands); Ehrenmann, Andreas [Judge Institute of Management, University of Cambridge, Trumpington Street, CB2 1AG (United Kingdom); Hobbs, Benjamin F. [Department of Geography and Environmental Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States); Rijkers, Fieke A.M. [Contributed while at ECN, now at Nederlandse Mededingingsautoriteit (NMa), Dte, Postbus 16326, 2500 BH Den Haag (Netherlands)

    2005-05-15

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model.

  10. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    Science.gov (United States)

    Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.

    2017-07-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present

  11. Chance-constrained programming models for capital budgeting with NPV as fuzzy parameters

    Science.gov (United States)

    Huang, Xiaoxia

    2007-01-01

    In an uncertain economic environment, experts' knowledge about the outlays and cash inflows of available projects involves much vagueness rather than randomness. Investment outlays and annual net cash flows of a project are usually predicted by using experts' knowledge. Fuzzy variables can overcome the difficulties in predicting these parameters. In this paper, the capital budgeting problem with fuzzy investment outlays and fuzzy annual net cash flows is studied based on credibility measure. The net present value (NPV) method is employed, and two fuzzy chance-constrained programming models for the capital budgeting problem are provided. A fuzzy simulation-based genetic algorithm is provided for solving the proposed models. Two numerical examples are also presented to illustrate the modelling idea and the effectiveness of the proposed algorithm.

  12. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  13. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  14. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Cambridge Univ., Dept. of Applied Economics, Cambridge (United Kingdom); Barquin, Julian; Vazquez, Miguel [Universidad Pontificia Comillas, Inst. de Investigacion Tecnologica, Madrid (Spain); Boots, Maroeska; Rijkers, Fieke A.M. [Energy Research Centre of the Netherlands ECN, Amsterdam (Netherlands); Ehrenmann, Andreas [Cambridge Univ., Judge Inst. of Management, Cambridge (United Kingdom); Hobbs, Benjamin F. [Johns Hopkins Univ., Dept. of Geography and Environmental Engineering, Baltimore, MD (United States)

    2005-05-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  15. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    International Nuclear Information System (INIS)

    Neuhoff, Karsten; Barquin, Julian; Vazquez, Miguel; Boots, Maroeska; Rijkers, Fieke A.M.; Ehrenmann, Andreas; Hobbs, Benjamin F.

    2005-01-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  16. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit on the strength of the symmetron coupling to gravity (log β² < −0.9) when this is the only free parameter, and unable to constrain the model when the symmetry-breaking scale factor a_SSB is also free to vary.

  17. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as: · the implementation of ...

  18. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    Directory of Open Access Journals (Sweden)

    R. Q. Thomas

    2017-07-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field

  19. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used in two ways: to design the so-called fundamental model of the plant and to capture the uncertainty associated with the plant model. To simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to pose the optimization problem as a constrained quadratic program. Stability of the proposed control system is also investigated by showing that the cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on a pneumatic servomechanism working in different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
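
    A hedged sketch of the instantaneous-linearization idea: linearize a nonlinear one-step predictor around the current operating point by finite differences, then solve the resulting constrained quadratic problem (trivially, by clipping, in this scalar case). The placeholder predictor stands in for the trained neural network; all numbers are invented.

```python
# Instantaneous linearization + constrained one-step optimization, scalar toy case.
import numpy as np

def predictor(y, u):
    """Placeholder one-step-ahead model (a neural network in the paper)."""
    return 0.8 * y + 0.5 * np.tanh(u)

def mpc_step(y, u_prev, r, u_max=2.0, h=1e-4):
    y0 = predictor(y, u_prev)
    b = (predictor(y, u_prev + h) - y0) / h        # local input gain dy/du
    u_unc = u_prev + (r - y0) / b                  # unconstrained minimizer of (y_pred - r)^2
    return np.clip(u_unc, -u_max, u_max)           # the input constraint reduces to clipping in 1-D

print(mpc_step(y=0.0, u_prev=0.0, r=1.0))
```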

  20. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  1. CP properties of symmetry-constrained two-Higgs-doublet models

    CERN Document Server

    Ferreira, P M; Nachtmann, O; Silva, Joao P

    2010-01-01

    The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.

  2. Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks.

    Science.gov (United States)

    Zhu, Junxing; Zhang, Jiawei; Wu, Quanyuan; Jia, Yan; Zhou, Bin; Wei, Xiaokai; Yu, Philip S

    2017-08-03

    Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a = (u, v) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will automatically be negative (i.e., non-existing). Viewed from this perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constrained active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-the-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages.
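
    The toy sketch below illustrates only the one-to-one constraint propagation described above, together with a simple uncertainty-based query rule; the probabilities and the selection rule are illustrative stand-ins, not the paper's information-gain measures.

```python
# Toy active labeling with the one-to-one anchor-link constraint: confirming
# (u, v) as positive automatically makes every other candidate touching u or v negative.
candidates = {("u1", "v1"): 0.9, ("u1", "v2"): 0.55, ("u2", "v1"): 0.48,
              ("u2", "v3"): 0.7}   # hypothetical predicted link probabilities
labels = {}

def query_next():
    # ask about the most uncertain remaining candidate (probability nearest 0.5)
    return min((a for a in candidates if a not in labels),
               key=lambda a: abs(candidates[a] - 0.5))

def label_positive(link):
    labels[link] = 1
    u, v = link
    for other in candidates:
        if other != link and other not in labels and (other[0] == u or other[1] == v):
            labels[other] = 0          # propagated negative label, no query needed

label_positive(("u1", "v1"))
print(query_next(), labels)            # ("u2", "v3") is the only query left to ask
```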

  3. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disks around constant Ricci curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams reunite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitude of the corrections is of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.

  4. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that reproduced the mean of the training data for the conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open

  5. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation, such as models with B−L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  6. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    Science.gov (United States)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach, which is tailored for nonlinear model predictive control of wave energy converters (WEC). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model. This can reduce the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using a pseudospectral method. The alleviation of the computational burden using this approach helps to promote an economical implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.
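
    As a rough feel for the flatness idea (not the paper's formulation), the sketch below parameterizes a toy single-degree-of-freedom converter by a polynomial flat output, recovers the control force algebraically from the model, and maximizes captured energy under an input bound with SciPy. The plant parameters, wave profile, horizon and polynomial order are all assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-DOF wave energy converter: m*x'' + c*x' + k*x = u + f_wave(t)
m, c, k, u_max = 1.0, 0.5, 2.0, 1.5
T_hor, N = 5.0, 30                              # horizon and number of collocation points
t = np.linspace(0.0, T_hor, N)
f_wave = np.sin(1.2 * t)                        # assumed wave excitation profile

def flat_traj(coeffs):
    """Flat output x(t) as a polynomial; velocity and acceleration by differentiation."""
    p = np.polynomial.Polynomial(coeffs)
    return p(t), p.deriv(1)(t), p.deriv(2)(t)

def control(coeffs):
    x, v, a = flat_traj(coeffs)
    return m * a + c * v + k * x - f_wave       # flatness: input recovered algebraically

def neg_energy(coeffs):
    _, v, _ = flat_traj(coeffs)
    return np.trapz(control(coeffs) * v, t)     # minimizing this maximizes absorbed energy

cons = [{"type": "ineq", "fun": lambda cf: u_max - np.abs(control(cf))}]
res = minimize(neg_energy, x0=np.zeros(8), constraints=cons, method="SLSQP")
print("captured energy over the horizon:", -res.fun)
```

    Because states and inputs are recovered from the flat output, only the polynomial coefficients are decision variables, which is the dimension reduction the abstract refers to.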

  7. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  8. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    Science.gov (United States)

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  9. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    for the discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions...... in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation...

  10. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays $\tilde{g} \to q\bar{q}\chi$ and $\tilde{g} \to g\chi$ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus $e^+e^-$ colliders in the search for supersymmetry can be directly compared. (orig.)

  11. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, M. [Lockheed Martin Solar and Astrophysics Lab, Palo Alto, CA 94304 (United States); Manchester, W. B.; Van der Holst, B.; Sokolov, I.; Tóth, G.; Gombosi, T. I. [Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Mullinix, R. E.; Taktakishvili, A.; Chulaki, A., E-mail: jinmeng@lmsal.com, E-mail: chipm@umich.edu, E-mail: richard.e.mullinix@nasa.gov, E-mail: Aleksandre.Taktakishvili-1@nasa.gov [Community Coordinated Modeling Center, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2017-01-10

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  12. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  13. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
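
    The two records above describe maximizing test information at the current ability estimate subject to content constraints. The snippet below is a greedy single-step stand-in under a 2PL item response model, not the shadow-test assembly method of the papers; the item pool and content limits are invented for illustration.

```python
import numpy as np

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item with discrimination a and difficulty b."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_item(theta, items, used, content_count, content_max):
    """Pick the unused item with maximum information at theta while respecting
    per-content-area limits (greedy illustration only)."""
    best, best_info = None, -np.inf
    for idx, (a, b, area) in enumerate(items):
        if idx in used or content_count.get(area, 0) >= content_max[area]:
            continue
        info = fisher_info(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

items = [(1.2, -0.5, "algebra"), (0.8, 0.3, "geometry"), (1.5, 0.1, "algebra")]
content_max = {"algebra": 1, "geometry": 2}
print(select_item(theta=0.0, items=items, used=set(), content_count={}, content_max=content_max))
```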

  14. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    tribpo

    J. Astrophys. Astr. (2000) 21, 299–302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam & V. Krishan, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. E-mail: mangalam@iiap.ernet.in, vinod@iiap.ernet.in. Abstract: We study the relaxation of a compressible plasma to ...

  15. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

    Full Text Available Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added to the model. Computational results show the significance of the approach.
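
    To make the edge-disjoint paths setting concrete, here is a toy randomized-restart heuristic using networkx; it is only a crude stand-in for the constraint-based local search model in the paper, and the grid graph and demand pairs are invented for the example.

```python
import random
import networkx as nx

def route_disjoint(graph, pairs, n_restarts=20, seed=0):
    """Greedily route each pair on the residual graph; restart from random orderings
    and keep the assignment that routes the most pairs edge-disjointly."""
    rng = random.Random(seed)
    best = {}
    order = list(pairs)
    for _ in range(n_restarts):
        rng.shuffle(order)                       # new candidate: change the routing order
        residual = graph.copy()
        routed = {}
        for s, t in order:
            try:
                path = nx.shortest_path(residual, s, t)
            except nx.NetworkXNoPath:
                continue
            routed[(s, t)] = path
            residual.remove_edges_from(zip(path, path[1:]))  # enforce edge-disjointness
        if len(routed) > len(best):
            best = routed
    return best

G = nx.grid_2d_graph(4, 4)
demands = [((0, 0), (3, 3)), ((0, 3), (3, 0)), ((1, 0), (1, 3))]
paths = route_disjoint(G, demands)
print(f"routed {len(paths)} of {len(demands)} pairs edge-disjointly")
```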

  16. On meeting capital requirements with a chance-constrained optimization model.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures in order to provide valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
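
    For a concrete sense of what a "deterministic convex counterpart" of a chance constraint looks like, the generic Gaussian case is shown below; the paper's counterpart is instead derived from its modified CreditMetrics loan model, so this is only a textbook illustration.

```latex
% Generic Gaussian chance constraint and its deterministic convex counterpart.
\[
  \Pr\!\left(\tilde{a}^{\top} x \le b\right) \ge 1-\varepsilon,
  \qquad \tilde{a} \sim \mathcal{N}(\mu, \Sigma)
  \quad\Longleftrightarrow\quad
  \mu^{\top} x + \Phi^{-1}(1-\varepsilon)\,\sqrt{x^{\top} \Sigma\, x} \le b
\]
```

    With $1-\varepsilon = 0.95$, the quantile $\Phi^{-1}(0.95) \approx 1.645$, which is how a 95% reliability level enters the deterministic problem.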

  17. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)

    2012-10-15

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce the vibration in rotating structures. Unfortunately, most existing research models the rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  18. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    International Nuclear Information System (INIS)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian

    2012-01-01

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce the vibration in rotating structures. Unfortunately, most existing research models the rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  19. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product...

  20. Robust self-triggered model predictive control for constrained discrete-time LTI systems based on homothetic tubes

    NARCIS (Netherlands)

    Aydiner, E.; Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2015-01-01

    In this paper we present a robust self-triggered model predictive control (MPC) scheme for discrete-time linear time-invariant systems subject to input and state constraints and additive disturbances. In self-triggered model predictive control, at every sampling instant an optimization problem based

  1. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; however, an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  2. Using a data-constrained model of home range establishment to predict abundance in spatially heterogeneous habitats.

    Directory of Open Access Journals (Sweden)

    Mark C Vanderwel

    Full Text Available Mechanistic modelling approaches that explicitly translate from individual-scale resource selection to the distribution and abundance of a larger population may be better suited to predicting responses to spatially heterogeneous habitat alteration than commonly-used regression models. We developed an individual-based model of home range establishment that, given a mapped distribution of local habitat values, estimates species abundance by simulating the number and position of viable home ranges that can be maintained across a spatially heterogeneous area. We estimated parameters for this model from data on red-backed vole (Myodes gapperi) abundances in 31 boreal forest sites in Ontario, Canada. The home range model had considerably more support from these data than both non-spatial regression models based on the same original habitat variables and a mean-abundance null model. It had nearly equivalent support to a non-spatial regression model that, like the home range model, scaled an aggregate measure of habitat value from local associations with habitat resources. The home range and habitat-value regression models gave similar predictions for vole abundance under simulations of light- and moderate-intensity partial forest harvesting, but the home range model predicted lower abundances than the regression model under high-intensity disturbance. Empirical regression-based approaches for predicting species abundance may overlook processes that affect habitat use by individuals, and often extrapolate poorly to novel habitat conditions. Mechanistic home range models that can be parameterized against abundance data from different habitats permit appropriate scaling from individual- to population-level habitat relationships, and can potentially provide better insights into responses to disturbance.

  3. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Full Text Available Constrained Model-based Reinforcement Learning. Benjamin van Niekerk (School of Computer Science, University of the Witwatersrand, South Africa); Andreas Damianou (Amazon.com, Cambridge, UK); Benjamin Rosman (Council for Scientific and Industrial Research, and School...). MULTIPLE SHOOTING: Using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured nonlinear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...
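
    The structured NLP produced by direct multiple shooting, which the excerpt above begins to describe before it is cut off, can be written generically as follows (a standard textbook form, not taken from the paper). Here $F(s_k, q_k)$ denotes numerical integration of the dynamics over the subinterval $[t_k, t_{k+1}]$, and $\ell$, $E$ and $h$ are the stage cost, terminal cost and path constraints.

```latex
% Direct multiple shooting (Bock and Plitt, 1984): states s_k and controls q_k on each
% shooting interval become decision variables; trajectory continuity is an equality constraint.
\begin{align*}
  \min_{s_0,\dots,s_N,\; q_0,\dots,q_{N-1}} \quad
    & \sum_{k=0}^{N-1} \ell(s_k, q_k) + E(s_N) \\
  \text{s.t.} \quad
    & s_{k+1} = F(s_k, q_k), \qquad k = 0,\dots,N-1, \\
    & h(s_k, q_k) \le 0, \qquad\quad\; k = 0,\dots,N-1, \\
    & s_0 = x(t_0).
\end{align*}
```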

  4. Modeling constrained sintering of bi-layered tubular structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei

    2015-01-01

    Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe the densi...... and thermo-mechanical analysis. Results from the analytical model are found to agree well with finite element simulations as well as measurements from sintering experiment....

  5. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

    The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model denoted ShiftCP has proven useful for the modeling of latency changes in trial based neuroimaging data[17]. In order to facilitate component interpretation we presently extend the shiftCP model such that the extracted...... components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the shiftCP model...

  6. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  7. Modeling and Simulation of the Gonghe geothermal field (Qinghai, China) Constrained by Geophysical

    Science.gov (United States)

    Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.

    2017-12-01

    The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types. The Gonghe geothermal field is now a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth to and the nature of the heat source, and to image the channel of heat flow. This work focuses on the causes of the geothermal field, using a numerical simulation method constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) measurement profile across this area, we obtain the deep resistivity distribution. Using gravity anomaly inversion constrained by the resistivity profile, the density of the basins and the underlying rocks can be calculated. Combined with the measured parameters of rock thermal conductivity, a 2D geothermal conceptual model of the Gonghe area is constructed. Then, the unstructured finite element method is used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated with temperature data from the observation well. A good match was achieved between the measured values and the model's predicted values. Finally, the geothermal gradient and heat flow distribution of this model are calculated (Fig. 1). According to the results of the geophysical exploration, there is a low-resistivity, low-density region (d5) below the geothermal field. We recognize that this anomaly is generated by tectonic motion, and that this tectonic movement creates an upstream channel for mantle-derived heat, so that the basement heat flow values there are anomalously higher than in other regions. The model's predicted values simulated using that boundary condition match the measured values well. The simulated heat flow values show that the mantle-derived heat flow migrates through the boundary of the low-resistivity, low-density anomaly area to the Gonghe geothermal field, with only a small fraction

  8. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard model (CMDGSSM) where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  9. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    International Nuclear Information System (INIS)

    Chen Ximing; Wang Bin; Pan Nana; Gong Yungui

    2011-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.

  10. Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

    Science.gov (United States)

    O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao

    2017-07-01

    Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
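
    As a toy illustration of the MaxEnt step described above (choose the least-informative distribution consistent with a handful of state variables), the sketch below maximizes Shannon entropy over a discrete abundance distribution subject to a mean-abundance constraint using SciPy. The abundance range and target mean are arbitrary; in the paper the constraints are instead the sufficient statistics of the chosen mechanistic model.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_distribution(values, moment_constraints):
    """Maximum-entropy distribution over discrete values subject to moment
    constraints of the form E[f(values)] = target."""
    n = len(values)
    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)
        return np.sum(p * np.log(p))
    cons = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
    for f, target in moment_constraints:
        cons.append({"type": "eq",
                     "fun": lambda p, f=f, t=target: np.dot(p, f(values)) - t})
    res = minimize(neg_entropy, x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP")
    return res.x

abundances = np.arange(1, 51)
# constrain the mean abundance (a hypothetical MaxEnt state variable) to 10
p = maxent_distribution(abundances, [(lambda n: n, 10.0)])
print("P(n = 1..5):", np.round(p[:5], 4))
```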

  11. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal- dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian, and alternating methods as special cases, where our rates apply.

  12. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    Science.gov (United States)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still pointed out as being the largest source of uncertainty in the last generation of climate models. To date there is no consensus among the models on whether the tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models either predict deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SST. We use a simulator approach to compare observations and simulations and focus on the low-layered clouds (i.e. z fraction. Vertically, the clouds deepen namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary-layer. This feature is coincident with an increase of the high-level cloud fraction (z > 6.5km). Although the models' spread is large, the multi-model mean captures the observed variations but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand; and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other hand. Finally, we use CALIPSO observations to constrain the model by determining i) what set of parameters allows reproducing the observed relationships and ii) what are the consequences on the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental

  13. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    Science.gov (United States)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2016-02-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation.Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions

  14. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.

  15. Testing a Constrained MPC Controller in a Process Control Laboratory

    Science.gov (United States)

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by the fourth year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time.…

  16. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
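
    A crude two-stage illustration of the "ordinal fit first, least squares second" idea in the abstract is sketched below with a random search; the published OCLO algorithm is a proper optimization routine built on the maximum rank correlation estimator, so this is only a conceptual stand-in with invented data.

```python
import numpy as np
from scipy.stats import kendalltau

def oclo_like_fit(X, y, n_candidates=5000, seed=0):
    """Maximize Kendall's tau between Xw and y first; among (near-)ties, prefer the
    candidate with the smaller least-squares error after an affine rescaling."""
    rng = np.random.default_rng(seed)
    best_w, best_tau, best_sse = None, -np.inf, np.inf
    ones = np.ones(len(y))
    for _ in range(n_candidates):
        w = rng.standard_normal(X.shape[1])
        pred = X @ w
        tau, _ = kendalltau(pred, y)
        design = np.c_[pred, ones]                    # affine map from prediction to data scale
        beta = np.linalg.lstsq(design, y, rcond=None)[0]
        sse = np.sum((design @ beta - y) ** 2)
        if tau > best_tau or (np.isclose(tau, best_tau) and sse < best_sse):
            best_w, best_tau, best_sse = w, tau, sse
    return best_w, best_tau, best_sse

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, 0.5, -0.2]) + 0.1 * rng.standard_normal(100)
w, tau, sse = oclo_like_fit(X, y)
print("Kendall tau:", round(tau, 3), " SSE:", round(sse, 2))
```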

  17. Modeling Oil Exploration and Production: Resource-Constrained and Agent-Based Approaches

    International Nuclear Information System (INIS)

    Jakobsson, Kristofer

    2010-05-01

    Energy is essential to the functioning of society, and oil is the single largest commercial energy source. Some analysts have concluded that the peak in oil production is soon about to happen on the global scale, while others disagree. Such incompatible views can persist because the issue of 'peak oil' cuts through the established scientific disciplines. The question is: what characterizes the modeling approaches that are available today, and how can they be further developed to improve a trans-disciplinary understanding of oil depletion? The objective of this thesis is to present long-term scenarios of oil production (Paper I) using a resource-constrained model; and an agent-based model of the oil exploration process (Paper II). It is also an objective to assess the strengths, limitations, and future development potentials of resource-constrained modeling, analytical economic modeling, and agent-based modeling. Resource-constrained models are only suitable when the time frame is measured in decades, but they can give a rough indication of which production scenarios are reasonable given the size of the resource. However, the models are comprehensible, transparent and the only feasible long-term forecasting tools at present. It is certainly possible to distinguish between reasonable scenarios, based on historically observed parameter values, and unreasonable scenarios with parameter values obtained through flawed analogy. The economic subfield of optimal depletion theory is founded on the notion of rational economic agents, and there is a causal relation between decisions made at the micro-level and the macro-result. In terms of future improvements, however, the analytical form considerably restricts the versatility of the approach. Agent-based modeling makes it feasible to combine economically motivated agents with a physical environment. An example relating to oil exploration is given in Paper II, where it is shown that the exploratory activities of individual

  18. An inexact fuzzy-chance-constrained air quality management model.

    Science.gov (United States)

    Xu, Ye; Huang, Guohe; Qin, Xiaosheng

    2010-07-01

    Regional air pollution is a major concern for almost every country because it not only directly relates to economic development, but also poses significant threats to environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated through integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and could deal with uncertainties expressed as not only possibilistic distributions but also discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables could be satisfied at different confidence levels such that various solutions with different risk and cost considerations could be obtained. The developed model was applied to a hypothetical case of regional air quality management. Six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty were taken into consideration. The results demonstrated that IFAMM could help decision-makers generate cost-effective air quality management patterns, gain in-depth insights into effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme could achieve lower total abatement cost than a nontrading one.

  19. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    Science.gov (United States)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely, multi-scenario, tree-based, and chance-constrained model predictive control is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The real experimental results show significant differences from the plant components, mainly in terms of use of energy, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria when selecting an appropriate stochastic predictive controller.

  20. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that the reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Monte Carlo Markov chain approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the Mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the Mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how the future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.

  1. Evaluation of HOx sources and cycling using measurement-constrained model calculations in a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated ecosystem

    Directory of Open Access Journals (Sweden)

    S. B. Henry

    2013-02-01

    Full Text Available We present a detailed analysis of OH observations from the BEACHON (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen)-ROCS (Rocky Mountain Organic Carbon Study) 2010 field campaign at the Manitou Forest Observatory (MFO), which is a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated forest environment. A comprehensive suite of measurements was used to constrain primary production of OH via ozone photolysis, OH recycling from HO2, and OH chemical loss rates, in order to estimate the steady-state concentration of OH. In addition, the University of Washington Chemical Model (UWCM) was used to evaluate the performance of a near-explicit chemical mechanism. The diurnal cycle in OH from the steady-state calculations is in good agreement with measurement. A comparison between the photolytic production rates and the recycling rates from the HO2 + NO reaction shows that recycling rates are ~20 times faster than the photolytic OH production rates from ozone. Thus, we find that direct measurement of the recycling rates and the OH loss rates can provide accurate predictions of OH concentrations. More importantly, we also conclude that a conventional OH recycling pathway (HO2 + NO) can explain the observed OH levels in this non-isoprene environment. This is in contrast to observations in isoprene-dominated regions, where investigators have observed significant underestimation of OH and have speculated that unknown sources of OH are responsible. The highly-constrained UWCM calculation under-predicts observed HO2 by as much as a factor of 8. As HO2 maintains oxidation capacity by recycling to OH, UWCM underestimates observed OH by as much as a factor of 4. When the UWCM calculation is constrained by measured HO2, model calculated OH is in better agreement with the observed OH levels. Conversely, constraining the model to observed OH only slightly reduces the model-measurement HO2 discrepancy, implying unknown HO2

  2. Model Predictive Control of Mineral Column Flotation Process

    Directory of Open Access Journals (Sweden)

    Yahui Tian

    2018-06-01

    Full Text Available Column flotation is an efficient method commonly used in the mineral industry to separate useful minerals from ores of low grade and complex mineral composition. Its main purpose is to achieve maximum recovery while ensuring desired product grade. This work addresses a model predictive control design for a mineral column flotation process modeled by a set of nonlinear coupled heterodirectional hyperbolic partial differential equations (PDEs) and ordinary differential equations (ODEs), which accounts for the interconnection of well-stirred regions represented by continuous stirred tank reactors (CSTRs) and transport systems given by heterodirectional hyperbolic PDEs, with these two regions combined through the PDEs’ boundaries. The model predictive control considers both optimality of the process operations and naturally present input and state/output constraints. For the discrete controller design, spatially varying steady-state profiles are obtained by linearizing the coupled ODE–PDE model, and then the discrete system is obtained by using the Cayley–Tustin time discretization transformation without any spatial discretization and/or without model reduction. The model predictive controller is designed by solving an optimization problem with input and state/output constraints as well as input disturbance to minimize the objective function, which leads to an online-solvable finite constrained quadratic regulator problem. Finally, the controller performance to keep the output at the steady state within the constraint range is demonstrated by simulation studies, and it is concluded that the optimal control scheme presented in this work makes this flotation process more efficient.
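
    The end product described above, an online-solvable finite-horizon constrained quadratic regulator, has the same shape as a generic constrained linear MPC problem. The sketch below solves such a problem with SciPy for an invented two-state discrete-time plant with an input bound and a state constraint; it is a generic illustration, not the flotation model or the Cayley-Tustin discretization of the paper.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 0.95]])        # assumed discrete-time plant
B = np.array([[0.0], [0.1]])
Q, R = np.diag([1.0, 0.1]), 0.01               # state and input weights
N, u_lim, x1_lim = 20, 2.0, 1.5                # horizon, input bound, state bound

def rollout(u_seq, x0):
    xs, x = [x0], x0
    for u in u_seq:
        x = A @ x + B @ np.atleast_1d(u)
        xs.append(x)
    return np.array(xs)

def cost(u_seq, x0):
    xs = rollout(u_seq, x0)
    return sum(x @ Q @ x for x in xs) + R * np.sum(np.square(u_seq))

def mpc_step(x0):
    cons = [{"type": "ineq", "fun": lambda u: x1_lim - np.abs(rollout(u, x0)[:, 0])}]
    res = minimize(cost, np.zeros(N), args=(x0,), method="SLSQP",
                   bounds=[(-u_lim, u_lim)] * N, constraints=cons)
    return res.x[0]                              # receding horizon: apply the first move only

x = np.array([1.2, 0.0])
for _ in range(5):
    u = mpc_step(x)
    x = A @ x + B @ np.array([u])
    print(f"u = {u:+.3f}, x = {np.round(x, 3)}")
```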

  3. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and to decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information - beyond locally observed discharge - can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved to be useful to

  4. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    Full Text Available An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  5. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model – Part 1: Assessing the influence of constrained multi-generational ageing

    Directory of Open Access Journals (Sweden)

    S. H. Jathar

    2016-02-01

    Full Text Available Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under

  6. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  7. Resource Management in Constrained Dynamic Situations

    Science.gov (United States)

    Seok, Jinwoo

    Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Thus, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments

  8. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    Science.gov (United States)

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization

    Directory of Open Access Journals (Sweden)

    Xiaobing Kong

    2013-01-01

    Full Text Available Obtaining a reliable optimal solution is a key issue for nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. When an input-output feedback linearizing controller is used, the original linear input constraints become nonlinear constraints that are sometimes state dependent. This paper presents an iterative quadratic program (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and the continuous stirred tank reactor (CSTR) demonstrate the effectiveness of the proposed method.

  10. Improved Modeling Approaches for Constrained Sintering of Bi-Layered Porous Structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Esposito, Vincenzo

    2012-01-01

    Shape instabilities during constrained sintering experiments of bi-layer porous and dense cerium gadolinium oxide (CGO) structures have been analyzed. An analytical and a numerical model based on the continuum theory of sintering have been implemented to describe the evolution of bow and densificat...

  11. Performance Prediction of Constrained Waveform Design for Adaptive Radar

    Science.gov (United States)

    2016-11-01

    the famous Woodward quote, having a ubiquitous feeling for all radar waveform design (and performance prediction) researchers, that is found at the end... discuss research that develops performance prediction models to quantify the impact on SINR when an amplitude constraint is placed on a radar waveform... optimize the radar performance for the particular scenario and tasks. There have also been several survey papers on various topics in waveform design for

  12. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.

  13. Exploration of the dynamic properties of protein complexes predicted from spatially constrained protein-protein interaction networks.

    Directory of Open Access Journals (Sweden)

    Eric A Yen

    2014-05-01

    Full Text Available Protein complexes are not static, but rather highly dynamic with subunits that undergo 1-dimensional diffusion with respect to each other. Interactions within protein complexes are modulated through regulatory inputs that alter interactions and introduce new components and deplete existing components through exchange. While it is clear that the structure and function of any given protein complex is coupled to its dynamical properties, it remains a challenge to predict the possible conformations that complexes can adopt. Protein-fragment Complementation Assays detect physical interactions between protein pairs constrained to ≤8 nm from each other in living cells. This method has been used to build networks composed of 1000s of pair-wise interactions. Significantly, these networks contain a wealth of dynamic information, as the assay is fully reversible and the proteins are expressed in their natural context. In this study, we describe a method that extracts this valuable information in the form of predicted conformations, allowing the user to explore the conformational landscape, to search for structures that correlate with an activity state, and to estimate the abundance of conformations in the living cell. The generator is based on a Markov Chain Monte Carlo simulation that uses the interaction dataset as input and is constrained by the physical resolution of the assay. We applied this method to an 18-member protein complex composed of the seven core proteins of the budding yeast Arp2/3 complex and 11 associated regulators and effector proteins. We generated 20,480 output structures and identified conformational states using principal component analysis. We interrogated the conformation landscape and found evidence of symmetry breaking, a mixture of likely active and inactive conformational states and dynamic exchange of the core protein Arc15 between core and regulatory components. Our method provides a novel tool for prediction and

  14. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    Science.gov (United States)

    SUN, D.; TONG, L.

    2002-05-01

    A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfect bonding region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonding area, it is assumed that no peel and shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, the positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviors of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the high-order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently results in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors for the hybrid control. The numerical results unveil that edge debonding can lead to a reduction of both passive and active damping, and the hybrid damping may be more sensitive to the debonding of the damping layer than the passive damping.

  15. Dissecting galaxy formation models with sensitivity analysis—a new approach to constrain the Milky Way formation history

    International Nuclear Information System (INIS)

    Gómez, Facundo A.; O'Shea, Brian W.; Coleman-Smith, Christopher E.; Tumlinson, Jason; Wolpert, Robert L.

    2014-01-01

    We present an application of a statistical tool known as sensitivity analysis to characterize the relationship between input parameters and observational predictions of semi-analytic models of galaxy formation coupled to cosmological N-body simulations. We show how a sensitivity analysis can be performed on our chemo-dynamical model, ChemTreeN, to characterize and quantify its relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. It can also be used to simplify models by identifying input parameters that have no effect on the outputs (i.e., observational predictions) of interest. Conversely, sensitivity analysis allows us to identify what model parameters can be most efficiently constrained by the given observational data set. We have applied this technique to real observational data sets associated with the Milky Way, such as the luminosity function of the dwarf satellites. The results from the sensitivity analysis are used to train specific model emulators of ChemTreeN, only involving the most relevant input parameters. This allowed us to efficiently explore the input parameter space. A statistical comparison of model outputs and real observables is used to obtain a 'best-fitting' parameter set. We consider different Milky-Way-like dark matter halos to account for the dependence of the best-fitting parameter selection process on the underlying merger history of the models. For all formation histories considered, running ChemTreeN with best-fitting parameters produced luminosity functions that tightly fit their observed counterpart. However, only one of the resulting stellar halo models was able to reproduce the observed stellar halo mass within 40 kpc of the Galactic center. On the basis of this analysis, it is possible to disregard certain models, and their

  16. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra

    2012-01-01

    Nonlinear Model Predictive Control (NMPC) has become the accepted methodology to solve complex control problems related to process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need for executing a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include also verifiability of the implementation and the possibility to design embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...

  17. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy and the dispersive long wave (DLW) hierarchy as well as the TB hierarchy are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. Especially, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and t_n-constrained flows whose adjoint representations and the Lax pairs are given. (general)

  18. Predicting Land-Ice Retreat and Sea-Level Rise with the Community Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Lipscomb, William [Los Alamos National Laboratory

    2012-06-19

    Coastal stakeholders need defensible predictions of 21st century sea-level rise (SLR). IPCC assessments suggest 21st century SLR of ~0.5 m under aggressive emission scenarios. Semi-empirical models project SLR of ~1 m or more by 2100. Although some sea-level contributions are fairly well constrained by models, others are highly uncertain. Recent studies suggest a potential large contribution (~0.5 m/century) from the marine-based West Antarctic Ice Sheet, linked to changes in Southern Ocean wind stress. To assess the likelihood of fast retreat of marine ice sheets, we need coupled ice-sheet/ocean models that do not yet exist (but are well under way). CESM is uniquely positioned to provide integrated, physics-based sea-level predictions.

  19. Adaptive Model Predictive Vibration Control of a Cantilever Beam with Real-Time Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Gergely Takács

    2014-01-01

    Full Text Available This paper presents an adaptive-predictive vibration control system using extended Kalman filtering for the joint estimation of system states and model parameters. A fixed-free cantilever beam equipped with piezoceramic actuators serves as a test platform to validate the proposed control strategy. Deflection readings taken at the end of the beam have been used to reconstruct the position and velocity information for a second-order state-space model. In addition to the states, the dynamic system has been augmented by the unknown model parameters: stiffness, damping constant, and a voltage/force conversion constant, characterizing the actuating effect of the piezoceramic transducers. The states and parameters of this augmented system have been estimated in real time, using the hybrid extended Kalman filter. The estimated model parameters have been applied to define the continuous state-space model of the vibrating system, which in turn is discretized for the predictive controller. The model predictive control algorithm generates state predictions and dual-mode quadratic cost prediction matrices based on the updated discrete state-space models. The resulting cost function is then minimized using quadratic programming to find the sequence of optimal but constrained control inputs. The proposed active vibration control system is implemented and evaluated experimentally to investigate the viability of the control method.
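
    As a rough illustration of the receding-horizon idea described in this record, the sketch below solves a small constrained finite-horizon problem and applies only the first input at each step. It is not the paper's EKF-based adaptive controller: the model matrices, weights, horizon, and input bound are hypothetical, and SciPy's general-purpose SLSQP solver stands in for the dual-mode quadratic-programming formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical second-order discrete-time model and tuning (illustrative only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state matrix
B = np.array([[0.005], [0.1]])           # input matrix
Q = np.diag([10.0, 1.0])                 # state weighting
R = 0.1                                  # input weighting
N = 20                                   # prediction horizon
u_max = 2.0                              # input constraint |u| <= u_max

def horizon_cost(u_seq, x0):
    """Quadratic cost accumulated by simulating the nominal model."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        cost += x @ Q @ x + R * u * u
        x = A @ x + B[:, 0] * u
    return cost + x @ Q @ x              # simple terminal penalty

def mpc_step(x0):
    """Solve the constrained finite-horizon problem; return the first input."""
    res = minimize(horizon_cost, np.zeros(N), args=(x0,),
                   method="SLSQP", bounds=[(-u_max, u_max)] * N)
    return res.x[0]

x = np.array([1.0, 0.0])
for _ in range(5):                       # receding-horizon loop
    u = mpc_step(x)
    x = A @ x + B[:, 0] * u              # apply only the first input
print("state after 5 steps:", np.round(x, 4))
```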

  20. Criticisms and defences of the balance-of-payments constrained growth model: some old, some new

    Directory of Open Access Journals (Sweden)

    John S.L. McCombie

    2011-12-01

    Full Text Available This paper assesses various critiques that have been levelled over the years against Thirlwall’s Law and the balance-of-payments constrained growth model. It starts by assessing the criticisms that the law is largely capturing an identity; that the law of one price renders the model incoherent; and that statistical testing using cross-country data rejects the hypothesis that the actual and the balance-of-payments equilibrium growth rates are the same. It goes on to consider the argument that calculations of the “constant-market-shares” income elasticities of demand for exports demonstrate that the UK (and by implication other advanced countries) could not have been balance-of-payments constrained in the early postwar period. Next Krugman’s interpretation of the law (or what he terms the “45-degree rule”), which is at variance with the usual demand-oriented explanation, is examined. The paper next assesses attempts to reconcile the demand and supply side of the model and examines whether or not the balance-of-payments constrained growth model is subject to the fallacy of composition. It concludes that none of these criticisms invalidate the model, which remains a powerful explanation of why growth rates differ.

  1. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    Science.gov (United States)

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance respectively. Considering the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Markov chain Monte Carlo approach based on Metropolis-Hastings (M-H) sampling is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows a good forecasting performance for the calculation of route choice probabilities and a good application performance for transfer flow volume prediction. PMID:28591188
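
    For readers unfamiliar with the sampling step mentioned above, the following is a bare-bones Metropolis-Hastings sketch; the target is a toy one-dimensional posterior, not the hierarchical Bayes formulation of the CMNP model, and all numbers are placeholders.

```python
import numpy as np

def log_target(theta):
    """Unnormalised log-density of a toy Gaussian posterior (illustrative)."""
    return -0.5 * ((theta - 1.5) / 0.3) ** 2

def metropolis_hastings(n_samples=10_000, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    theta, samples = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        proposal = theta + step * rng.normal()   # symmetric random-walk proposal
        if np.log(rng.random()) < log_target(proposal) - log_target(theta):
            theta = proposal                     # accept
        samples[i] = theta                       # otherwise keep the current value
    return samples

draws = metropolis_hastings()
print("posterior mean (after burn-in):", round(draws[2000:].mean(), 3))
```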

  2. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address the issues in the area of design customization, this paper presents the specification and application of constrained surface deformation, and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design by measuring the difference level between the deformed new design and the initial sample model, and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: a shape-histogram-based method, a skeleton-based method, and a U-system-moment-based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape-histogram-based method performed best in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. The limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  3. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    Science.gov (United States)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha⁻¹ a⁻¹, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha⁻¹ a⁻¹ for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
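
    The Latin-hypercube screening of parameter sets against an RMSE criterion can be sketched as follows. This is only an illustration of the behavioural-run selection idea: the "model", parameter bounds, and observations are toy placeholders, not LandscapeDNDC or the field data.

```python
import numpy as np
from scipy.stats import qmc

def toy_model(params, t):
    """Stand-in for a simulated flux time series (hypothetical 3-parameter model)."""
    a, b, c = params
    return a * np.exp(-b * t) + c

t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(1)
observed = toy_model((2.0, 0.3, 0.5), t) + rng.normal(0.0, 0.05, t.size)

# Latin-hypercube sample of the parameter space, rescaled to assumed bounds.
sampler = qmc.LatinHypercube(d=3, seed=1)
params = qmc.scale(sampler.random(n=2000),
                   l_bounds=[0.5, 0.05, 0.0], u_bounds=[5.0, 1.0, 2.0])

# RMSE against the observations as the objective function.
rmse = np.array([np.sqrt(np.mean((toy_model(p, t) - observed) ** 2))
                 for p in params])
behavioural = params[rmse < np.quantile(rmse, 0.05)]  # retain the best 5%
print(behavioural.shape[0], "behavioural parameter sets retained")
```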

  4. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  5. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues, or imperfections in the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positive restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
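
    A minimal sketch of the effect of imposing non-negativity in a linear model-based inversion is shown below. The forward matrix is a random placeholder rather than an optoacoustic model, and SciPy's NNLS solver stands in for the constrained algorithms analysed in the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.random((m, n))                        # hypothetical forward model
x_true = np.abs(rng.normal(size=n))           # absorption is physically >= 0
b = A @ x_true + 0.01 * rng.normal(size=m)    # noisy measurements

x_free, *_ = np.linalg.lstsq(A, b, rcond=None)   # unconstrained: may go negative
x_nonneg, _ = nnls(A, b)                         # non-negativity enforced

print("negative entries, unconstrained:", int((x_free < 0).sum()))
print("negative entries, constrained:  ", int((x_nonneg < 0).sum()))
```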

  6. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry for principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available till date, presents an incredible opportunity to learn basic governing principles that drive structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated to sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised for the number of feed forward motifs, neuronal clustering and characteristic path-length in response to synaptic rewiring, indicating optimal rewiring. Using empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains small world nature, saturation of feed forward motifs as well as observed number of driver neurons. The distance constrained model suggests optimum long distance synaptic connections as a key feature specifying control of the network.
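
    A toy sketch of distance-constrained rewiring on a one-dimensional ring, loosely following the idea in this record, is given below: edges may only be rewired to targets within a maximum ring distance, after which small-world summary statistics are computed. The parameters are illustrative and not fitted to the C. elegans connectome.

```python
import random
import networkx as nx

def distance_constrained_rewire(n=100, k=4, p=0.1, d_max=10, seed=0):
    """Rewire a ring lattice, allowing only targets within d_max ring positions."""
    rng = random.Random(seed)
    G = nx.watts_strogatz_graph(n, k, 0.0, seed=seed)   # p=0 gives a plain ring lattice
    for u, v in list(G.edges()):
        if rng.random() < p:
            candidates = [w for w in range(n)
                          if w != u and not G.has_edge(u, w)
                          and min(abs(u - w), n - abs(u - w)) <= d_max]
            if candidates:
                G.remove_edge(u, v)
                G.add_edge(u, rng.choice(candidates))
    return G

G = distance_constrained_rewire()
print("clustering coefficient:", round(nx.average_clustering(G), 3))
if nx.is_connected(G):
    print("characteristic path length:",
          round(nx.average_shortest_path_length(G), 3))
```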

  7. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    Science.gov (United States)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

    A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements with only one measurement technique or phase type. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely

  8. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    Science.gov (United States)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  9. Mechanistic variables can enhance predictive models of endotherm distributions: The American pika under current, past, and future climates

    Science.gov (United States)

    Mathewson, Paul; Moyer-Horner, Lucas; Beever, Erik; Briscoe, Natalie; Kearney, Michael T.; Yahn, Jeremiah; Porter, Warren P.

    2017-01-01

    How climate constrains species’ distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8–19% less habitat loss in response to annual temperature increases of ~3–5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect

  10. Mechanistic variables can enhance predictive models of endotherm distributions: the American pika under current, past, and future climates.

    Science.gov (United States)

    Mathewson, Paul D; Moyer-Horner, Lucas; Beever, Erik A; Briscoe, Natalie J; Kearney, Michael; Yahn, Jeremiah M; Porter, Warren P

    2017-03-01

    How climate constrains species' distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8-19% less habitat loss in response to annual temperature increases of ~3-5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect

  11. Nonfragile Robust Model Predictive Control for Uncertain Constrained Systems with Time-Delay Compensation

    Directory of Open Access Journals (Sweden)

    Wei Jiang

    2016-01-01

    Full Text Available This study investigates the problem of asymptotic stabilization for a class of discrete-time linear uncertain time-delayed systems with input constraints. Parametric uncertainty is assumed to be structured, and delay is assumed to be known. In Lyapunov stability theory framework, two synthesis schemes of designing nonfragile robust model predictive control (RMPC) with time-delay compensation are put forward, where the additive and the multiplicative gain perturbations are, respectively, considered. First, by designing appropriate Lyapunov-Krasovskii (L-K) functions, the robust performance index is defined as optimization problems that minimize upper bounds of infinite horizon cost function. Then, to guarantee closed-loop stability, the sufficient conditions for the existence of desired nonfragile RMPC are obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approaches.
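
    As a much-simplified illustration of the kind of LMI feasibility test underlying such syntheses (not the paper's actual conditions), the sketch below searches for a quadratic Lyapunov certificate P > 0 with A'PA - P < 0 for a placeholder discrete-time system. cvxpy with an SDP-capable solver (for example the bundled SCS) is assumed to be available.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.2], [0.0, 0.8]])            # hypothetical stable system
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),               # P positive definite
               A.T @ P @ A - P << -eps * np.eye(2)]  # Lyapunov decrease condition
problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve()
print("LMI feasible:", problem.status == cp.OPTIMAL)
```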

  12. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernova of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernova (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  13. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    Science.gov (United States)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
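
    A small numerical sketch of the LSQI problem class referenced above is shown below: minimise ||Ax - b||² subject to x'Wx ≤ γ. The matrices are random placeholders rather than flight-dynamics mode shapes, and a general-purpose SLSQP call stands in for the specialised solution method described in the record.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 8))
b = rng.normal(size=40)
W = np.eye(8)                      # energy-like weighting (assumed identity here)
gamma = 1.0                        # bound on the quadratic form

objective = lambda x: float(np.sum((A @ x - b) ** 2))
quad_bound = {"type": "ineq", "fun": lambda x: gamma - x @ W @ x}  # must be >= 0

result = minimize(objective, np.zeros(8), method="SLSQP",
                  constraints=[quad_bound])
print("quadratic constraint satisfied:",
      result.x @ W @ result.x <= gamma + 1e-8)
```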

  14. Linear Model-Based Predictive Control of the LHC 1.8 K Cryogenic Loop

    CERN Document Server

    Blanco-Viñuela, E; De Prada-Moraga, C

    1999-01-01

    The LHC accelerator will employ 1800 superconducting magnets (for guidance and focusing of the particle beams) in a pressurized superfluid helium bath at 1.9 K. This temperature is a severely constrained control parameter in order to avoid the transition from the superconducting to the normal state. Cryogenic processes are difficult to regulate due to their highly non-linear physical parameters (heat capacity, thermal conductance, etc.) and undesirable peculiarities like non self-regulating process, inverse response and variable dead time. To reduce the requirements on either temperature sensor or cryogenic system performance, various control strategies have been investigated on a reduced-scale LHC prototype built at CERN (String Test). Model Based Predictive Control (MBPC) is a regulation algorithm based on the explicit use of a process model to forecast the plant output over a certain prediction horizon. This predicted controlled variable is used in an on-line optimization procedure that minimizes an approp...

  15. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J

    2018-01-01

    PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES....... RESULTS: When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels...

  16. Multi-example feature-constrained back-projection method for image super-resolution

    Institute of Scientific and Technical Information of China (English)

    Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li

    2017-01-01

    Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, leading to a regression model between similar patches to be learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image. Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
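
    The classical iterative back-projection step used as the final stage of such methods can be sketched as follows. The assumed degradation model (Gaussian blur followed by decimation) and all parameters are illustrative, not the paper's feature-constrained pipeline.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def iterative_back_projection(lr, scale=2, iters=10, blur=1.0):
    """Refine an up-sampled image so that its simulated LR version matches the input."""
    hr = zoom(lr, scale, order=3)                          # initial up-sampling
    for _ in range(iters):
        simulated_lr = zoom(gaussian_filter(hr, blur), 1.0 / scale, order=3)
        residual = lr - simulated_lr                       # error in LR space
        hr += zoom(residual, scale, order=3)               # back-project the error
    return hr

lr_image = np.random.default_rng(0).random((32, 32))      # placeholder LR image
hr_image = iterative_back_projection(lr_image)
print("reconstructed size:", hr_image.shape)
```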

  17. A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System

    Directory of Open Access Journals (Sweden)

    Yanzhe Hu

    2017-03-01

    Full Text Available As a type of renewable energy, wind energy is integrated into the power system with more and more penetration levels. It is challenging for the power system operators (PSOs) to cope with the uncertainty and variation of the wind power and its forecasts. A chance-constrained economic dispatch (ED) model for the wind-thermal-energy storage system (WTESS) is developed in this paper. An optimization model with the wind power and the energy storage system (ESS) is first established with the consideration of both the economic benefits of the system and less wind curtailments. The original wind power generation is processed by the ESS to obtain the final wind power output generation (FWPG). A Gaussian mixture model (GMM) distribution is adopted to characterize the probabilistic and cumulative distribution functions with an analytical expression. Then, a chance-constrained ED model integrated by the wind-energy storage system (W-ESS) is developed by considering both the overestimation costs and the underestimation costs of the system and solved by the sequential linear programming method. Numerical simulation results using the wind power data in four wind farms are performed on the developed ED model with the IEEE 30-bus system. It is verified that the developed ED model is effective to integrate the uncertain and variable wind power. The GMM distribution could accurately fit the actual distribution of the final wind power output, and the ESS could help effectively decrease the operation costs.
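
    One way a GMM enters such a chance-constrained formulation is by turning the probabilistic constraint into a deterministic quantile. The sketch below fits a GMM to synthetic, placeholder wind-forecast errors and sizes a reserve so that the error is covered with probability 1 − ε; it illustrates only the role of the GMM, not the full dispatch model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(-5.0, 2.0, 500),
                         rng.normal(3.0, 6.0, 500)])        # MW, two synthetic regimes

gmm = GaussianMixture(n_components=2, random_state=0).fit(errors.reshape(-1, 1))
samples, _ = gmm.sample(100_000)                            # draw from the fitted mixture
epsilon = 0.05
reserve = float(np.quantile(samples, 1 - epsilon))          # deterministic equivalent
print(f"reserve covering forecast error with prob {1 - epsilon:.2f}: {reserve:.1f} MW")
```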

  18. Inexact Multistage Stochastic Chance Constrained Programming Model for Water Resources Management under Uncertainties

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2017-01-01

    Full Text Available In order to formulate water allocation schemes under uncertainties in the water resources management systems, an inexact multistage stochastic chance constrained programming (IMSCCP) model is proposed. The model integrates stochastic chance constrained programming, multistage stochastic programming, and inexact stochastic programming within a general optimization framework to handle the uncertainties occurring in both constraints and objective. These uncertainties are expressed as probability distributions, interval with multiply distributed stochastic boundaries, dynamic features of the long-term water allocation plans, and so on. Compared with the existing inexact multistage stochastic programming, the IMSCCP can be used to assess more system risks and handle more complicated uncertainties in water resources management systems. The IMSCCP model is applied to a hypothetical case study of water resources management. In order to construct an approximate solution for the model, a hybrid algorithm, which incorporates stochastic simulation, back propagation neural network, and genetic algorithm, is proposed. The results show that the optimal value represents the maximal net system benefit achieved with a given confidence level under chance constraints, and the solutions provide optimal water allocation schemes to multiple users over a multiperiod planning horizon.

  19. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  20. Constraining viscous dark energy models with the latest cosmological data

    Science.gov (United States)

    Wang, Deng; Yan, Yang-Jie; Meng, Xin-He

    2017-10-01

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they just deviate very slightly from the standard cosmological model and can alleviate effectively the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and the scale invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models as the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios.

  1. Less favourable climates constrain demographic strategies in plants.

    Science.gov (United States)

    Csergő, Anna M; Salguero-Gómez, Roberto; Broennimann, Olivier; Coutts, Shaun R; Guisan, Antoine; Angert, Amy L; Welk, Erik; Stott, Iain; Enquist, Brian J; McGill, Brian; Svenning, Jens-Christian; Violle, Cyrille; Buckley, Yvonne M

    2017-08-01

    Correlative species distribution models are based on the observed relationship between species' occurrence and macroclimate or other environmental variables. In climates predicted to be less favourable, populations are expected to decline, and in favourable climates they are expected to persist. However, little comparative empirical support exists for a relationship between predicted climate suitability and population performance. We found that the performance of 93 populations of 34 plant species worldwide - as measured by in situ population growth rate, its temporal variation and extinction risk - was not correlated with climate suitability. However, correlations of demographic processes underpinning population performance with climate suitability indicated both resistance and vulnerability pathways of population responses to climate: in less suitable climates, plants experienced greater retrogression (resistance pathway) and greater variability in some demographic rates (vulnerability pathway). While a range of demographic strategies occur within species' climatic niches, demographic strategies are more constrained in climates predicted to be less suitable. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  2. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10⁶ m³/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  3. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  4. Constrained quadratic stabilization of discrete-time uncertain nonlinear multi-model systems using piecewise affine state-feedback

    Directory of Open Access Journals (Sweden)

    Olav Slupphaug

    1999-07-01

    Full Text Available In this paper a method for nonlinear robust stabilization based on solving a bilinear matrix inequality (BMI) feasibility problem is developed. Robustness against model uncertainty is handled. In different non-overlapping regions of the state-space called clusters, the plant is assumed to be an element in a polytope whose vertices (local models) are affine systems. In the clusters containing the origin in their closure, the local models are restricted to be linear systems. The clusters cover the region of interest in the state-space. An affine state-feedback is associated with each cluster. By utilizing the affinity of the local models and the state-feedback, a set of linear matrix inequalities (LMIs) combined with a single nonconvex BMI are obtained which, if feasible, guarantee quadratic stability of the origin of the closed-loop. The feasibility problem is attacked by a branch-and-bound based global approach. If the feasibility check is successful, the Liapunov matrix and the piecewise affine state-feedback are given directly by the feasible solution. Control constraints are shown to be representable by LMIs or BMIs, and an application of the control design method to robustify constrained nonlinear model predictive control is presented. Also, the control design method is applied to a simple example.

  5. Hybrid Active/Passive Control of Sound Radiation from Panels with Constrained Layer Damping and Model Predictive Feedback Control

    Science.gov (United States)

    Cabell, Randolph H.; Gibbs, Gary P.

    2000-01-01

    make the controller adaptive. For example, a mathematical model of the plant could be periodically updated as the plant changes, and the feedback gains recomputed from the updated model. To be practical, this approach requires a simple plant model that can be updated quickly with reasonable computational requirements. A recent paper by the authors discussed one way to simplify a feedback controller, by reducing the number of actuators and sensors needed for good performance. The work was done on a tensioned aircraft-style panel excited on one side by TBL flow in a low speed wind tunnel. Actuation was provided by a piezoelectric (PZT) actuator mounted on the center of the panel. For sensing, the responses of four accelerometers, positioned to approximate the response of the first radiation mode of the panel, were summed and fed back through the controller. This single input-single output topology was found to have nearly the same noise reduction performance as a controller with fifteen accelerometers and three PZT patches. This paper extends the previous results by looking at how constrained layer damping (CLD) on a panel can be used to enhance the performance of the feedback controller thus providing a more robust and efficient hybrid active/passive system. The eventual goal is to use the CLD to reduce sound radiation at high frequencies, then implement a very simple, reduced order, low sample rate adaptive controller to attenuate sound radiation at low frequencies. Additionally this added damping smoothes phase transitions over the bandwidth which promotes robustness to natural frequency shifts. Experiments were conducted in a transmission loss facility on a clamped-clamped aluminum panel driven on one side by a loudspeaker. A generalized predictive control (GPC) algorithm, which is suited to online adaptation of its parameters, was used in single input-single output and multiple input-single output configurations. Because this was a preliminary look at the potential

  6. Constraining supersymmetric models using Higgs physics, precision observables and direct searches

    International Nuclear Information System (INIS)

    Zeune, Lisa

    2014-08-01

    We present various complementary possibilities to exploit experimental measurements in order to test and constrain supersymmetric (SUSY) models. Direct searches for SUSY particles have not resulted in any signal so far, and limits on the SUSY parameter space have been set. Measurements of the properties of the observed Higgs boson at ∼126 GeV as well as of the W boson mass (M_W) can provide valuable indirect constraints, supplementing the ones from direct searches. This thesis is divided into three major parts: In the first part we present the currently most precise prediction for M_W in the Minimal Supersymmetric Standard Model (MSSM) with complex parameters and in the Next-to-Minimal Supersymmetric Standard Model (NMSSM). The evaluation includes the full one-loop result and all relevant available higher order corrections of Standard Model (SM) and SUSY type. We perform a detailed scan over the MSSM parameter space, taking into account the latest experimental results, including the observation of a Higgs signal. We find that the current measurements for M_W and the top quark mass (m_t) slightly favour a non-zero SUSY contribution. The impact of different SUSY sectors on the prediction of M_W as well as the size of the higher-order SUSY corrections are analysed both in the MSSM and the NMSSM. We investigate the genuine NMSSM contribution from the extended Higgs and neutralino sectors and highlight differences between the M_W predictions in the two SUSY models. In the second part of the thesis we discuss possible interpretations of the observed Higgs signal in SUSY models. The properties of the observed Higgs boson are compatible with the SM so far, but many other interpretations are also possible. Performing scans over the relevant parts of the MSSM and the NMSSM parameter spaces and applying relevant constraints from Higgs searches, flavour physics and electroweak measurements, we find that a Higgs boson at ∼126 GeV, which decays into two photons, can in

  7. Security-constrained self-scheduling of generation companies in day-ahead electricity markets considering financial risk

    International Nuclear Information System (INIS)

    Amjady, Nima; Vahidinasab, Vahid

    2013-01-01

    Highlights: ► A security-constrained self-scheduling is presented. ► The proposed framework takes into account the uncertainty of the predicted market prices. ► We model the risk and profit tradeoff of a GENCO based on an efficient multi-objective model. ► Unit commitment and inter-temporal constraints of generators are considered in an MIP model. ► Simulation results are presented on the IEEE 30-bus and IEEE 118-bus test systems. - Abstract: In this paper, a new security-constrained self-scheduling framework incorporating the transmission flow limits in both steady state conditions and post-contingent states is presented to produce efficient bidding strategy for generation companies (GENCOs) in day-ahead electricity markets. Moreover, the proposed framework takes into account the uncertainty of the predicted market prices and models the risk and profit tradeoff of a GENCO based on an efficient multi-objective model. Furthermore, unit commitment and inter-temporal constraints of generators are considered in the suggested model converting it to a mixed-integer programming (MIP) optimization problem. Sensitivity of the proposed framework with respect to both the level of the market prices and adopted risk level is also evaluated in the paper. Simulation results are presented on the IEEE 30-bus and IEEE 118-bus test systems illustrating the performance of the proposed self-scheduling model.

  8. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  9. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  10. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Full Text Available Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science three main approaches to framework-change were detected in the humanist tradition: (1) in both the pre-theoretical and theoretical domains changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton); (2) changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard); (3) between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article in fact tried to achieve more clarity on how changes in pre-scientific frameworks occur, as well as provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern, neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical and therefore does not correspond to position three.

  11. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method...

  12. Constraining viscous dark energy models with the latest cosmological data

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Yan, Yang-Jie; Meng, Xin-He [Nankai University, Department of Physics, Tianjin (China)

    2017-10-15

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and the scale-invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models, as in the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios. (orig.)

  13. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
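
    The methodological point, that fixed point iteration can miss solutions which a shooting (root-finding) approach still recovers, can be illustrated on a toy scalar problem. The sketch below is only an analogy for the CMSSM boundary-value problem; the map, the starting value, and the bracketing interval are illustrative.

      # Toy analogy (not the CMSSM renormalisation-group problem): fixed-point
      # iteration only finds attracting fixed points, while direct root-finding
      # (the core of a shooting approach) also recovers repelling ones.
      from scipy.optimize import brentq

      def g(x):
          return 2.5 * x * (1.0 - x)   # fixed points at x = 0 (repelling) and x = 0.6

      x = 0.01                          # start close to the repelling fixed point
      for _ in range(50):
          x = g(x)
      print("fixed-point iteration from 0.01 drifts to", round(x, 4))         # ~0.6

      root = brentq(lambda x: g(x) - x, -0.4, 0.3)   # solve g(x) - x = 0 directly
      print("root-finding recovers the missed solution x =", round(root, 4))  # ~0.0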

  14. Balancing computation and communication power in power constrained clusters

    Science.gov (United States)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    2018-05-29

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
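
    The wait-prediction and power-reassignment policy described above can be sketched in a few lines. Everything below, including the class layout, the gating threshold, and the even redistribution of the saved power budget, is a hypothetical illustration rather than the patented implementation.

      # Hypothetical sketch of the power-gating policy: gate nodes whose predicted
      # wait exceeds a threshold and reassign the saved power to active nodes.
      from dataclasses import dataclass

      @dataclass
      class Node:
          node_id: int
          active_power_w: float    # power draw while busy
          idle_power_w: float      # power draw while waiting (not gated)
          gated_power_w: float     # power draw while power-gated

      def plan_power(nodes, predicted_waits_s, gate_threshold_s=5.0):
          """Gate long-waiting nodes and split the saved budget among active ones."""
          gated, active = [], []
          for node, wait in zip(nodes, predicted_waits_s):
              (gated if wait > gate_threshold_s else active).append(node)
          saved = sum(n.idle_power_w - n.gated_power_w for n in gated)
          boost = saved / len(active) if active else 0.0
          budget = {n.node_id: n.active_power_w + boost for n in active}
          budget.update({n.node_id: n.gated_power_w for n in gated})
          return budget

      nodes = [Node(i, active_power_w=200.0, idle_power_w=80.0, gated_power_w=10.0)
               for i in range(4)]
      waits = [0.0, 12.0, 3.0, 20.0]    # predicted wait per node, seconds
      print(plan_power(nodes, waits))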

  15. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  16. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  17. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable in important applications in the field of aerospace (space docking tests, etc). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tensions and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tensions and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)

  18. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Up scaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.

  19. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
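
    The central step, replacing a chance constraint on a log-normally distributed quantity by its deterministic equivalent through the log-normal quantile, can be sketched for a single constraint. The profit coefficients, pollutant loads, and distribution parameters below are illustrative, and the interval (inexact) part of the model is omitted.

      # Illustrative sketch (not the paper's full model): a chance constraint
      # P(a.x <= R) >= alpha with log-normal capacity R is replaced by its
      # deterministic equivalent a.x <= exp(mu + sigma * z_{1-alpha}).
      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import linprog

      mu, sigma = 2.0, 0.4          # parameters of ln(R), assumed known
      alpha = 0.9                   # required constraint-satisfaction level
      r_alpha = np.exp(mu + sigma * norm.ppf(1 - alpha))   # (1-alpha)-quantile of R

      # maximize profit c.x subject to pollutant load a.x <= r_alpha, x >= 0
      c = np.array([-30.0, -45.0])  # linprog minimizes, so profits are negated
      a = np.array([[1.2, 2.5]])    # pollutant load per unit of each activity
      res = linprog(c, A_ub=a, b_ub=[r_alpha], bounds=[(0, 10), (0, 10)])
      print("optimal activities:", res.x, "profit:", -res.fun)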

  20. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level-custodial technicolor-and argue that these models...

  1. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  2. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  3. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given

  4. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    Science.gov (United States)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data was acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.

  5. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    Science.gov (United States)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations that are available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" to extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions combined with extremely heterogeneous land-use that result in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. But also, on a regional scale, hydrological modelling benefitted greatly from constraining parameters using such distributed citizen-science generated streamflow data. This study demonstrates modelling and prediction of regional hydrology by integrating citizen science and a land surface model. In the context of hydrological studies, the limitation of data scarcity could indeed be addressed by using this framework. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.

  6. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  7. A Three-Dimensional Model of the Marine Nitrogen Cycle during the Last Glacial Maximum Constrained by Sedimentary Isotopes

    Directory of Open Access Journals (Sweden)

    Christopher J. Somes

    2017-05-01

    Full Text Available Nitrogen is a key limiting nutrient that influences marine productivity and carbon sequestration in the ocean via the biological pump. In this study, we present the first estimates of nitrogen cycling in a coupled 3D ocean-biogeochemistry-isotope model forced with realistic boundary conditions from the Last Glacial Maximum (LGM, ~21,000 years before present), constrained by nitrogen isotopes. The model predicts a large decrease in nitrogen loss rates due to higher oxygen concentrations in the thermocline and sea level drop, and, as a response, reduced nitrogen fixation. Model experiments are performed to evaluate effects of hypothesized increases of atmospheric iron fluxes and oceanic phosphorus inventory relative to present-day conditions. Enhanced atmospheric iron deposition, which is required to reproduce observations, fuels export production in the Southern Ocean causing increased deep ocean nutrient storage. This reduces transport of preformed nutrients to the tropics via mode waters, thereby decreasing productivity, oxygen deficient zones, and water column N-loss there. A larger global phosphorus inventory of up to 15% cannot be excluded from the currently available nitrogen isotope data. It stimulates additional nitrogen fixation that increases the global oceanic nitrogen inventory, productivity, and water column N-loss. Among our sensitivity simulations, the best agreements with nitrogen isotope data from LGM sediments indicate that water column and sedimentary N-loss were reduced by 17–62% and 35–69%, respectively, relative to preindustrial values. Our model demonstrates that multiple processes alter the nitrogen isotopic signal in most locations, which creates large uncertainties when quantitatively constraining individual nitrogen cycling processes. One key uncertainty is nitrogen fixation, which decreases by 25–65% in the model during the LGM mainly in response to reduced N-loss, due to the lack of observations in the open ocean most

  8. Constraining statistical-model parameters using fusion and spallation reactions

    Directory of Open Access Journals (Sweden)

    Charity Robert J.

    2011-10-01

    Full Text Available The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy. The present work focuses on fission and intermediate-mass-fragment emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

  9. Global Optimization of Ventricular Myocyte Model to Multi-Variable Objective Improves Predictions of Drug-Induced Torsades de Pointes

    Directory of Open Access Journals (Sweden)

    Trine Krogh-Madsen

    2017-12-01

    Full Text Available In silico cardiac myocyte models present powerful tools for drug safety testing and for predicting phenotypical consequences of ion channel mutations, but their accuracy is sometimes limited. For example, several models describing human ventricular electrophysiology perform poorly when simulating effects of long QT mutations. Model optimization represents one way of obtaining models with stronger predictive power. Using a recent human ventricular myocyte model, we demonstrate that model optimization to clinical long QT data, in conjunction with physiologically-based bounds on intracellular calcium and sodium concentrations, better constrains model parameters. To determine if the model optimized to congenital long QT data better predicts risk of drug-induced long QT arrhythmogenesis, in particular Torsades de Pointes risk, we tested the optimized model against a database of known arrhythmogenic and non-arrhythmogenic ion channel blockers. When doing so, the optimized model provided an improved risk assessment. In particular, we demonstrate an elimination of false-positive outcomes generated by the baseline model, in which simulations of non-torsadogenic drugs, in particular verapamil, predict action potential prolongation. Our results underscore the importance of currents beyond those directly impacted by a drug block in determining torsadogenic risk. Our study also highlights the need for rich data in cardiac myocyte model optimization and substantiates such optimization as a method to generate models with higher accuracy of predictions of drug-induced cardiotoxicity.

  10. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that based on a set of conditional probabilities can help in choosing the large numbers of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....
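
    As a one-dimensional analogy of the bit-stuffing idea (the thesis treats two-dimensional checkerboard-constrained fields), the sketch below encodes arbitrary data into the (1, infinity) run-length-limited constraint by inserting a 0 after every 1. This naive version omits the probability-biasing step of proper bit-stuffing, but the rate it achieves already gives a crude lower bound on the entropy of the constraint.

      # One-dimensional analogy of bit-stuffing (not the 2-D checkerboard scheme):
      # encode arbitrary data into the constraint "no two adjacent 1s".
      def rll_bitstuff_encode(bits):
          out = []
          for b in bits:
              out.append(b)
              if b == 1:
                  out.append(0)        # stuffed bit keeps 1s separated
          return out

      def rll_bitstuff_decode(coded):
          out, skip = [], False
          for b in coded:
              if skip:
                  skip = False         # drop the stuffed 0
                  continue
              out.append(b)
              skip = (b == 1)
          return out

      data = [1, 0, 1, 1, 0, 0, 1]
      coded = rll_bitstuff_encode(data)
      assert rll_bitstuff_decode(coded) == data
      assert all(not (a == b == 1) for a, b in zip(coded, coded[1:]))
      print(data, "->", coded)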

  11. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  12. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Full Text Available Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full time and part time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department) optimally. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of the hired full time and part time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the decision of the appointment scheduling to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. Then we use the chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single-level program, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  13. Kovacs effect and fluctuation-dissipation relations in 1D kinetically constrained models

    International Nuclear Information System (INIS)

    Buhot, Arnaud

    2003-01-01

    Strong and fragile glass relaxation behaviours are obtained simply by changing the constraints of the kinetically constrained Ising chain from symmetric to purely asymmetric. We study the out-of-equilibrium dynamics of these two models, focusing on the Kovacs effect and the fluctuation-dissipation (FD) relations. The Kovacs or memory effect, commonly observed in structural glasses, is present for both constraints but enhanced with the asymmetric ones. Most surprisingly, the related FD relations satisfy the FD theorem in both cases. This result strongly differs from the simple quenching procedure where the asymmetric model presents strong deviations from the FD theorem.

  14. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    -constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology...... of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration...... that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights...

  15. Closed-Loop Control of Constrained Flapping Wing Micro Air Vehicles

    Science.gov (United States)

    2014-03-27

    predicts forces and moments for the class of flapping wing fliers that makes up most insects and hummingbirds. Large bird and butterfly “clap- and...

  16. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    Science.gov (United States)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information. They are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time when the moving object passed a node of the network. They do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied. The processing includes four steps: spatial filter, spatial refinement, temporal filter, and probability calculation. Finally, experiments are conducted on simulated data. In these experiments the performance of the index is studied, the precision and recall of the result set are defined, and how the query arguments affect the precision and recall of the result set is discussed.
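
    One way to picture the uncertainty and the probability-calculation step is a space-time-prism style bound between two consecutive RFID detections on a single edge, combined with a uniform-position assumption. The sketch below is an illustrative simplification of this idea, not the model or the two-level index proposed in the paper; the speeds, distances, and the uniformity assumption are all assumptions.

      # Hedged sketch: feasible positions between two RFID detections on one edge,
      # bounded by a maximum speed, and a uniform-position probability for a range query.
      def feasible_interval(edge_len, t1, t2, v_max, t):
          """Positions (distance from the first reader) the object can occupy at time t."""
          lo = max(0.0, edge_len - v_max * (t2 - t))   # must still reach the far reader
          hi = min(edge_len, v_max * (t - t1))          # cannot have outrun v_max
          return lo, hi

      def range_query_probability(edge_len, t1, t2, v_max, t, q_lo, q_hi):
          """P(object lies in [q_lo, q_hi] at time t), assuming uniform position."""
          lo, hi = feasible_interval(edge_len, t1, t2, v_max, t)
          if hi <= lo:
              return 0.0
          overlap = max(0.0, min(hi, q_hi) - max(lo, q_lo))
          return overlap / (hi - lo)

      # Object seen by reader A at t = 0 s and by reader B (500 m away) at t = 60 s.
      print(range_query_probability(edge_len=500, t1=0, t2=60, v_max=15,
                                    t=30, q_lo=200, q_hi=300))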

  17. A multidimensional stability model for predicting shallow landslide size and shape across landscapes.

    Science.gov (United States)

    Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E

    2014-11-01

    The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data.

  18. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  19. A carbon risk prediction model for Chinese heavy-polluting industrial enterprises based on support vector machine

    International Nuclear Information System (INIS)

    Zhou, Zhifang; Xiao, Tian; Chen, Xiaohong; Wang, Chang

    2016-01-01

    Chinese heavy-polluting industrial enterprises, especially in the petrochemical and chemical industries, are characterized by low carbon efficiency and high emission loads and face tremendous pressure to reduce emissions against the background of a global shortage of energy supply and constraints on carbon emissions. However, due to the limited amount of theoretical and practical research in this field, problems such as the lack of prediction indicators or models and of a quantified standard of carbon risk remain unsolved. In this paper, the connotation of carbon risk and an assessment index system for Chinese heavy-polluting industrial enterprises (e.g. coal, petrochemical, and chemical enterprises) based on a support vector machine are presented. Using data from several heavy-polluting industrial enterprises, an SVM model is trained to predict the carbon risk level of a specific enterprise, which allows the enterprise to identify and manage its carbon risks. The results show that this method can predict an enterprise's carbon risk level in an efficient, accurate way with high practical application and generalization value.
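
    A minimal sketch of the SVM step, training a classifier on enterprise-level indicators to output a carbon risk level, is given below using synthetic data. The indicator names, the labelling rule, and the use of scikit-learn are illustrative assumptions and do not reproduce the paper's index system.

      # Sketch of an SVM-based carbon risk classifier on synthetic indicator data.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 200
      X = np.column_stack([
          rng.normal(5.0, 2.0, n),    # carbon intensity (tCO2 per output unit)
          rng.normal(0.3, 0.1, n),    # share of energy from coal
          rng.normal(1.0, 0.5, n),    # abatement spending ratio
      ])
      # Hypothetical labelling rule standing in for expert-assigned risk levels.
      y = (X[:, 0] + 10 * X[:, 1] - 2 * X[:, 2] > 7.0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      model.fit(X_tr, y_tr)
      print("hold-out accuracy:", model.score(X_te, y_te))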

  20. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    Science.gov (United States)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

    Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models whose representation in global scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may potentially influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH and modelled gross primary production (GPP) is compared to a compilation of independent observations at the stand scale. Compared to the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. Photosynthetic capacity of tropical trees derived by this study is substantially lower than standard estimates currently used in terrestrial biosphere models. This causes a decrease of modelled GPP while it significantly increases modelled tropical vegetation surface temperatures, up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.

  1. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-07-01

    An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a Hermitian symmetric space and a constrained KP Lax formulation, and we show that these approaches are equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy, and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  2. A cost-constrained model of strategic service quality emphasis in nursing homes.

    Science.gov (United States)

    Davis, M A; Provan, K G

    1996-02-01

    This study employed structural equation modeling to test the relationship between three aspects of the environmental context of nursing homes (Medicaid dependence, ownership status, and market demand) and two basic strategic orientations: low cost and differentiation based on service quality emphasis. Hypotheses were proposed and tested against data collected from a sample of nursing homes operating in a single state. Because of the overwhelming importance of cost control in the nursing home industry, a cost-constrained strategy perspective was supported. Specifically, while the three contextual variables had no direct effect on service quality emphasis, the entire model was supported when cost control orientation was introduced as a mediating variable.

  3. Predicting sintering deformation of ceramic film constrained by rigid substrate using anisotropic constitutive law

    International Nuclear Information System (INIS)

    Li Fan; Pan Jingzhe; Guillon, Olivier; Cocks, Alan

    2010-01-01

    Sintering of ceramic films on a solid substrate is an important technology for fabricating a range of products, including solid oxide fuel cells, micro-electronic PZT films and protective coatings. There is clear evidence that the constrained sintering process is anisotropic in nature. This paper presents a study of the constrained sintering deformation using an anisotropic constitutive law. The state of the material is described using the sintering strains rather than the relative density. In the limiting case of free sintering, the constitutive law reduces to a conventional isotropic constitutive law. The anisotropic constitutive law is used to calculate sintering deformation of a constrained film bonded to a rigid substrate and the compressive stress required in a sinter-forging experiment to achieve zero lateral shrinkage. The results are compared with experimental data in the literature. It is shown that the anisotropic constitutive law can capture the behaviour of the materials observed in the sintering experiments.

  4. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  5. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
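
    The "sloppy spectrum" diagnostic amounts to inspecting the eigenvalues of the least-squares Hessian, approximately J^T J, of a fitted model. The sketch below reproduces the idea on a toy sum-of-exponentials model with synthetic parameters; it is not one of the systems biology models examined in the paper.

      # Sloppiness sketch: eigenvalues of J^T J for a toy sum-of-exponentials model,
      # which often span several decades. Model and parameters are synthetic.
      import numpy as np

      t = np.linspace(0.0, 5.0, 50)
      theta0 = np.log(np.array([1.0, 0.3, 2.0, 1.5]))   # log-parameters: A1, k1, A2, k2

      def model(log_theta, t):
          a1, k1, a2, k2 = np.exp(log_theta)
          return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

      def jacobian(log_theta, t, h=1e-6):
          """Finite-difference Jacobian of the model output w.r.t. the log-parameters."""
          J = np.empty((t.size, log_theta.size))
          for i in range(log_theta.size):
              dp = np.zeros_like(log_theta)
              dp[i] = h
              J[:, i] = (model(log_theta + dp, t) - model(log_theta - dp, t)) / (2 * h)
          return J

      J = jacobian(theta0, t)
      eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]        # sorted largest to smallest
      print("eigenvalues:", eigvals)
      print("decades spanned:", np.log10(eigvals[0] / eigvals[-1]))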

  6. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  7. Gluon field strength correlation functions within a constrained instanton model

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Esaibegyan, S.V.; Maximov, A.E.; Mikhailov, S.V.

    2000-01-01

    We suggest a constrained instanton (CI) solution in the physical QCD vacuum which is described by large-scale vacuum field fluctuations. This solution decays exponentially at large distances. It is stable only if the interaction of the instanton with the background vacuum field is small and additional constraints are introduced. The CI solution is explicitly constructed in the ansatz form, and the two-point vacuum correlator of the gluon field strengths is calculated in the framework of the effective instanton vacuum model. At small distances the results are qualitatively similar to the single instanton case; in particular, the D_1 invariant structure is small, which is in agreement with the lattice calculations. (orig.)

  8. Using eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and PhenoCams to constrain a process-based biogeochemical model for carbon market-funded wetland restoration

    Science.gov (United States)

    Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.

    2015-12-01

    We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are a function of 2 soil carbon (C) pools (i.e. recently-fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parametrize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high data-model agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1
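
    The parameter-estimation step can be illustrated with a plain random-walk Metropolis-Hastings fit of a toy Q10 respiration model to synthetic data. PEPRMT itself couples several carbon pools and data streams; the priors, proposal widths, and data below are illustrative assumptions only.

      # Random-walk Metropolis-Hastings sketch for a toy Q10 respiration model.
      import numpy as np

      rng = np.random.default_rng(1)
      T_soil = rng.uniform(5, 25, 100)                       # soil temperature, deg C
      true_base, true_q10, noise = 2.0, 2.2, 0.3
      obs = true_base * true_q10 ** ((T_soil - 10) / 10) + rng.normal(0, noise, T_soil.size)

      def log_post(theta):
          base, q10 = theta
          if base <= 0 or q10 <= 1:                          # flat prior with bounds
              return -np.inf
          pred = base * q10 ** ((T_soil - 10) / 10)
          return -0.5 * np.sum((obs - pred) ** 2) / noise ** 2

      theta = np.array([1.0, 1.5])
      lp = log_post(theta)
      step = np.array([0.05, 0.05])                          # fixed proposal widths
      chain = []
      for _ in range(20000):
          prop = theta + rng.normal(0, step)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:           # accept/reject
              theta, lp = prop, lp_prop
          chain.append(theta.copy())
      chain = np.array(chain[5000:])                         # discard burn-in
      print("posterior means (base, Q10):", chain.mean(axis=0))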

  9. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
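
    For orientation, the sketch below compares a classical logistic-regression bankruptcy model with a random forest on synthetic financial-ratio data. The ratios, the label-generating rule, and the sample size are illustrative assumptions, not the Slovak company dataset analysed in the paper.

      # Logistic regression vs. random forest on synthetic financial-ratio data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      n = 1000
      X = np.column_stack([
          rng.normal(0.1, 0.2, n),   # return on assets
          rng.normal(1.5, 0.6, n),   # current ratio
          rng.normal(0.5, 0.2, n),   # debt ratio
      ])
      p = 1 / (1 + np.exp(-(-2.0 - 6.0 * X[:, 0] - 1.0 * X[:, 1] + 4.0 * X[:, 2])))
      y = rng.binomial(1, p)          # 1 = bankrupt within one year (synthetic label)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      for name, clf in [("logit", LogisticRegression(max_iter=1000)),
                        ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
          clf.fit(X_tr, y_tr)
          auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
          print(f"{name}: AUC = {auc:.3f}")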

  10. A Homogeneous and Self-Dual Interior-Point Linear Programming Algorithm for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders

    2015-01-01

    We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear...... system of equations solved in homogeneous and self-dual IPMs. Fast convergence is further achieved using a warm-start strategy. We implement the algorithm in MATLAB and C. Its performance is tested using a conceptual power management case study. Closed loop simulations show that 1) the proposed algorithm...
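
    The kind of linear program that such an economic MPC problem produces over a prediction horizon can be assembled and solved with a generic LP solver, as sketched below. The paper's contribution, the Riccati-based homogeneous and self-dual IPM with warm start, is not reproduced here; the system matrices, costs, bounds, and horizon are illustrative.

      # Economic MPC as a linear program: linear stage cost, linear dynamics as
      # equality constraints, box bounds on states and inputs. Solved generically.
      import numpy as np
      from scipy.optimize import linprog

      A = np.array([[1.0, 0.1], [0.0, 0.9]])
      B = np.array([[0.0], [0.1]])
      x0 = np.array([1.0, 2.0])
      N = 20                                    # prediction horizon
      nx, nu = 2, 1

      # decision vector z = [x_1 .. x_N, u_0 .. u_{N-1}]
      n_z = N * nx + N * nu
      c = np.concatenate([np.tile([1.0, 0.0], N),      # linear state cost q = (1, 0)
                          np.full(N * nu, 0.5)])       # linear input cost r = 0.5

      # dynamics x_{k+1} - A x_k - B u_k = 0 as equality constraints
      Aeq = np.zeros((N * nx, n_z))
      beq = np.zeros(N * nx)
      for k in range(N):
          rows = slice(k * nx, (k + 1) * nx)
          Aeq[rows, k * nx:(k + 1) * nx] = np.eye(nx)            # x_{k+1}
          if k > 0:
              Aeq[rows, (k - 1) * nx:k * nx] = -A                # -A x_k
          Aeq[rows, N * nx + k * nu:N * nx + (k + 1) * nu] = -B  # -B u_k
      beq[:nx] = A @ x0                                          # k = 0 uses x0

      bounds = [(0.0, 5.0)] * (N * nx) + [(-1.0, 1.0)] * (N * nu)
      res = linprog(c, A_eq=Aeq, b_eq=beq, bounds=bounds)
      print("status:", res.message)
      print("first input u_0:", res.x[N * nx])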

  11. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
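
    For readers unfamiliar with the DINA model, its item-response function — probability (1 - s_j) when all skills required by item j are mastered, and g_j otherwise — can be sketched in a few lines (NumPy; the parameter values are illustrative):

```python
import numpy as np

def dina_correct_prob(alpha, Q, slip, guess):
    """Item-correct probabilities under the DINA model (illustrative sketch).
    alpha: (K,) binary mastered-skill vector; Q: (J, K) binary Q-matrix;
    slip, guess: (J,) item slip and guessing parameters."""
    eta = np.all(alpha >= Q, axis=1)          # conjunctive: all required skills mastered
    return np.where(eta, 1.0 - slip, guess)

# two skills, three items
Q = np.array([[1, 0], [0, 1], [1, 1]])
alpha = np.array([1, 0])
print(dina_correct_prob(alpha, Q,
                        slip=np.array([0.1, 0.1, 0.2]),
                        guess=np.array([0.2, 0.2, 0.1])))
# -> [0.9, 0.2, 0.1]: this examinee masters skill 1 only
```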

  12. Development and validation of a dynamic outcome prediction model for paracetamol-induced acute liver failure

    DEFF Research Database (Denmark)

    Bernal, William; Wang, Yanzhong; Maggs, James

    2016-01-01

    BACKGROUND: Early, accurate prediction of survival is central to management of patients with paracetamol-induced acute liver failure to identify those needing emergency liver transplantation. Current prognostic tools are confounded by recent improvements in outcome independent of emergency liver transplantation, and constrained by static binary outcome prediction. We aimed to develop a simple prognostic tool to reflect current outcomes and generate a dynamic updated estimation of risk of death. METHODS: Patients with paracetamol-induced acute liver failure managed at intensive care units in the UK... The models developed here show very good discrimination and calibration, confirmed in independent datasets, and suggest that many patients undergoing transplantation based on existing criteria might have survived with medical management alone. The role and indications for emergency liver transplantation...

  13. Predictability of Subsurface Temperature and the AMOC

    Science.gov (United States)

    Chang, Y.; Schubert, S. D.

    2013-12-01

    The GEOS-5 coupled model is extensively used for experimental decadal climate prediction. Understanding the limits of decadal ocean predictability is critical for making progress in these efforts. Using this model, we study the initial-value predictability of subsurface temperature, the variability of the Atlantic meridional overturning circulation (AMOC), and its impacts on the global climate. Our approach is to utilize the idealized data assimilation technology developed at the GMAO. The 'replay' technique allows us to assess, for example, the impact of the surface wind stresses and/or precipitation on the ocean in a very well controlled environment. By running the coupled model in replay mode we can in fact constrain the model using any existing reanalysis data set. We replay the model, constraining (nudging) it to the MERRA reanalysis in various fields from 1948-2012. The fields u, v, T, q, and ps are adjusted towards the 6-hourly analyzed fields in the atmosphere. The simulated AMOC variability is studied with a 400-year-long segment of the replay integration. The 84 cases of 10-year hindcasts are initialized from 4 different replay cycles. The variability and predictability are examined further with a measure that quantifies how much the subsurface temperature and AMOC variability have been influenced by atmospheric forcing and by ocean internal variability. The simulated impact of the AMOC on the multi-decadal variability of the SST, sea surface height (SSH) and sea ice extent is also studied.
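
    The 'replay' (nudging) idea can be illustrated with a toy sketch: the model tendency is supplemented by relaxation of selected fields toward an analysis. The damped oscillator, time step and relaxation time below are illustrative stand-ins, not the GEOS-5 implementation.

```python
import numpy as np

def replay_step(x_model, x_analysis, dt, tau, tendency):
    """One 'replay' step: free model tendency plus relaxation (nudging)
    of the state toward an analysis, with relaxation time tau."""
    return x_model + dt * (tendency(x_model) + (x_analysis - x_model) / tau)

# toy example: a damped oscillator nudged toward a reference (analysis) trajectory
tendency = lambda x: np.array([x[1], -x[0] - 0.1 * x[1]])
x = np.array([1.0, 0.0])
for k in range(1000):
    t = k * 0.01
    x_ref = np.array([np.cos(t), -np.sin(t)])      # hypothetical 'analysis' state
    x = replay_step(x, x_ref, dt=0.01, tau=0.5, tendency=tendency)
```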

  14. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    Full Text Available The map of complete Bouguer anomaly of Costa Rica shows an elongated NW-SE trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central, which is built up of geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly and the characterization of fluid pathways and reservoirs of arc magmatism, a constrained 3D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology, previously published seismic tomography and P-wave velocity models, which stem from wide-angle seismic refraction surveys, as well as results from methods of direct interpretation of the gravity field obtained for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.

  15. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds...... on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered....
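
    A related, easily reproduced calculation is the strip transfer-matrix estimate for the fully (hard) constrained case in which neighboring 1s are forbidden outright; the weakly constrained codes in the paper instead bound the probability of neighboring 1s. A minimal sketch (NumPy):

```python
import numpy as np
from math import log2

def strip_entropy(width):
    """log2(largest eigenvalue)/width for an infinitely long strip of the given
    width under the hard constraint 'no two 1s horizontally or vertically adjacent'."""
    rows = [r for r in range(1 << width) if r & (r >> 1) == 0]    # no horizontal adjacency
    T = np.array([[1.0 if (a & b) == 0 else 0.0 for b in rows] for a in rows])
    lam = max(abs(np.linalg.eigvals(T)))                          # Perron root
    return log2(lam) / width

for w in range(2, 10):
    print(w, round(strip_entropy(w), 4))   # approaches the hard-square capacity (~0.588)
```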

  16. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun

    2017-10-29

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first principles methods. The first one uses the constrained moment method to build a spin spiral in real space, while the second method uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI as a function of the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, that determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is more adapted to model DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  18. Constraining groundwater flow model with geochemistry in the FUA and Cabril sites. Use in the ENRESA 2000 PA exercise

    International Nuclear Information System (INIS)

    Samper, J.; Carrera, J.; Bajos, C.; Astudillo, J.; Santiago, J.L.

    1999-01-01

    Hydrogeochemical activities have been a key factor in verifying and constraining the groundwater flow models developed for the safety assessment of the FUA uranium mill tailings restoration and the Cabril L/ILW disposal facility. The lessons learned at both sites will be applied to the groundwater transport modelling in the current PA exercise (ENRESA 2000). The groundwater flow model of the Cabril site, which represents a low-permeability fractured medium, was built using the TRANSIN code series developed by UPC-ENRESA. The hydrogeochemical data obtained from systematic yearly sampling and analysis campaigns were successfully applied to distinguish between local and regional flow and between young and old groundwater. The salinity, mainly the chloride anion content, was the most critical hydrogeochemical information for constraining the groundwater flow model. (author)

  19. Predicting Antitumor Activity of Peptides by Consensus of Regression Models Trained on a Small Data Sample

    Directory of Open Access Journals (Sweden)

    Ivanka Jerić

    2011-11-01

    Full Text Available Predicting the antitumor activity of compounds using regression models trained on a small number of compounds with measured biological activity is an ill-posed inverse problem. Yet, it occurs very often within the academic community. To counteract, to some extent, the overfitting caused by a small training set, we propose to use a consensus of six regression models for predicting the biological activity of a virtual library of compounds. The QSAR descriptors of 22 compounds related to the opioid growth factor (OGF, Tyr-Gly-Gly-Phe-Met) with known antitumor activity were used to train the regression models: the feed-forward artificial neural network, the k-nearest neighbor, sparseness-constrained linear regression, and the linear and nonlinear (with polynomial and Gaussian kernel) support vector machine. The regression models were applied to a virtual library of 429 compounds, which resulted in six lists with candidate compounds ranked by predicted antitumor activity. The highly ranked candidate compounds were synthesized, characterized and tested for antiproliferative activity. Some of the prepared peptides showed more pronounced activity compared with the native OGF; however, they were less active than highly ranked compounds selected previously by the radial basis function support vector machine (RBF SVM) regression model. The ill-posedness of the related inverse problem causes unstable behavior of the trained regression models on test data. These results point to the high complexity of prediction based on regression models trained on a small data sample.
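
    A minimal sketch of such a consensus of regression models using scikit-learn (the placeholder descriptors, activities and hyperparameters are illustrative; the original work used its own six model configurations):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

# X: QSAR descriptors of the 22 training peptides, y: measured activity (placeholders)
rng = np.random.default_rng(1)
X, y = rng.normal(size=(22, 10)), rng.normal(size=22)
X_virtual = rng.normal(size=(429, 10))                 # virtual library descriptors

models = [
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000),
    KNeighborsRegressor(n_neighbors=3),
    Lasso(alpha=0.1),                                  # sparseness-constrained linear regression
    SVR(kernel="linear"),
    SVR(kernel="poly", degree=2),
    SVR(kernel="rbf"),
]
preds = np.column_stack([m.fit(X, y).predict(X_virtual) for m in models])
consensus = preds.mean(axis=1)                         # consensus = average of the six models
ranking = np.argsort(consensus)[::-1]                  # candidates ranked by predicted activity
```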

  20. Predicting oil and gas compositional yields via chemical structure-chemical yield modeling (CS-CYM): Part 1 - Concepts and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Freund, H.; Walters, C.C.; Kelemen, S.R.; Siskin, M.; Gorbaty, M.L.; Curry, D.J.; Bence, A.E. [ExxonMobil Research & Engineering Co., Annandale, NJ (United States)

    2007-07-01

    We have developed a method to calculate the amounts and composition of products resulting from the thermal decomposition of a solid complex carbonaceous material. This procedure provides a means of using laboratory measurements of complex carbonaceous solids to construct a representative model of its chemical structure (CS) that is then coupled with elementary reaction pathways to predict the chemical yield (CY) upon thermal decomposition. Data from elemental analysis (H, N, O, S), solid-state 13C NMR, X-ray photoelectron spectroscopy (XPS), sulfur X-ray absorption structure spectroscopy (XANES), and pyrolysis-gas chromatography (GC) are used to constrain the construction of core molecular structures representative of the complex carbonaceous material. These core structures are expanded stochastically to describe large macromolecules (>10^6 cores with ~10^6 atoms) with bulk properties that match the experimental results. Gas, liquid and solid product yields, resulting from thermal decomposition, are calculated by identifying reactive functional groups within the CS stochastic ensemble and imposing a reaction network constrained by fundamental thermodynamics and kinetics. An expulsion model is added to the decomposition model to calculate the chemical products in open and closed systems. Product yields may then be predicted under a wide range of time-temperature conditions used in rapid laboratory pyrolysis experiments, refinery processes, or geologic maturation.

  1. Construction of ground-state preserving sparse lattice models for predictive materials simulations

    Science.gov (United States)

    Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand

    2017-08-01

    First-principles based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also show that out-of-sample ground-state preservation up to relatively large supercell size is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.

  2. Dynamic Output Feedback Robust Model Predictive Control via Zonotopic Set-Membership Estimation for Constrained Quasi-LPV Systems

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2015-01-01

    Full Text Available For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, the bounds of the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.

  3. Comparison of stochastic and regression based methods for quantification of predictive uncertainty of model-simulated wellhead protection zones in heterogeneous aquifers

    DEFF Research Database (Denmark)

    Christensen, Steen; Moore, C.; Doherty, J.

    2006-01-01

    For a synthetic case we computed three types of individual prediction intervals for the location of the aquifer entry point of a particle that moves through a heterogeneous aquifer and ends up in a pumping well. (a) The nonlinear regression-based interval (Cooley, 2004) was found to be nearly accurate and required a few hundred model calls to be computed. (b) The linearized regression-based interval (Cooley, 2004) required just over a hundred model calls and also appeared to be nearly correct. (c) The calibration-constrained Monte-Carlo interval (Doherty, 2003) was found to be narrower than the regression-based intervals but required about half a million model calls. It is unclear whether or not this type of prediction interval is accurate.

  4. Toward Process-resolving Synthesis and Prediction of Arctic Climate Change Using the Regional Arctic System Model

    Science.gov (United States)

    Maslowski, W.

    2017-12-01

    The Regional Arctic System Model (RASM) has been developed to better understand the operation of the Arctic System at process scale and to improve prediction of its change at a spectrum of time scales. RASM is a pan-Arctic, fully coupled ice-ocean-atmosphere-land model with a marine biogeochemistry extension to the ocean and sea ice models. The main goal of our research is to advance a system-level understanding of critical processes and feedbacks in the Arctic and their links with the Earth System. A secondary, and equally important, objective is to identify model needs for new or additional observations to better understand such processes and to help constrain models. Finally, RASM has been used to produce sea ice forecasts for September 2016 and 2017, in contribution to the Sea Ice Outlook of the Sea Ice Prediction Network. Future RASM forecasts are likely to include increased resolution for model components and ecosystem predictions. Such research is in direct support of US environmental assessment and prediction needs, including those of the U.S. Navy, Department of Defense, and the recent IARPC Arctic Research Plan 2017-2021. In addition to an overview of RASM technical details, selected model results are presented from a hierarchy of climate models together with available observations in the region to better understand potential oceanic contributions to polar amplification. RASM simulations are analyzed to evaluate model skill in representing seasonal climatology as well as interannual and multi-decadal climate variability and predictions. Selected physical processes and resulting feedbacks are discussed to emphasize the need for fully coupled climate model simulations, high model resolution and the sensitivity of simulated sea ice states to scale-dependent model parameterizations controlling ice dynamics, thermodynamics and coupling with the atmosphere and ocean.

  5. A Kinematic Model of Slow Slip Constrained by Tremor-Derived Slip Histories in Cascadia

    Science.gov (United States)

    Schmidt, D. A.; Houston, H.

    2016-12-01

    We explore new ways to constrain the kinematic slip distributions for large slow slip events using constraints from tremor. Our goal is to prescribe one or more slip pulses that propagate across the fault and scale appropriately to satisfy the observations. Recent work (Houston, 2015) inferred a crude representative stress time history at an average point using the tidal stress history, the static stress drop, and the timing of the evolution of tidal sensitivity of tremor over several days of slip. To convert a stress time history into a slip time history, we use simulations to explore the stressing history of a small locked patch due to an approaching rupture front. We assume that the locked patch releases strain through a series of tremor bursts whose activity rate is related to the stressing history. To test whether the functional form of a slip pulse is reasonable, we assume a hypothetical slip time history (Ohnaka pulse) timed with the occurrence of tremor to create a rupture front that propagates along the fault. The duration of the rupture front for a fault patch is constrained by the observed tremor catalog for the 2010 ETS event. The slip amplitude is scaled appropriately to match the observed surface displacements from GPS. Through a forward simulation, we evaluate the ability of the tremor-derived slip history to accurately predict the pattern of surface displacements observed by GPS. We find that the temporal progression of surface displacements are well modeled by a 2-4 day slip pulse, suggesting that some of the longer duration of slip typically found in time-dependent GPS inversions is biased by the temporal smoothing. However, at some locations on the fault, the tremor lingers beyond the passage of the slip pulse. A small percentage (5-10%) of the tremor appears to be activated ahead of the approaching slip pulse, and tremor asperities experience a driving stress on the order of 10 kPa/day. Tremor amplitude, rather than just tremor counts, is needed

  6. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    Full Text Available In this study, land-surface-related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150 000 km2) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte-Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation) which is easily moisture stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of
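
    The Monte-Carlo conditioning step, sampling parameter sets and retaining the 'behavioural' ones that reproduce the satellite-based dry-season evaporation, can be sketched as below. The toy evaporation model, parameter bounds and RMSE tolerance are illustrative assumptions, not the study's model structure.

```python
import numpy as np

def constrain_parameters(simulate_et, et_satellite, bounds, n_samples=5000, tol=0.5):
    """Monte-Carlo sample parameter sets; keep the 'behavioural' ones whose simulated
    dry-season evaporation stays within tol (RMSE) of the satellite estimates."""
    lo, hi = np.array(bounds, dtype=float).T
    samples = lo + (hi - lo) * np.random.rand(n_samples, len(lo))
    keep = [theta for theta in samples
            if np.sqrt(np.mean((simulate_et(theta) - et_satellite) ** 2)) < tol]
    return np.array(keep)

# toy dry-season model: evaporation decays from a potential rate as storage is depleted
days = np.arange(120)
def simulate_et(theta):
    et_max, storage = theta
    return et_max * np.exp(-et_max * days / storage)

et_satellite = simulate_et([4.0, 150.0]) + np.random.normal(0, 0.2, days.size)
behavioural = constrain_parameters(simulate_et, et_satellite, bounds=[(1, 8), (50, 400)])
```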

  7. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence for the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  8. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    Science.gov (United States)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
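
    The decision logic of a Wald sequential probability ratio test with explicit false-alarm and missed-detection risks can be sketched as follows; in practice the per-pass log-likelihood increments would come from the two constrained epoch-state filters, and the numbers below are purely illustrative.

```python
import numpy as np

def sprt_decision(llr, p_false_alarm=0.01, p_missed_detection=0.001):
    """Wald sequential probability ratio test (illustrative sketch).
    llr: accumulated log-likelihood ratio of the 'collision' hypothesis H1
    over the 'safe miss distance' hypothesis H0."""
    accept_h1 = np.log((1 - p_missed_detection) / p_false_alarm)
    accept_h0 = np.log(p_missed_detection / (1 - p_false_alarm))
    if llr >= accept_h1:
        return "maneuver"            # collision hypothesis accepted
    if llr <= accept_h0:
        return "no maneuver"         # null hypothesis accepted
    return "keep tracking"           # continue collecting data

# toy sequential use: one log-likelihood increment per new tracking pass
llr = 0.0
for increment in [0.8, 1.1, 0.9, 1.4, 1.2]:   # hypothetical per-pass evidence for H1
    llr += increment
    print(sprt_decision(llr))
```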

  9. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  10. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    International Nuclear Information System (INIS)

    Lu, Yu; Wechsler, Risa H.; Somerville, Rachel S.; Croton, Darren; Porter, Lauren; Primack, Joel; Moody, Chris; Behroozi, Peter S.; Ferguson, Henry C.; Koo, David C.; Guo, Yicheng; Safarzadeh, Mohammadtaher; White, Catherine E.; Finlator, Kristian; Castellano, Marco; Sommariva, Veronica

    2014-01-01

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  12. Nonlinear predictive control in the LHC accelerator

    CERN Document Server

    Blanco, E; Cristea, S; Casas, J

    2009-01-01

    This paper describes the application of a nonlinear model-based control strategy to a real, challenging process. A predictive controller based on a nonlinear model derived from physical relationships, mainly heat and mass balances, has been developed and commissioned in the inner triplet heat exchanger unit (IT-HXTU) of the Large Hadron Collider (LHC) particle accelerator at the European Center for Nuclear Research (CERN). The advanced regulation maintains the magnet temperatures at about 1.9 K. The development includes a constrained nonlinear state estimator with a receding horizon estimation procedure to improve the regulator predictions.
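
    A toy sketch of the receding-horizon idea behind such a nonlinear model predictive controller (SciPy); the scalar plant, horizon, weights and input bounds are illustrative and unrelated to the actual IT-HXTU model.

```python
import numpy as np
from scipy.optimize import minimize

dt, N, x_sp = 0.1, 10, 1.9             # step, horizon, setpoint (illustrative)

def predicted_cost(u_seq, x0):
    x, cost = x0, 0.0
    for u in u_seq:                     # roll the nonlinear model out over the horizon
        x = x + dt * (-x ** 3 + u)      # toy nonlinear plant (stand-in for heat/mass balances)
        cost += (x - x_sp) ** 2 + 0.01 * u ** 2
    return cost

def nmpc_step(x0, u_guess):
    res = minimize(predicted_cost, u_guess, args=(x0,),
                   bounds=[(-5.0, 5.0)] * N)          # actuator constraints
    return res.x[0], res.x              # apply only the first move (receding horizon)

x, u_plan = 3.0, np.zeros(N)
for _ in range(50):
    u0, u_plan = nmpc_step(x, u_plan)
    x = x + dt * (-x ** 3 + u0)         # plant update with the applied control
```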

  13. Constrained Fuzzy Predictive Control Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Oussama Ait Sahed

    2015-01-01

    Full Text Available A fuzzy predictive controller using the particle swarm optimization (PSO) approach is proposed. The aim is to develop an efficient algorithm that is able to handle the relatively complex optimization problem with minimal computational time. This can be achieved using a reduced population size and a small number of iterations. In this algorithm, instead of using the uniform distribution as in the conventional PSO algorithm, the initial particle positions are distributed according to the normal distribution law, within the area around the best position. The radius limiting this area is adaptively changed according to the tracking error values. Moreover, the choice of the initial best position is based on prior knowledge about the search space landscape and the fact that in most practical applications the dynamic optimization problem changes are gradual. The efficiency of the proposed control algorithm is evaluated by considering the control of the model of a 4 × 4 Multi-Input Multi-Output industrial boiler. This model is characterized by being nonlinear with high interactions between its inputs and outputs, having a nonminimum phase behaviour, and containing instabilities and time delays. The obtained results are compared to those of the control algorithms based on the conventional PSO and the linear approach.
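
    A compact sketch of a PSO variant initialized from a normal distribution centred on the previous best position, as described above; the inertia and acceleration coefficients, swarm size and toy cost function are illustrative choices.

```python
import numpy as np

def pso(cost, best_known, radius, n_particles=15, n_iter=30, bounds=(-5.0, 5.0)):
    """PSO sketch: the initial swarm is drawn from a normal distribution centred on
    the previous best position (radius = its standard deviation), instead of
    uniformly over the whole search space."""
    d = len(best_known)
    x = np.random.normal(best_known, radius, size=(n_particles, d))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(2, n_particles, 1)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# e.g. re-optimizing a control-move sequence around the previous optimum each sample time
u_prev = np.zeros(5)
u_opt = pso(lambda u: np.sum((u - 1.0) ** 2), best_known=u_prev, radius=0.5)
```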

  14. CP asymmetries in penguin-dominated, hadronic B_d decays: Constraining new physics at NLO

    Energy Technology Data Exchange (ETDEWEB)

    Vickers, Stefan [Excellence Cluster Universe, TU Muenchen (Germany)

    2012-07-01

    CP asymmetries in penguin-dominated, hadronic B_d decays into CP eigenstates (π, η, η', φ, ω, ρ)K_S are predicted to be small in the Standard Model. These observables will be measured in future facilities (Belle II, SuperB) with very high precision and therefore could be used to test CP-violating couplings beyond the Standard Model. We investigate such additional contributions for a general class of models in the framework of QCD factorization at next-to-leading order precision. As an example, we demonstrate how these observables can constrain the parameter space of a generic modification of the Z-penguin.

  15. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
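
    The traditional discrete/discrete constrained least-squares filter that the paper starts from can be sketched in the frequency domain as follows (NumPy); the Laplacian smoothness operator and the regularization weight alpha are the usual illustrative choices, not the paper's continuous/discrete/continuous extension.

```python
import numpy as np

def cls_restore(blurred, psf, alpha=0.01):
    """Discrete/discrete constrained least-squares restoration (sketch):
    F_hat = conj(H) G / (|H|^2 + alpha |C|^2), where H is the blur transfer
    function and C a discrete Laplacian enforcing smoothness."""
    n, m = blurred.shape
    H = np.fft.fft2(psf, s=(n, m))
    lap = np.zeros((n, m))
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    C = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + alpha * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(F_hat))
```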

  16. Predicting the ungauged basin: Model validation and realism assessment

    Directory of Open Access Journals (Sweden)

    Tim van Emmerik

    2015-10-01

    Full Text Available The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.

  17. Prediction during sentence comprehension in aphasia

    Directory of Open Access Journals (Sweden)

    Michael Walsh Dickey

    2014-04-01

    Full Text Available Much recent psycholinguistic work has focused on prediction in language comprehension (Altmann & Kamide, 1999; Federmeier, 2007; Levy, 2008). Unimpaired adults predict upcoming words and phrases based on material in the preceding context, like verbs (Altmann & Kamide, 1999) or constraining sentence contexts (Federmeier, 2007). Several models have tied rapid prediction to the language production system (Federmeier, 2007; Pickering & Garrod, 2013; Dell & Chang, 2014). Evidence for this link comes from the fact that older adults with lower verbal fluency show less predictive behavior (Federmeier, et al., 2010; DeLong, et al., 2012). Prediction in aphasic language comprehension has not been widely investigated, even though constraining sentence contexts are strongly facilitative for naming in aphasia (e.g., Love & Webb, 1977). Mack, et al. (2013) found in a visual-world task that people with aphasia (PWA) do not predict upcoming objects based on verbs (cf. Altmann & Kamide, 1999). This finding suggests that prediction may be reduced in aphasia. However, it is unclear whether reduced prediction was caused by language-production impairments: all the PWA in their study had non-fluent aphasia. The current study examined whether PWA show evidence of prediction based on constraining sentence contexts (e.g., Federmeier, 2007). Specifically, it tested whether they exhibited facilitation for highly predictable words in reading, using materials that have previously demonstrated strong predictability effects for unimpaired adults (Rayner, et al., 2004). In addition, it tested whether differences in language-production ability among PWA accounted for differences in predictive behavior (viz. Pickering & Garrod, 2013; Dell & Chang, 2014). Eight PWA read sentences adapted from Rayner, et al. (2004) in a self-paced reading task. The materials crossed word frequency with predictability: high- vs. low-frequency words (bottle/diaper) were preceded by contexts which made them

  18. Use of stratigraphic models as soft information to constrain stochastic modeling of rock properties: Development of the GSLIB-Lynx integration module

    International Nuclear Information System (INIS)

    Cromer, M.V.; Rautman, C.A.

    1995-10-01

    Rock properties in volcanic units at Yucca Mountain are controlled largely by relatively deterministic geologic processes related to the emplacement, cooling, and alteration history of the tuffaceous lithologic sequence. Differences in the lithologic character of the rocks have been used to subdivide the rock sequence into stratigraphic units, and the deterministic nature of the processes responsible for the character of the different units can be used to infer the rock material properties likely to exist in unsampled regions. This report proposes a quantitative, theoretically justified method of integrating interpretive geometric models, showing the three-dimensional distribution of different stratigraphic units, with numerical stochastic simulation techniques drawn from geostatistics. This integration of soft, constraining geologic information with hard, quantitative measurements of various material properties can produce geologically reasonable, spatially correlated models of rock properties that are free from stochastic artifacts for use in subsequent physical-process modeling, such as the numerical representation of ground-water flow and radionuclide transport. Prototype modeling conducted using the GSLIB-Lynx Integration Module computer program, known as GLINTMOD, has successfully demonstrated the proposed integration technique. The method involves the selection of stratigraphic-unit-specific material-property expected values that are then used to constrain the probability function from which a material property of interest at an unsampled location is simulated

  19. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    Science.gov (United States)

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, and the results further substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, the comparisons with the Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
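
    As a rough illustration of the construction these models discretize (not the paper's own O(h³) formula, which is not reproduced here), the continuous ZNN design for a time-varying linear system W(t)y(t) = u(t), such as the KKT system of an equality-constrained QP, can be sketched as:

```latex
% Standard continuous ZNN design (sketch; symbols generic, not the paper's notation).
% For  min 0.5 x^T P(t) x + q(t)^T x  s.t.  A(t) x = b(t),  take
%   W(t) = [ P(t)  A(t)^T ; A(t)  0 ],   y(t) = [ x(t) ; \lambda(t) ],   u(t) = [ -q(t) ; b(t) ].
e(t) = W(t)\,y(t) - u(t), \qquad \dot e(t) = -\gamma\, e(t)
\;\Longrightarrow\;
W(t)\,\dot y(t) = -\gamma\bigl(W(t)\,y(t) - u(t)\bigr) - \dot W(t)\,y(t) + \dot u(t),
\qquad
y_{k+1} = y_k + h\,\dot y_k \quad \text{(Euler-type step; the Taylor-type models use a higher-order formula).}
```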

  20. Toward cognitively constrained models of language processing : A review

    NARCIS (Netherlands)

    Vogelzang, Margreet; Mills, Anne C.; Reitter, David; van Rij, Jacolien; Hendriks, Petra; van Rijn, Hedderik

    2017-01-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained

  1. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  2. Constraining estimates of methane emissions from Arctic permafrost regions with CARVE

    Science.gov (United States)

    Chang, R. Y.; Karion, A.; Sweeney, C.; Henderson, J.; Mountain, M.; Eluszkiewicz, J.; Luus, K. A.; Lin, J. C.; Dinardo, S.; Miller, C. E.; Wofsy, S. C.

    2013-12-01

    Permafrost in the Arctic contains large carbon pools that are currently non-labile, but can be released to the atmosphere as polar regions warm. In order to predict future climate scenarios, we need to understand the emissions of these greenhouse gases under varying environmental conditions. This study presents in-situ measurements of methane made on board an aircraft during the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE), which sampled over the permafrost regions of Alaska. Using measurements from May to September 2012, seasonal emission rate estimates of methane from tundra are constrained using the Stochastic Time-Inverted Lagrangian Transport model, a Lagrangian particle dispersion model driven by custom polar-WRF fields. Preliminary results suggest that methane emission rates have not greatly increased since the Arctic Boundary Layer Experiment conducted in southwest Alaska in 1988.

  3. The Balance-of-Payments-Constrained Growth Model and the Limits to Export-Led Growth

    Directory of Open Access Journals (Sweden)

    Robert A. Blecker

    2000-12-01

    Full Text Available This paper discusses how A. P. Thirlwall's model of balance-of-payments-constrained growth can be adapted to analyze the idea of a "fallacy of composition" in the export-led growth strategy of many developing countries. The Deaton-Muellbauer model of the Almost Ideal Demand System (AIDS) is used to represent the adding-up constraints on individual countries' exports, when they are all trying to export competing products to the same foreign markets (i.e. newly industrializing countries are exporting similar types of manufactured goods to the OECD countries). The relevance of the model to the recent financial crises in developing countries and policy alternatives for redirecting development strategies are also discussed.
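
    For reference, Thirlwall's basic balance-of-payments-constrained growth rate (the "simple rule" that the paper generalizes with AIDS-based adding-up constraints across exporters) can be written as:

```latex
y_B \;=\; \frac{x}{\pi} \;=\; \frac{\varepsilon\, z}{\pi},
% where y_B is the BoP-constrained growth rate, x the growth of export volumes,
% z the growth of world (foreign) income, \varepsilon the income elasticity of
% demand for exports, and \pi the income elasticity of demand for imports
% (relative price effects assumed negligible).
```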

  4. Predicting in vivo glioma growth with the reaction diffusion equation constrained by quantitative magnetic resonance imaging data

    International Nuclear Information System (INIS)

    Hormuth II, David A; Weis, Jared A; Barnes, Stephanie L; Miga, Michael I; Yankeelov, Thomas E; Rericha, Erin C; Quaranta, Vito

    2015-01-01

    Reaction–diffusion models have been widely used to model glioma growth. However, it has not been shown how accurately this model can predict future tumor status using model parameters (i.e., tumor cell diffusion and proliferation) estimated from quantitative in vivo imaging data. To this end, we used in silico studies to develop the methods needed to accurately estimate tumor specific reaction–diffusion model parameters, and then tested the accuracy with which these parameters can predict future growth. The analogous study was then performed in a murine model of glioma growth. The parameter estimation approach was tested using an in silico tumor ‘grown’ for ten days as dictated by the reaction–diffusion equation. Parameters were estimated from early time points and used to predict subsequent growth. Prediction accuracy was assessed at global (total volume and Dice value) and local (concordance correlation coefficient, CCC) levels. Guided by the in silico study, rats (n = 9) with C6 gliomas, imaged with diffusion weighted magnetic resonance imaging, were used to evaluate the model’s accuracy for predicting in vivo tumor growth. The in silico study resulted in low global (tumor volume error 0.92) and local (CCC values >0.80) level errors for predictions up to six days into the future. The in vivo study showed higher global (tumor volume error >11.7%, Dice <0.81) and higher local (CCC <0.33) level errors over the same time period. The in silico study shows that model parameters can be accurately estimated and used to accurately predict future tumor growth at both the global and local scale. However, the poor predictive accuracy in the experimental study suggests the reaction–diffusion equation is an incomplete description of in vivo C6 glioma biology and may require further modeling of intra-tumor interactions including segmentation of (for example) proliferative and necrotic regions. (paper)
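
    A one-dimensional Fisher-KPP analogue of such a reaction-diffusion tumor model, advanced with an explicit finite-difference scheme, can be sketched as follows; the diffusion and proliferation rates, grid and periodic boundaries are illustrative, not the fitted in vivo values.

```python
import numpy as np

# 1-D analogue of the tumor model:  dc/dt = D * d2c/dx2 + k * c * (1 - c)
D, k, dx, dt, nx, nt = 0.05, 0.8, 0.1, 0.01, 200, 1000     # illustrative values
c = np.zeros(nx)
c[nx // 2] = 0.5                                           # initial seeded tumor cell fraction

for _ in range(nt):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2   # periodic boundaries via roll
    c = c + dt * (D * lap + k * c * (1 - c))                   # explicit forward-Euler update
```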

  5. An algorithm for mass matrix calculation of internally constrained molecular geometries

    International Nuclear Information System (INIS)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-01

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial but need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model
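
    For the full-rank (unconstrained) case, the mass matrix in generalized (internal) coordinates reduces to the familiar congruence transform M_q = Jᵀ M J; a minimal sketch is given below. The rank-deficient constraint cases analysed in the paper require the additional treatment derived there, and the diatomic Jacobian in the example is a center-of-mass-fixed bond-stretch illustration.

```python
import numpy as np

def generalized_mass_matrix(masses, jacobian):
    """Mass matrix in generalized coordinates, M_q = J^T M J (full-rank sketch).
    jacobian[i, j] = d r_i / d q_j maps generalized velocities to the 3N
    Cartesian velocities; masses are per-atom."""
    M = np.diag(np.repeat(np.asarray(masses, dtype=float), 3))   # 3 Cartesian dofs per atom
    J = np.asarray(jacobian, dtype=float)
    return J.T @ M @ J

# toy: a diatomic described by its bond length only (center of mass held fixed)
masses = [12.0, 16.0]                                            # amu, illustrative
J = np.array([[0.0], [0.0], [-16.0 / 28.0],                      # d(atom-1 coords)/d(bond)
              [0.0], [0.0], [ 12.0 / 28.0]])                     # d(atom-2 coords)/d(bond)
print(generalized_mass_matrix(masses, J))                        # ~6.857 = reduced mass
```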

  7. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
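
    The "standard" target-cell-limited model extended with an eclipse (transitioning) class of infected cells can be sketched as a small ODE system (SciPy); the rate constants and initial conditions below are illustrative, not the fitted SIV values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Target-cell-limited model with an eclipse phase:
#   T' = -beta*T*V,  E' = beta*T*V - k*E,  I' = k*E - delta*I,  V' = p*I - c*V
beta, k, delta, p, c = 1e-7, 4.0, 1.0, 1e3, 23.0     # illustrative rates (per day)

def rhs(t, y):
    T, E, I, V = y
    return [-beta * T * V,
            beta * T * V - k * E,
            k * E - delta * I,
            p * I - c * V]

sol = solve_ivp(rhs, (0.0, 30.0), [1e7, 0.0, 0.0, 1e-3],
                method="LSODA", t_eval=np.linspace(0.0, 30.0, 301))
# sol.y[3] is the viral load trajectory; the time to detection can be read off a threshold
```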

  8. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  9. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  10. Constrained structural dynamic model verification using free vehicle suspension testing methods

    Science.gov (United States)

    Blair, Mark A.; Vadlamudi, Nagarjuna

    1988-01-01

    Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.

  11. Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.

    Science.gov (United States)

    Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro

    2018-02-13

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.

  12. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments of mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming) were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in a different way and different methods are implemented, were combined to use the strengths of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for the decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors’ original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  13. An inverse dynamics model for the analysis, reconstruction and prediction of bipedal walking

    NARCIS (Netherlands)

    Koopman, Hubertus F.J.M.; Grootenboer, H.J.; de Jongh, Henk J.; Huijing, P.A.J.B.M.; de Vries, J.

    1995-01-01

    Walking is a constrained movement which may best be observed during the double stance phase when both feet contact the floor. When analyzing a measured movement with an inverse dynamics model, a violation of these constraints will always occur due to measuring errors and deviations of the segments

  14. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing the general constrained optimization problems using evolutionary algorithms. Broadly the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanism; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mix variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environment. There is also a separate chapter on hybrid optimization, which is gaining lots of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novice, and experts alike. The book will also be useful...

  15. The Wally plot approach to assess the calibration of clinical prediction models.

    Science.gov (United States)

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption, or alternatively just "bad luck" due to sampling variability. We propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption to address this issue. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
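
    A rough sketch of the underlying idea (ignoring the censoring and competing risks that the paper handles): outcomes are re-simulated under the calibration assumption, i.e. each subject's event indicator is drawn as Bernoulli(p_i) with p_i the predicted risk, and calibration curves from these simulated datasets show how much scatter perfect calibration can produce. All data below are synthetic.

```python
# Simulate calibration curves under the calibration assumption (no censoring).
import numpy as np

def calibration_curve(pred, outcome, n_bins=10):
    # Group subjects into risk deciles and compare mean predicted vs. observed risk.
    edges = np.quantile(pred, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(pred, edges[1:-1]), 0, n_bins - 1)
    return np.array([[pred[idx == b].mean(), outcome[idx == b].mean()]
                     for b in range(n_bins)])

rng = np.random.default_rng(1)
pred = rng.uniform(0.05, 0.6, size=500)              # hypothetical predicted risks
observed = rng.binomial(1, pred)                     # outcomes consistent with calibration
actual_curve = calibration_curve(pred, observed)
wally_curves = [calibration_curve(pred, rng.binomial(1, pred)) for _ in range(8)]
print(actual_curve[:3])
```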

  16. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  17. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  18. The making of the minibody: an engineered beta-protein for the display of conformationally constrained peptides.

    Science.gov (United States)

    Tramontano, A; Bianchi, E; Venturini, S; Martin, F; Pessi, A; Sollazzo, M

    1994-03-01

    Conformationally constraining selectable peptides onto a suitable scaffold that enables their conformation to be predicted or readily determined by experimental techniques would considerably boost the drug discovery process by reducing the gap between the discovery of a peptide lead and the design of a peptidomimetic with a more desirable pharmacological profile. With this in mind, we designed the minibody, a 61-residue beta-protein aimed at retaining some desirable features of immunoglobulin variable domains, such as tolerance to sequence variability in selected regions of the protein and predictability of the main chain conformation of the same regions, based on the 'canonical structures' model. To test the ability of the minibody scaffold to support functional sites we also designed a metal binding version of the protein by suitably choosing the sequences of its loops. The minibody was produced both by chemical synthesis and expression in E. coli and characterized by size exclusion chromatography, UV CD (circular dichroism) spectroscopy and metal binding activity. All our data supported the model, but a more detailed structural characterization of the molecule was impaired by its low solubility. We were able to overcome this problem both by further mutagenesis of the framework and by addition of a solubilizing motif. The minibody is being used to select constrained human IL-6 peptidic ligands from a library displayed on the surface of the f1 bacteriophage.

  19. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  20. A modified probabilistic genetic algorithm for the solution of complex constrained optimization problems

    OpenAIRE

    Vorozheikin, A.; Gonchar, T.; Panfilov, I.; Sopov, E.; Sopov, S.

    2009-01-01

    A new algorithm for the solution of complex constrained optimization problems, based on the probabilistic genetic algorithm with optimal solution prediction, is proposed. Results of an efficiency investigation, in comparison with the standard genetic algorithm, are presented.

  1. Constraining the interaction between dark sectors with future HI intensity mapping observations

    Science.gov (United States)

    Xu, Xiaodong; Ma, Yin-Zhe; Weltman, Amanda

    2018-04-01

    We study a model of interacting dark matter and dark energy, in which the two components are coupled. We calculate the predictions for the 21-cm intensity mapping power spectra, and forecast the detectability with future single-dish intensity mapping surveys (BINGO, FAST and SKA-I). Since dark energy is turned on at z ~ 1, which falls into the sensitivity range of these radio surveys, the HI intensity mapping technique is an efficient tool to constrain the interaction. By comparing with current constraints on dark sector interactions, we find that future radio surveys will produce tight and reliable constraints on the coupling parameters.

  2. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

    The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Models Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.

  3. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints and revisit the hard square constraint given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...

  4. Constraining the dark energy models with H (z ) data: An approach independent of H0

    Science.gov (United States)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including that of the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and the Type Ia supernova data, we find that the H(z)/SNIa overall statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and supernova Type Ia (SNIa) probes correctly reveals the expansion of the Universe as found by the team of Planck. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when increasing the sample to 100 H(z) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.

  5. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. In particular

  6. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies in predicting different disease outcomes. However, existing predictive models still suffer from limitations in the performance of their predicted outcomes. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To achieve this, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed model improves the predictive performance for TBI. The TBI data set was developed and its features approved by neurologists. The experiment results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.

  7. A Constrained Standard Model: Effects of Fayet-Iliopoulos Terms

    International Nuclear Information System (INIS)

    Barbieri, Riccardo; Hall, Lawrence J.; Nomura, Yasunori

    2001-01-01

    In (1) the one-Higgs-doublet standard model was obtained by an orbifold projection of a 5D supersymmetric theory in an essentially unique way, resulting in a prediction for the Higgs mass m_H = 127 ± 8 GeV and for the compactification scale 1/R = 370 ± 70 GeV. The dominant one-loop contribution to the Higgs potential was found to be finite, while the above uncertainties arose from quadratically divergent brane Z factors and from other higher-loop contributions. In (3), a quadratically divergent Fayet-Iliopoulos term was found at one loop in this theory. We show that the resulting uncertainties in the predictions for the Higgs boson mass and the compactification scale are small, about 25 percent of the uncertainties quoted above, and hence do not affect the original predictions. However, a tree-level brane Fayet-Iliopoulos term could, if large enough, modify these predictions, especially for 1/R.

  8. Analysis of explicit model predictive control for path-following control.

    Science.gov (United States)

    Lee, Junho; Chang, Hyuk-Jun

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, the Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
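
    As a hedged illustration of the kind of constrained quadratic program an MPC lane-keeping controller solves at each sample (not the paper's controller or model), the sketch below uses a hypothetical two-state lateral-error model, a quadratic tracking cost, and a steering bound; explicit MPC would precompute the solution of this QP off-line as a piecewise-affine law via mp-QP. CVXPY is used here purely for convenience.

```python
# Constrained finite-horizon MPC quadratic program (hypothetical lateral-error model).
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # lateral offset / heading error dynamics
B = np.array([[0.0], [0.1]])
Q, R, N = np.diag([10.0, 1.0]), np.array([[1.0]]), 10
x0 = np.array([0.5, 0.0])                 # initial lateral offset (m), heading error (rad)

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[:, k]) <= 0.5]      # steering-angle bound (rad)
cp.Problem(cp.Minimize(cost), cons).solve()
print("first optimal steering move:", float(u.value[0, 0]))
```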

  9. Analysis of explicit model predictive control for path-following control

    Science.gov (United States)

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, the Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080

  10. Models of Eucalypt phenology predict bat population flux.

    Science.gov (United States)

    Giles, John R; Plowright, Raina K; Eby, Peggy; Peel, Alison J; McCallum, Hamish

    2016-10-01

    Fruit bats (Pteropodidae) have received increased attention after the recent emergence of notable viral pathogens of bat origin. Their vagility hinders data collection on abundance and distribution, which constrains modeling efforts and our understanding of bat ecology, viral dynamics, and spillover. We addressed this knowledge gap with models and data on the occurrence and abundance of nectarivorous fruit bat populations at 3 day roosts in southeast Queensland. We used environmental drivers of nectar production as predictors and explored relationships between bat abundance and virus spillover. Specifically, we developed several novel modeling tools motivated by complexities of fruit bat foraging ecology, including: (1) a dataset of spatial variables comprising Eucalypt-focused vegetation indices, cumulative precipitation, and temperature anomaly; (2) an algorithm that associated bat population response with spatial covariates in a spatially and temporally relevant way given our current understanding of bat foraging behavior; and (3) a thorough statistical learning approach to finding optimal covariate combinations. We identified covariates that classify fruit bat occupancy at each of our three study roosts with 86-93% accuracy. Negative binomial models explained 43-53% of the variation in observed abundance across roosts. Our models suggest that spatiotemporal heterogeneity in Eucalypt-based food resources could drive at least 50% of bat population behavior at the landscape scale. We found that 13 spillover events were observed within the foraging range of our study roosts, and they occurred during times when models predicted low population abundance. Our results suggest that, in southeast Queensland, spillover may not be driven by large aggregations of fruit bats attracted by nectar-based resources, but rather by behavior of smaller resident subpopulations. Our models and data integrated remote sensing and statistical learning to make inferences on bat ecology

  11. How will greenhouse gas emissions from motor vehicles be constrained in China around 2030?

    International Nuclear Information System (INIS)

    Zheng, Bo; Zhang, Qiang; Borken-Kleefeld, Jens; Huo, Hong; Guan, Dabo; Klimont, Zbigniew; Peters, Glen P.; He, Kebin

    2015-01-01

    Highlights: • We build a projection model to predict vehicular GHG emissions on a provincial basis. • Fuel efficiency gains cannot constrain vehicle GHGs in major southern provinces. • We propose an integrated policy set through sensitivity analysis of policy options. • The policy set will peak GHG emissions of 90% of provinces and whole China by 2030. - Abstract: Increasing emissions from road transportation endanger China’s objective to reduce national greenhouse gas (GHG) emissions. The unconstrained growth of vehicle GHG emissions is mainly caused by the insufficient improvement of energy efficiency (kilometers traveled per unit energy use) under current policies, which cannot offset the explosion of vehicle activity in China, especially in the major southern provinces. More stringent policies are required to reduce GHG emissions in these provinces, and thereby help to constrain national total emissions. In this work, we make a provincial-level projection of vehicle growth, energy demand and GHG emissions to evaluate vehicle GHG emission trends under various policy options in China and determine how to constrain national emissions. Through sensitivity analysis of various single policies, we propose an integrated policy set to assure that the objective of peak national vehicle GHG emissions is achieved around 2030. The integrated policy involves decreasing the use of urban light-duty vehicles by 25%, improving fuel economy by 25% by 2035 compared with 2020, and promoting electric vehicles and biofuels. The stringent new policies would allow China to constrain GHG emissions from the road transport sector around 2030. This work provides a perspective to understand vehicle GHG emission growth patterns in China’s provinces, and proposes a strong policy combination to constrain national GHG emissions, which can support the achievement of peak GHG emissions by 2030 promised by the Chinese government.

  12. Integrating satellite retrieved leaf chlorophyll into land surface models for constraining simulations of water and carbon fluxes

    KAUST Repository

    Houborg, Rasmus

    2013-07-01

    In terrestrial biosphere models, key biochemical controls on carbon uptake by vegetation canopies are typically assigned fixed literature-based values for broad categories of vegetation types although in reality significant spatial and temporal variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite based retrievals of leaf chlorophyll for estimating leaf photosynthetic capacity and for constraining model simulations of water and carbon fluxes. © 2013 IEEE.

  13. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    Science.gov (United States)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

    Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real-time. The hydraulic model is LISFLOOD-FP, which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.

  14. Improving volcanic ash predictions with the HYSPLIT dispersion model by assimilating MODIS satellite retrievals

    Science.gov (United States)

    Chai, Tianfeng; Crawford, Alice; Stunder, Barbara; Pavolonis, Michael J.; Draxler, Roland; Stein, Ariel

    2017-02-01

    Currently, the National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) runs the HYSPLIT dispersion model with a unit mass release rate to predict the transport and dispersion of volcanic ash. The model predictions provide information for the Volcanic Ash Advisory Centers (VAAC) to issue advisories to meteorological watch offices, area control centers, flight information centers, and others. This research aims to provide quantitative forecasts of ash distributions generated by objectively and optimally estimating the volcanic ash source strengths, vertical distribution, and temporal variations using an observation-modeling inversion technique. In this top-down approach, a cost functional is defined to quantify the differences between the model predictions and the satellite measurements of column-integrated ash concentrations weighted by the model and observation uncertainties. Minimizing this cost functional by adjusting the sources provides the volcanic ash emission estimates. As an example, MODIS (Moderate Resolution Imaging Spectroradiometer) satellite retrievals of the 2008 Kasatochi volcanic ash clouds are used to test the HYSPLIT volcanic ash inverse system. Because the satellite retrievals include the ash cloud top height but not the bottom height, there are different model diagnostic choices for comparing the model results with the observed mass loadings. Three options are presented and tested. Although the emission estimates vary significantly with different options, the subsequent model predictions with the different release estimates all show decent skill when evaluated against the unassimilated satellite observations at later times. Among the three options, integrating over three model layers yields slightly better results than integrating from the surface up to the observed volcanic ash cloud top or using a single model layer. Inverse tests also show that including the ash-free region to constrain the model is not
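
    A simplified sketch of the inversion idea, under the assumption of a linear source-receptor relationship: with a matrix H of unit-emission model predictions mapping emission rates q to column mass loadings, the cost functional penalizes the observation mismatch and the departure from a prior, each weighted by its uncertainty. All numbers below are synthetic placeholders, not Kasatochi values.

```python
# Non-negative least-squares source-term inversion (synthetic placeholders).
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
H = rng.random((50, 8))                   # 50 satellite pixels x 8 candidate source terms
q_true = np.array([0.0, 2.0, 5.0, 1.0, 0.0, 0.0, 3.0, 0.0])
y = H @ q_true + 0.1 * rng.standard_normal(50)   # observed mass loadings

sigma_o, sigma_b = 0.1, 5.0               # observation and background uncertainties
q_prior = np.zeros(8)
# Stacking the weighted terms makes ||A q - b||^2 equal the cost functional.
A = np.vstack([H / sigma_o, np.eye(8) / sigma_b])
b = np.concatenate([y / sigma_o, q_prior / sigma_b])
q_est = lsq_linear(A, b, bounds=(0.0, np.inf)).x  # emissions constrained non-negative
print(np.round(q_est, 2))
```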

  15. Constraining the Q10 of respiration in water-limited environments

    Science.gov (United States)

    Collins, A.; Ryan, M. G.; Xu, C.; Grossiord, C.; Michaletz, S. T.; McDowell, N. G.

    2016-12-01

    If the current rate of greenhouse gas emissions remains constant over the next few decades, projections of climate change forecast increased atmospheric temperatures of at least 1.1 °C by the end of the century. Warmer temperatures are expected to strongly influence the exchange of energy, carbon and water between plants and the atmosphere. Several studies support that terrestrial ecosystems currently act as a major carbon sink; however, warmer temperatures may amplify respiration processes and shift terrestrial ecosystems from a sink to a source of carbon in the future. Most Earth System Models incorporate the temperature dependence of plant respiration (Q10) to estimate and predict respiration processes and associated carbon fluxes. Using a temperature and precipitation manipulation experiment in natural conditions, we present evidence that this parameter is poorly constrained, especially in water-limited environments. We discuss the utility of the Q10 framework and suggest improvements for this parameter along with trait-based approaches to better resolve models.
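
    For reference, the Q10 formalism referred to above scales respiration at temperature T from a reference rate by Q10 raised to (T - T_ref)/10; the values below are purely illustrative, since the abstract's point is that Q10 itself is poorly constrained under water limitation.

```python
# Q10 temperature scaling of respiration (illustrative values).
def respiration(T, R_ref=1.0, T_ref=25.0, Q10=2.0):
    """Respiration rate (arbitrary units) at temperature T (degrees C)."""
    return R_ref * Q10 ** ((T - T_ref) / 10.0)

print(respiration(30.0))             # ~1.41 with Q10 = 2
print(respiration(30.0, Q10=1.5))    # weaker temperature sensitivity
```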

  16. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, thereby producing ensembles which demonstrate errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
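
    A small sketch of the verification step described: for each forecast period the observation's percentile within the ensemble of predicted volumes is recorded, and a flat histogram of these PIT values indicates a reliable ensemble, while a U-shaped one indicates under-dispersion. The data below are synthetic.

```python
# PIT values: observation percentile within the forecast ensemble (synthetic data).
import numpy as np

def pit_values(ensembles, observations):
    """ensembles: (n_cases, n_members); observations: (n_cases,)."""
    return (ensembles < observations[:, None]).mean(axis=1)

rng = np.random.default_rng(2)
ens = rng.normal(loc=100.0, scale=5.0, size=(200, 30))   # ensemble forecast volumes
obs = rng.normal(loc=100.0, scale=8.0, size=200)         # wider spread -> under-dispersion
hist, _ = np.histogram(pit_values(ens, obs), bins=10, range=(0.0, 1.0))
print(hist)   # counts pile up in the outer bins for an under-dispersive ensemble
```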

  17. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Science.gov (United States)

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions

    Science.gov (United States)

    Carpenter, J. Russell; Markley, F. Landis

    2012-01-01

    We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the
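
    A generic Wald sequential probability ratio test in the spirit of the filter-bank approach (not the authors' implementation): innovation likelihoods under the null ("collision") and alternative ("safe miss") hypotheses are accumulated until a threshold set by the chosen false-alarm and missed-detection rates is crossed. Gaussian likelihoods stand in for the constrained filters' innovation statistics.

```python
# Generic Wald SPRT on filter innovations (Gaussian stand-in likelihoods).
import numpy as np
from scipy import stats

def wald_sprt(innovations, sigma_null, sigma_alt, alpha=0.01, beta=0.01):
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for k, nu in enumerate(innovations):
        # Log-likelihood ratio of the alternative vs. the null hypothesis.
        llr += stats.norm.logpdf(nu, scale=sigma_alt) - stats.norm.logpdf(nu, scale=sigma_null)
        if llr >= upper:
            return "accept alternative (safe miss)", k
        if llr <= lower:
            return "accept null (consider mitigation)", k
    return "keep sampling", len(innovations)

rng = np.random.default_rng(3)
print(wald_sprt(rng.normal(scale=2.0, size=50), sigma_null=1.0, sigma_alt=2.0))
```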

  19. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...

  20. Constraining the mSUGRA parameter space through entropy and abundance criteria

    International Nuclear Information System (INIS)

    Cabral-Rosetti, Luis G.; Mondragon, Myriam; Nunez, Dario; Sussman, Roberto A.; Zavala, Jesus; Nellen, Lukas

    2007-01-01

    We explore the use of two criteria to constrain the allowed parameter space in mSUGRA models; both criteria are based on the calculation of the present density of neutralinos χ0 as Dark Matter in the Universe. The first one is the usual ''abundance'' criterion that requires the present neutralino relic density to comply with 0.0945 < Ω_CDM h^2 < 0.1287, which are the 2σ bounds according to WMAP. To calculate the relic density we use the public numerical code micrOMEGAs. The second criterion is the original idea presented in [3] that applies the microcanonical definition of entropy to a weakly interacting and self-gravitating gas, and then evaluates the change in entropy per particle of this gas between the freeze-out era and present-day virialized structures. An 'entropy consistency' criterion emerges by comparing theoretical and empirical estimates of this entropy. One objective of the work is to analyze the joint application of both criteria, already done in [3], to see if their results, which used approximations for the calculation of the relic density, agree with the exact numerical results of micrOMEGAs. The main objective of the work is to use this method to constrain the parameter space in mSUGRA models that are inputs for the calculations of micrOMEGAs, and thus to get some bounds on the predictions for the SUSY spectra

  1. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...

  2. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations

  3. Identification of different geologic units using fuzzy constrained resistivity tomography

    Science.gov (United States)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning the highest probability value of each model cell to a cluster using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from borehole information.
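
    A bare-bones fuzzy c-means routine on log-resistivity values, illustrating only the clustering ingredient that nudges model cells toward a small number of geologic units; it is not the coupled inversion itself, and the resistivity values are synthetic.

```python
# One-dimensional fuzzy c-means on log-resistivity (synthetic values).
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # initial membership matrix
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-12      # point-to-center distances
        u = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return centers, u

rng = np.random.default_rng(1)
rho = np.concatenate([rng.normal(30, 3, 40), rng.normal(300, 30, 30), rng.normal(3000, 300, 30)])
centers, memberships = fuzzy_c_means(np.log10(rho))
print(np.sort(centers))    # roughly log10 of the three geologic units
```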

  4. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes this smearing. Inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user.

  5. On the convergence of the dynamic series solution of a constrained ...

    African Journals Online (AJOL)

    The one dimensional problem of analysing the dynamic behaviour of an elevated water tower with elastic deflection–control device and subjected to a dynamic load was examined in [2]. The constrained elastic system was modeled as a column carrying a concentrated mass at its top and elastically constrained at a point ...

  6. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
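
    A sketch of the model family the paper ends up preferring, assuming scikit-learn as a stand-in implementation: a LASSO regression on training-load predictors augmented with quadratic terms, scored by leave-one-out cross-validation. The feature set and data are made up.

```python
# Quadratic-feature LASSO scored by leave-one-out cross-validation (synthetic data).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(122, 5))   # 122 training plans, 5 load variables
y = 780 + 60 * X[:, 0] - 40 * X[:, 1] ** 2 + rng.normal(0.0, 5.0, 122)  # 3 km time (s)

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      Lasso(alpha=0.1, max_iter=50_000))
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print("LOO mean absolute error (s):", round(-scores.mean(), 2))
```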

  7. Constraining the ensemble Kalman filter for improved streamflow forecasting

    Science.gov (United States)

    Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James

    2018-05-01

    Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also
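
    A minimal sketch of the constraint idea (an illustration of the concept, not the paper's GRKAL implementation): after a standard stochastic EnKF analysis step, each member's storage states are clipped to physical bounds (mass constraint), and the analysis increment is limited so the update cannot move more water per step than a chosen flux bound.

```python
# Stochastic EnKF analysis step with mass (capacity) and increment (flux) limits.
import numpy as np

def constrained_enkf_update(ens, obs, obs_err, H, capacity, max_step, seed=0):
    """ens: (n_state, n_members); obs: scalar; H: (1, n_state) observation operator."""
    n = ens.shape[1]
    y_pert = obs + obs_err * np.random.default_rng(seed).standard_normal(n)
    X = ens - ens.mean(axis=1, keepdims=True)          # state anomalies
    Y = H @ X                                          # predicted-observation anomalies
    K = (X @ Y.T) @ np.linalg.inv(Y @ Y.T + (n - 1) * obs_err**2 * np.eye(1))
    increment = K @ (y_pert - H @ ens)
    increment = np.clip(increment, -max_step, max_step)        # limit water moved per update
    return np.clip(ens + increment, 0.0, capacity[:, None])    # respect storage capacities

ens = np.random.default_rng(1).uniform(0.0, 80.0, size=(3, 20))  # 3 stores, 20 members
updated = constrained_enkf_update(ens, obs=55.0, obs_err=5.0,
                                  H=np.array([[1.0, 0.0, 0.0]]),
                                  capacity=np.array([100.0, 50.0, 200.0]),
                                  max_step=10.0)
print(updated.mean(axis=1))
```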

  8. Off-Line Robust Constrained MPC for Linear Time-Varying Systems with Persistent Disturbances

    Directory of Open Access Journals (Sweden)

    P. Bumroongsri

    2014-01-01

    Full Text Available An off-line robust constrained model predictive control (MPC) algorithm for linear time-varying (LTV) systems is developed. A novel feature is the fact that both model uncertainty and bounded additive disturbance are explicitly taken into account in the off-line formulation of MPC. In order to reduce the on-line computational burden, a sequence of explicit control laws corresponding to a sequence of positively invariant sets is computed off-line. At each sampling time, the smallest positively invariant set containing the measured state is determined and the corresponding control law is implemented in the process. The proposed MPC algorithm can guarantee robust stability while ensuring the satisfaction of input and output constraints. The effectiveness of the proposed MPC algorithm is illustrated by two examples.

  9. A multidisciplinary approach to constrain incoming plate hydration in the Central American Margin

    Science.gov (United States)

    Hu, Y.; Guild, M. R.; Naif, S.; Eimer, M. O.; Evans, O.; Fornash, K.; Plank, T. A.; Shillington, D. J.; Vervelidou, F.; Warren, J. M.; Wiens, D.

    2017-12-01

    The oceanic crust and mantle of the incoming plate are potentially the greatest source of water to the subduction zone, but their extent of hydration is poorly constrained. Hydrothermal alteration of the oceanic crust is an important source of mineral-bound water that ultimately dehydrates during subduction. Bend faults at the trench-outer rise provide another viable mechanism to further hydrate the down-going plate. Here, we take a multidisciplinary approach to constrain the fluid budget of the subducting plate at the Northern Central American margin; this site was chosen since it has an unusually wet subducting slab at the Nicaragua segment. Abundant geophysical and geochemical datasets are available for this region and this work is an analysis of these data. Controlled-source electromagnetic (CSEM) and wide-angle seismic (WAS) observations show significant resistivity and velocity reductions in the incoming oceanic crust associated with bend faults, which suggests seawater infiltration and hydrous alteration. We used the CSEM porosity constraints to predict P-wave velocity and find that the WAS data require an additional reduction of up to 0.3 km/s in the lower crust at the trench, equivalent to 2 wt% H2O. We implemented the porosity structure together with constraints on fluid flow and reaction kinetics into two-phase flow numerical models to quantify the degree of serpentinization possible relative to WAS estimates. Thermodynamic modeling of basalt and peridotite bulk compositions were used to predict the alteration assemblages and associated water contents in the bend faulting region as well as the dehydration fluxes during subduction. In Nicaragua, the major fluid pulse at sub-arc depths results from chlorite and antigorite breakdown in the upper 10 km of the slab mantle, whereas in Costa Rica, the slab mantle is not predicted to dehydrate at sub-arc depths. In addition, comparisons between observed and predicted magnetic anomalies and geochemical variations

  10. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.

  11. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
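
    A minimal sketch of the core idea, phase-space reconstruction by delay embedding followed by prediction from dynamical neighbours, is given below. The embedding dimension, delay, neighbour count and the toy series are assumptions for illustration, not the paper's calibrated settings.

    ```python
    # Minimal sketch (assumed parameters, not the paper's implementation): delay
    # embedding of a scalar time series and one-step prediction from a local
    # average of dynamical neighbours in the reconstructed phase space.
    import numpy as np

    def embed(x, dim, tau):
        """Return delay vectors [x(t-(dim-1)*tau), ..., x(t-tau), x(t)]."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    def local_predict(x, dim=3, tau=1, k=5):
        """Predict the next value from the k nearest neighbours of the last delay vector."""
        vectors = embed(x, dim, tau)
        query, past = vectors[-1], vectors[:-1]            # exclude the query itself
        dist = np.linalg.norm(past - query, axis=1)
        idx = np.argsort(dist)[:k]
        futures = x[(dim - 1) * tau + idx + 1]             # observation one step after each neighbour
        return futures.mean()

    # toy surge-like series: a noisy superposition of two oscillations
    t = np.arange(2000)
    rng = np.random.default_rng(1)
    series = np.sin(0.07 * t) + 0.3 * np.sin(0.23 * t) + 0.05 * rng.normal(size=t.size)

    history, target = series[:-1], series[-1]              # hold out the last observation
    print("one-step prediction:", local_predict(history, dim=4, tau=2, k=8))
    print("observed value     :", target)
    ```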

  13. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization.

    Science.gov (United States)

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixture plays an important part in constraining excessive sheet metal part deformation at machining, assembly, and measuring stages during the whole manufacturing process. However, it is still a difficult and nontrivial task to design and optimize sheet metal fixture locating layout at present because there is always no direct and explicit expression describing sheet metal fixture locating layout and responding deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist design and optimization of sheet metal fixture locating layout. The RBF neural network model is constructed by training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method.
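
    The following sketch shows the general shape of such a model: a Gaussian RBF network fitted by linear least squares to uniformly sampled locator layouts. The data are simulated stand-ins for the finite element responses, and the architecture and kernel width are assumptions, not the authors' trained model.

    ```python
    # Minimal sketch (hypothetical data, not the paper's fixture model): a Gaussian
    # RBF network mapping locator positions to a deformation measure, fitted by
    # linear least squares on uniformly sampled training layouts.
    import numpy as np

    def rbf_design(X, centers, width):
        """Design matrix of Gaussian basis functions centred at 'centers'."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 1, size=(60, 4))                # 4 locator coordinates, uniform sampling
    y_train = np.sin(X_train.sum(axis=1)) + 0.01 * rng.normal(size=60)   # stand-in for FE output

    centers, width = X_train, 0.5                            # one basis function per training point
    Phi = rbf_design(X_train, centers, width)
    weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

    X_new = rng.uniform(0, 1, size=(5, 4))                   # candidate locating layouts
    y_pred = rbf_design(X_new, centers, width) @ weights
    print("predicted deformation measures:", np.round(y_pred, 3))
    ```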

  14. A constrained maximization formulation to analyze deformation of fiber reinforced elastomeric actuators

    Science.gov (United States)

    Singh, Gaurav; Krishnan, Girish

    2017-06-01

    Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus of variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape for FREEs with any general fiber angle orientation, and its relation with actuation pressure, material properties and applied load. First, the accuracy of the model is verified with existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exist. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is deemed to be useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirement.

  15. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    Science.gov (United States)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

Constraints of the optimization objective often cannot be met when predictive control is applied to an industrial production process; the online predictive controller may then fail to find a feasible solution or a global optimal solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, a nonlinear programming method is used to discuss the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case in which the optimization objective is not feasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, which achieves adaptive regulation of the optimization objective, automatically adjusts the infeasible interval range, expands the feasible region, and ensures the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
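
    A toy illustration of the softening step, interval (zone) constraints relaxed with non-negative slack variables so the MPC optimization stays feasible, is sketched below using cvxpy. The single-state model, horizon, bounds and weights are arbitrary assumptions; the record's BP-ARX model and adaptive weighting scheme are not reproduced.

    ```python
    # Minimal sketch (toy single-state system, not the distillation-tower model):
    # one MPC step where interval (zone) constraints on the output are softened
    # with slack variables, so the optimisation stays feasible even when the
    # zone cannot be met exactly.
    import cvxpy as cp

    a, b = 0.9, 0.1                  # x[k+1] = a*x[k] + b*u[k]
    N, x0 = 10, 5.0                  # horizon and initial state
    y_lo, y_hi = 1.8, 2.2            # desired output interval (zone)
    rho = 1e3                        # penalty weight on constraint violation

    x = cp.Variable(N + 1)
    u = cp.Variable(N)
    s_lo = cp.Variable(N, nonneg=True)    # slack on the lower zone bound
    s_hi = cp.Variable(N, nonneg=True)    # slack on the upper zone bound

    cons = [x[0] == x0]
    for k in range(N):
        cons += [x[k + 1] == a * x[k] + b * u[k],
                 x[k + 1] >= y_lo - s_lo[k],
                 x[k + 1] <= y_hi + s_hi[k],
                 cp.abs(u[k]) <= 1.0]

    cost = cp.sum_squares(u) + rho * cp.sum(s_lo + s_hi)
    cp.Problem(cp.Minimize(cost), cons).solve()
    print("first control move:", float(u.value[0]),
          "max zone violation:", float(max(s_lo.value.max(), s_hi.value.max())))
    ```

    Because the input bound makes the target zone unattainable at steady state in this toy system, the slacks absorb the residual violation instead of rendering the optimization infeasible, which is the purpose of the soft-constraint treatment described above.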

  16. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    Science.gov (United States)

    Yan, Zheng; Wang, Jun

    2014-03-01

This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with the unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.

  17. A chance-constrained stochastic approach to intermodal container routing problems.

    Science.gov (United States)

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
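
    For a single on-time-delivery requirement, the standard deterministic equivalent of such a chance constraint under a normality assumption looks as follows; the travel-time statistics and confidence level are hypothetical, and the paper's binary routing variables and hybrid heuristic are not shown.

    ```python
    # Minimal sketch (assumed normally distributed door-to-door time, not the
    # paper's formulation): converting the chance constraint
    # P(travel_time <= deadline) >= alpha into its deterministic equivalent.
    from scipy.stats import norm

    mu, sigma = 52.0, 6.0        # hypothetical mean and std of travel time (hours)
    deadline, alpha = 72.0, 0.95

    z = norm.ppf(alpha)                       # quantile of the standard normal
    feasible = mu + z * sigma <= deadline     # deterministic equivalent of the chance constraint
    print(f"required buffer: {z * sigma:.1f} h, route admissible: {feasible}")
    ```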

  18. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 data base of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  19. A supply function model for representing the strategic bidding of the producers in constrained electricity markets

    International Nuclear Information System (INIS)

    Bompard, Ettore; Napoli, Roberto; Lu, Wene; Jiang, Xiuchen

    2010-01-01

The modeling of the bidding behaviour of the producer is a key point in the modeling and simulation of competitive electricity markets. In this paper, the linear supply function model is applied so as to find the Supply Function Equilibrium (SFE) analytically. A new and efficient approach is also proposed to find SFEs for network-constrained electricity markets by finding the best slope of the supply function while changing the intercept; the method can be applied to large systems. The proposed approach is applied to study the IEEE-118 bus test system, and a comparison between bidding slope and bidding intercept is presented with reference to the test system. (author)

  20. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

There is a growing interest from both the regulatory authorities and the nuclear industry to stimulate the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter into optimization problems. This paper presents an approach for using PRA models in the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or the burden is to be minimized while the risk or performance is constrained to be at a given level, or vice versa. The paper first addresses the problem formulation, where the objective function and constraints that apply in the constrained optimization of TIs based on risk and cost models at the system level are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA in order to allow optimizing TIs under constraints. A case study is also performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms, in particular for the constrained optimization of TIs; a great benefit is also expected from using this approach to solve other engineering optimization problems. However, care must be taken when using genetic algorithms in constrained optimization problems, as concluded in this paper.
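
    A minimal steady-state GA of the kind described, minimizing a testing-cost surrogate subject to a cap on a simple standby-unavailability surrogate handled through a penalty, is sketched below. The failure rates, cost and risk expressions and the risk limit are illustrative assumptions, not the plant-specific PRA models.

    ```python
    # Minimal sketch (illustrative cost/risk surrogates, not the PRA models): a
    # steady-state GA that minimises a testing burden over test intervals T while
    # keeping a simple unavailability surrogate below a prescribed limit.
    import numpy as np

    rng = np.random.default_rng(0)
    n_comp, pop_size, limit = 4, 30, 0.1
    lam = np.array([1e-4, 2e-4, 5e-5, 1e-4])   # standby failure rates (1/h), assumed
    t_test = 2.0                               # test duration (h), assumed

    def cost(T):                 # yearly testing burden: more frequent tests cost more
        return np.sum(8760.0 / T)

    def risk(T):                 # unavailability surrogate: lam*T/2 + t_test/T per component
        return np.sum(lam * T / 2.0 + t_test / T)

    def fitness(T):              # penalised objective for the constrained problem
        return cost(T) + 1e6 * max(0.0, risk(T) - limit)

    pop = rng.uniform(100.0, 8760.0, size=(pop_size, n_comp))   # test intervals in hours
    for _ in range(5000):
        i, j = rng.choice(pop_size, 2, replace=False)
        alpha = rng.random(n_comp)
        child = alpha * pop[i] + (1 - alpha) * pop[j]           # blend crossover
        child *= rng.normal(1.0, 0.05, n_comp)                  # mutation
        child = np.clip(child, 24.0, 8760.0)
        worst = np.argmax([fitness(ind) for ind in pop])        # steady-state replacement
        if fitness(child) < fitness(pop[worst]):
            pop[worst] = child

    best = pop[np.argmin([fitness(ind) for ind in pop])]
    print("best test intervals (h):", np.round(best), " risk surrogate:", round(risk(best), 4))
    ```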

  1. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  2. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  3. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    Science.gov (United States)

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  4. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been very significant, and the traditional statistical prediction method suffers from low precision and poor interpretability, so it can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  5. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  6. Multiple Observation Types Jointly Constrain Terrestrial Carbon and Water Cycles

    Science.gov (United States)

    Raupach, M. R.; Haverd, V.; Briggs, P. R.; Canadell, J.; Davis, S. J.; Isaac, P. R.; Law, R.; Meyer, M.; Peters, G. P.; Pickett Heaps, C.; Roxburgh, S. H.; Sherman, B.; van Gorsel, E.; Viscarra Rossel, R.; Wang, Z.

    2012-12-01

Information about the carbon cycle potentially constrains the water cycle, and vice versa. This paper explores the utility of multiple observation sets to constrain carbon and water fluxes and stores in a land surface model, and a resulting determination of the Australian terrestrial carbon budget. Observations include streamflow from 416 gauged catchments, measurements of evapotranspiration (ET) and net ecosystem production (NEP) from 12 eddy-flux sites, litterfall data, and data on carbon pools. The model is a version of CABLE (the Community Atmosphere-Biosphere-Land Exchange model), coupled with CASAcnp (a biogeochemical model) and SLI (Soil-Litter-Iso, a soil hydrology model including liquid and vapour water fluxes and the effects of litter). By projecting observation-prediction residuals onto model uncertainty, we find that eddy flux measurements provide a significantly tighter constraint on Australian continental net primary production (NPP) than the other data types. However, simultaneous constraint by multiple data types is important for mitigating bias from any single type. Results emerging from the multiply-constrained model are as follows (with all values applying over 1990-2011 and all ranges denoting ±1 standard error): (1) on the Australian continent, a predominantly semi-arid region, over half (0.64±0.05) of the water loss through ET occurs through soil evaporation and bypasses plants entirely; (2) mean Australian NPP is 2200±400 TgC/y, making the NPP/precipitation ratio about the same for Australia as the global land average; (3) annually cyclic ("grassy") vegetation and persistent ("woody") vegetation respectively account for 0.56±0.14 and 0.43±0.14 of NPP across Australia; (4) the average interannual variability of Australia's NEP (±180 TgC/y) is larger than Australia's total anthropogenic greenhouse gas emissions in 2011 (149 TgCeq/y), and is dominated by variability in desert and savannah regions. The mean carbon budget over 1990

  7. Fast optimization of statistical potentials for structurally constrained phylogenetic models

    Directory of Open Access Journals (Sweden)

    Rodrigue Nicolas

    2009-09-01

Full Text Available Background. Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC) models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimations by the use of computationally heavy Markov Chain Monte Carlo sampling algorithms. Results. Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We assess that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters, and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time for the optimization procedure). Conclusion. Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.

  8. Commitment Versus Persuasion in the Three-Party Constrained Voter Model

    Science.gov (United States)

    Mobilia, Mauro

    2013-04-01

In the framework of the three-party constrained voter model, where voters of two radical parties (A and B) interact with "centrists" (C and Cζ), we study the competition between a persuasive majority and a committed minority. In this model, A's and B's are incompatible voters that can convince centrists or be swayed by them. Here, radical voters are more persuasive than centrists, whose sub-population comprises susceptible agents C and a fraction ζ of centrist zealots Cζ. Whereas C's may adopt the opinions A and B with respective rates 1+δA and 1+δB (with δA ≥ δB > 0), Cζ's are committed individuals that always remain centrists. Furthermore, A and B voters can become (susceptible) centrists C with a rate 1. The resulting competition between commitment and persuasion is studied in the mean field limit and for a finite population on a complete graph. At mean field level, there is a continuous transition from a coexistence phase when ζpersuasion, here consensus is reached much slower (ζpersuasive voters and centrists coexist when δA > δB, whereas all species coexist when δA = δB. When ζ ≥ Δc and the initial density of centrists is low, one finds τ ~ ln N (when N ≫ 1). Our analytical findings are corroborated by stochastic simulations.

  9. Dark matter in a constrained E6 inspired SUSY model

    International Nuclear Information System (INIS)

    Athron, P.; Harries, D.; Nevzorov, R.; Williams, A.G.

    2016-01-01

We investigate dark matter in a constrained E6 inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of E6 leads to an additional U(1)_N symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time we also show that there is a huge volume of parameter space, with a predominantly Higgsino dark matter candidate that explains all the relic abundance, that will be discoverable with XENON1T. Finally we demonstrate that for the E6 inspired model the exotic leptoquarks could still be light and within range of future LHC searches.

  10. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

Full Text Available We propose a weather prediction model in this article based on neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but could learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using the complex numerical forecasting model that would occupy large computation resources, be time-consuming and which has a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  11. Estimation of p,p'-DDT degradation in soil by modeling and constraining hydrological and biogeochemical controls.

    Science.gov (United States)

    Sanka, Ondrej; Kalina, Jiri; Lin, Yan; Deutscher, Jan; Futter, Martyn; Butterfield, Dan; Melymuk, Lisa; Brabec, Karel; Nizzetto, Luca

    2018-08-01

Despite not being used for decades in most countries, DDT remains ubiquitous in soils due to its persistence and intense past usage. Because of this it is still a pollutant of high global concern. Assessing long term dissipation of DDT from this reservoir is fundamental to understand future environmental and human exposure. Despite a large research effort, key properties controlling fate in soil (in particular, the degradation half-life, τ_soil) are far from being fully quantified. This paper describes a case study in a large central European catchment where hundreds of measurements of p,p'-DDT concentrations in air, soil, river water and sediment are available for the last two decades. The goal was to deliver an integrated estimation of τ_soil by constraining a state-of-the-art hydrobiogeochemical-multimedia fate model of the catchment against the full body of empirical data available for this area. The INCA-Contaminants model was used for this scope. Good predictive performance against an (external) dataset of water and sediment concentrations was achieved with partitioning properties taken from the literature and τ_soil estimates obtained from forcing the model against empirical historical data of p,p'-DDT in the catchment multicompartments. This approach allowed estimation of p,p'-DDT degradation in soil after taking adequate consideration of losses due to runoff and volatilization. Estimated τ_soil ranged over 3000-3800 days. Degradation was the most important loss process, accounting on a yearly basis for more than 90% of the total dissipation. The total dissipation flux from the catchment soils was one order of magnitude higher than the total current atmospheric input estimated from atmospheric concentrations, suggesting that the bulk of p,p'-DDT currently being remobilized or lost is essentially that accumulated over two decades ago. Copyright © 2018 Elsevier Ltd. All rights reserved.
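
    Treating degradation as first-order decay, the reported half-life range translates into residual fractions as in this small sketch; the 20-year horizon is an arbitrary illustration, not a figure from the study.

    ```python
    # Minimal sketch (simple first-order decay, consistent with degradation being
    # the dominant loss term): fraction of p,p'-DDT remaining in soil after a
    # given time for the estimated half-life range.
    import numpy as np

    for tau_days in (3000.0, 3800.0):            # estimated degradation half-life range
        k = np.log(2) / tau_days                 # first-order rate constant (1/day)
        remaining_20y = np.exp(-k * 20 * 365.25)
        print(f"half-life {tau_days:.0f} d -> {remaining_20y:.1%} left after 20 years")
    ```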

  12. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  13. Advance in prediction of soil slope instabilities

    Science.gov (United States)

    Sigarán-Loría, C.; Hack, R.; Nieuwenhuis, J. D.

    2012-04-01

Six generic soils (clays and sands) were systematically modeled with plane-strain finite elements (FE) at varying heights and inclinations. A dataset was generated in order to develop predictive relations of soil slope instabilities, in terms of co-seismic displacements (u), under strong motions with a linear multiple regression. For simplicity, the seismic loads are monochromatic artificial sinusoidal functions at four frequencies: 1, 2, 4, and 6 Hz, and the slope failure criterion used corresponds to near 10% Cartesian shear strains along a continuous region comparable to a slip surface. The generated dataset comprises variables from the slope geometry and site conditions: height, H, inclination, i, shear wave velocity from the upper 30 m, vs30, site period, Ts; as well as the input strong motion: yield acceleration, ay (equal to peak ground acceleration, PGA, in this research), frequency, f; and in some cases moment magnitude, M, and Arias intensity, Ia, assumed from empirical correlations. Different datasets or scenarios were created: "Magnitude-independent", "Magnitude-dependent", and "Soil-dependent", and the data were statistically explored and analyzed with varying mathematical forms. Qualitative relations show that the permanent deformations are highly related to the soil class for the clay slopes, but not for the sand slopes. Furthermore, the slope height does not constrain the variability in the co-seismic displacements. The input frequency decreases the variability of the co-seismic displacements for the "Magnitude-dependent" and "Soil-dependent" datasets. The empirical models were developed with two and three predictors. For the sands this was not possible because they could not satisfy the constraints of the statistical method. For the clays, the best models with the smallest errors coincided with the simple general form of multiple regression with three predictors (e.g. near 0.16 and 0.21 standard error, S.E. and 0.75 and 0.55 R2 for the "M

  14. GRACE gravity data help constraining seismic models of the 2004 Sumatran earthquake

    Science.gov (United States)

    Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.

    2011-10-01

The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Owing to compressible self-gravitating Earth models, including sea level feedback in a new self-consistent way and designed to compute gravitational perturbations due to volume changes separately, we are able to prove that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, the gravitational effect of which is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, consisting of the peak-to-peak gravity anomaly and an asymmetry coefficient, that is given by the ratio of the negative gravity anomaly over the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid-moment-tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle. Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided

  15. Robust stability in predictive control with soft constraints

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

In this paper we take advantage of the primary and dual Youla parameterizations for setting up a soft constrained model predictive control (MPC) scheme for which stability is guaranteed in face of norm-bounded uncertainties. Under special conditions, guarantees are also given for hard input constraints. In more detail, we parameterize the MPC predictions in terms of the primary Youla parameter and use this parameter as the online optimization variable. The uncertainty is parameterized in terms of the dual Youla parameter. Stability can then be guaranteed through small gain arguments on the loop...

  16. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  17. Predictive control, with restrictions for the climate of a greenhouse

    International Nuclear Information System (INIS)

    Pinon, Sandra; Pena, Miguel; Kuchen, Benjamin

    2002-01-01

A proposal for controlling nonlinear systems under constraints is presented. A combination of model predictive control and feedback linearization is used. An alternative that uses an extended Kalman filter as a non-measured variable estimator is applied for performing the constrained optimization. Finally, an observability analysis is done in closed loop in order to demonstrate observer convergence.

  18. Constraining Lyman continuum escape using Machine Learning

    Science.gov (United States)

    Giri, Sambit K.; Zackrisson, Erik; Binggeli, Christian; Pelckmans, Kristiaan; Cubo, Rubén; Mellema, Garrelt

    2018-05-01

The James Webb Space Telescope (JWST) will observe the rest-frame ultraviolet/optical spectra of galaxies from the epoch of reionization (EoR) in unprecedented detail. While escaping into the intergalactic medium, hydrogen-ionizing (Lyman continuum; LyC) photons from the galaxies will contribute to the bluer end of the UV slope and make nebular emission lines less prominent. We present a method to constrain leakage of the LyC photons using the spectra of high redshift (z ≳ 6) galaxies. We simulate JWST/NIRSpec observations of galaxies at z = 6-9 by matching the fluxes of galaxies observed in the Frontier Fields observations of galaxy cluster MACS-J0416. Our method predicts the escape fraction f_esc with a mean absolute error Δf_esc ~ 0.14. The method also predicts the redshifts of the galaxies with an error.

  19. Higher-order QCD predictions for dark matter production at the LHC in simplified models with s-channel mediators.

    Science.gov (United States)

    Backović, Mihailo; Krämer, Michael; Maltoni, Fabio; Martini, Antony; Mawatari, Kentarou; Pellen, Mathieu

Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which allows one to include higher-order QCD corrections in realistic simulations and to study their effect systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties.

  20. Higher-order QCD predictions for dark matter production at the LHC in simplified models with s-channel mediators

    Energy Technology Data Exchange (ETDEWEB)

    Backović, Mihailo [Centre for Cosmology, Particle Physics and Phenomenology (CP3), Université catholique de Louvain, 1348, Louvain-la-Neuve (Belgium); Krämer, Michael [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University, 52056, Aachen (Germany); Maltoni, Fabio; Martini, Antony [Centre for Cosmology, Particle Physics and Phenomenology (CP3), Université catholique de Louvain, 1348, Louvain-la-Neuve (Belgium); Mawatari, Kentarou, E-mail: kentarou.mawatari@vub.ac.be [Theoretische Natuurkunde and IIHE/ELEM, Vrije Universiteit Brussel, and International Solvay Institutes, Pleinlaan 2, 1050, Brussels (Belgium); Pellen, Mathieu [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University, 52056, Aachen (Germany)

    2015-10-07

Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which allows one to include higher-order QCD corrections in realistic simulations and to study their effect systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties.

  1. Higher-order QCD predictions for dark matter production at the LHC in simplified models with s-channel mediators

    Energy Technology Data Exchange (ETDEWEB)

    Backovic, Mihailo; Maltoni, Fabio; Martini, Antony [Universite catholique de Louvain, Centre for Cosmology, Particle Physics and Phenomenology (CP3), Louvain-la-Neuve (Belgium); Kraemer, Michael; Pellen, Mathieu [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany); Mawatari, Kentarou [Theoretische Natuurkunde and IIHE/ELEM, Vrije Universiteit Brussel, and International Solvay Institutes, Brussels (Belgium)

    2015-10-15

Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which allows one to include higher-order QCD corrections in realistic simulations and to study their effect systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties. (orig.)

  2. Higher-order QCD predictions for dark matter production at the LHC in simplified models with s-channel mediators

    International Nuclear Information System (INIS)

    Backovic, Mihailo; Maltoni, Fabio; Martini, Antony; Kraemer, Michael; Pellen, Mathieu; Mawatari, Kentarou

    2015-01-01

Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which allows one to include higher-order QCD corrections in realistic simulations and to study their effect systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties. (orig.)

  3. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead times. Consistent with previous studies, the MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related with the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening horizontal moisture gradient. This likely dampens the organization and propagation of MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelop. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show a higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
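
    MJO prediction skill of this kind is commonly scored with the bivariate correlation between observed and forecast RMM indices, with skill defined as the lead time at which the correlation drops below 0.5. A sketch on synthetic indices is given below; the data are random stand-ins, not S2S output.

    ```python
    # Minimal sketch (synthetic RMM indices, not S2S data): the bivariate
    # correlation commonly used to score MJO forecasts against observations.
    import numpy as np

    def bivariate_corr(a1, a2, f1, f2):
        """Bivariate correlation between observed (a1, a2) and forecast (f1, f2) RMM indices."""
        num = np.sum(a1 * f1 + a2 * f2)
        den = np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(f1**2 + f2**2))
        return num / den

    rng = np.random.default_rng(0)
    obs1, obs2 = rng.normal(size=200), rng.normal(size=200)
    fc1 = obs1 + 0.7 * rng.normal(size=200)        # noisier "forecast" indices
    fc2 = obs2 + 0.7 * rng.normal(size=200)
    print(f"bivariate correlation: {bivariate_corr(obs1, obs2, fc1, fc2):.2f}")
    ```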

  4. Robust and Efficient Constrained DFT Molecular Dynamics Approach for Biochemical Modeling

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Levy, B.; Demachy, I.; de la Lande, A.

    2012-01-01

Roč. 8, č. 2 (2012), s. 418-427 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords: constrained density functional theory * electron transfer * density fitting Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 5.389, year: 2012

  5. Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial

    Directory of Open Access Journals (Sweden)

    A. Robinson

    2011-04-01

Full Text Available Using a new approach to force an ice sheet model, we performed an ensemble of simulations of the Greenland Ice Sheet evolution during the last two glacial cycles, with emphasis on the Eemian Interglacial. This ensemble was generated by perturbing four key parameters in the coupled regional climate-ice sheet model and by introducing additional uncertainty in the prescribed "background" climate change. The sensitivity of the surface melt model to climate change was determined to be the dominant driver of ice sheet instability, as reflected by simulated ice sheet loss during the Eemian Interglacial period. To eliminate unrealistic parameter combinations, constraints from present-day and paleo information were applied. The constraints include (i) the diagnosed present-day surface mass balance partition between surface melting and ice discharge at the margin; (ii) the modeled present-day elevation at GRIP; and (iii) the modeled elevation reduction at GRIP during the Eemian. Using these three constraints, a total of 360 simulations with 90 different model realizations were filtered down to 46 simulations and 20 model realizations considered valid. The paleo constraint eliminated more sensitive melt parameter values, in agreement with the surface mass balance partition assumption. The constrained simulations resulted in a range of Eemian ice loss of 0.4–4.4 m sea level equivalent, with a more likely range of about 3.7–4.4 m sea level equivalent if the GRIP δ18O isotope record can be considered an accurate proxy for the precipitation-weighted annual mean temperatures.

  6. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization

    Directory of Open Access Journals (Sweden)

    Zhongqi Wang

    2016-01-01

    Full Text Available Fixture plays an important part in constraining excessive sheet metal part deformation at machining, assembly, and measuring stages during the whole manufacturing process. However, it is still a difficult and nontrivial task to design and optimize sheet metal fixture locating layout at present because there is always no direct and explicit expression describing sheet metal fixture locating layout and responding deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist design and optimization of sheet metal fixture locating layout. The RBF neural network model is constructed by training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method.

  7. Analysis of the Spatial Variation of Network-Constrained Phenomena Represented by a Link Attribute Using a Hierarchical Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-02-01

    Full Text Available The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs for leisure activities. Then, the methods named local indicators of network-constrained clusters (LINCS are applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of random variability resulting from small-sample estimation. The effects of spatial dependence and the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.

  8. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  9. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  10. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    Science.gov (United States)

    Dossett, Jason; Hu, Bin; Parkinson, David

    2014-03-01

    In order to explain cosmic acceleration without invoking ``dark'' physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy survey to probe the formation of structure on large scales which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution of this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0 by an order of magnitude, giving log10(B0) explanation.

  11. A fuzzy chance-constrained programming model with type 1 and type 2 fuzzy sets for solid waste management under uncertainty

    Science.gov (United States)

    Ma, Xiaolin; Ma, Chi; Wan, Zhifang; Wang, Kewei

    2017-06-01

    Effective management of municipal solid waste (MSW) is critical for urban planning and development. This study aims to develop an integrated type 1 and type 2 fuzzy sets chance-constrained programming (ITFCCP) model for tackling regional MSW management problem under a fuzzy environment, where waste generation amounts are supposed to be type 2 fuzzy variables and treated capacities of facilities are assumed to be type 1 fuzzy variables. The evaluation and expression of uncertainty overcome the drawbacks in describing fuzzy possibility distributions as oversimplified forms. The fuzzy constraints are converted to their crisp equivalents through chance-constrained programming under the same or different confidence levels. Regional waste management of the City of Dalian, China, was used as a case study for demonstration. The solutions under various confidence levels reflect the trade-off between system economy and reliability. It is concluded that the ITFCCP model is capable of helping decision makers to generate reasonable waste-allocation alternatives under uncertainties.

  12. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  13. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent System Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data indicating that small amounts of
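
    The simple averaging baseline mentioned above can be as small as the following sketch, which predicts a 15-minute interval from the same time-of-day slot on preceding days; the meter data are simulated placeholders, not the study's dataset.

    ```python
    # Minimal sketch (hypothetical smart-meter readings): a simple averaging
    # baseline that predicts the next 15-min interval from the same time-of-day
    # slot over the preceding days.
    import numpy as np

    rng = np.random.default_rng(0)
    days, slots = 7, 96                                        # one week of 15-min readings
    load = 5 + 2 * np.sin(np.linspace(0, 2 * np.pi, slots))    # assumed daily profile (kW)
    history = load + 0.3 * rng.normal(size=(days, slots))      # noisy past observations

    slot = 40                                                  # a mid-morning interval, say
    prediction = history[:, slot].mean()                       # average of the same slot on prior days
    print(f"predicted 15-min load: {prediction:.2f} kW")
    ```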

  14. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
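
    A rough sketch of this comparison and of the meta-model idea, with simulated genotypes and an assumed external "GWAMA-like" score standing in for real summary statistics; for brevity the blend is fitted on in-sample predictions, whereas a careful implementation would use held-out predictions.

```python
# Hedged sketch (not the paper's pipeline): compare dense vs. sparse whole-genome
# predictors and blend them with an external risk score in a simple meta-model.
# Genotypes, phenotype and the external score are simulated for illustration.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p = 600, 2000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # SNP dosages 0/1/2
beta = np.zeros(p); beta[:20] = rng.normal(0, 0.5, 20)    # a few moderate effects
y = X @ beta + rng.normal(0, 1.0, n)
ext_score = X @ (beta + rng.normal(0, 0.1, p))            # noisy external summary score

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, y, ext_score,
                                                      test_size=0.3, random_state=0)
models = {"ridge": Ridge(alpha=100.0),
          "lasso": Lasso(alpha=0.05, max_iter=10000),
          "enet": ElasticNet(alpha=0.05, l1_ratio=0.5, max_iter=10000)}
preds_tr, preds_te = [s_tr], [s_te]
for name, m in models.items():
    m.fit(X_tr, y_tr)
    preds_tr.append(m.predict(X_tr)); preds_te.append(m.predict(X_te))
    print(name, "r =", np.corrcoef(m.predict(X_te), y_te)[0, 1].round(3))

# meta-model: linear blend of the individual predictors and the external score
meta = LinearRegression().fit(np.column_stack(preds_tr), y_tr)
blend = meta.predict(np.column_stack(preds_te))
print("meta-model r =", np.corrcoef(blend, y_te)[0, 1].round(3))
```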

  15. A Study of Interactions between Mixing and Chemical Reaction Using the Rate-Controlled Constrained-Equilibrium Method

    Science.gov (United States)

    Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed

    2016-10-01

    The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows taking account of mixing alongside chemical reaction without splitting. The RCCE is a dimension reduction method for chemical kinetics based on thermodynamics laws. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. The RCCE is applied to a spatially homogeneous constant pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprised of 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed detailed kinetics model (DKM). The RCCE shows accurate prediction of combustion in PaSR with different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.

  16. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four different models, each considering a different parameter combination, are developed by the authors. Results obtained are compared to the landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
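
    The pairwise-comparison (AHP) weighting step mentioned above can be sketched as follows, using the principal eigenvector of an illustrative 3x3 reciprocal matrix (the authors' nine-parameter matrices are not reproduced here) together with Saaty's consistency check.

```python
# Minimal sketch of deriving factor weights from an AHP pairwise-comparison matrix
# via its principal eigenvector, with the standard consistency ratio check.
# The 3x3 matrix below (e.g. slope vs. lithology vs. land use) is purely illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])       # reciprocal pairwise-comparison matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # normalised weights for the three factors

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
ri = 0.58                               # random index for n = 3 (Saaty's table)
print("weights:", w.round(3), "CR =", round(ci / ri, 3))
```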

  17. Improving weather predictability by including land-surface model parameter uncertainty

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters, using uncoupled simulations and coupled seasonal forecasts. Additionally we investigate the possibility of constructing ensembles from perturbations of the land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best-performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating that model performance carries over from uncoupled (and hence less computationally demanding) to coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system.

  18. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  20. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the heart disease rate componentwise for the clusters obtained. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
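
    A hedged sketch of fitting a two-component Poisson mixture regression by EM, with weighted Poisson GLM fits in the M-step; the data are simulated, and the concomitant-variable and zero-inflated variants discussed in the abstract are not implemented.

```python
# Hedged sketch (not the paper's exact model): a two-component Poisson mixture
# regression fitted by EM, using weighted Poisson GLM fits for the M-step.
# Data are simulated; in the application the counts would be heart-disease events.
import numpy as np
from scipy.stats import poisson
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 2))
z = rng.random(n) < 0.4                                   # latent high-risk indicator
mu = np.exp(np.where(z, 1.5 + 0.8 * X[:, 0], 0.2 - 0.3 * X[:, 1]))
y = rng.poisson(mu)

pi = 0.5
comps = [PoissonRegressor(max_iter=300), PoissonRegressor(max_iter=300)]
resp = (y > np.median(y)).astype(float)                   # crude initial responsibilities
for _ in range(50):                                       # EM iterations
    # M-step: weighted GLM fit per component, update the mixing proportion
    comps[0].fit(X, y, sample_weight=resp)
    comps[1].fit(X, y, sample_weight=1 - resp)
    pi = resp.mean()
    # E-step: posterior probability that each observation belongs to component 0
    like0 = pi * poisson.pmf(y, comps[0].predict(X))
    like1 = (1 - pi) * poisson.pmf(y, comps[1].predict(X))
    resp = like0 / (like0 + like1)

print("estimated mixing proportion:", round(pi, 2))
```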

  2. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    Science.gov (United States)

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task-a celestial-cue-based visual compass and an optic-flow-based visual odometer-but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings-suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is incorporating the advantages and disadvantages of the different models to obtain better accuracy.

  4. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.

  5. A Dantzig-Wolfe decomposition algorithm for linear economic model predictive control of dynamically decoupled subsystems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Standardi, Laura; Edlund, Kristian

    2014-01-01

    This paper presents a warm-started Dantzig–Wolfe decomposition algorithm tailored to economic model predictive control of dynamically decoupled subsystems. We formulate the constrained optimal control problem solved at each sampling instant as a linear program with state space constraints, input limits, input rate limits, and soft output limits. The objective function of the linear program is related directly to the cost of operating the subsystems, and the cost of violating the soft output constraints. Simulations for large-scale economic power dispatch problems show that the proposed algorithm is significantly faster than both state-of-the-art linear programming solvers, and a structure-exploiting implementation of the alternating direction method of multipliers. It is also demonstrated that the control strategy presented in this paper can be tuned using a weighted ℓ1-regularization term.
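
    The kind of linear economic dispatch problem such a controller solves at each sampling instant can be sketched as below for a toy two-unit, four-step horizon, solved monolithically with an off-the-shelf LP solver; the paper's Dantzig-Wolfe decomposition, warm starting, rate limits and soft output constraints are not reproduced, and all numbers are hypothetical.

```python
# Minimal sketch of a linear economic dispatch problem of the type solved by the
# economic MPC above, posed and solved monolithically with scipy. All costs, limits
# and demands are hypothetical.
import numpy as np
from scipy.optimize import linprog

T, units = 4, 2
cost = np.array([10.0, 25.0])             # $/MWh per unit
p_max = np.array([60.0, 80.0])            # capacity per unit (MW)
demand = np.array([70.0, 90.0, 110.0, 80.0])

# decision vector: p[t, u] flattened row-wise, length T*units
c = np.tile(cost, T)
A_eq = np.zeros((T, T * units))
for t in range(T):
    A_eq[t, t * units:(t + 1) * units] = 1.0   # sum of unit outputs meets demand at t
bounds = [(0.0, p_max[u]) for _ in range(T) for u in range(units)]

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds)
print(res.x.reshape(T, units))            # the cheap unit is loaded first at every step
```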

  6. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  7. Constraining the magnitude of the largest event in a foreshock-main shock-aftershock sequence

    Science.gov (United States)

    Shcherbakov, Robert; Zhuang, Jiancang; Ogata, Yosihiko

    2018-01-01

    Extreme value statistics and Bayesian methods are used to constrain the magnitudes of the largest expected earthquakes in a sequence governed by the parametric time-dependent occurrence rate and frequency-magnitude statistics. The Bayesian predictive distribution for the magnitude of the largest event in a sequence is derived. Two types of sequences are considered, that is, the classical aftershock sequences generated by large main shocks and the aftershocks generated by large foreshocks preceding a main shock. For the former sequences, the early aftershocks during a training time interval are used to constrain the magnitude of the future extreme event during the forecasting time interval. For the latter sequences, the earthquakes preceding the main shock are used to constrain the magnitudes of the subsequent extreme events including the main shock. The analysis is applied retrospectively to past prominent earthquake sequences.
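
    The frequentist counterpart of this idea, combining a Poisson count of events above a magnitude cutoff with Gutenberg-Richter distributed magnitudes, gives a closed-form distribution for the largest event and is sketched below with illustrative parameters; the paper's Bayesian predictive distribution, which also integrates over parameter uncertainty, is not reproduced.

```python
# Hedged sketch (not the paper's Bayesian formulation): the classical extreme-value
# result for the largest magnitude in a sequence, combining a Poisson number of events
# above a cutoff m0 with Gutenberg-Richter distributed magnitudes:
#     P(M_max <= m) = exp(-N * 10**(-b * (m - m0)))
# Parameter values below are illustrative only.
import numpy as np

def prob_max_below(m, n_expected, b=1.0, m0=2.0):
    """Probability that the largest event in the sequence stays below magnitude m."""
    return np.exp(-n_expected * 10.0 ** (-b * (m - m0)))

n_expected = 200.0          # expected number of events with M >= m0 in the forecast window
for m in (4.0, 5.0, 6.0):
    print(f"P(M_max >= {m}) = {1.0 - prob_max_below(m, n_expected):.3f}")
```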

  8. Multiple Observation Types Jointly Constrain Australian Terrestrial Carbon and Water Cycles

    Science.gov (United States)

    Haverd, Vanessa; Raupach, Michael; Briggs, Peter; Canadell, Pep; Davis, Steven; Isaac, Peter; Law, Rachel; Meyer, Mick; Peters, Glenn; Pickett-Heaps, Christopher; Roxburgh, Stephen; Sherman, Bradford; van Gorsel, Eva; Viscarra Rossel, Raphael; Wang, Ziyuan

    2013-04-01

    Information about the carbon cycle potentially constrains the water cycle, and vice versa. This paper explores the utility of multiple observation sets to constrain carbon and water fluxes and stores in a land surface model, and a resulting determination of the Australian terrestrial carbon budget. Observations include streamflow from 416 gauged catchments, measurements of evapotranspiration (ET) and net ecosystem production (NEP) from 12 eddy-flux sites, litterfall data, and data on carbon pools. The model is a version of CABLE (the Community Atmosphere-Biosphere-Land Exchange model), coupled with CASAcnp (a biogeochemical model) and SLI (Soil-Litter-Iso, a soil hydrology model including liquid and vapour water fluxes and the effects of litter). By projecting observation-prediction residuals onto model uncertainty, we find that eddy flux measurements provide a significantly tighter constraint on Australian continental net primary production (NPP) than the other data types. However, simultaneous constraint by multiple data types is important for mitigating bias from any single type. Results emerging from the multiply-constrained model are as follows (with all values applying over 1990-2011 and all ranges denoting ±1 standard error): (1) on the Australian continent, a predominantly semi-arid region, over half (0.64±0.05) of the water loss through ET occurs through soil evaporation and bypasses plants entirely; (2) mean Australian NPP is 2200±400 TgC/y, making the NPP/precipitation ratio about the same for Australia as the global land average; (3) annually cyclic ("grassy") vegetation and persistent ("woody") vegetation respectively account for 0.56±0.14 and 0.43±0.14 of NPP across Australia; (4) the average interannual variability of Australia's NEP (±180 TgC/y) is larger than Australia's total anthropogenic greenhouse gas emissions in 2011 (149 TgCeq/y), and is dominated by variability in desert and savannah regions. The mean carbon budget over 1990

  9. Constraining friction, dilatancy and effective stress with earthquake rates in the deep crust

    Science.gov (United States)

    Beeler, N. M.; Thomas, A.; Burgmann, R.; Shelly, D. R.

    2015-12-01

    Similar to their behavior on the deep extent of some subduction zones, families of recurring low-frequency earthquakes (LFE) within zones of non-volcanic tremor on the San Andreas fault in central California show strong sensitivity to stresses induced by the tides. Taking all of the LFE families collectively, LFEs occur at all levels of the daily tidal stress, and are in phase with the very small, ~200 Pa, shear stress amplitudes while being uncorrelated with the ~2 kPa tidal normal stresses. Following previous work we assume LFE sources are small, persistent regions that repeatedly fail during shear within a much larger scale, otherwise aseismically creeping fault zone and that the correlation of LFE occurrence reflects modulation of the fault creep rate by the tidal stresses. We examine the predictions of laboratory-observed rate-dependent dilatancy associated with frictional slip. The effect of dilatancy hardening is to damp the slip rate, so high dilatancy under undrained pore pressure reduces modulation of slip rate by the tides. The undrained end-member model produces: 1) no sensitivity to the tidal normal stress, as first suggested in this context by Hawthorne and Rubin [2010], and 2) fault creep rate in phase with the tidal shear stress. Room temperature laboratory-observed values of the dilatancy and friction coefficients for talc, an extremely weak and weakly dilatant material, under-predict the observed San Andreas modulation at least by an order of magnitude owing to too much dilatancy. This may reflect a temperature dependence of the dilatancy and friction coefficients, both of which are expected to be zero at the brittle-ductile transition. The observed tidal modulation constrains the product of the friction and dilatancy coefficients to be at most 5 x 10-7 in the LFE source region, an order of magnitude smaller than observed at room temperature for talc. Alternatively, considering the predictions of a purely rate-dependent talc friction would

  10. Extended shadow test approach for constrained adaptive testing

    NARCIS (Netherlands)

    Veldkamp, Bernard P.; Ariel, A.

    2002-01-01

    Several methods have been developed for use in constrained adaptive testing. Item pool partitioning, multistage testing, and testlet-based adaptive testing are methods that perform well for specific cases of adaptive testing. The weighted deviation model and the Shadow Test approach can be more

  11. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted by cKP_{K+1,M}) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. Copyright 1997 American Institute of Physics.

  12. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective: One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods: The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results: Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion: The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated for the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)

  13. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
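
    A small numerical illustration of the two criteria, using a deliberately simple surrogate "crop model" with one uncertain parameter; it also checks the squared-bias plus model-variance decomposition of MSEP_uncertain(X). All data below are synthetic.

```python
# Toy numerical sketch of the two criteria: MSEP_fixed compares one fixed model against
# data, while MSEP_uncertain(X) averages squared error over draws of the uncertain
# parameter and splits into squared bias plus model variance. Everything is synthetic.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y_obs = 2.0 * x + rng.normal(0, 2.0, x.size)        # observations

def crop_model(x, slope):
    return slope * x                                 # deliberately simple surrogate model

# MSEP_fixed: one model with fixed structure, inputs and parameters
msep_fixed = np.mean((y_obs - crop_model(x, slope=1.8)) ** 2)

# MSEP_uncertain(X): average over the distribution of the uncertain parameter
slopes = rng.normal(1.8, 0.3, 500)                   # assumed parameter uncertainty
preds = np.array([crop_model(x, s) for s in slopes])
msep_uncertain = np.mean((y_obs[None, :] - preds) ** 2)

bias2 = np.mean((y_obs - preds.mean(axis=0)) ** 2)   # squared-bias term
model_var = preds.var(axis=0).mean()                 # model-variance term
print(round(msep_fixed, 2), round(msep_uncertain, 2), round(bias2 + model_var, 2))
```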

  14. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Constraining the noncommutative spectral action via astrophysical observations.

    Science.gov (United States)

    Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi

    2010-09-03

    The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.

  16. The balance of payment-constrained economic growth in Ethiopia ...

    African Journals Online (AJOL)

    The objective of this paper is to empirically test the validity of the simplified version of the balance of payment-constrained economic growth model for Ethiopia during the period 1971-2008. According to the model, economies only grow at a pace allowed by the constraints imposed by the requirement of balance of payment ...

  17. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  18. Theoretical calculation of reorganization energy for electron self-exchange reaction by constrained density functional theory and constrained equilibrium thermodynamics.

    Science.gov (United States)

    Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan

    2013-08-22

    Within the framework of constrained density functional theory (CDFT), the diabatic or charge-localized states of electron transfer (ET) have been constructed. Based on the diabatic states, the inner reorganization energy λin has been directly calculated. For the solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm⁻¹ for the TCNE/TCNE⁻ and 5939 cm⁻¹ for the TTF/TTF⁺ reactions, agreeing well with the available experimental results of 7250 cm⁻¹ and 5810 cm⁻¹, respectively.

  19. Constrained Local UniversE Simulations: a Local Group factory

    Science.gov (United States)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  20. Binary classification posed as a quadratically constrained quadratic ...

    Indian Academy of Sciences (India)

    Binary classification is posed as a quadratically constrained quadratic problem and solved using the proposed method. Each class in the binary classification problem is modeled as a multidimensional ellipsoid to form a quadratic constraint in the problem. Particle swarms help in determining the optimal hyperplane or ...

  1. Simulating the Range Expansion of Spartina alterniflora in Ecological Engineering through Constrained Cellular Automata Model and GIS

    Directory of Open Access Journals (Sweden)

    Zongsheng Zheng

    2015-01-01

    Full Text Available Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes. CA models focusing on the neighbor effect often fail to account for the influence of environmental factors. This paper proposes a CCA model that enhances the CA model by integrating constraint factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in the Chongming Dongtan wetland, China. Meanwhile, a positive feedback loop between vegetation and sedimentation is also considered in the CCA model by altering the tidal accretion rate in different vegetation communities. After being validated and calibrated, the CCA model is more accurate than a CA model that accounts only for the neighbor effect. By overlaying the remote sensing classification and the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulations, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical approach for research on salt marsh species expansion and control strategies.
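
    The core update of such a constrained CA can be sketched as follows: the colonisation probability of a cell combines the Moore-neighbourhood occupancy of a classical CA with multiplicative suitability scores for the constraint factors. The grids, suitability layers and 3x3 neighbourhood rule below are illustrative, not the calibrated Chongming Dongtan model.

```python
# Hedged sketch of a constrained cellular automaton step: colonisation probability of a
# cell is the neighbourhood occupancy fraction multiplied by suitability scores for
# environmental constraint factors (e.g. tidal elevation, distance to channels).
import numpy as np

rng = np.random.default_rng(4)
size = 50
occupied = rng.random((size, size)) < 0.05           # initial Spartina patches
elevation_suit = rng.random((size, size))             # 0..1 suitability from tidal elevation
channel_suit = rng.random((size, size))               # 0..1 suitability from channel distance

def step(occ):
    # fraction of occupied cells in the 3x3 Moore neighbourhood (zero-padded borders)
    padded = np.pad(occ.astype(float), 1)
    neigh = sum(np.roll(np.roll(padded, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1))[1:-1, 1:-1] - occ
    p_colonise = (neigh / 8.0) * elevation_suit * channel_suit
    return occ | (rng.random(occ.shape) < p_colonise)

for _ in range(20):
    occupied = step(occupied)
print("occupied fraction after 20 steps:", occupied.mean().round(3))
```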

  2. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problem, we examine creative communication on an open-ended task in a highly constrained setting, a design game. Via an experiment, the relation between communicative constraints and participants’ perception of dialogue and creativity is examined. Four batches of the design game, with students preparing to form semester project groups, were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except...

  3. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint changes are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was conducted. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria and the presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
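
    The stated decision rule translates directly into code; the sketch below transcribes the criteria as described in the abstract and is not the fitted logistic regression model itself.

```python
# Direct transcription of the decision rule described in the abstract: the major
# criterion (dystrophy area >= 25%) predicts near-certain verification failure,
# both minor criteria predict high risk, one minor criterion low risk, none a pass.
def fingerprint_verification_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    if dystrophy_area_pct >= 25:
        return "verification almost always fails"
    minors = int(long_horizontal_lines) + int(long_vertical_lines)
    if minors == 2:
        return "high risk of verification failure"
    if minors == 1:
        return "low risk of verification failure"
    return "verification almost always passes"

print(fingerprint_verification_risk(10, True, False))
```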

  4. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.

  5. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    International Nuclear Information System (INIS)

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.

    2015-01-01

    We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism invariant L(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remains compatible with w=−1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focussing in on the potential discriminatory power of Euclid on mildly non-linear scales

  7. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    International Nuclear Information System (INIS)

    Harlim, John; Mahdi, Adam; Majda, Andrew J.

    2014-01-01

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model
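
    A minimal sketch of the underlying machinery, a stochastic ensemble Kalman filter that estimates a model parameter by state augmentation, is given below; the paper's algorithm additionally estimates the linear and nonlinear regression coefficients and the noise covariances for a 57-mode system, none of which is attempted here.

```python
# Hedged sketch of a stochastic ensemble Kalman filter with parameter estimation by
# state augmentation. Dynamics, ensemble size and noise levels are toy values chosen
# only to illustrate the update equations, not the paper's setup.
import numpy as np

rng = np.random.default_rng(5)
n_ens, obs_var, q_std, a_true = 100, 0.01, 0.2, 0.9

def forecast(member):
    """Propagate an augmented member [x, a] through damped dynamics x <- a*x + noise."""
    x, a = member
    return np.array([a * x + rng.normal(0.0, q_std), a])

x_true = 1.0
ensemble = np.column_stack([rng.normal(1.0, 0.5, n_ens),    # state x
                            rng.normal(0.6, 0.2, n_ens)])   # parameter a (prior guess)

for _ in range(50):
    x_true = a_true * x_true + rng.normal(0.0, q_std)
    y = x_true + rng.normal(0.0, np.sqrt(obs_var))           # noisy observation of x only
    ensemble = np.array([forecast(m) for m in ensemble])
    cov = np.cov(ensemble.T)                                  # 2x2 ensemble covariance
    gain = cov[:, 0] / (cov[0, 0] + obs_var)                  # Kalman gain for H = [1, 0]
    perturbed_obs = y + rng.normal(0.0, np.sqrt(obs_var), n_ens)
    ensemble += np.outer(perturbed_obs - ensemble[:, 0], gain)

print("posterior mean of parameter a:", ensemble[:, 1].mean().round(3), "(truth 0.9)")
```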

  8. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R² = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  9. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2. Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3. Develop optimization algorithms... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...
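
    A hedged sketch of the kind of certainty-equivalent economic MPC step described here: charging a flexible unit (an EV battery) against a price forecast subject to power and energy constraints, written as a small convex program; prices, limits and the cvxpy formulation are illustrative and not taken from the thesis.

```python
# Illustrative economic scheduling step for one flexible unit (an EV battery):
# minimise the forecast electricity cost of charging subject to power and
# state-of-charge constraints. All parameter values are hypothetical.
import numpy as np
import cvxpy as cp

T = 24                                             # hourly decision horizon
price = 0.20 + 0.10 * np.sin(np.linspace(0, 2 * np.pi, T))   # forecast price ($/kWh)

u = cp.Variable(T)                                 # charging power (kW)
soc = cp.Variable(T + 1)                           # battery state of charge (kWh)

constraints = [soc[0] == 2.0,                      # initial stored energy
               soc[1:] == soc[:-1] + u,            # 1-hour steps: energy balance
               u >= 0, u <= 3.0,                   # charger power limit
               soc >= 0, soc <= 10.0,              # battery capacity
               soc[T] >= 8.0]                      # energy target at end of horizon

problem = cp.Problem(cp.Minimize(price @ u), constraints)
problem.solve()
print("charging plan (kW):", np.round(u.value, 2))
```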

  10. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Science.gov (United States)

    Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

    In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ~7 m³ smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ~4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30% after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10⁻¹¹ to 4.0 × 10⁻¹¹ cm³ molec⁻¹ s⁻¹. The average enthalpy of vaporization of secondary organic aerosol

  11. Using Simulated Ground Motions to Constrain Near-Source Ground Motion Prediction Equations in Areas Experiencing Induced Seismicity

    Science.gov (United States)

    Bydlon, S. A.; Dunham, E. M.

    2016-12-01

    Recent increases in seismic activity in historically quiescent areas such as Oklahoma, Texas, and Arkansas, including large, potentially induced events such as the 2011 Mw 5.6 Prague, OK, earthquake, have spurred the need for investigation into expected ground motions associated with these seismic sources. The neoteric nature of this seismicity increase corresponds to a scarcity of ground motion recordings within 50 km of earthquakes Mw 3.0 and greater, with increasing scarcity at larger magnitudes. Gathering additional near-source ground motion data will help better constrain regional ground motion prediction equations (GMPEs) and will happen over time, but this leaves open the possibility of damaging earthquakes occurring before potential ground shaking and seismic hazard in these areas are properly understood. To aid the effort of constraining near-source GMPEs associated with induced seismicity, we integrate synthetic ground motion data from simulated earthquakes into the process. Using the dynamic rupture and seismic wave propagation code waveqlab3d, we perform verification and validation exercises intended to establish confidence in simulated ground motions for use in constraining GMPEs. We verify the accuracy of our ground motion simulator by performing the PEER/SCEC layer-over-halfspace comparison problem LOH.1. Validation exercises to ensure that we are synthesizing realistic ground motion data include comparisons to recorded ground motions for specific earthquakes in target areas of Oklahoma between Mw 3.0 and 4.0. Using a 3D velocity structure that includes a 1D structure with additional small-scale heterogeneity, the properties of which are based on well-log data from Oklahoma, we perform ground motion simulations of small (Mw 3.0 - 4.0) earthquakes using point moment tensor sources. We use the resulting synthetic ground motion data to develop GMPEs for small earthquakes in Oklahoma. Preliminary results indicate that ground motions can be amplified

  12. Validation of a zero-dimensional model for prediction of NOx and engine performance for electronically controlled marine two-stroke diesel engines

    International Nuclear Information System (INIS)

    Scappin, Fabio; Stefansson, Sigurður H.; Haglind, Fredrik; Andreasen, Anders; Larsen, Ulrik

    2012-01-01

    The aim of this paper is to derive a methodology suitable for energy system analysis for predicting the performance and NOx emissions of marine low speed diesel engines. The paper describes a zero-dimensional model, evaluating the engine performance by means of an energy balance and a two zone combustion model using ideal gas law equations over a complete crank cycle. The combustion process is divided into intervals, and the product composition and flame temperature are calculated in each interval. The NOx emissions are predicted using the extended Zeldovich mechanism. The model is validated using experimental data from two MAN B&W engines; one case being data subject to engine parameter changes corresponding to simulating an electronically controlled engine; the second case providing data covering almost all model input and output parameters. The first case of validation suggests that the model can predict specific fuel oil consumption and NOx emissions within the 95% confidence intervals given by the experimental measurements. The second validation confirms the capability of the model to match measured engine output parameters based on measured engine input parameters with a maximum 5% deviation. - Highlights: ► A fast realistic model of a marine two-stroke low speed diesel engine was derived. ► The model is fast and accurate enough for future complex energy systems analysis. ► The effects of engine tuning were validated with experimental tests. ► The model was validated while constrained by experimental input and output data.

  13. CONSTRAINING HIGH-SPEED WINDS IN EXOPLANET ATMOSPHERES THROUGH OBSERVATIONS OF ANOMALOUS DOPPLER SHIFTS DURING TRANSIT

    International Nuclear Information System (INIS)

    Miller-Ricci Kempton, Eliza; Rauscher, Emily

    2012-01-01

    Three-dimensional (3D) dynamical models of hot Jupiter atmospheres predict very strong wind speeds. For tidally locked hot Jupiters, winds at high altitude in the planet's atmosphere advect heat from the day side to the cooler night side of the planet. Net wind speeds on the order of 1-10 km s⁻¹ directed towards the night side of the planet are predicted at mbar pressures, which is the approximate pressure level probed by transmission spectroscopy. These winds should result in an observed blueshift of spectral lines in transmission on the order of the wind speed. Indeed, Snellen et al. recently observed a 2 ± 1 km s⁻¹ blueshift of CO transmission features for HD 209458b, which has been interpreted as a detection of the day-to-night (substellar to anti-stellar) winds that have been predicted by 3D atmospheric dynamics modeling. Here, we present the results of a coupled 3D atmospheric dynamics and transmission spectrum model, which predicts the Doppler-shifted spectrum of a hot Jupiter during transit resulting from winds in the planet's atmosphere. We explore four different models for the hot Jupiter atmosphere using different prescriptions for atmospheric drag via interaction with planetary magnetic fields. We find that models with no magnetic drag produce net Doppler blueshifts in the transmission spectrum of ∼2 km s⁻¹ and that lower Doppler shifts of ∼1 km s⁻¹ are found for the higher drag cases, results consistent with—but not yet strongly constrained by—the Snellen et al. measurement. We additionally explore the possibility of recovering the average terminator wind speed as a function of altitude by measuring Doppler shifts of individual spectral lines and spatially resolving wind speeds across the leading and trailing terminators during ingress and egress.
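
    The quoted blueshifts follow from the non-relativistic Doppler relation Δλ/λ ≈ −v/c for gas moving toward the observer; a minimal illustration (wavelength and wind speed are assumed values, not results from the coupled model):

      C_KM_S = 299792.458  # speed of light in km/s

      def doppler_shift_nm(rest_wavelength_nm, wind_speed_km_s):
          """Wavelength shift for absorbing gas moving toward the observer (blueshift < 0)."""
          return -rest_wavelength_nm * wind_speed_km_s / C_KM_S

      # A 2 km/s day-to-night wind acting on a CO feature near 2300 nm
      print(doppler_shift_nm(2300.0, 2.0))  # ~ -0.015 nm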

  14. DOSE210, A Semi-empirical Model for Prediction of Organ Distribution and Radiation Doses from Long Term Exposure to 210Pb and 210Po

    International Nuclear Information System (INIS)

    Salmon, P.L.; Bondarenko, O.A.; Henshaw, D.L.

    1999-01-01

    The DOSE210 model is an internal dosimetric model for 210Pb and 210Po which is based on current ICRP generic models. It is constrained and validated by reference to up-to-date published biokinetic data for both nuclides. The model has been validated primarily in regard to the relation between levels of chronic lifetime intake and organ concentrations of 210Pb and 210Po. To this end some adjustments to current ICRP biokinetic parameters have been made. The most substantial changes have been made to bone surface biokinetics of 210Pb and 210Po to reflect recent experimental studies on the microdistribution and radioactive equilibrium of these nuclides in bone, as well as measurements in biopsied human red bone marrow. An important dosimetric prediction of DOSE210 is a substantially lower dose to skeletal tissues from internal 210Pb and 210Po than that predicted by the current ICRP model. The most significant dose component predicted from lifetime environmental exposure to 210Pb and 210Po is the alpha dose to liver and kidney in infancy. Recycling of historic intakes of 210Pb in the adult, principally from bone, is calculated to account for 22% of 210Pb present in the plasma. (author)

  15. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat by leading to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, however with a bias in spatial distribution and intensity. The statistical parameters like mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend to under-predict. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
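
    A minimal sketch of the grid-point verification statistics mentioned (mean error or bias, RMSE and correlation coefficient), computed for a multi-model ensemble mean against observed rainfall; the arrays are placeholders rather than TIGGE or observed data.

      import numpy as np

      def verify(forecast, observed):
          """Mean error (bias), RMSE and Pearson correlation over a rainstorm region."""
          err = forecast - observed
          me = err.mean()
          rmse = np.sqrt((err ** 2).mean())
          cc = np.corrcoef(forecast.ravel(), observed.ravel())[0, 1]
          return me, rmse, cc

      # Hypothetical multi-model ensemble (MME) mean vs. gridded observations (mm/day)
      mme_mean = np.array([[35.0, 60.0], [80.0, 120.0]])
      observed = np.array([[40.0, 75.0], [95.0, 150.0]])
      print(verify(mme_mean, observed))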

  16. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  17. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    Full Text Available This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  18. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen eHertäg

    2012-09-01

    Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
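
    To convey the flavour of fitting closed-form firing-rate expressions to f-I data, the sketch below fits the standard leaky integrate-and-fire rate formula (not the AdEx approximation used in the paper) with scipy; the membrane constants and the f-I measurements are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      R_M, V_TH = 150.0, 20.0  # assumed membrane resistance (MOhm) and threshold (mV)

      def lif_rate(I, tau_m, t_ref):
          """Closed-form f-I curve of a leaky integrate-and-fire neuron (Hz),
          with I in nA, tau_m and t_ref in s, reset to 0 mV after each spike."""
          drive = R_M * I                          # steady-state depolarization (mV)
          rate = np.zeros_like(I, dtype=float)
          supra = drive > V_TH
          isi = t_ref + tau_m * np.log(drive[supra] / (drive[supra] - V_TH))
          rate[supra] = 1.0 / isi
          return rate

      # Hypothetical f-I measurements from a current-clamp protocol
      I_inj = np.array([0.10, 0.15, 0.20, 0.30, 0.40, 0.50])   # nA
      f_obs = np.array([0.0, 22.0, 42.0, 72.0, 98.0, 120.0])   # Hz

      (tau_m, t_ref), _ = curve_fit(lif_rate, I_inj, f_obs, p0=(0.02, 0.002))
      print(f"tau_m = {tau_m*1e3:.1f} ms, t_ref = {t_ref*1e3:.1f} ms")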

  19. Vacuum expectation values in a scalar constrained theory

    International Nuclear Information System (INIS)

    Alonso, F.; Julve, J.; Tiemblo, A.

    1985-01-01

    A class of finite Green functions in the context of a scalar constrained theory is studied. In a particular model the one-point GFs show that the vacuum expectation values for some fields vanish while one of them remains finite, a feature exhibited by the Goldstone and Higgs fields respectively. (orig.)

  20. HVM-TP: A Time Predictable, Portable Java Virtual Machine for Hard Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Luckow, Kasper Søe; Thomsen, Bent; Korsholm, Stephan Erbs

    2014-01-01

    We present HVMTIME; a portable and time predictable JVM implementation with applications in resource-constrained hard real-time embedded systems. In addition, it implements the Safety Critical Java (SCJ) Level 1 specification. Time predictability is achieved by a combination of time predictable...... algorithms, exploiting the programming model of the SCJ specification, and harnessing static knowledge of the hosted SCJ system. This paper presents HVMTIME in terms of its design and capabilities, and demonstrates how a complete timing model of the JVM represented as a Network of Timed Automata can...... be obtained using the tool TetaSARTSJVM. Further, using the timing model, we derive Worst Case Execution Times (WCETs) and Best Case Execution Times (BCETs) of the Java Bytecodes....

  1. Control of Warm Compression Stations Using Model Predictive Control: Simulation and Experimental Results

    Science.gov (United States)

    Bonne, F.; Alamir, M.; Bonnay, P.

    2017-02-01

    This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures, actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or a compressor stop, a key aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances (such as the pulsed heat loads expected in future fusion reactors, e.g. in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBT's WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.
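
    The kind of constrained multivariable MPC described here can be sketched as a small quadratic program solved at every sampling instant over a receding horizon; the two-state model, weights and limits below are placeholders rather than the WCS model, and cvxpy is used purely for illustration.

      import numpy as np
      import cvxpy as cp

      # Placeholder linear model x+ = A x + B u (e.g., two pressures, two actuators)
      A = np.array([[0.95, 0.05], [0.00, 0.90]])
      B = np.array([[0.10, 0.00], [0.02, 0.08]])
      N = 20                       # prediction horizon
      x0 = np.array([1.0, -0.5])   # deviations from the pressure set-points

      x = cp.Variable((2, N + 1))
      u = cp.Variable((2, N))
      cost, constraints = 0, [x[:, 0] == x0]
      for k in range(N):
          cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
          constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                          cp.abs(u[:, k]) <= 1.0,       # actuator limits
                          cp.abs(x[:, k + 1]) <= 2.0]   # pressure limits
      cp.Problem(cp.Minimize(cost), constraints).solve()
      print("first control move:", u[:, 0].value)  # only this move is applied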

  2. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    Science.gov (United States)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-03-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building, and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching (BHM), to estimate the uncertainties of the depths of key horizons near the borehole DSDP-258, located in the Mentelle Basin, south west of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ± 2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites adjacent to the existing borehole in the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth migrated images, which can be directly used to better constrain interpretation and to identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program (IODP) Leg 369.
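
    One ingredient of such a workflow, Gaussian process emulation of an expensive velocity-to-depth conversion, can be illustrated with scikit-learn; the toy simulator, kernel choice and design points below are assumptions for illustration only and are unrelated to the Mentelle Basin model.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # Toy "simulator": depth of a horizon (m) as a function of interval velocity (m/s)
      def simulator(v):
          return 0.9 * v + 40.0 * np.sin(v / 200.0)

      v_train = np.linspace(1800.0, 3200.0, 8).reshape(-1, 1)  # design points
      d_train = simulator(v_train).ravel()

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=300.0),
                                    normalize_y=True)
      gp.fit(v_train, d_train)

      # Emulator prediction with uncertainty at untried velocities
      mean, std = gp.predict(np.array([[2500.0], [2950.0]]), return_std=True)
      print(mean, std)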

  3. Constraining the break of spatial diffeomorphism invariance with Planck data

    Energy Technology Data Exchange (ETDEWEB)

    Graef, L.L.; Benetti, M.; Alcaniz, J.S., E-mail: leilagraef@on.br, E-mail: micolbenetti@on.br, E-mail: alcaniz@on.br [Departamento de Astronomia, Observatório Nacional, R. Gen. José Cristino, 77—São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2017-07-01

    The current most accepted paradigm for the early universe cosmology, the inflationary scenario, shows a good agreement with the recent Cosmic Microwave Background (CMB) and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a larger range of red tensor tilt values, favouring the blue ones, which are not predicted by the minimal inflationary models. Recently, it has been shown that the assumption of spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences. Among them, the possibility of generating a blue tensor spectrum, which can recover the specific consistency relation of the String Gas Cosmology, for a certain choice of parameters. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis assuming as reference an extended ΛCDM+tensor perturbation model, which considers a power-law tensor spectrum parametrized in terms of the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = −8n_t, we obtain strong evidence in favor of the reference model, whereas if such relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation and String Gas Cosmology.

  4. Constraining the break of spatial diffeomorphism invariance with Planck data

    Science.gov (United States)

    Graef, L. L.; Benetti, M.; Alcaniz, J. S.

    2017-07-01

    The current most accepted paradigm for the early universe cosmology, the inflationary scenario, shows a good agreement with the recent Cosmic Microwave Background (CMB) and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a larger range of red tensor tilt values, favouring the blue ones, which are not predicted by the minimal inflationary models. Recently, it has been shown that the assumption of spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences. Among them, the possibility of generating a blue tensor spectrum, which can recover the specific consistency relation of the String Gas Cosmology, for a certain choice of parameters. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis assuming as reference an extended ΛCDM+tensor perturbation model, which considers a power-law tensor spectrum parametrized in terms of the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = −8n_t, we obtain strong evidence in favor of the reference model, whereas if such relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation and String Gas Cosmology.

  5. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by means of the Riccati equation. To show the effectiveness of the proposed method this controller is...
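
    A minimal sketch of the LQR step the abstract refers to, using the discrete algebraic Riccati equation from SciPy; the double-integrator model and weights are placeholders, not the P2AT robot model.

      import numpy as np
      from scipy.linalg import solve_discrete_are

      # Placeholder discrete-time model (double integrator, dt = 0.1 s), not the P2AT model
      dt = 0.1
      A = np.array([[1.0, dt], [0.0, 1.0]])
      B = np.array([[0.5 * dt ** 2], [dt]])
      Q = np.diag([10.0, 1.0])   # state weights
      R = np.array([[0.1]])      # input weight

      P = solve_discrete_are(A, B, Q, R)                 # Riccati solution
      K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain
      print("LQR gain:", K)

      # Receding-horizon use: apply u = -K x, then re-measure the state and repeat
      x = np.array([[1.0], [0.0]])
      print("control:", -K @ x)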

  6. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  7. Optimization of an implicit constrained multi-physics system for motor wheels of electric vehicle

    International Nuclear Information System (INIS)

    Lei, Fei; Du, Bin; Liu, Xin; Xie, Xiaoping; Chai, Tian

    2016-01-01

    In this paper, an implicit constrained multi-physics model of a motor wheel for an electric vehicle is built and then optimized. A novel optimization approach is proposed to resolve the compatibility problem between implicit constraints and stochastic global optimization. Firstly, the multi-physics model of the motor wheel is built from the theories of structural mechanics, electromagnetism and thermal physics. Then, implicit constraints are applied from the vehicle performances and magnetic characteristics. Implicit constrained optimization is carried out by a series of unconstrained optimizations and verifications. In practice, sequentially updated subspaces are designed to completely substitute the original design space in local areas. In each subspace, a solution is obtained and is then verified against the implicit constraints. Optimal solutions which satisfy the implicit constraints are accepted as final candidates. The final global optimal solution is selected from those candidates. Discussions are carried out to discover the differences between optimal solutions of the unconstrained problem and of different implicit constrained problems. Results show that the implicit constraints have significant influences on the optimal solution and that the proposed approach is effective in finding the optima. - Highlights: • An implicit constrained multi-physics model is built for sizing a motor wheel. • Vehicle dynamic performances are applied as implicit constraints for the nonlinear system. • An efficient novel optimization is proposed to explore the constrained design space. • The motor wheel is optimized to achieve maximum efficiency on vehicle dynamics. • Influences of implicit constraints on vehicle performances are compared and analyzed.

  8. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  9. Constraining models of postglacial rebound using space geodesy: a detailed assessment of model ICE-5G (VM2) and its relatives

    Science.gov (United States)

    Argus, Donald F.; Peltier, W. Richard

    2010-05-01

    Using global positioning system, very long baseline interferometry, satellite laser ranging and Doppler Orbitography and Radiopositioning Integrated by Satellite observations, including the Canadian Base Network and Fennoscandian BIFROST array, we constrain, in models of postglacial rebound, the thickness of the ice sheets as a function of position and time and the viscosity of the mantle as a function of depth. We test model ICE-5G VM2 T90 Rot, which provides a good fit to many hundreds of Holocene relative sea level histories in North America, Europe and worldwide. ICE-5G is the deglaciation history having more ice in western Canada than ICE-4G; VM2 is the mantle viscosity profile having a mean upper mantle viscosity of 0.5 × 10²¹ Pa s and a mean uppermost-lower mantle viscosity of 1.6 × 10²¹ Pa s; T90 is an elastic lithosphere thickness of 90 km; and Rot designates that the model includes rotational feedback (Earth's response to the wander of the North Pole of Earth's spin axis towards Canada at a speed of ~1° Myr⁻¹). The vertical observations in North America show that, relative to ICE-5G, the Laurentide ice sheet at last glacial maximum (LGM) at ~26 ka was (1) much thinner in southern Manitoba, (2) thinner near Yellowknife (Northwest Territories), (3) thicker in eastern and southern Quebec and (4) thicker along the northern British Columbia-Alberta border, or that ice was unloaded from these areas later (thicker) or earlier (thinner) than in ICE-5G. The data indicate that the western Laurentide ice sheet was intermediate in mass between ICE-5G and ICE-4G. The vertical observations and GRACE gravity data together suggest that the western Laurentide ice sheet was nearly as massive as that in ICE-5G but distributed more broadly across northwestern Canada. VM2 poorly fits the horizontal observations in North America, predicting places along the margins of the Laurentide ice sheet to be moving laterally away from the ice centre at 2 mm yr⁻¹ in ICE-4G and 3 mm yr⁻¹ in ICE-5G, in

  10. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  11. Noble gas encapsulation into carbon nanotubes: Predictions from analytical model and DFT studies

    Energy Technology Data Exchange (ETDEWEB)

    Balasubramani, Sree Ganesh; Singh, Devendra; Swathi, R. S., E-mail: swathi@iisertvm.ac.in [School of Chemistry, Indian Institute of Science Education and Research Thiruvananthapuram (IISER-TVM), Kerala 695016 (India)

    2014-11-14

    The energetics for the interaction of the noble gas atoms with the carbon nanotubes (CNTs) are investigated using an analytical model and density functional theory calculations. Encapsulation of the noble gas atoms He, Ne, Ar, Kr, and Xe into CNTs of various chiralities is studied in detail using an analytical model, developed earlier by Hill and co-workers. The constrained motion of the noble gas atoms along the axes of the CNTs as well as the off-axis motion are discussed. Analyses of the forces, interaction energies, acceptance and suction energies for the encapsulation enable us to predict the optimal CNTs that can encapsulate each of the noble gas atoms. We find that CNTs of radii 2.98–4.20 Å (chiral indices (5,4), (6,4), (9,1), (6,6), and (9,3)) can efficiently encapsulate the He, Ne, Ar, Kr, and Xe atoms, respectively. Endohedral adsorption of all the noble gas atoms is preferred over exohedral adsorption on various CNTs. The results obtained using the analytical model are subsequently compared with the calculations performed with the dispersion-including density functional theory at the M06-2X level using a triple-zeta basis set, and good qualitative agreement is found. The analytical model is, however, computationally cheap, as the equations can be numerically programmed and the results obtained in considerably less time.

  12. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
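
    Degree-day models of this kind accumulate daily heat units above a developmental threshold and predict a life-stage event once a cumulative total is reached; the averaging method, thresholds and temperature record below are illustrative assumptions, not the published cranberry fruitworm parameters.

      def daily_degree_days(t_min, t_max, base=10.0, upper=30.0):
          """Simple averaging method with horizontal cutoffs (degree-days, deg C)."""
          t_min = min(max(t_min, base), upper)
          t_max = min(max(t_max, base), upper)
          return max((t_min + t_max) / 2.0 - base, 0.0)

      # Hypothetical spring temperature record (deg C) and an assumed 220 DD threshold
      record = [(8, 18), (10, 22), (12, 25), (9, 20), (14, 27)] * 10
      cumulative, event_day = 0.0, None
      for day, (lo, hi) in enumerate(record, start=1):
          cumulative += daily_degree_days(lo, hi)
          if event_day is None and cumulative >= 220.0:
              event_day = day
      print(cumulative, event_day)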

  13. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  14. Experiments with data assimilation in comprehensive air quality models: Impacts on model predictions and observation requirements (Invited)

    Science.gov (United States)

    Mathur, R.

    2009-12-01

    Emerging regional scale atmospheric simulation models must address the increasing complexity arising from new model applications that treat multi-pollutant interactions. Sophisticated air quality modeling systems are needed to develop effective abatement strategies that focus on simultaneously controlling multiple criteria pollutants as well as use in providing short term air quality forecasts. In recent years the applications of such models is continuously being extended to address atmospheric pollution phenomenon from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physical and chemical atmospheric processes occurring at these disparate spatial and temporal scales requires the use of observation data beyond traditional in-situ networks so that the model simulations can be reasonably constrained. Preliminary applications of assimilation of remote sensing and aloft observations within a comprehensive regional scale atmospheric chemistry-transport modeling system will be presented: (1) A methodology is developed to assimilate MODIS aerosol optical depths in the model to represent the impacts long-range transport associated with the summer 2004 Alaskan fires on surface-level regional fine particulate matter (PM2.5) concentrations across the Eastern U.S. The episodic impact of this pollution transport event on PM2.5 concentrations over the eastern U.S. during mid-July 2004, is quantified through the complementary use of the model with remotely-sensed, aloft, and surface measurements; (2) Simple nudging experiments with limited aloft measurements are performed to identify uncertainties in model representations of physical processes and assess the potential use of such measurements in improving the predictive capability of atmospheric chemistry-transport models. The results from these early applications will be discussed in context of uncertainties in the model and in the remote sensing
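
    The "simple nudging experiments" mentioned can be illustrated by relaxing a modelled concentration field toward limited observations with a prescribed relaxation coefficient; the grid values, coefficient and time step below are placeholders.

      import numpy as np

      def nudge(model_field, obs_field, obs_mask, g=1.0e-4, dt=300.0):
          """One nudging update: relax model values toward observations where available.
          g is the nudging coefficient (1/s) and dt the model time step (s)."""
          increment = g * dt * (obs_field - model_field)
          return np.where(obs_mask, model_field + increment, model_field)

      # Hypothetical PM2.5 column (ug/m3) with aloft observations at two grid cells
      model = np.array([12.0, 18.0, 25.0, 30.0])
      obs = np.array([0.0, 22.0, 0.0, 21.0])
      mask = np.array([False, True, False, True])
      print(nudge(model, obs, mask))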

  15. A two-phase model of plantar tissue: a step toward prediction of diabetic foot ulceration.

    Science.gov (United States)

    Sciumè, G; Boso, D P; Gray, W G; Cobelli, C; Schrefler, B A

    2014-11-01

    A new computational model, based on the thermodynamically constrained averaging theory, has been recently proposed to predict tumor initiation and proliferation. A similar mathematical approach is proposed here as an aid in diabetic ulcer prevention. The common aspects at the continuum level are the macroscopic balance equations governing the flow of the fluid phase, diffusion of chemical species, tissue mechanics, and some of the constitutive equations. The soft plantar tissue is modeled as a two-phase system: a solid phase consisting of the tissue cells and their extracellular matrix, and a fluid one (interstitial fluid and dissolved chemical species). The solid phase may become necrotic depending on the stress level and on the oxygen availability in the tissue. In diabetic patients, peripheral vascular disease impacts tissue necrosis; this is considered in the model via the introduction of an effective diffusion coefficient that governs transport of nutrients within the microvasculature. The governing equations of the mathematical model are discretized in space by the finite element method and in the time domain using the θ-Wilson method. While the full mathematical model is developed in this paper, the example is limited to the simulation of several gait cycles of a healthy foot. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  17. A marked correlation function for constraining modified gravity models

    Science.gov (United States)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
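
    A density-marked correlation function weights each pair in a clustering estimate by marks that depend on local density; the brute-force sketch below uses one common choice of mark, m = ((ρ* + 1)/(ρ* + 1 + δ))^p, with random placeholder points and no survey-geometry corrections.

      import numpy as np

      rng = np.random.default_rng(0)
      pos = rng.uniform(0.0, 100.0, size=(500, 3))   # toy galaxy positions (Mpc/h)
      delta = rng.normal(0.0, 1.0, size=500)         # local overdensity at each galaxy

      # Density-dependent mark; rho_star and p are free parameters of the statistic
      rho_star, p = 4.0, 1.0
      marks = ((rho_star + 1.0) / (rho_star + 1.0 + delta)) ** p

      def marked_ratio(pos, marks, r_lo, r_hi):
          """M(r) = <m_i m_j>_pairs / <m>^2 for pairs separated by [r_lo, r_hi)."""
          d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
          i, j = np.triu_indices(len(pos), k=1)
          sel = (d[i, j] >= r_lo) & (d[i, j] < r_hi)
          return (marks[i][sel] * marks[j][sel]).mean() / marks.mean() ** 2

      print(marked_ratio(pos, marks, 5.0, 10.0))  # ~1 when marks carry no clustering signal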

  18. A marked correlation function for constraining modified gravity models

    Energy Technology Data Exchange (ETDEWEB)

    White, Martin, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.

  19. Exploring little Higgs models with ATLAS at the LHC

    International Nuclear Information System (INIS)

    Azuelos, G.; Benslama, K.; Costanzo, D.; Couture, G.; Garcia, J.E.; Hinchliffe, I.G.; Kanaya, N.; Lechowski, M.; Mehdiyev, R.; Polesello, G.; Ros, E.; Rousseau, D.

    2004-01-01

    We discuss possible searches for the new particles predicted by Little Higgs Models at the LHC. By using a simulation of the ATLAS detector, we demonstrate how the predicted quarks, gauge bosons and additional Higgs bosons can be found and estimate the mass range over which their properties can be constrained

  20. Constraining Gamma-Ray Pulsar Gap Models with a Simulated Pulsar Population

    Science.gov (United States)

    Pierbattista, Marco; Grenier, I. A.; Harding, A. K.; Gonthier, P. L.

    2012-01-01

    With the large sample of young gamma-ray pulsars discovered by the Fermi Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four gamma-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying gamma-ray and radio visibility criteria, we normalise the simulation to the number of detected radio pulsars by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of Fermi detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of 10 of the predicted luminosity to produce a reasonable number of gamma-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power (Ė), we skew the initial magnetic field and period distributions in an attempt to account for the high-Ė Fermi pulsars. While we compromise the agreement between simulated and detected distributions of radio pulsars, the simulations fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high Ė, and they cannot explain the high probability of detecting both the radio and gamma-ray beams at high Ė. The beaming factor remains close to 1.0 over 4 decades in Ė evolution for the slot gap, whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with Ė is compatible with the large dispersion of gamma-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to the polar cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of gamma-ray pulsars

  1. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging (BMA) method is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better than that of the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
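
    The Weibull proportional-hazards form underlying the BWPHM can be sketched as a baseline Weibull hazard scaled by covariate effects; the covariates, coefficients and pipe ages below are invented for illustration, and the Bayesian layer (posterior sampling over the parameters and model averaging) is omitted.

      import numpy as np

      def weibull_ph_survival(t, shape, scale, beta, x):
          """Survival S(t|x) = exp(-H(t|x)) with a Weibull baseline cumulative hazard
          H0(t) = (t/scale)**shape scaled by the covariate effect exp(beta . x)."""
          return np.exp(-((t / scale) ** shape) * np.exp(np.dot(beta, x)))

      def weibull_ph_hazard(t, shape, scale, beta, x):
          """Hazard h(t|x) = h0(t) * exp(beta . x)."""
          h0 = (shape / scale) * (t / scale) ** (shape - 1.0)
          return h0 * np.exp(np.dot(beta, x))

      # Hypothetical cast-iron main: covariates = [scaled diameter, soil corrosivity index]
      beta = np.array([-0.4, 0.8])
      x = np.array([0.5, 1.0])
      ages = np.array([10.0, 30.0, 60.0])   # years in service
      print(weibull_ph_survival(ages, shape=1.8, scale=80.0, beta=beta, x=x))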

  2. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Full Text Available Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM. However, the majority of the existing tongue image analyses and classification methods are based on the low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural network (CNN, we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distribution. We introduce high dispersal and local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.

  3. Predicting Great Lakes fish yields: tools and constraints

    Science.gov (United States)

    Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.

    1987-01-01

    Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.

  4. Hydrograph Predictions of Glacial Lake Outburst Floods From an Ice-Dammed Lake

    Science.gov (United States)

    McCoy, S. W.; Jacquet, J.; McGrath, D.; Koschitzki, R.; Okuinghttons, J.

    2017-12-01

    Understanding the time evolution of glacial lake outburst floods (GLOFs), and ultimately predicting peak discharge, is crucial to mitigating the impacts of GLOFs on downstream communities and understanding concomitant surface change. The dearth of in situ measurements taken during GLOFs has left many GLOF models currently in use untested. Here we present a dataset of 13 GLOFs from Lago Cachet Dos, Aysen Region, Chile, in which we detail measurements of key environmental variables (total volume drained, lake temperature, and lake inflow rate) and high temporal resolution discharge measurements at the source lake, in addition to well-constrained ice thickness and bedrock topography. Using this dataset we test two common empirical equations as well as the physically-based model of Spring-Hutter-Clarke. We find that the commonly used empirical relationships based solely on a dataset of lake volume drained fail to predict the large variability in observed peak discharges from Lago Cachet Dos. This disagreement is likely because these equations do not consider additional environmental variables that we show also control peak discharge, primarily lake water temperature and the rate of meltwater inflow to the source lake. We find that the Spring-Hutter-Clarke model can accurately simulate the exponentially rising hydrographs that are characteristic of ice-dammed GLOFs, as well as the order of magnitude variation in peak discharge between events, if the hydraulic roughness parameter is allowed to be a free fitting parameter. However, the Spring-Hutter-Clarke model overpredicts peak discharge in all cases by 10 to 35%. The systematic overprediction of peak discharge by the model is related to its abrupt flood termination, which misses the observed steep falling limb of the flood hydrograph. Although satisfactory model fits are produced, the range in hydraulic roughness required to obtain these fits across all events was large, which suggests that current models do not
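
    For context, one widely used volume-only relation of the type tested here is the Clague-Mathews form, roughly Q_peak ≈ 75 (V/10^6 m^3)^0.67 m^3/s; treat the coefficients below as approximate and the comparison volumes as placeholders.

      def clague_mathews_peak_q(volume_m3, k=75.0, b=0.67):
          """Empirical GLOF peak discharge (m^3/s) from drained volume (m^3);
          k and b are the commonly quoted (approximate) Clague-Mathews values."""
          return k * (volume_m3 / 1.0e6) ** b

      # A volume-only relation returns one peak per volume, so it cannot reproduce
      # the spread in observed peaks driven by lake temperature and inflow rate.
      for v in (50e6, 100e6, 200e6):
          print(f"V = {v/1e6:.0f}e6 m^3 -> Q_peak ~ {clague_mathews_peak_q(v):.0f} m^3/s")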

  5. Predictive Systems Toxicology

    KAUST Repository

    Kiani, Narsis A.; Shang, Ming-Mei; Zenil, Hector; Tegner, Jesper

    2018-01-01

    In this review we address to what extent computational techniques can augment our ability to predict toxicity. The first section provides a brief history of empirical observations on toxicity dating back to the dawn of Sumerian civilization. Interestingly, the concept of dose emerged very early on, leading up to the modern emphasis on kinetic properties, which in turn encodes the insight that toxicity is not solely a property of a compound but instead depends on the interaction with the host organism. The next logical step is the current conception of evaluating drugs from a personalized medicine point-of-view. We review recent work on integrating what could be referred to as classical pharmacokinetic analysis with emerging systems biology approaches incorporating multiple omics data. These systems approaches employ advanced statistical analytical data processing complemented with machine learning techniques and use both pharmacokinetic and omics data. We find that such integrated approaches not only provide improved predictions of toxicity but also enable mechanistic interpretations of the molecular mechanisms underpinning toxicity and drug resistance. We conclude the chapter by discussing some of the main challenges, such as how to balance the inherent tension between the predictive capacity of models, which in practice amounts to constraining the number of features in the models versus allowing for rich mechanistic interpretability, i.e. equipping models with numerous molecular features. This challenge also requires patient-specific predictions on toxicity, which in turn requires proper stratification of patients as regards how they respond, with or without adverse toxic effects. In summary, the transformation of the ancient concept of dose is currently successfully operationalized using rich integrative data encoded in patient-specific models.

  6. Predictive Systems Toxicology

    KAUST Repository

    Kiani, Narsis A.

    2018-01-15

    In this review we address to what extent computational techniques can augment our ability to predict toxicity. The first section provides a brief history of empirical observations on toxicity dating back to the dawn of Sumerian civilization. Interestingly, the concept of dose emerged very early on, leading up to the modern emphasis on kinetic properties, which in turn encodes the insight that toxicity is not solely a property of a compound but instead depends on the interaction with the host organism. The next logical step is the current conception of evaluating drugs from a personalized medicine point-of-view. We review recent work on integrating what could be referred to as classical pharmacokinetic analysis with emerging systems biology approaches incorporating multiple omics data. These systems approaches employ advanced statistical analytical data processing complemented with machine learning techniques and use both pharmacokinetic and omics data. We find that such integrated approaches not only provide improved predictions of toxicity but also enable mechanistic interpretations of the molecular mechanisms underpinning toxicity and drug resistance. We conclude the chapter by discussing some of the main challenges, such as how to balance the inherent tension between the predictive capacity of models, which in practice amounts to constraining the number of features in the models versus allowing for rich mechanistic interpretability, i.e. equipping models with numerous molecular features. This challenge also requires patient-specific predictions on toxicity, which in turn requires proper stratification of patients as regards how they respond, with or without adverse toxic effects. In summary, the transformation of the ancient concept of dose is currently successfully operationalized using rich integrative data encoded in patient-specific models.

  7. Predictability of twentieth century sea-level rise from past data

    International Nuclear Information System (INIS)

    Bittermann, Klaus; Rahmstorf, Stefan; Perrette, Mahé; Vermeer, Martin

    2013-01-01

    The prediction of global sea-level rise is one of the major challenges of climate science. While process-based models are still being improved to capture the complexity of the processes involved, semi-empirical models, exploiting the observed connection between global-mean sea level and global temperature and calibrated with data, have been developed as a complementary approach. Here we investigate whether twentieth century sea-level rise could have been predicted with such models given a knowledge of twentieth century global temperature increase. We find that neither proxy nor early tide gauge data alone hold enough information to constrain the model parameters well. However, in combination, the use of proxy and tide gauge sea-level data up to 1900 AD allows a good prediction of twentieth century sea-level rise, despite this rise being well outside the rates experienced in previous centuries during the calibration period of the model. The 90% confidence range for the linear twentieth century rise predicted by the semi-empirical model is 13–30 cm, whereas the observed interval (using two tide gauge data sets) is 14–26 cm. (letter)
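
    Semi-empirical models of the kind referred to typically relate the rate of sea-level rise to temperature above an equilibrium value, dS/dt = a (T − T0); the integration sketch below uses made-up parameter values and a synthetic temperature series to show how such a calibrated model is run forward.

      import numpy as np

      def semi_empirical_sea_level(temp_anomaly, a=3.4, t0=-0.5):
          """Integrate dS/dt = a*(T - T0) on an annual series; a in mm/yr per K.
          Returns sea level (mm) relative to the start of the series."""
          rates = a * (np.asarray(temp_anomaly) - t0)   # mm/yr
          return np.cumsum(rates)                       # annual time step

      # Synthetic 20th-century-like temperature anomaly (K), 1900-2000
      years = np.arange(1900, 2001)
      temps = -0.3 + 0.9 * (years - 1900) / 100.0
      sea_level = semi_empirical_sea_level(temps)
      print(f"predicted rise 1900-2000: {sea_level[-1] - sea_level[0]:.0f} mm")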

  8. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  9. Thermo-magnetic effects in quark matter: Nambu-Jona-Lasinio model constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Ricardo L.S. [Universidade Federal de Santa Maria, Departamento de Fisica, Santa Maria, RS (Brazil); Kent State University, Physics Department, Kent, OH (United States); Timoteo, Varese S. [Universidade Estadual de Campinas (UNICAMP), Grupo de Optica e Modelagem Numerica (GOMNI), Faculdade de Tecnologia, Limeira, SP (Brazil); Avancini, Sidney S.; Pinto, Marcus B. [Universidade Federal de Santa Catarina, Departamento de Fisica, Florianopolis, Santa Catarina (Brazil); Krein, Gastao [Universidade Estadual Paulista, Instituto de Fisica Teorica, Sao Paulo, SP (Brazil)

    2017-05-15

    The phenomenon of inverse magnetic catalysis of chiral symmetry in QCD predicted by lattice simulations can be reproduced within the Nambu-Jona-Lasinio model if the coupling G of the model decreases with the strength B of the magnetic field and temperature T. The thermo-magnetic dependence of G(B, T) is obtained by fitting recent lattice QCD predictions for the chiral transition order parameter. Different thermodynamic quantities of magnetized quark matter evaluated with G(B, T) are compared with the ones obtained at constant coupling, G. The model with G(B, T) predicts a more dramatic chiral transition as the field intensity increases. In addition, the pressure and magnetization always increase with B for a given temperature. Being parametrized by four magnetic-field-dependent coefficients and having a rather simple exponential thermal dependence our accurate ansatz for the coupling constant can be easily implemented to improve typical model applications to magnetized quark matter. (orig.)

  10. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool
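
    The liquid drop model referred to is essentially the semi-empirical (Bethe-Weizsäcker) mass formula; the sketch below evaluates it with one common set of approximate coefficients and, by construction, misses the shell effects discussed in the text.

      import numpy as np

      def liquid_drop_binding_energy(Z, A):
          """Semi-empirical (Bethe-Weizsaecker) binding energy in MeV;
          coefficients are approximate and no shell corrections are included."""
          a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
          N = A - Z
          if Z % 2 == 0 and N % 2 == 0:
              pairing = a_p / np.sqrt(A)
          elif Z % 2 == 1 and N % 2 == 1:
              pairing = -a_p / np.sqrt(A)
          else:
              pairing = 0.0
          return (a_v * A - a_s * A ** (2 / 3) - a_c * Z * (Z - 1) / A ** (1 / 3)
                  - a_a * (A - 2 * Z) ** 2 / A + pairing)

      # Binding energy per nucleon; shell effects (e.g. near 208Pb) are deliberately missed
      for Z, A in [(26, 56), (50, 120), (82, 208)]:
          print(Z, A, round(liquid_drop_binding_energy(Z, A) / A, 2), "MeV per nucleon")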

  11. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  12. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that there are many settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
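
    For background, the classical GM(1,1) grey model on which such extensions build fits a first-order equation to the accumulated (AGO) series and forecasts by inverse accumulation; the sketch below implements plain GM(1,1), not the paper's NGM(1,1,k,c), on an invented settlement sequence.

      import numpy as np

      def gm11_forecast(x0, n_ahead):
          """Classical GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series,
          then forecast by inverse accumulation (IAGO)."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                           # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) sequence
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(1, len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
          return x0_hat[-n_ahead:]

      # Invented incremental settlement observations (mm per loading stage)
      settlement = [2.67, 3.13, 3.25, 3.36, 3.56, 3.72]
      print(gm11_forecast(settlement, n_ahead=3))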

  13. Predicted Extension of the Sagittarius Stream to the Milky Way Virial Radius

    Energy Technology Data Exchange (ETDEWEB)

    Dierickx, Marion I. P.; Loeb, Abraham, E-mail: mdierickx@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu [Astronomy Department, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2017-02-10

    The extensive span of the Sagittarius (Sgr) stream makes it a promising tool for studying the gravitational potential of the Milky Way (MW). Characterizing its stellar kinematics can constrain halo properties and provide a benchmark for the paradigm of galaxy formation from cold dark matter. Accurate models of the disruption dynamics of the Sgr progenitor are necessary to employ this tool. Using a combination of analytic modeling and N -body simulations, we build a new model of the Sgr orbit and resulting stellar stream. In contrast to previous models, we simulate the full infall trajectory of the Sgr progenitor from the time it first crossed the MW virial radius 8 Gyr ago. An exploration of the parameter space of initial phase-space conditions yields tight constraints on the angular momentum of the Sgr progenitor. Our best-fit model is the first to accurately reproduce existing data on the 3D positions and radial velocities of the debris detected 100 kpc away in the MW halo. In addition to replicating the mapped stream, the simulation also predicts the existence of several arms of the Sgr stream extending to hundreds of kiloparsecs. The two most distant stars known in the MW halo coincide with the predicted structure. Additional stars in the newly predicted arms can be found with future data from the Large Synoptic Survey Telescope. Detecting a statistical sample of stars in the most distant Sgr arms would provide an opportunity to constrain the MW potential out to unprecedented Galactocentric radii.

  14. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  15. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  16. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  17. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision... ...the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually... ...and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  18. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  19. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  20. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could reduce notably the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938

  1. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2016-01-01

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drugs design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction with a particular (but not exclusive) emphasis on computational tools that can implement these methods and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  2. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.

    2016-01-06

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drugs design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction with a particular (but not exclusive) emphasis on computational tools that can implement these methods and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  3. Does Aspartic Acid Racemization Constrain the Depth Limit of the Subsurface Biosphere?

    Science.gov (United States)

    Onstott, T C.; Magnabosco, C.; Aubrey, A. D.; Burton, A. S.; Dworkin, J. P.; Elsila, J. E.; Grunsfeld, S.; Cao, B. H.; Hein, J. E.; Glavin, D. P.

    2013-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of approximately 89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.

  4. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model that combines the grey model and the Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved, yielding an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)

  5. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significant for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will evaluate MMA using additional physiological data to expand the scope of populations and conditions.
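
    The MMA described here is the arithmetic mean of the two component models' sweat-loss predictions, compared with observations via RMSD. A minimal sketch follows; the numbers are placeholders and scenario_pred/hsda_pred merely stand in for outputs of the actual SCENARIO and HSDA models:

```python
import numpy as np

def rmsd(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Placeholder per-trial sweat-loss predictions (g) from the two component models.
scenario_pred = np.array([520.0, 610.0, 700.0, 455.0])
hsda_pred     = np.array([480.0, 655.0, 640.0, 500.0])
observed      = np.array([505.0, 630.0, 660.0, 470.0])

mma_pred = 0.5 * (scenario_pred + hsda_pred)   # multi-model average

for name, pred in [("SCENARIO", scenario_pred), ("HSDA", hsda_pred), ("MMA", mma_pred)]:
    print(f"{name:8s} RMSD = {rmsd(pred, observed):6.1f} g")
```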

  6. Internet gaming disorder: Inadequate diagnostic criteria wrapped in a constraining conceptual model.

    Science.gov (United States)

    Starcevic, Vladan

    2017-06-01

    Background and aims The paper "Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field" by Kuss, Griffiths, and Pontes (in press) critically examines the DSM-5 diagnostic criteria for Internet gaming disorder (IGD) and addresses the issue of whether IGD should be reconceptualized as gaming disorder, regardless of whether video games are played online or offline. This commentary provides additional critical perspectives on the concept of IGD. Methods The focus of this commentary is on the addiction model on which the concept of IGD is based, the nature of the DSM-5 criteria for IGD, and the inclusion of withdrawal symptoms and tolerance as the diagnostic criteria for IGD. Results The addiction framework on which the DSM-5 concept of IGD is based is not without problems and represents only one of multiple theoretical approaches to problematic gaming. The polythetic, non-hierarchical DSM-5 diagnostic criteria for IGD make the concept of IGD unacceptably heterogeneous. There is no support for maintaining withdrawal symptoms and tolerance as the diagnostic criteria for IGD without their substantial revision. Conclusions The addiction model of IGD is constraining and does not contribute to a better understanding of the various patterns of problematic gaming. The corresponding diagnostic criteria need a thorough overhaul, which should be based on a model of problematic gaming that can accommodate its disparate aspects.

  7. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    and valley slopes within the catchment are used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.

  8. Nonmonotonic Skeptical Consequence Relation in Constrained Default Logic

    Directory of Open Access Journals (Sweden)

    Mihaiela Lupea

    2010-12-01

    Full Text Available This paper presents a study of the nonmonotonic consequence relation which models the skeptical reasoning formalised by constrained default logic. The nonmonotonic skeptical consequence relation is defined using the sequent calculus axiomatic system. We study the formal properties desirable for a good nonmonotonic relation: supraclassicality, cut, cautious monotony, cumulativity, absorption, distribution. 

  9. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388
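
    The predictive model in this record maps 2D/3D molecular descriptors of 13 catalysts to performance and is validated on the remaining five. Neither the descriptors nor the regression method are given here, so the sketch below only illustrates the build-then-hold-out workflow with synthetic descriptors and a ridge regression (all data and the choice of regressor are assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Placeholder data: 18 catalysts x 6 molecular descriptors, plus a synthetic "yield" (%).
X = rng.normal(size=(18, 6))
y = 80 + 10 * X[:, 0] - 5 * X[:, 2] + rng.normal(0, 2, 18)

# 13 catalysts for model building, 5 held out for validation (as in the record).
train, test = np.arange(13), np.arange(13, 18)

model = Ridge(alpha=1.0).fit(X[train], y[train])
pred = model.predict(X[test])

print("held-out R^2:", round(r2_score(y[test], pred), 3))
print("absolute prediction errors (%):", np.round(np.abs(pred - y[test]), 1))
```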

  10. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  11. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
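
    The deployment pattern described (receive patient information, score it against a deployed model, return a prediction) can be illustrated with a minimal HTTP service. The sketch below is not a FHIR-conformant resource; the endpoint name, payload fields and logistic-model coefficients are all assumptions made for illustration:

```python
from flask import Flask, request, jsonify
import math

app = Flask(__name__)

# Placeholder logistic-regression coefficients for a toy risk model (not from the paper).
COEFS = {"age": 0.03, "heart_rate": 0.02, "creatinine": 0.45}
INTERCEPT = -4.0

@app.route("/predict", methods=["POST"])
def predict():
    """Accept a JSON payload of patient features and return a risk score."""
    features = request.get_json(force=True)
    z = INTERCEPT + sum(COEFS[k] * float(features.get(k, 0.0)) for k in COEFS)
    score = 1.0 / (1.0 + math.exp(-z))
    return jsonify({"risk_score": round(score, 4)})

if __name__ == "__main__":
    # Example request:
    # curl -X POST -H "Content-Type: application/json" \
    #      -d '{"age": 67, "heart_rate": 95, "creatinine": 1.4}' http://localhost:5000/predict
    app.run(port=5000)
```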

  12. An ordinal model of the McGurk illusion

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...... model it also employed 30 free parameters where the ordinal model needed only 14. Testing the predictive power of the models using a form of cross-validation we found that, although both models performed rather poorly, the ordinal model performed better than the FLMP. Based on these findings we suggest...... that ordinal models generally have greater predictive power because they are constrained by a priori information about the adjacency of phonetic categories....

  13. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties in the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation in gravity/gradiometry above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
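
    The workflow is Monte Carlo: draw the buried body's density contrast and geometry from preset distributions, compute the forward gravity response at borehole stations, and summarize the mean response and its spread. The study uses Fatiando a Terra prisms; to stay self-contained, this hedged sketch approximates the body as a point mass and uses invented distributions:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_mass(z_station, body_xyz, mass):
    """Vertical gravity (m/s^2) at borehole stations (0, 0, z) from a point mass."""
    dx, dy = body_xyz[0], body_xyz[1]
    dz = body_xyz[2] - z_station
    r3 = (dx**2 + dy**2 + dz**2) ** 1.5
    return G * mass * dz / r3

rng = np.random.default_rng(42)
stations_z = np.linspace(0.0, 2000.0, 41)        # borehole measurement depths (m)

n_sim = 10000
responses = np.empty((n_sim, stations_z.size))
for i in range(n_sim):
    # Draw density contrast, volume and position of the buried body from preset distributions.
    drho = rng.normal(300.0, 50.0)               # kg/m^3
    volume = rng.normal(1.0e8, 2.0e7)            # m^3 (roughly reservoir scale)
    x_off = rng.normal(2500.0, 100.0)            # horizontal offset of the body (m)
    depth = rng.normal(1500.0, 100.0)            # body depth (m)
    responses[i] = gz_point_mass(stations_z, (x_off, 0.0, depth), drho * volume)

mean_gz = responses.mean(axis=0) * 1e8           # convert m/s^2 to microGal
p5, p95 = np.percentile(responses, [5, 95], axis=0) * 1e8
peak = mean_gz.argmax()
print("peak mean response (uGal):", round(float(mean_gz[peak]), 2))
print("90% interval at peak (uGal):", round(float(p5[peak]), 2), round(float(p95[peak]), 2))
```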

  14. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  15. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  16. SCADA system with predictive controller applied to irrigation canals

    OpenAIRE

    Figueiredo, João; Botto, Miguel; Rijo, Manuel

    2013-01-01

    This paper applies a model predictive controller (MPC) to an automatic water canal with sensors and actuators controlled by a network (programmable logic controller), and supervised by a SCADA system (supervisory control and data acquisition). This canal is composed of a set of distributed sub-systems that control the water level in each canal pool, constrained by discharge gates (control variables) and water off-takes (disturbances). All local controllers are available through an industria...
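
    The controller described is a constrained MPC for canal pools with bounded gate discharges. A minimal sketch of one receding-horizon step for a single pool follows, with the pool reduced to a discrete-time integrator; the model, limits and weights are illustrative and are not the paper's canal model:

```python
import cvxpy as cp
import numpy as np

# Toy single-pool model: level[k+1] = level[k] + dt/area * (q_gate[k] - q_offtake[k])
dt, area = 60.0, 800.0            # s, m^2 (illustrative)
N = 20                            # prediction horizon
level0, level_ref = 1.20, 1.50    # current and reference water level (m)
q_offtake = 0.3 * np.ones(N)      # forecast off-take disturbance (m^3/s)

q = cp.Variable(N)                # gate discharge (control variable)
h = cp.Variable(N + 1)            # water level

constraints = [h[0] == level0]
for k in range(N):
    constraints += [h[k + 1] == h[k] + dt / area * (q[k] - q_offtake[k]),
                    q[k] >= 0.0, q[k] <= 1.0,          # gate discharge limits
                    h[k + 1] >= 1.0, h[k + 1] <= 2.0]  # level limits

cost = cp.sum_squares(h[1:] - level_ref) + 0.1 * cp.sum_squares(cp.diff(q))
cp.Problem(cp.Minimize(cost), constraints).solve()

print("first control move q[0] =", round(float(q.value[0]), 3), "m^3/s")
print("predicted level at end of horizon:", round(float(h.value[-1]), 3), "m")
```

    In a receding-horizon implementation only the first move q[0] would be applied before the optimization is repeated with updated measurements.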

  17. Applications of a constrained mechanics methodology in economics

    International Nuclear Information System (INIS)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economical growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of the solution interpretation in economics compared to mechanics is discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided and an economic interpretation of the Lagrange multipliers (possibly surprising for the students of physics) is carefully explained. This paper can be used by the undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  18. Applications of a constrained mechanics methodology in economics

    Science.gov (United States)

    Janová, Jitka

    2011-11-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economical growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of the solution interpretation in economics compared to mechanics is discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided and an economic interpretation of the Lagrange multipliers (possibly surprising for the students of physics) is carefully explained. This paper can be used by the undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  19. Applications of a constrained mechanics methodology in economics

    Energy Technology Data Exchange (ETDEWEB)

    Janova, Jitka, E-mail: janova@mendelu.cz [Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemedelska 1, 613 00 Brno (Czech Republic)

    2011-11-15

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economical growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of the solution interpretation in economics compared to mechanics is discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided and an economic interpretation of the Lagrange multipliers (possibly surprising for the students of physics) is carefully explained. This paper can be used by the undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  20. Chance-constrained optimization of demand response to price signals

    DEFF Research Database (Denmark)

    Dorini, Gianluca Fabio; Pinson, Pierre; Madsen, Henrik

    2013-01-01

    within a recursive least squares (RLS) framework using data measurable at the grid level, in an adaptive fashion. Optimal price signals are generated by embedding the FIR models within a chance-constrained optimization framework. The objective is to keep the price signal as unchanged as possible from...
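
    The record estimates FIR models of price-responsive demand within a recursive least squares framework before embedding them in the chance-constrained optimization. A hedged sketch of the standard RLS update for an FIR model follows; the model order, forgetting factor and synthetic price/demand data are assumptions:

```python
import numpy as np

def rls_fir(price, demand, order=4, lam=0.99, delta=100.0):
    """Recursively estimate FIR coefficients theta so that demand[k] ~ sum_j theta[j]*price[k-j]."""
    theta = np.zeros(order)
    P = delta * np.eye(order)                          # inverse correlation matrix
    for k in range(order - 1, len(price)):
        phi = price[k - order + 1:k + 1][::-1]         # regressor: most recent prices first
        e = demand[k] - phi @ theta                    # a priori prediction error
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * e
        P = (P - np.outer(gain, phi @ P)) / lam        # forgetting-factor covariance update
    return theta

# Synthetic flexible-demand data: demand responds to the last few price samples.
rng = np.random.default_rng(7)
price = rng.uniform(20, 60, 500)                       # price signal
true_fir = np.array([-0.8, -0.4, -0.1, 0.0])           # made-up response coefficients
demand = np.convolve(price, true_fir)[:500] + 50 + rng.normal(0, 0.5, 500)

theta_hat = rls_fir(price, demand - 50)
print("estimated FIR coefficients:", np.round(theta_hat, 2))
```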

  1. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. The objective was to investigate whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance widely varied across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
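
    The core comparison is between a joint model trained on all stages and separate models trained and evaluated per stage. The sketch below reproduces that comparison on synthetic data with a logistic-regression classifier; the features, stage definitions and classifier choice are placeholders, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 8))
stage = rng.integers(1, 5, size=n)                     # stages 1-4
# Synthetic survivability label whose feature weights differ by stage.
logit = 1.5 - 0.6 * stage + (0.8 - 0.15 * stage) * X[:, 0] + 0.3 * X[:, 1]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

Xtr, Xte, sttr, stte, ytr, yte = train_test_split(X, stage, y, test_size=0.3, random_state=0)

# Joint model: one classifier for all stages, with stage included as a feature.
joint = LogisticRegression(max_iter=1000).fit(np.column_stack([Xtr, sttr]), ytr)

for s in range(1, 5):
    m_tr, m_te = sttr == s, stte == s
    per_stage = LogisticRegression(max_iter=1000).fit(Xtr[m_tr], ytr[m_tr])
    auc_joint = roc_auc_score(yte[m_te],
                              joint.predict_proba(np.column_stack([Xte[m_te], stte[m_te]]))[:, 1])
    auc_stage = roc_auc_score(yte[m_te], per_stage.predict_proba(Xte[m_te])[:, 1])
    print(f"stage {s}: joint AUC = {auc_joint:.3f}, stage-specific AUC = {auc_stage:.3f}")
```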

  2. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) using models that were developed for catchments, which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modeller's assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  3. Splines and polynomial tools for flatness-based constrained motion planning

    Science.gov (United States)

    Suryawan, Fajar; De Doná, José; Seron, María

    2012-08-01

    This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation the problem of constrained trajectory planning can be cast into a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time system. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without having intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental result of the method is also presented, where two methods to generate trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In every case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce almost similar, indistinguishable trajectories.

  4. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for predicting simultaneously body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both the sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.
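
    The multivariate model predicts several body-composition compartments simultaneously from age, weight, height and waist circumference. A hedged multi-output regression sketch on synthetic anthropometric data follows (the data-generating coefficients are made up and the paper's actual model form is not reproduced):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
age    = rng.uniform(20, 80, n)
weight = rng.normal(75, 12, n)
height = rng.normal(170, 9, n)
waist  = 0.4 * weight + rng.normal(60, 5, n)
X = np.column_stack([age, weight, height, waist])

# Synthetic targets: body fat mass, trunk lean mass, appendicular lean mass (kg).
Y = np.column_stack([
    0.35 * weight + 0.15 * waist - 0.10 * height + 0.05 * age + rng.normal(0, 2.0, n),
    0.30 * weight - 0.05 * waist + 0.08 * height - 0.03 * age + rng.normal(0, 1.5, n),
    0.25 * weight - 0.08 * waist + 0.10 * height - 0.05 * age + rng.normal(0, 1.5, n),
])

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.33, random_state=0)
model = LinearRegression().fit(Xtr, Ytr)          # one fit, three outputs

sep = np.sqrt(np.mean((model.predict(Xte) - Yte) ** 2, axis=0))   # standard error of prediction
for name, s in zip(["body fat", "trunk lean", "appendicular lean"], sep):
    print(f"SEP for {name} mass: {s:.2f} kg")
```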

  5. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part II: Model application to the CENICA, Pedregal and Santa Ana sites

    Directory of Open Access Journals (Sweden)

    F. M. San Martini

    2006-01-01

    Full Text Available A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately predict the observed inorganic particle concentrations at all three sites. The agreement between the predicted and observed gas phase ammonia concentration is excellent. The NOz concentration calculated from the NOy, NO and NO2 observations is of limited use in constraining the gas phase nitric acid concentration given the large uncertainties in this measure of nitric acid and additional reactive nitrogen species. Focusing on the acidic period of 9–11 April identified by Salcedo et al. (2006), the model accurately predicts the particle phase observations during this period with the exception of the nitrate predictions after 10:00 a.m. (Central Daylight Time, CDT) on 9 April, where the model underpredicts the observations by, on average, 20%. This period had a low planetary boundary layer, very high particle concentrations, and higher than expected nitrogen dioxide concentrations. For periods when the particle chloride observations are consistently above the detection limit, the model is able to both accurately predict the particle chloride mass concentrations and provide well-constrained HCl (g) concentrations. The availability of gas-phase ammonia observations helps constrain the predicted HCl (g) concentrations. When the particles are aqueous, the most likely concentrations of HCl (g) are in the sub-ppbv range. The most likely predicted concentration of HCl (g) was found to reach concentrations of order 10 ppbv if the particles are dry. Finally, the

  6. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
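
    The record tunes SVR training parameters with a genetic algorithm. The sketch below runs a very small evolutionary search over (C, gamma, epsilon) for scikit-learn's SVR, scored by cross-validation on synthetic data; the population size, mutation scheme and data are illustrative, not the paper's configuration:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))                              # e.g. financial + patent indicators
y = 2 * X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=200)     # synthetic "performance" target

def fitness(params):
    C, gamma, eps = params
    svr = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(svr, X, y, cv=5, scoring="neg_mean_squared_error").mean()

# Tiny genetic algorithm: log-uniform initial population, truncation selection,
# Gaussian mutation in log-space.
pop = np.exp(rng.uniform(np.log([0.1, 1e-3, 1e-3]), np.log([100.0, 1.0, 1.0]), size=(20, 3)))
for generation in range(15):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-5:]]                 # keep the 5 best individuals
    children = parents[rng.integers(0, 5, size=15)] * np.exp(rng.normal(0, 0.3, size=(15, 3)))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best (C, gamma, epsilon):", np.round(best, 4))
```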

  7. Venus Surface Composition Constrained by Observation and Experiment

    Science.gov (United States)

    Gilmore, Martha; Treiman, Allan; Helbert, Jörn; Smrekar, Suzanne

    2017-11-01

    New observations from the Venus Express spacecraft as well as theoretical and experimental investigation of Venus analogue materials have advanced our understanding of the petrology of Venus melts and the mineralogy of rocks on the surface. The VIRTIS instrument aboard Venus Express provided a map of the southern hemisphere of Venus at ∼1 μm allowing, for the first time, the definition of surface units in terms of their 1 μm emissivity and derived mineralogy. Tessera terrain has lower emissivity than the presumably basaltic plains, consistent with a more silica-rich or felsic mineralogy. Thermodynamic modeling and experimental production of melts with Venera and Vega starting compositions predict derivative melts that range from mafic to felsic. Large volumes of felsic melts require water and may link the formation of tesserae to the presence of a Venus ocean. Low emissivity rocks may also be produced by atmosphere-surface weathering reactions unlike those seen presently. High 1 μm emissivity values correlate to stratigraphically recent flows and have been used with theoretical and experimental predictions of basalt weathering to identify regions of recent volcanism. The timescale of this volcanism is currently constrained by the weathering of magnetite (higher emissivity) in fresh basalts to hematite (lower emissivity) in Venus' oxidizing environment. Recent volcanism is corroborated by transient thermal anomalies identified by the VMC instrument aboard Venus Express. The interpretation of all emissivity data depends critically on understanding the composition of surface materials, kinetics of rock weathering and their measurement under Venus conditions. Extended theoretical studies, continued analysis of earlier spacecraft results, new atmospheric data, and measurements of mineral stability under Venus conditions have improved our understanding of atmosphere-surface interactions. The calcite-wollastonite CO2 buffer has been discounted due, among other things, to

  8. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with a large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on the hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selections and maximize the independency from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method; Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis
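
    The framework calibrates the model tier by tier, fixing each sub-model's parameters against its own satellite-based constraint before moving to the next tier. The toy sketch below illustrates that sequential idea with two chained sub-models and least-squares fits; the sub-models and the pseudo-satellite data are synthetic stand-ins, not Biome-BGC:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
precip = rng.uniform(0, 10, 365)                       # daily forcing (mm)

def soil_water(precip, k_loss):
    """Toy bucket model: storage gains precipitation and loses a fixed fraction each day."""
    w, out = 50.0, []
    for p in precip:
        w = max(w + p - k_loss * w, 0.0)
        out.append(w)
    return np.array(out)

def gpp(water, eps):
    return eps * water                                  # toy light-use-efficiency style model

# Pseudo-satellite constraints generated from a hidden "truth" (k_loss=0.12, eps=0.05).
sat_et_like = soil_water(precip, 0.12) + rng.normal(0, 1.0, 365)
sat_gpp     = gpp(soil_water(precip, 0.12), 0.05) + rng.normal(0, 0.2, 365)

# Tier 1: calibrate the water sub-model against the ET-like product.
res1 = minimize(lambda k: np.mean((soil_water(precip, k[0]) - sat_et_like) ** 2),
                x0=[0.2], bounds=[(0.01, 0.5)])
k_loss_hat = res1.x[0]

# Tier 2: with the water sub-model fixed, calibrate the photosynthesis sub-model against GPP.
water_hat = soil_water(precip, k_loss_hat)
res2 = minimize(lambda e: np.mean((gpp(water_hat, e[0]) - sat_gpp) ** 2),
                x0=[0.1], bounds=[(0.001, 1.0)])

print("recovered parameters: k_loss =", round(k_loss_hat, 3), ", eps =", round(res2.x[0], 3))
```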

  9. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively using classical feedback control, which acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.

  10. Constraining climate sensitivity and continental versus seafloor weathering using an inverse geological carbon cycle model.

    Science.gov (United States)

    Krissansen-Totton, Joshua; Catling, David C

    2017-05-22

    The relative influences of tectonics, continental weathering and seafloor weathering in controlling the geological carbon cycle are unknown. Here we develop a new carbon cycle model that explicitly captures the kinetics of seafloor weathering to investigate carbon fluxes and the evolution of atmospheric CO2 and ocean pH since 100 Myr ago. We compare model outputs to proxy data, and rigorously constrain model parameters using Bayesian inverse methods. Assuming our forward model is an accurate representation of the carbon cycle, to fit proxies the temperature dependence of continental weathering must be weaker than commonly assumed. We find that 15-31 °C (1σ) surface warming is required to double the continental weathering flux, versus 3-10 °C in previous work. In addition, continental weatherability has increased 1.7-3.3 times since 100 Myr ago, demanding explanation by uplift and sea-level changes. The average Earth system climate sensitivity is  K (1σ) per CO2 doubling, which is notably higher than fast-feedback estimates. These conclusions are robust to assumptions about outgassing, modern fluxes and seafloor weathering kinetics.
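
    As an illustration of how Bayesian inverse methods constrain a parameter against proxy data, the sketch below runs a plain random-walk Metropolis sampler on a made-up forward model and synthetic proxies; the forward model, priors, and error level are placeholders, not the geological carbon cycle model of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      time_myr = np.linspace(0, 100, 20)                       # illustrative time axis (Myr ago)

      def forward(tau):                                        # toy forward model with one parameter
          return 280.0 + 800.0 * np.exp(-time_myr / tau)

      proxy = forward(35.0) + rng.normal(0, 40.0, time_myr.size)  # synthetic "proxy" record

      def log_posterior(tau):
          if not (5.0 < tau < 100.0):                          # flat prior bounds
              return -np.inf
          return -0.5 * np.sum(((proxy - forward(tau)) / 40.0) ** 2)

      chain, current = [], 20.0
      for _ in range(20000):                                   # random-walk Metropolis
          proposal = current + rng.normal(0, 2.0)
          if np.log(rng.random()) < log_posterior(proposal) - log_posterior(current):
              current = proposal
          chain.append(current)
      post = np.array(chain[5000:])                            # drop burn-in
      print(f"posterior estimate: {post.mean():.1f} +/- {post.std():.1f} (1 sigma)")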

  11. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  12. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery has abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification of adjacent pixels constrains the classification of the center pixel, thereby exploiting spatial correlation information. Spectral information and spatial correlation are considered simultaneously by clustering on a second-order conditional random field, and globally optimal inference of each pixel's posterior class probability is obtained using loopy belief propagation. The experiments show that the proposed algorithm effectively preserves the shape features of objects and achieves higher classification accuracy than traditional algorithms.

  13. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant carries marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. Predicting coal combustion behaviour in a furnace is of great significance, providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and an assumed char porosity in order to make a burnout prediction. This paper presents a new model, called the Char Burnout (ChB) model, that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, and the second from char types predicted from data collected during automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data showed that including char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  15. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  16. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
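
    A schematic of the jackknife-plus-bootstrap combination described above, using a nearest-neighbour local predictor and synthetic well volumes; the cross-validation step for model selection is omitted, and all data are placeholders rather than the Antrim Shale play.

      import numpy as np

      rng = np.random.default_rng(2)
      xy = rng.uniform(0, 10, size=(60, 2))                    # coordinates of drilled sites (synthetic)
      vol = 5.0 + 0.4 * xy[:, 0] + rng.gamma(2.0, 1.0, 60)     # observed volumes at those sites

      def local_predict(train_xy, train_vol, targets, k=5):
          """Simple local model: average of the k nearest drilled sites."""
          preds = []
          for t in np.atleast_2d(targets):
              idx = np.argsort(np.linalg.norm(train_xy - t, axis=1))[:k]
              preds.append(train_vol[idx].mean())
          return np.array(preds)

      # Jackknife: leave-one-out prediction errors at the drilled locations.
      jack_err = np.array([vol[i] - local_predict(np.delete(xy, i, 0), np.delete(vol, i), xy[i])[0]
                           for i in range(len(vol))])

      # Predict undrilled target sites, then bootstrap the jackknife errors to bound the regional total.
      targets = rng.uniform(0, 10, size=(25, 2))
      site_est = local_predict(xy, vol, targets)
      totals = [np.sum(site_est + rng.choice(jack_err, size=site_est.size, replace=True))
                for _ in range(2000)]
      lo, hi = np.percentile(totals, [5, 95])
      print(f"total volume estimate {site_est.sum():.1f}, 90% bounds [{lo:.1f}, {hi:.1f}]")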

  17. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
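
    A bare-bones nested sampling loop for a one-dimensional toy problem is sketched below; for simplicity the constrained-prior draw uses rejection sampling, which is exactly the expensive step that the constrained Hamiltonian Monte Carlo of the article is meant to replace, and the contribution of the final live points is neglected.

      import numpy as np

      rng = np.random.default_rng(3)

      def log_like(theta):                       # toy likelihood: narrow Gaussian inside a unit prior
          return -0.5 * ((theta - 0.5) / 0.05) ** 2

      n_live = 200
      live = rng.random(n_live)                  # live points drawn from the uniform prior on [0, 1]
      live_logL = log_like(live)

      log_Z, X_prev = -np.inf, 1.0
      for i in range(1, 1200):
          worst = np.argmin(live_logL)           # lowest-likelihood live point
          X_i = np.exp(-i / n_live)              # expected shrinkage of the prior volume
          log_Z = np.logaddexp(log_Z, np.log(X_prev - X_i) + live_logL[worst])
          X_prev = X_i
          # Replace the worst point with a prior draw obeying L > L* (rejection step).
          while True:
              cand = rng.random()
              if log_like(cand) > live_logL[worst]:
                  live[worst], live_logL[worst] = cand, log_like(cand)
                  break
      print(f"log-evidence estimate: {log_Z:.2f}")   # analytic value is about -2.1 for this toy case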

  18. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and the experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  19. Comparative Study of Bancruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of their bankruptcy risk, they can take preventive action to anticipate it. To detect this risk, a company can use a bankruptcy prediction model. Such a model can be built using machine learning methods, but the choice of method must be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing models based on k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression, the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%
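
    A compact way to run this kind of comparison with off-the-shelf tools is sketched below; scikit-learn has no fuzzy k-NN or bagging nearest-neighbour SVM, so plain k-NN, SVM, and MLP stand in, and the synthetic data replaces the financial ratios used in the study.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier

      # Synthetic stand-in for a bankruptcy data set (financial ratios plus a bankrupt/healthy label).
      X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                                 weights=[0.8, 0.2], random_state=0)

      models = {
          "k-NN": KNeighborsClassifier(n_neighbors=7),
          "SVM (RBF)": SVC(kernel="rbf", C=1.0),
          "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
      }
      for name, clf in models.items():
          scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=10, scoring="accuracy")
          print(f"{name:10s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")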

  20. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often arise in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It organically integrates the self-memory principle of dynamic systems with the grey NGM(1,1, k) model; the traditional grey model's sensitivity to the initial value is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is used to demonstrate the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model over results from the literature. Its strong prediction performance stems from the coupling model's ability to take full advantage of systematic multi-time historical data and to capture the stochastic fluctuation tendency. This work also contributes to the enrichment of grey prediction theory and the extension of its application span.
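
    For readers unfamiliar with grey models, the classical GM(1,1) base model on which the NGM(1,1, k) self-memory coupling variant builds can be written in a few lines; the sketch below uses a made-up consumption series and does not implement the self-memory coupling itself.

      import numpy as np

      def gm11_forecast(x0, steps=3):
          """Classical GM(1,1): fit on the series x0 and forecast `steps` values ahead."""
          x1 = np.cumsum(x0)                                  # accumulated generating operation (AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # develop coefficient a, grey input b
          k = np.arange(len(x0) + steps)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response of the whitened equation
          x0_hat = np.empty_like(x1_hat)
          x0_hat[0], x0_hat[1:] = x0[0], np.diff(x1_hat)      # inverse AGO back to the original scale
          return x0_hat[len(x0):]                             # forecast part only

      energy = np.array([2.87, 3.10, 3.39, 3.61, 3.85, 4.10])  # illustrative consumption series
      print("next three steps:", np.round(gm11_forecast(energy), 2))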

  1. Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2

    Science.gov (United States)

    Ni, Dongdong

    2018-05-01

    Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. In spite of this, measurements of k2 can still be used to further constrain the core mass and size

  2. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, whereas similar efforts were started in the field of diabetes only ∼20 years ago. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care, as well as their cost-effectiveness, should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.

  3. Training signaling pathway maps to biochemical data with constrained fuzzy logic: quantitative analysis of liver cell responses to inflammatory stimuli.

    Directory of Open Access Journals (Sweden)

    Melody K Morris

    2011-03-01

    Full Text Available Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone.

  4. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  5. Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction

    Science.gov (United States)

    Li, Zhijin; Chao, Yi; Li, P. Peggy

    2012-01-01

    A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows large scales and model bias to be effectively constrained by assimilating sparse vertical profiles, and small scales by assimilating high-resolution surface measurements. This MS-3DVAR enhances the capability of the traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and in particular maximizes the extraction of information from limited numbers of vertical profile observations.
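
    For reference, the cost function that a 3DVAR scheme of this kind minimizes can be written in the generic textbook form below; in the multi-scale decomposition the increment and the background error covariance are split into large- and small-scale parts, so that the two terms are minimized sequentially (this is a schematic form, not the exact formulation of the MS-3DVAR software).

      J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
                    + \tfrac{1}{2}\bigl(H(\mathbf{x})-\mathbf{y}\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(H(\mathbf{x})-\mathbf{y}\bigr),
      \qquad
      \mathbf{x}-\mathbf{x}_b = \delta\mathbf{x}_L + \delta\mathbf{x}_S, \quad J = J_L + J_S,

    where x_b is the background state, y the observations, H the observation operator, and B and R the background and observation error covariance matrices (B being scale-dependent in the multi-scale case).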

  6. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aims to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). A Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis, and a receiver operating characteristic (ROC) curve was drawn accordingly; the area under the ROC curve (AUROC) and the PI cutoff value were identified. Multivariate Cox regression analysis of the prediction model group showed that neutrophil-to-lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38; the sensitivity, specificity, and accuracy were 78.0%, 80%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B-related HCC with high sensitivity, specificity, and accuracy.
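
    A small helper for applying the published index is sketched below; the 0/1 coding assumed for the categorical predictors and the integer coding for histological grade and tumor number are illustrative assumptions and should be checked against the original paper before any use.

      def hcc_predictive_index(nlr, hg, prm, mvi, nt, tace):
          """PI = 0.377*NLR + 0.554*HG + 0.927*PRM + 0.778*MVI + 0.740*NT - 0.831*TACE.

          nlr:  neutrophil-to-lymphocyte ratio (continuous)
          hg:   histological grade (assumed integer coding)
          prm:  positive resection margin (assumed 1 = yes, 0 = no)
          mvi:  microvascular invasion (assumed 1 = yes, 0 = no)
          nt:   number of tumors (coding per the original paper)
          tace: postoperative TACE treatment (assumed 1 = yes, 0 = no)
          """
          return 0.377 * nlr + 0.554 * hg + 0.927 * prm + 0.778 * mvi + 0.740 * nt - 0.831 * tace

      CUTOFF = 3.38  # reported cutoff separating higher- from lower-risk patients

      pi = hcc_predictive_index(nlr=2.5, hg=2, prm=0, mvi=1, nt=1, tace=1)
      print(f"PI = {pi:.2f} -> {'above' if pi > CUTOFF else 'below'} the 3.38 cutoff")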

  7. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm, consisting of a constrained k-means algorithm enhanced by the boosting principle, that offers performance competitive with state-of-the-art methods at lower computation time. Constrained k-means clustering using constraints as background knowledge, although easy to implement and quick, has insufficient performance compared with metric learning-based methods. Since it simply adds a function into the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive to those of state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. Experimental evaluation demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner of boosting.
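
    The violation check that plain constrained k-means adds to the assignment step, which the article uses as its weak learner, looks roughly like the sketch below (in the spirit of COP-k-means); the boosting and kernel-learning parts of the article are not shown, and the constraint lists here are illustrative.

      import numpy as np

      def violates(idx, cluster, labels, must_link, cannot_link):
          """True if putting point `idx` into `cluster` breaks any pairwise constraint."""
          for a, b in must_link:
              other = b if a == idx else a if b == idx else None
              if other is not None and labels[other] != -1 and labels[other] != cluster:
                  return True
          for a, b in cannot_link:
              other = b if a == idx else a if b == idx else None
              if other is not None and labels[other] == cluster:
                  return True
          return False

      def constrained_kmeans(X, k, must_link, cannot_link, n_iter=20, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          for _ in range(n_iter):
              labels = np.full(len(X), -1)
              for i in rng.permutation(len(X)):                       # random assignment order
                  for c in np.argsort(np.linalg.norm(X[i] - centers, axis=1)):
                      if not violates(i, c, labels, must_link, cannot_link):
                          labels[i] = c
                          break
                  if labels[i] == -1:
                      raise ValueError("constraints infeasible for this assignment order")
              centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c) else centers[c]
                                  for c in range(k)])
          return labels, centers

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(m, 0.3, (20, 2)) for m in (0.0, 2.0, 4.0)])
      labels, _ = constrained_kmeans(X, k=3, must_link=[(0, 1)], cannot_link=[(0, 59)])
      print(labels[:5], labels[-5:])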

  8. A HARDCORE model for constraining an exoplanet's core size

    Science.gov (United States)

    Suissa, Gabrielle; Chen, Jingjing; Kipping, David

    2018-05-01

    The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences of the interior structure are impeded by a fundamental degeneracy that exists between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fraction, CRFmin and CRFmax. These hold true even for planets with light volatile envelopes, but require that the planet be fully differentiated and that layers denser than iron be forbidden. With both bounds in hand, a marginal CRF can also be inferred by sampling in-between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints hit around a per cent, providing a clear target for observers.

  9. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  10. Use of integrated analogue and numerical modelling to predict tridimensional fracture intensity in fault-related-folds.

    Science.gov (United States)

    Pizzati, Mattia; Cavozzi, Cristian; Magistroni, Corrado; Storti, Fabrizio

    2016-04-01

    Predicting fracture density patterns with low uncertainty is a fundamental issue for constraining fluid flow pathways in thrust-related anticlines in the frontal parts of thrust-and-fold belts and accretionary prisms, which can also provide plays for hydrocarbon exploration and development. Among the drivers that combine to determine the distribution of fractures in fold-and-thrust belts, the complex kinematic pathways of folded structures play a key role. In areas with scarce and unreliable underground information, analogue modelling can provide effective support for developing and validating hypotheses on structural architectures and their evolution. In this contribution, we propose a working method that combines analogue and numerical modelling. We deformed a sand-silicone multilayer to produce a non-cylindrical thrust-related anticline at the wedge toe, which served as our test geological structure at the reservoir scale. We cut 60 serial cross-sections through the central part of the deformed model to analyze fault and fold geometry using dedicated software (3D Move). The cross-sections were also used to reconstruct the 3D geometry of the reference surfaces that compose the mechanical stratigraphy using the software GoCad. From the 3D model of the experimental anticline, 3D Move was used to calculate the cumulative stress and strain undergone by the deformed reference layers at the end of deformation and at incremental steps of fold growth. Based on these model outputs, it was also possible to predict the orientation of three main fracture sets (joints and conjugate shear fractures) and their occurrence and density on model surfaces. The next step was upscaling the fracture network to the entire digital model volume to create DFNs.

  11. Low-lying excited states by constrained DFT

    Science.gov (United States)

    Ramos, Pablo; Pavanello, Michele

    2018-04-01

    Exploiting the machinery of Constrained Density Functional Theory (CDFT), we propose a variational method for calculating low-lying excited states of molecular systems. We dub this method eXcited CDFT (XCDFT). Excited states are obtained by self-consistently constraining a user-defined population of electrons, Nc, in the virtual space of a reference set of occupied orbitals. By setting this population to Nc = 1.0, we computed the first excited state of 15 molecules from a test set. Our results show that XCDFT achieves an accuracy in the predicted excitation energy only slightly worse than linear-response time-dependent DFT (TDDFT), but without incurring the variational collapse problems typical of the more commonly adopted ΔSCF method. In addition, we selected a few challenging processes to test the limits of applicability of XCDFT. We find that in contrast to TDDFT, XCDFT is capable of reproducing energy surfaces featuring conical intersections (azobenzene and H3) with the correct topology and correct overall energetics, including away from the intersection. Venturing to condensed-phase systems, XCDFT reproduces the TDDFT solvatochromic shift of benzaldehyde when it is embedded in a cluster of water molecules. Thus, we find XCDFT to be a competitive method among single-reference methods for computations of excited states in terms of time to solution, rate of convergence, and accuracy of the result.

  12. On the relationship between aerosol model uncertainty and radiative forcing uncertainty.

    Science.gov (United States)

    Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S

    2016-05-24

    The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.

  13. Multiscale Modeling of Advanced Materials for Damage Prediction and Structural Health Monitoring

    Science.gov (United States)

    2015-05-01

    Abaqus/Standard (Abaqus, 2009). An FEM model was chosen in this case because semi-analytical methods can have inherent ambiguity when modeling... conditions were enforced using the technique described in Danielsson, Parks, and Boyce (2002) and implemented within Abaqus through the use of linear... linear constraint equations in the Abaqus input file constrain the degrees of freedom of the boundary nodes residing on opposite sides of the

  14. T2K Replica Target Hadron Production Measurements in NA61/SHINE and T2K Neutrino Flux Predictions

    CERN Document Server

    AUTHOR|(SzGeCERN)710687

    Accelerator based neutrino experiments generate their neutrino beams by impinging high energy protons on thick targets. The neutrino beam predictions are thus based on modeling the interactions of the beam protons inside the targets. Different hadronic models can be used with different accuracies depending on the energy range of the incident protons and on the target material. Nevertheless, none of the models can be seen as perfectly describing all different interactions. In order to reach high precision neutrino flux predictions, it is thus mandatory to be able to test and constrain the models with hadron production measurements. The T2K experiment in Japan uses the ancillary NA61/SHINE facility at CERN to constrain the production of hadrons resulting from the interactions of proton beam particles impinging on a 90cm long graphite target. Data taken by NA61/SHINE with a 30 GeV proton beam on a thin (4% interaction length) graphite target have been recorded in 2007 and 2009. They have been analysed and extens...

  15. Structural model of the Northern Latium volcanic area constrained by MT, gravity and aeromagnetic data

    Directory of Open Access Journals (Sweden)

    P. Gasparini

    1997-06-01

    Full Text Available The results of about 120 magnetotelluric soundings carried out in the Vulsini, Vico and Sabatini volcanic areas were modeled along with Bouguer and aeromagnetic anomalies to reconstruct a model of the structure of the shallow (less than 5 km depth) crust. The interpretations were constrained by the information gathered from the deep boreholes drilled for geothermal exploration. MT and aeromagnetic anomalies allow the depth to the top of the sedimentary basement and the thickness of the volcanic layer to be inferred. Gravity anomalies are strongly affected by the variations in the morphology of the top of the sedimentary basement, consisting of a Tertiary flysch, and of the interface with the underlying Mesozoic carbonates. Gravity data have also been used to extrapolate the thickness of the neogenic unit indicated by some boreholes. There is no evidence for other important density and susceptibility heterogeneities or deeper sources of magnetic and/or gravity anomalies anywhere in the surveyed area.

  16. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.

  17. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2017-06-01

    Full Text Available In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ∼ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol–chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC ∕ OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ∼ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1

  18. The Balance of Payment-Constrained Economic Growth in Ethiopia ...

    African Journals Online (AJOL)

    Administrator

    ...financial liberalization and export promotion strategy necessarily lead to better growth performance. Rather, one should consider not only exports of goods and services, but also the income elasticity of imports. The balance of payments-constrained growth model postulates that the rate of growth in any country is ...

  19. Constrained relationship agency as the risk factor for intimate ...

    African Journals Online (AJOL)

    We used structural equation modelling to identify and measure constrained relationship agency (CRA) as a latent variable, and then tested the hypothesis that CRA plays a significant role in the pathway between IPV and transactional sex. After controlling for CRA, receiving more material goods from a sexual partner was ...

  20. Capturing Hotspots For Constrained Indoor Movement

    DEFF Research Database (Denmark)

    Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua

    2013-01-01

    Finding the hotspots in large indoor spaces is very important for identifying overloaded locations, security, crowd management, indoor navigation and guidance. The tracking data coming from indoor tracking are huge in volume and not readily usable for finding hotspots. This paper presents a graph-based model for constrained indoor movement that can map the tracking records into mapping records representing the entry and exit times of an object at a particular location. It then discusses the technique for extracting hotspots from the mapping records.

  1. Constraining the evolution of the CMB temperature with SZ measurements from Planck data

    Energy Technology Data Exchange (ETDEWEB)

    Luzzi, G.; Petris, M. De; Lamagna, L. [Dept. of Physics, Sapienza, University of Rome, Piazzale Aldo Moro 2, Rome, I-00185 Italy (Italy); Génova-Santos, R.T. [Instituto de Astrofísica de Canarias, C/Vía Láctea s/n, La Laguna, Tenerife (Spain); Martins, C.J.A.P., E-mail: gemma.luzzi@roma1.infn.it, E-mail: rgs@iac.es, E-mail: carlos.martins@astro.up.pt, E-mail: marco.depetris@roma1.infn.it, E-mail: luca.lamagna@roma1.infn.it [Centro de Astrofísica, Universidade do Porto, Rua das Estrelas, Porto, 4150-762 Portugal (Portugal)

    2015-09-01

    The CMB temperature-redshift relation, T_CMB(z) = T_0(1+z), is a key prediction of the standard cosmology but is violated in many non-standard models. Constraining possible deviations from this law is an effective way to test the ΛCDM paradigm and to search for hints of new physics. We have determined T_CMB(z), with a precision up to 3%, for a subsample (103 clusters) of the Planck SZ cluster catalog, at redshifts in the range 0.01–0.94, using measurements of the spectrum of the Sunyaev-Zel'dovich (SZ) effect obtained from Planck temperature maps at frequencies from 70 to 353 GHz. The method adopted to provide individual determinations of T_CMB(z) at cluster redshift relies on the use of the SZ intensity change, ΔI_SZ(ν), at different frequencies and on a Monte Carlo Markov chain approach. By applying this method to the sample of 103 clusters, we limit possible deviations of the form T_CMB(z) = T_0(1+z)^(1−β) to be β = 0.012 ± 0.016, at 1σ uncertainty, consistent with the prediction of the standard model. Combining these measurements with previously published results, we get β = 0.013 ± 0.011.

  2. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  3. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte

  4. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models and the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, the resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  5. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  6. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  7. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    Science.gov (United States)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-06-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2-D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching, to estimate the uncertainties of the depths of key horizons near the Deep Sea Drilling Project (DSDP) borehole 258 (DSDP-258) located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites adjacent to the existing borehole in the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth migrated images; these can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program, leg 369.

  8. Vibroacoustic study of a point-constrained plate mounted in a duct

    Science.gov (United States)

    Sapkale, Swapnil L.; Sucheendran, Mahesh M.; Gupta, Shakti S.; Kanade, Shantanu V.

    2018-04-01

    The vibroacoustic study of the interaction of sound with a point-constrained, simply-supported square plate is considered in this paper. The plate is mounted flush on one of the walls of an infinite duct of rectangular cross section and is backed by a cavity. The plate response and the acoustic field is predicted by solving the coupled governing equations using modal expansion with the relevant eigenmodes of the plate dynamics and acoustic fields in the duct and cavity. By varying the location of the point constraint, the frequency characteristics of the transmission loss in the duct can be tuned. The point constraint can also alter the amplitude and spectral characteristics of the plate's response. Interestingly, some new peaks are observed in the response because of the excitation of unsymmetric modes which are otherwise dormant. Mode-localization phenomenon, which is the localization of vibration in specific regions of the plate, is observed for selected constrained points.

  9. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often arise in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It organically integrates the self-memory principle of dynamic systems with the grey NGM(1,1, k) model; the traditional grey model's sensitivity to the initial value is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is used to demonstrate the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model over results from the literature. Its strong prediction performance stems from the coupling model's ability to take full advantage of systematic multi-time historical data and to capture the stochastic fluctuation tendency. This work also contributes to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174

  10. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    The predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered random, with variations governed by probability distributions. A Bayesian predictive model for the Rayleigh distribution, which has only a single scale parameter, has been proposed. Closed-form posterior and predictive inferences under different reasonable choices of prior distribution have also been presented in a sensitivity analysis.

  11. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...

  12. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of a model for the current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for the prediction of solar radiation is proposed. The framework started with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. Hence, a procedure for pattern identification is developed to identify the proper pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to others.

  13. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion

  14. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only contain patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. So it is necessary for us to develop a model for prediction of CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracies of the model. Finally we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: the decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability of CIN development and thereby provides preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
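
    For readers who want to see the shape of such a workflow, the following is a minimal sketch (not the authors' code or data) of a random-forest risk model with a 4:1 development/validation split, 5-fold cross-validated AUC, and a held-out check, using scikit-learn; the synthetic features merely stand in for the 83 candidate preprocedural variables.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for 8800 patients and 83 candidate preprocedural variables.
    X, y = make_classification(n_samples=8800, n_features=83, n_informative=13,
                               weights=[0.866], random_state=0)  # ~13% event rate

    # 4:1 split into development and validation sets, as in the abstract.
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    cv_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc")
    print("5-fold CV AUC: %.3f +/- %.3f" % (cv_auc.mean(), cv_auc.std()))

    model.fit(X_dev, y_dev)
    val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print("Validation AUC: %.3f" % val_auc)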

  15. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
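
    As an illustration of the threshold-selection step described above, the sketch below (with simulated data standing in for the clinic records) fits a logistic regression on two-thirds of the observations and then picks the show/no-show cut-off that minimizes misclassification on the remaining third.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Simulated stand-in for appointment history, demographics and insurance predictors.
    X, y = make_classification(n_samples=7291, n_features=12, n_informative=6, random_state=1)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=1/3, random_state=1)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    p_valid = clf.predict_proba(X_valid)[:, 1]

    # Scan candidate thresholds and keep the one with the fewest misclassifications.
    thresholds = np.linspace(0.05, 0.95, 91)
    errors = [np.mean((p_valid >= t) != y_valid) for t in thresholds]
    best_t = thresholds[int(np.argmin(errors))]
    print("no-show threshold = %.2f, misclassification rate = %.3f" % (best_t, min(errors)))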

  16. Explaining evolution via constrained persistent perfect phylogeny

    Science.gov (United States)

    2014-01-01

    Background The perfect phylogeny is an often used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as for example in haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite site assumption is violated because of e.g. back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is the fact that a character is allowed to return back to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to

  17. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  18. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  19. Nucleation in stress-induced tetragonal-monoclinic transformation of constrained zirconia

    International Nuclear Information System (INIS)

    Chan, S.K.

    1992-08-01

    A theory for stress-induced tetragonal→monoclinic transformation of constrained zirconia is presented based on the assumption that when forcibly strained to a regime of absolute instability where the free energy density of the tetragonal phase has a negative curvature, the constrained tetragonal zirconia becomes unstable with respect to the development of a modulated strain pattern that will evolve into a band of twin monoclinic domains. The temperature range for such an instability, the critical size of the inclusion, the corresponding critical strain, and the periodicity of the modulation are derived in terms of parameters that can be related to the elastic stiffness coefficients of various orders of the inclusion and the shear modulus of the host matrix. An entirely different mechanism is suggested for the reverse monoclinic→tetragonal transformation because the monoclinic phase is metastable when the extrinsic stress is removed. Estimates for the parameters are inferred from a variety of experimental data for pure zirconia and the numerical values for the predicted physical quantities are obtained

  20. USING ForeCAT DEFLECTIONS AND ROTATIONS TO CONSTRAIN THE EARLY EVOLUTION OF CMEs

    International Nuclear Information System (INIS)

    Kay, C.; Opher, M.; Colaninno, R. C.; Vourlidas, A.

    2016-01-01

    To accurately predict the space weather effects of the impacts of coronal mass ejection (CME) at Earth one must know if and when a CME will impact Earth and the CME parameters upon impact. In 2015 Kay et al. presented Forecasting a CME’s Altered Trajectory (ForeCAT), a model for CME deflections based on the magnetic forces from the background solar magnetic field. Knowing the deflection and rotation of a CME enables prediction of Earth impacts and the orientation of the CME upon impact. We first reconstruct the positions of the 2010 April 8 and the 2012 July 12 CMEs from the observations. The first of these CMEs exhibits significant deflection and rotation (34° deflection and 58° rotation), while the second shows almost no deflection or rotation (<3° each). Using ForeCAT, we explore a range of initial parameters, such as the CME’s location and size, and find parameters that can successfully reproduce the behavior for each CME. Additionally, since the deflection depends strongly on the behavior of a CME in the low corona, we are able to constrain the expansion and propagation of these CMEs in the low corona.

  1. Constrained CPn models

    International Nuclear Information System (INIS)

    Latorre, J.I.; Luetken, C.A.

    1988-11-01

    We construct a large new class of two-dimensional sigma models with Kaehler target spaces which are algebraic manifolds realized as complete intersections in weighted CP^n spaces. They are N=2 superconformally symmetric and particular choices of constraints give Calabi-Yau target spaces which are nontrivial string vacua. (orig.)

  2. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  3. Incorporating a Constrained Optimization Algorithm into Remote- Sensing/Precision Agriculture Methodology

    Science.gov (United States)

    Morgenthaler, George; Khatib, Nader; Kim, Byoungsoo

    with information to improve their crop's vigor has been a major topic of interest. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, the efficiency of farming must increase to meet future food requirements and to make farming a sustainable occupation for the farmer. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The goal is to increase farm revenue by increasing crop yield and decreasing applications of costly chemical and water treatments. In addition, this methodology will decrease the environmental costs of farming, i.e., reduce air, soil, and water pollution. Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now available. Commercial satellite systems can image (multi-spectral) the Earth with a resolution of approximately 2.5 m. Variable precision dispensing systems using GPS are available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been formulated. Personal computers and internet access are in place in most farm homes and can provide a mechanism to periodically disseminate, e.g. bi-weekly, advice on what quantities of water and chemicals are needed in individual regions of the field. What is missing is a model that fuses the disparate sources of information on the current states of the crop and soil, and the remaining resource levels available with the decisions farmers are required to make. This must be a product that is easy for the farmer to understand and to implement. A "Constrained Optimization Feed-back Control Model" to fill this void will be presented. The objective function of the model will be used to maximize the farmer's profit by increasing yields while decreasing environmental costs and decreasing

  4. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    International Nuclear Information System (INIS)

    Han, Jing-Cheng; Huang, Guohe; Huang, Yuefei; Zhang, Hua; Li, Zhong; Chen, Qiuwen

    2015-01-01

    Lack of hydrologic process representation at the short time-scale would lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by the overland flow generation, which is closely related to the rainfall characteristics at a sub-time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, rainfall thresholds were so crucial for reflecting the scaling effect of rainfall intensity that optimal levels of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although slight differences of overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and would be expected to have a potential for wide applications. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A

  5. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jing-Cheng [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Huang, Guohe, E-mail: huang@iseis.org [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Huang, Yuefei [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Zhang, Hua [College of Science and Engineering, Texas A& M University — Corpus Christi, Corpus Christi, TX 78412-5797 (United States); Li, Zhong [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Chen, Qiuwen [Center for Eco-Environmental Research, Nanjing Hydraulics Research Institute, Nanjing 210029 (China)

    2015-08-15

    Lack of hydrologic process representation at the short time-scale would lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by the overland flow generation, which is closely related to the rainfall characteristics at a sub-time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, rainfall thresholds were so crucial for reflecting the scaling effect of rainfall intensity that optimal levels of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although slight differences of overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and would be expected to have a potential for wide applications. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A

  6. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
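
    The bootstrap procedure sketched below illustrates the idea for a continuous endpoint; the exponential dose-response form, the synthetic data and the new-plan dose are assumptions for illustration, not the authors' salivary-function model.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def outcome_model(dose, a, b):
        # Illustrative dose-response form for a continuous outcome.
        return a * np.exp(-b * dose)

    # Synthetic stand-in for the original (dose metric, outcome) patient data.
    dose = rng.uniform(5, 60, size=80)
    obs = outcome_model(dose, 1.0, 0.03) + rng.normal(0, 0.08, size=80)

    popt, _ = curve_fit(outcome_model, dose, obs, p0=(1.0, 0.02))
    resid_sd = np.std(obs - outcome_model(dose, *popt))   # residual 'noise' level

    # Bootstrap: refit on resampled data, evaluate the new plan under each parameter set,
    # and add residual noise to build a histogram of alternative model results.
    new_plan_dose = 35.0
    predictions = []
    for _ in range(2000):
        idx = rng.integers(0, len(dose), size=len(dose))
        try:
            p_boot, _ = curve_fit(outcome_model, dose[idx], obs[idx], p0=popt)
        except RuntimeError:
            continue   # skip resamples where the fit does not converge
        predictions.append(outcome_model(new_plan_dose, *p_boot) + rng.normal(0, resid_sd))

    lo, hi = np.percentile(predictions, [2.5, 97.5])
    print("predicted outcome: %.2f (95%% interval %.2f to %.2f)" % (np.median(predictions), lo, hi))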

  7. Surface complexation modeling for predicting solid phase arsenic concentrations in the sediments of the Mississippi River Valley alluvial aquifer, Arkansas, USA

    Science.gov (United States)

    Sharif, M.S.U.; Davis, R.K.; Steele, K.F.; Kim, B.; Hays, P.D.; Kresse, T.M.; Fazio, J.A.

    2011-01-01

    The potential health impact of As in drinking water supply systems in the Mississippi River Valley alluvial aquifer in the state of Arkansas, USA is significant. In this context it is important to understand the occurrence, distribution and mobilization of As in the Mississippi River Valley alluvial aquifer. Application of surface complexation models (SCMs) to predict the sorption behavior of As and hydrous Fe oxides (HFO) in the laboratory has increased in the last decade. However, the application of SCMs to predict the sorption of As in natural sediments has not often been reported, and such applications are greatly constrained by the lack of site-specific model parameters. Attempts have been made to use SCMs considering a component additivity (CA) approach which accounts for relative abundances of pure phases in natural sediments, followed by the addition of SCM parameters individually for each phase. Although few reliable and internally consistent sorption databases related to HFO exist, the use of SCMs using laboratory-derived sorption databases to predict the mobility of As in natural sediments has increased. This study is an attempt to evaluate the ability of the SCMs using the geochemical code PHREEQC to predict solid phase As in the sediments of the Mississippi River Valley alluvial aquifer in Arkansas. The SCM option of the double-layer model (DLM) was simulated using ferrihydrite and goethite as sorbents quantified from chemical extractions, calculated surface-site densities, published surface properties, and published laboratory-derived sorption constants for the sorbents. The model results are satisfactory for shallow wells (10.6 m below ground surface), where the redox condition is relatively oxic or mildly suboxic. However, for the deep alluvial aquifer (21-36.6 m below ground surface), where the redox condition is suboxic to anoxic, the model results are unsatisfactory. © 2011 Elsevier Ltd.

  8. Quasicanonical structure of optimal control in constrained discrete systems

    Science.gov (United States)

    Sieniutycz, S.

    2003-06-01

    This paper considers discrete processes governed by difference rather than differential equations for the state transformation. The basic question asked is if and when Hamiltonian canonical structures are possible in optimal discrete systems. Considering constrained discrete control, general optimization algorithms are derived that constitute suitable theoretical and computational tools when evaluating extremum properties of constrained physical models. The mathematical basis of the general theory is the Bellman method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage criterion which allows a variation of the terminal state that is otherwise fixed in the Bellman's method. Two relatively unknown, powerful optimization algorithms are obtained: an unconventional discrete formalism of optimization based on a Hamiltonian for multistage systems with unconstrained intervals of holdup time, and the time interval constrained extension of the formalism. These results are general; namely, one arrives at: the discrete canonical Hamilton equations, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory along with all basic results of variational calculus. Vast spectrum of applications of the theory is briefly discussed.

  9. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  10. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  11. Snow contribution to springtime atmospheric predictability over the second half of the twentieth century

    Energy Technology Data Exchange (ETDEWEB)

    Peings, Yannick [CNRM-GAME, Meteo-France et CNRS, Toulouse (France); CNRM/GMGEC/VDR, Toulouse (France); Douville, H.; Alkama, R.; Decharme, B. [CNRM-GAME, Meteo-France et CNRS, Toulouse (France)

    2011-09-15

    A set of global atmospheric simulations has been performed with the ARPEGE-Climat model in order to quantify the contribution of realistic snow conditions to seasonal atmospheric predictability in addition to that of a perfect sea surface temperature (SST) forcing. The focus is on the springtime boreal hemisphere where the combination of a significant snow cover variability and an increasing solar radiation favour the potential snow influence on the surface energy budget. The study covers the whole 1950-2000 period through the use of an original snow mass reanalysis based on an off-line land surface model and possibly constrained by satellite snow cover observations. Two ensembles of 10-member AMIP-type experiments have been first performed with relaxed versus free snow boundary conditions. The nudging towards the monthly snow mass reanalysis significantly improves both potential and actual predictability of springtime surface air temperature over Central Europe and North America. Yet, the impact is confined to the lower troposphere and there is no clear improvement in the predictability of the large-scale atmospheric circulation. Further constraining the prescribed snow boundary conditions with satellite observations does not change much the results. Finally, using the snow reanalysis only for initializing the model on March 1st also leads to a positive impact on predicted low-level temperatures but with a weaker amplitude and persistence. A conditional skill approach as well as some selected case studies provide some guidelines for interpreting these results and suggest that an underestimated snow cover variability and a misrepresentation of ENSO teleconnections may hamper the benefit of an improved snow initialization in the ARPEGE-Climat model. (orig.)

  12. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...... complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices....

  13. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  14. Using adaptive model predictive control to customize maintenance therapy chemotherapeutic dosing for childhood acute lymphoblastic leukemia.

    Science.gov (United States)

    Noble, Sarah L; Sherer, Eric; Hannemann, Robert E; Ramkrishna, Doraiswami; Vik, Terry; Rundell, Ann E

    2010-06-07

    Acute lymphoblastic leukemia (ALL) is a common childhood cancer in which nearly one-quarter of patients experience a disease relapse. However, it has been shown that individualizing therapy for childhood ALL patients by adjusting doses based on the blood concentration of active drug metabolite could significantly improve treatment outcome. An adaptive model predictive control (MPC) strategy is presented in which maintenance therapy for childhood ALL is personalized using routine patient measurements of red blood cell mean corpuscular volume as a surrogate for the active drug metabolite concentration. A clinically relevant mathematical model is developed and used to describe the patient response to the chemotherapeutic drug 6-mercaptopurine, with some model parameters being patient-specific. During the course of treatment, the patient-specific parameters are adaptively identified using recurrent complete blood count measurements, which sufficiently constrain the patient parameter uncertainty to support customized adjustments of the drug dose. While this work represents only a first step toward a quantitative tool for clinical use, the simulated treatment results indicate that the proposed mathematical model and adaptive MPC approach could serve as valuable resources to the oncologist toward creating a personalized treatment strategy that is both safe and effective. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
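
    The toy sketch below conveys the receding-horizon idea in the abstract in a heavily simplified form: a scalar surrogate marker follows an assumed first-order response to the dose, the dose gain is re-estimated after each measurement, and the next dose is chosen over a short horizon subject to dose limits. The model form, parameter values and target are illustrative assumptions, not the authors' clinical model.

    import numpy as np
    from scipy.optimize import minimize

    target = 90.0                 # desired surrogate marker level (illustrative)
    a_true, b_true = 0.8, 0.5     # "true" patient dynamics, unknown to the controller
    horizon, u_max = 5, 100.0     # prediction horizon and maximum allowed dose

    def step(y, u, a, b):
        return a * y + b * u      # assumed one-step patient response model

    def mpc_dose(y0, a_hat, b_hat):
        # Choose a dose sequence minimizing squared deviation from the target.
        def cost(u_seq):
            y, c = y0, 0.0
            for u in u_seq:
                y = step(y, u, a_hat, b_hat)
                c += (y - target) ** 2 + 1e-3 * u ** 2
            return c
        res = minimize(cost, x0=np.full(horizon, 10.0), bounds=[(0.0, u_max)] * horizon)
        return res.x[0]           # receding horizon: apply only the first dose

    rng = np.random.default_rng(0)
    y, b_hat = 60.0, 0.2          # initial marker level and initial gain estimate
    for k in range(12):
        u = mpc_dose(y, a_true, b_hat)                          # adapt b, assume a known
        y_next = step(y, u, a_true, b_true) + rng.normal(0, 0.5)
        if u > 1e-6:
            b_hat += 0.5 * ((y_next - a_true * y) / u - b_hat)  # crude gain re-estimate
        y = y_next
    print("final dose %.1f, marker %.1f, estimated gain %.2f" % (u, y, b_hat))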

  15. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  16. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
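
    The core operation behind such an approach is a non-dominance filter; a small sketch (with invented model scores) is given below, where each candidate model is scored on two criteria to be maximized and only the Pareto-optimal ones are retained.

    def pareto_front(points):
        """Return the points not dominated by any other point (all criteria maximized)."""
        front = []
        for i, p in enumerate(points):
            dominated = any(all(q[k] >= p[k] for k in range(len(p))) and q != p
                            for j, q in enumerate(points) if j != i)
            if not dominated:
                front.append(p)
        return front

    # Invented (accuracy, applicability-domain coverage) scores for four candidate models.
    models = [(0.81, 0.40), (0.74, 0.90), (0.79, 0.85), (0.70, 0.60)]
    print(pareto_front(models))   # the last model is dominated and is dropped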

  17. Constrained minimization in C++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest possible approach was used. The widely known program FUMILI was reimplemented in the C++ language. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany).
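
    The constraint-handling scheme described above can be illustrated with a short sketch (written in Python rather than C++ purely for brevity, and with a toy objective that is not the ANKE fitting problem): an inequality φ(θ) ≥ a becomes an equality φ(θ) = t with the simple bound t ≥ a, and the equality is enforced through a quadratic penalty whose weight is gradually increased.

    import numpy as np
    from scipy.optimize import minimize

    def objective(theta):
        return (theta[0] - 2.0) ** 2 + (theta[1] + 1.0) ** 2

    def phi(theta):
        return theta[0] + theta[1]        # constraint function, require phi(theta) >= a
    a = 2.0

    def penalized(z, mu):
        theta, t = z[:2], z[2]            # t is the extra slack variable
        return objective(theta) + mu * (phi(theta) - t) ** 2   # quadratic penalty term

    z0 = np.array([0.0, 0.0, a])
    for mu in (1.0, 10.0, 100.0, 1000.0): # tighten the penalty weight gradually
        res = minimize(penalized, z0, args=(mu,),
                       bounds=[(None, None), (None, None), (a, None)])
        z0 = res.x
    print("theta =", np.round(z0[:2], 3), " phi(theta) =", round(phi(z0[:2]), 3))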

  18. Chance Constrained Input Relaxation to Congestion in Stochastic DEA. An Application to Iranian Hospitals.

    Science.gov (United States)

    Kheirollahi, Hooshang; Matin, Behzad Karami; Mahboubi, Mohammad; Alavijeh, Mehdi Mirzaei

    2015-01-01

    This article developed a model of congestion based on a relaxed combination of inputs in stochastic data envelopment analysis (SDEA) with chance-constrained programming approaches. Classic data envelopment analysis models with deterministic data have been used by many authors to identify congestion and estimate its levels; however, data envelopment analysis with stochastic data has rarely been used to identify congestion. This article used chance-constrained programming approaches to replace the stochastic models with "deterministic equivalents". This substitution leads to non-linear problems that must be solved. Finally, the proposed method based on a relaxed combination of inputs was used to identify input congestion in six Iranian hospitals, each with one input and two outputs, over the period 2009 to 2012.

  19. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex, seamless, integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  20. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key issues from an environmental point of view, as well as in the industrial, transportation and residential sectors. HVAC (Heating, Ventilating and Air Conditioning) systems account for half of the total energy consumption in a building. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has features and benefits such as: (1) it is applicable to cases in which past operation data for load prediction model learning are scarce; (2) it has a self-checking function, which continuously supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect if something is wrong in the load prediction procedure; (3) it has the ability to adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement in load prediction performance is illustrated.

  1. A Stochastic Multi-Objective Chance-Constrained Programming Model for Water Supply Management in Xiaoqing River Watershed

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2017-05-01

    Full Text Available In this paper, a stochastic multi-objective chance-constrained programming model (SMOCCP) was developed for tackling the water supply management problem. Two objectives were included in this model: the minimization of leakage loss amounts and of total system cost, respectively. The traditional SCCP model required the random variables to be expressed as normal distributions, even when their statistical characteristics were better reflected by other forms. The SMOCCP model allows the random variables to be expressed as log-normal distributions, rather than in general normal form. Possible solution deviations caused by irrational parameter assumptions are thus avoided, and the feasibility and accuracy of the generated solutions are ensured. The water supply system in the Xiaoqing River watershed was used as a study case for demonstration. In the context of various weight combinations and probabilistic levels, many types of solutions are obtained, which are expressed as a series of transferred amounts from water sources to treatment plants, from treatment plants to reservoirs, as well as from reservoirs to tributaries. It is concluded that the SMOCCP model could capture the characteristics of the studied region and generate desired water supply schemes under complex uncertainties. The successful application of the proposed model is expected to serve as a good example for water resource management in other watersheds.
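
    The gain from allowing log-normal variables can be seen in the deterministic equivalent of a single chance constraint. The sketch below (with invented numbers) shows the standard conversion: if demand D is log-normal with log-scale parameters (mu, sigma) and we require P(supply >= D) >= alpha, the constraint reduces to supply >= exp(mu + z_alpha * sigma).

    import numpy as np
    from scipy.stats import norm

    mu, sigma = np.log(50.0), 0.25        # illustrative log-scale demand parameters
    for alpha in (0.90, 0.95, 0.99):      # probabilistic levels
        z = norm.ppf(alpha)
        supply_min = np.exp(mu + z * sigma)
        print("alpha = %.2f -> required supply >= %.1f" % (alpha, supply_min))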

  2. QUASARS ARE NOT LIGHT BULBS: TESTING MODELS OF QUASAR LIFETIMES WITH THE OBSERVED EDDINGTON RATIO DISTRIBUTION

    International Nuclear Information System (INIS)

    Hopkins, Philip F.; Hernquist, Lars

    2009-01-01

    We use the observed distribution of Eddington ratios as a function of supermassive black hole (BH) mass to constrain models of quasar/active galactic nucleus (AGN) lifetimes and light curves. Given the observed (well constrained) AGN luminosity function, a particular model for AGN light curves L(t) or, equivalently, the distribution of AGN lifetimes (time above a given luminosity t(>L)) translates directly and uniquely (without further assumptions) to a predicted distribution of Eddington ratios at each BH mass. Models for self-regulated BH growth, in which feedback produces a self-regulating 'decay' or 'blowout' phase after the AGN reaches some peak luminosity/BH mass and begins to expel gas and shut down accretion, make specific predictions for the light curves/lifetimes, distinct from, e.g., the expected distribution if AGN simply shut down by gas starvation (without feedback) and very different from the prediction of simple phenomenological 'light bulb' scenarios. We show that the present observations of the Eddington ratio distribution, spanning nearly 5 orders of magnitude in Eddington ratio, 3 orders of magnitude in BH mass, and redshifts z = 0-1, agree well with the predictions of self-regulated models, and rule out phenomenological 'light bulb' or pure exponential models, as well as gas starvation models, at high significance (∼5σ). We also compare with observations of the distribution of Eddington ratios at a given AGN luminosity, and find similar good agreement (but show that these observations are much less constraining). We fit the functional form of the quasar lifetime distribution and provide these fits for use, and show how the Eddington ratio distributions place precise, tight limits on the AGN lifetimes at various luminosities, in agreement with model predictions. We compare with independent estimates of episodic lifetimes and use this to constrain the shape of the typical AGN light curve, and provide simple analytic fits to these for use in

  3. The role of bias in simulation of the Indian monsoon and its relationship to predictability

    Science.gov (United States)

    Kelly, P.

    2016-12-01

    Confidence in future projections of how climate change will affect the Indian monsoon is currently limited by, among other things, model biases, that is, the systematic error in simulating the mean present-day climate. An important priority question in seamless prediction involves the role of the mean state. How much of the prediction error in imperfect models stems from a biased mean state (itself a result of many interacting process errors), and how much stems from the flow dependence of processes during an oscillation or variation we are trying to predict? Using simple but effective nudging techniques, we are able to address this question in a clean and incisive framework that teases apart the roles of the mean state vs. transient flow dependence in constraining predictability. The role of bias in model fidelity of simulations of the Indian monsoon is investigated in CAM5, and the relationship to predictability in remote regions of the "free" (non-nudged) domain is explored.

  4. Thermal-based modeling of coupled carbon, water, and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    KAUST Repository

    Schull, M. A.

    2015-03-11

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  5. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data is presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold. First, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
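
    Two of the reporting steps mentioned above, exponentiating coefficients into odds ratios and summarizing discrimination with the ROC AUC, are sketched below on simulated financial-ratio data (the predictors and effect sizes are invented, not the Croatian sample).

    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 1471
    X = rng.normal(size=(n, 3))                    # e.g. liquidity, leverage, activity ratios
    lin = -0.5 + 0.8 * X[:, 0] - 0.6 * X[:, 1]     # true effects used for the simulation
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

    X_sm = sm.add_constant(X)
    fit = sm.Logit(y, X_sm).fit(disp=False)
    odds_ratios = np.exp(fit.params)               # effect measures per predictor
    auc = roc_auc_score(y, fit.predict(X_sm))      # in-sample discrimination
    print("odds ratios:", np.round(odds_ratios, 2), " AUC: %.3f" % auc)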

  6. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    Science.gov (United States)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning linear time invariant (LTI) models in time varying simulation is proposed that utilizes both quadratically constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to the simulation of the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets that are propagated throughout time. The time invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents a problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions as well as maintaining elastic motion from the prior state.

  7. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
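
    A compact way to picture an adaptive MOS correction is recursive least squares with a forgetting factor, sketched below on made-up data: a linear map from the NWP-predicted wind speed to observed power is updated online so that the correction can track slow changes in the NWP model and at the site. The linear form and the numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    lam = 0.98                      # forgetting factor (< 1 discounts old data)
    theta = np.zeros(2)             # [bias, gain] of the correction
    P = np.eye(2) * 1e3             # inverse information matrix

    for t in range(500):
        nwp_speed = rng.uniform(3, 15)                            # NWP-predicted speed
        obs_power = 20.0 * nwp_speed - 15.0 + rng.normal(0, 5)    # "measured" power
        x = np.array([1.0, nwp_speed])                            # regressor
        # Standard RLS update with exponential forgetting.
        k = P @ x / (lam + x @ P @ x)
        theta = theta + k * (obs_power - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam

    print("estimated bias and gain:", np.round(theta, 2))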

  8. Chance-constrained programming approach to natural-gas curtailment decisions

    Energy Technology Data Exchange (ETDEWEB)

    Guldmann, J M

    1981-10-01

    This paper presents a modeling methodology for the determination of optimal-curtailment decisions by a gas-distribution utility during a chronic gas-shortage situation. Based on the end-use priority approach, a linear-programming model is formulated, that reallocates the available gas supply among the utility's customers while minimizing fuel switching, unemployment, and utility operating costs. This model is then transformed into a chance-constrained program in order to account for the weather-related variability of the gas requirements. The methodology is applied to the East Ohio Gas Company. 16 references, 2 figures, 3 tables.
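
    A toy version of such an allocation model is sketched below: a linear program that splits a limited supply across customer classes ordered by end-use priority, with each class's requirement tightened to a high-percentile value of its (assumed normal) demand as a simple deterministic equivalent of a chance constraint. All numbers, class names and the normality assumption are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog
    from scipy.stats import norm

    supply = 900.0
    mean_req = np.array([400.0, 300.0, 350.0])    # residential, commercial, industrial
    std_req = np.array([30.0, 40.0, 60.0])
    alpha = 0.95
    req = mean_req + norm.ppf(alpha) * std_req    # demand met with probability >= alpha

    # Cost per unit of unserved demand encodes curtailment priority (residential served first).
    shortfall_cost = np.array([10.0, 4.0, 1.0])

    # Variables: x (gas delivered per class) and s (shortfall per class).
    # Minimize priority-weighted shortfall subject to x + s >= req and sum(x) <= supply.
    c = np.concatenate([np.zeros(3), shortfall_cost])
    A_ub = np.vstack([
        np.hstack([-np.eye(3), -np.eye(3)]),              # -(x + s) <= -req
        np.hstack([np.ones((1, 3)), np.zeros((1, 3))]),   # sum(x) <= supply
    ])
    b_ub = np.concatenate([-req, [supply]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
    x_opt, s_opt = res.x[:3], res.x[3:]
    print("delivered:", np.round(x_opt, 1), " shortfall:", np.round(s_opt, 1))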

  9. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  10. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    Full Text Available A hybrid of Ensemble Empirical Mode Decomposition (EEMD) and the Least Squares Support Vector Machine (LSSVM) is proposed to improve short-term wind speed forecasting precision. EEMD is first utilized to decompose the original wind speed time series into a set of subseries. LSSVM models are then established to forecast these subseries. The partial autocorrelation function is adopted to analyze the inner relationships within the historical wind speed series in order to determine the input variables of the LSSVM model for each subseries. Finally, the superposition principle is employed to sum the predicted values of all subseries as the final wind speed prediction. The performance of the hybrid model is evaluated using six metrics. Compared with LSSVM, Back Propagation Neural Networks (BP), the Auto-Regressive Integrated Moving Average (ARIMA), the combination of Empirical Mode Decomposition (EMD) with LSSVM, and the hybrid of EEMD with ARIMA, the wind speed forecasting results show that the proposed hybrid model outperforms these models on all six metrics. Furthermore, scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.
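
    A hedged sketch of the decompose-forecast-recombine scheme described above. Assumptions: the PyEMD package (class EEMD) provides the decomposition; sklearn's KernelRidge is used as a stand-in for LSSVM (both are least-squares kernel methods, but this is not the authors' implementation); lag selection is simplified to a fixed order instead of a full PACF analysis; the wind series is synthetic.

```python
import numpy as np
from PyEMD import EEMD
from sklearn.kernel_ridge import KernelRidge

def lagged(series, n_lags):
    """Build a lag-embedding design matrix and one-step-ahead targets."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(1)
wind = 6 + np.sin(np.linspace(0, 20, 600)) + 0.5 * rng.standard_normal(600)

imfs = EEMD().eemd(wind)                 # decompose into subseries (IMFs + residue)
horizon, n_lags, forecast = 24, 6, 0.0
for imf in imfs:
    X, y = lagged(imf[:-horizon], n_lags)
    model = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, y)
    hist = list(imf[:-horizon][-n_lags:])
    for _ in range(horizon):             # iterated one-step predictions per subseries
        hist.append(model.predict(np.array(hist[-n_lags:])[None, :])[0])
    forecast += np.array(hist[-horizon:])  # superposition of the subseries forecasts
```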

  11. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.

  12. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates; limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in large differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
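
    The two strategies can be written schematically as follows; the notation is generic (a covariate-dependent stick-breaking-type mixture), not taken from the article.

```latex
% Dependence on the covariate x can enter through the weights or through the atoms
% of the discrete random probability measure G_x:
\[
  \text{weights: } G_x = \sum_{h=1}^{\infty} w_h(x)\,\delta_{\theta_h},
  \qquad
  \text{locations: } G_x = \sum_{h=1}^{\infty} w_h\,\delta_{\theta_h(x)} .
\]
% The induced conditional (predictive) densities are the corresponding mixtures:
\[
  f(y \mid x) = \sum_{h=1}^{\infty} w_h(x)\, k\bigl(y \mid \theta_h\bigr)
  \quad\text{versus}\quad
  f(y \mid x) = \sum_{h=1}^{\infty} w_h\, k\bigl(y \mid \theta_h(x)\bigr).
\]
```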

  13. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises operating in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main findings identified the critical prediction indicators of BISE that are important for forecasting BI performance, and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing BI solution implementation, and offer theoretical value to academics.
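
    A hedged sketch of the two model types the record mentions: human-readable decision-tree rules for classifying BISE and a logistic-regression prediction model on a handful of critical indicators. The indicator names and the tiny synthetic dataset are invented; the study's actual indicators and data are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
cols = ["data_quality", "user_adoption", "mgmt_support", "it_capability"]  # hypothetical indicators
X = rng.random((200, 4))
y = (X @ np.array([1.2, 0.8, 1.0, 0.6]) + 0.3 * rng.standard_normal(200) > 1.8).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=cols))   # if-then classification rules

logit = LogisticRegression().fit(X, y)
print(logit.coef_)                             # weights of the prediction indicators
```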

  14. Small-kernel, constrained least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
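
    For context, a hedged sketch of the classical (discrete/discrete) constrained least-squares restoration filter that the record builds on: a blurred image is restored subject to a Laplacian smoothness constraint, implemented here in the frequency domain. The paper's actual contribution, a small spatial-domain kernel derived from the fuller continuous/discrete/continuous model, is not reproduced; the PSF is assumed to be given and anchored at the array origin.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    """Constrained least-squares restoration with a Laplacian smoothness constraint."""
    shape = blurred.shape
    H = np.fft.fft2(psf, s=shape)                    # blur transfer function
    lap = np.zeros(shape)
    lap[:3, :3] = np.array([[0, -1, 0],
                            [-1, 4, -1],
                            [0, -1, 0]])             # discrete Laplacian kernel
    P = np.fft.fft2(lap)                             # constraint transfer function
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F_hat))
```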

  15. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    The objective was to predict the incidence of local malaria in Hubei Province by applying the Autoregressive Integrated Moving Average (ARIMA) model. SPSS 13.0 software was applied to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA(1,1,1)(1,1,0)12 model was identified as the relatively optimal model, with an AIC of 76.085 and an SBC of 84.395. All actual incidence values fell within the 95% confidence intervals of the model's predicted values, and the prediction performance of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
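
    A hedged sketch of fitting the reported seasonal ARIMA(1,1,1)(1,1,0)12 structure with statsmodels (the study itself used SPSS). The `monthly_incidence` series below is a synthetic placeholder for the 2004-2009 monthly incidence data, which are not provided here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder for the monthly local malaria incidence, 2004-01 .. 2009-12
monthly_incidence = pd.Series(
    np.random.default_rng(3).gamma(2.0, 1.0, 72),
    index=pd.date_range("2004-01", periods=72, freq="MS"),
)

model = SARIMAX(monthly_incidence, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12))
fit = model.fit(disp=False)
print(fit.aic)                               # compare candidate orders by AIC
forecast = fit.get_forecast(steps=12)        # predictions for the validation year
print(forecast.conf_int(alpha=0.05))         # 95% intervals used for evaluation
```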

  16. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare, using the Model Confidence Set procedure, the predictive capacity of five conditional heteroskedasticity models under eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models are highly homogeneous in their predictive performance, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
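
    A hedged sketch of one plausible member of the compared family: a GARCH(1,1) model with a Student's t error distribution fitted to index log-returns with the `arch` package and used for multi-step volatility forecasts. The return series is synthetic; the record does not list its exact five models or eight distributions, so this is an illustration rather than a replication.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
log_returns = rng.standard_t(df=6, size=1500)   # stand-in for Bovespa / Dow Jones log-returns (%)

am = arch_model(log_returns, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())

vol_forecast = res.forecast(horizon=5)          # multi-step conditional variance forecasts
print(vol_forecast.variance.iloc[-1])
```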

  17. A multi-model assessment of terrestrial biosphere model data needs

    Science.gov (United States)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight into the data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of Ecosystem Demography model v2 (ED) outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure on overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial biosphere models.
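
    A hedged, highly simplified sketch (not PEcAn code) of the variance-decomposition idea described above: each parameter's contribution to predictive uncertainty is approximated by its output sensitivity times its posterior spread, and the squared contributions are normalized to fractions. Parameter names, spreads and sensitivities are invented.

```python
import numpy as np

params      = ["SLA", "Vcmax", "leaf_respiration", "root_turnover"]   # hypothetical traits
param_sd    = np.array([8.0, 12.0, 0.4, 0.1])    # posterior std. dev. from the meta-analysis
sensitivity = np.array([0.9, 0.5, -2.0, 15.0])   # d(output)/d(parameter) at the median

partial_var = (sensitivity * param_sd) ** 2      # first-order partial variances
fraction = partial_var / partial_var.sum()
for name, f in sorted(zip(params, fraction), key=lambda p: -p[1]):
    print(f"{name:18s} {100 * f:5.1f}% of predictive variance")
```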

  18. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005, with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models in dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
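
    A hedged sketch of the kind of simple benchmark mentioned above: leave-one-out linear prediction of Kiremt rainfall anomalies from forecast Niño3.4 indices, scored by the anomaly correlation. Both series below are synthetic placeholders, and the exact cross-validation and skill metric of the study are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
nino34 = rng.standard_normal(21)                       # predicted Nino3.4 index, 1985-2005
rain = -0.6 * nino34 + 0.8 * rng.standard_normal(21)   # Kiremt rainfall anomalies

pred = np.empty_like(rain)
for i in range(len(rain)):                             # leave-one-out cross-validation
    mask = np.arange(len(rain)) != i
    slope, intercept = np.polyfit(nino34[mask], rain[mask], 1)
    pred[i] = slope * nino34[i] + intercept

skill = np.corrcoef(pred, rain)[0, 1]
print(f"benchmark anomaly correlation: {skill:.2f}")
```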

  19. Prediction of lithium-ion battery capacity with metabolic grey model

    International Nuclear Information System (INIS)

    Chen, Lin; Lin, Weilong; Li, Junzi; Tian, Binbin; Pan, Haihong

    2016-01-01

    Given the popularity of lithium-ion batteries in EVs (electric vehicles), predicting capacity quickly and accurately throughout a battery's full lifetime is still a challenging issue for ensuring the reliability of EVs. This paper proposes an approach to predicting how capacity varies with discharge cycles based on metabolic grey theory, and considers the problem from two perspectives: 1) three metabolic grey models are presented, namely the MGM (metabolic grey model), the MREGM (metabolic residual-error grey model), and the MMREGM (metabolic Markov residual-error grey model); 2) the universality of these models is explored under different conditions (such as various discharge rates and temperatures). The findings demonstrate good prediction performance for all three models, although the precision of the MREGM model is inferior to that of the others. We therefore conclude that the MGM and MMREGM models perform excellently in predicting capacity under a variety of load conditions, even when only a few data points are used for modeling. The universality of the metabolic grey prediction theory is also verified by predicting the capacity of batteries under different discharge rates and temperatures. - Highlights: • The metabolic mechanism is introduced in a grey system for capacity prediction. • Three metabolic grey models are presented and studied. • The universality of these models under different conditions is assessed. • Only a few data points are required for predicting capacity with these models.
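
    A hedged sketch of the basic idea behind the MGM variant: a grey model GM(1,1) is fit on a short rolling window of capacity values, the next value is predicted, and the window is "metabolized" by dropping the oldest point and appending the newest observation. The capacity values and window length are illustrative, not the paper's data, and the residual-error and Markov extensions are omitted.

```python
import numpy as np

def gm11_next(x0):
    """Fit GM(1,1) to the window x0 and return the one-step-ahead prediction."""
    x1 = np.cumsum(x0)                              # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    x1_next = (x0[0] - b / a) * np.exp(-a * n) + b / a
    x1_last = (x0[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_next - x1_last                        # inverse AGO gives the x0 prediction

capacity = np.array([1.10, 1.08, 1.07, 1.05, 1.04, 1.02, 1.01, 0.99])   # Ah per cycle (synthetic)
window, preds = list(capacity[:5]), []
for true_value in capacity[5:]:
    preds.append(gm11_next(np.array(window)))
    window = window[1:] + [true_value]              # metabolism: slide the fitting window
print(preds)
```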

  20. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
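
    A hedged sketch of the landmarking approach described above, written with the lifelines package rather than the authors' code: at each landmark time s, a Cox model is fit to subjects still event-free at s, using their most recent marker value as a baseline covariate, with administrative censoring at the prediction horizon s + w. The DataFrame layout (columns `time`, `event`, `marker_at_s`) is an assumption.

```python
import pandas as pd
from lifelines import CoxPHFitter

def landmark_fit(df, s, w):
    """Fit a landmark Cox model at landmark time s with horizon w."""
    at_risk = df[df["time"] > s].copy()                       # subjects still event-free at s
    at_risk["event"] = ((at_risk["event"] == 1) &
                        (at_risk["time"] <= s + w)).astype(int)   # censor events beyond s + w
    at_risk["time"] = (at_risk["time"] - s).clip(upper=w)     # reset the clock at the landmark
    cph = CoxPHFitter()
    cph.fit(at_risk[["time", "event", "marker_at_s"]],
            duration_col="time", event_col="event")
    return cph

# A sequence of landmark models, one per landmark time (hypothetical usage):
# landmark_models = {s: landmark_fit(df, s, w=5) for s in (1, 2, 3)}
```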