WorldWideScience

Sample records for model sm predictions

  1. The simultaneous mass and energy evaporation (SM2E) model.

    Science.gov (United States)

    Choudhary, Rehan; Klauda, Jeffery B

    2016-01-01

    In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. The theoretical models systematically under- or over-predicted at various flow conditions: laminar, transition, and turbulent. These models were harmonized with experimental measurements to eliminate systematic under- or over-predictions; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to the limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass transfer-only model.

  2. Strategic Material Shortfall Risk Mitigation Optimization Model (OPTIM-SM)

    Science.gov (United States)

    2013-04-01

    ...contracts, could be added to the existing mix. Market responses to supply and demand shocks could be modeled more explicitly, as could ... Strategic Material Shortfall Risk Mitigation Optimization Model (OPTIM-SM). James S. Thomason, Project Leader; D. Sean Barnett; James P. Bell; Jerome Bracken; Eleanor L. Schwartz. Institute for Defense Analyses.

  3. Matching uncertainties in the prediction of the Higgs boson transverse momentum in the SM and beyond

    CERN Document Server

    Bagnaschi, Emanuele

    2016-01-01

    We present the results of our recent study (arXiv:1510.08850) of the theoretical uncertainties that affect the predictions for the Higgs-boson transverse momentum in gluon fusion when fixed- and all-order results are matched. Our investigation consists of a twofold analysis: first we present a detailed comparison of two recently introduced prescriptions for the determination of the matching scale (arXiv:1409.0531, arXiv:1505.00735), then we apply the results of these methods to three widely used matching frameworks, namely the aMC@NLO and POWHEG Monte Carlo approaches and analytic resummation. The results of our study are applied to the production of the SM Higgs boson and of the neutral Higgs bosons of the Two-Higgs-Doublet Model in a variety of scenarios.

  4. 31 CFR Appendix B to Part 208 - Model Disclosure for Use After ETA SM Becomes Available

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Model Disclosure for Use After ETA SM... FEDERAL AGENCY DISBURSEMENTS Pt. 208, App. B Appendix B to Part 208—Model Disclosure for Use After ETA SM... through a basic, low-cost account called an ETA SM. If you receive a Federal benefit, wage, salary,...

  5. 31 CFR Appendix A to Part 208 - Model Disclosure for Use Until ETA SM Becomes Available

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Model Disclosure for Use Until ETA SM... FEDERAL AGENCY DISBURSEMENTS Pt. 208, App. A Appendix A to Part 208—Model Disclosure for Use Until ETA SM... wait for a basic, low-cost account, called an ETA SM, to become available. If you do not have...

  6. Preparation and properties of the SmOx/Rh(100) model surface

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The preparation of SmOx/Rh(100) and CO adsorption on this model surface have been investigated with Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS) and temperature programmed desorption spectroscopy (TDS). The oxygen adsorption on the SmRh alloy surface leads to the aggregation of Sm on the surface. The thermal treatment of this oxidized surface induces the further agglomeration of SmOx on the Rh(100) surface. Compared with CO TDS on the clean Rh(100) surface, three additional CO desorption peaks can be observed at 176, 331 and 600 K on the SmOx/Rh(100) surface. The CO desorption peak at 176 K may originate from CO adsorbed on SmOx islands, while the appearance of the CO adsorption peaks at 331 and 600 K, depending on the oxidation state of Sm, is attributed to CO species located at the interface of SmOx/Rh(100).

  7. Transient and stationary characteristics of a packet buffer modelled as an MAP/SM/1/b system

    Directory of Open Access Journals (Sweden)

    Rusek Krzysztof

    2014-06-01

    Full Text Available A packet buffer limited to a fixed number of packets (regardless of their lengths) is considered. The buffer is described as a finite FIFO queuing system fed by a Markovian Arrival Process (MAP) with service times forming a Semi-Markov (SM) process (MAP/SM/1/b in Kendall’s notation). Such assumptions allow us to obtain new analytical results for the queuing characteristics of the buffer. In the paper, the following are considered: the time to fill the buffer, the local loss intensity, the loss ratio, and the total number of losses in a given time interval. Predictions of the proposed model are much closer to the trace-driven simulation results compared with the predictions of the MAP/G/1/b model.
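
    The record above concerns loss characteristics of a finite FIFO packet buffer. As a hedged illustration of the same quantities, the sketch below simulates a single-server queue holding at most b packets and estimates the loss ratio; it uses Poisson arrivals and exponential service times (an M/M/1/b simplification, not the MAP/SM/1/b process analysed in the paper), and all parameter values are arbitrary.

```python
import random

def simulate_finite_buffer(lam=1.2, mu=1.0, b=10, n_arrivals=200_000, seed=1):
    """Estimate the packet loss ratio of a single-server FIFO queue with
    at most b packets in the system (M/M/1/b simplification)."""
    random.seed(seed)
    t = 0.0            # current time
    in_system = 0      # packets currently waiting or in service
    losses = 0
    arrivals = 0
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    while arrivals < n_arrivals:
        if next_arrival <= next_departure:          # arrival event
            t = next_arrival
            arrivals += 1
            if in_system >= b:
                losses += 1                         # buffer full: packet lost
            else:
                in_system += 1
                if in_system == 1:                  # server was idle
                    next_departure = t + random.expovariate(mu)
            next_arrival = t + random.expovariate(lam)
        else:                                       # departure event
            t = next_departure
            in_system -= 1
            next_departure = (t + random.expovariate(mu)) if in_system > 0 else float("inf")
    return losses / arrivals

if __name__ == "__main__":
    print("estimated loss ratio:", simulate_finite_buffer())
```

    Replacing the two exponential draws with samplers for MAP inter-arrival times and semi-Markov service times would move this sketch toward the setting actually analysed in the paper.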

  8. The CP violation beyond the SM Higgs and theoretical predictions of electric dipole moment

    CERN Document Server

    Bian, Ligong

    2016-01-01

    The CP-violating phase may arise beyond the SM Higgs sectors. Due to the possible cancellation mechanism in the electric dipole moment (EDM) contributions mediated by the CP-violating Higgs sectors, the CP violation may escape the current and even the future constraints of the eEDM measurements. The cancellations in the quark and chromo-EDMs driven by the same sources alleviate the constraints of the neutron and diamagnetic atom EDM measurements. This property can be induced by the mass degeneracy of two heavy Higgs bosons. On the other hand, the diamagnetic atom EDM experiments can be more competitive in constraining or detecting the CP-violating phases in this scenario. We explore this point in the framework of the type-II two-Higgs-doublet model and the minimal supersymmetric Standard Model.

  9. Coordinated Hard Sphere Mixture (CHaSM): A simplified model for oxide and silicate melts at mantle pressures and temperatures

    Science.gov (United States)

    Wolf, Aaron S.; Asimow, Paul D.; Stevenson, David J.

    2015-08-01

    We develop a new model to understand and predict the behavior of oxide and silicate melts at extreme temperatures and pressures, including deep mantle conditions like those in the early Earth magma ocean. The Coordinated Hard Sphere Mixture (CHaSM) is based on an extension of the hard sphere mixture model, accounting for the range of coordination states available to each cation in the liquid. By utilizing approximate analytic expressions for the hard sphere model, this method is capable of predicting complex liquid structure and thermodynamics while remaining computationally efficient, requiring only minutes of calculation time on standard desktop computers. This modeling framework is applied to the MgO system, where model parameters are trained on a collection of crystal polymorphs, producing realistic predictions of coordination evolution and the equation of state of MgO melt over a wide range of pressures and temperatures. We find that the typical coordination number of the Mg cation evolves continuously upward from 5.25 at 0 GPa to 8.5 at 250 GPa. The results produced by CHaSM are evaluated by comparison with predictions from published first-principles molecular dynamics calculations, indicating that CHaSM is accurately capturing the dominant physics controlling the behavior of oxide melts at high pressure. Finally, we present a simple quantitative model to explain the universality of the increasing Grüneisen parameter trend for liquids, which directly reflects their progressive evolution toward more compact solid-like structures upon compression. This general behavior is opposite that of solid materials, and produces steep adiabatic thermal profiles for silicate melts, thus playing a crucial role in magma ocean evolution.
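
    CHaSM leans on approximate analytic expressions for the hard-sphere mixture model. As a small, generic illustration of the kind of closed-form input involved, the sketch below evaluates the Carnahan-Starling equation of state and excess free energy for a one-component hard-sphere fluid; these are textbook formulas, not the CHaSM parameterization itself, and the packing fractions are arbitrary.

```python
def carnahan_starling_Z(eta: float) -> float:
    """Compressibility factor Z = P / (n kB T) of a hard-sphere fluid
    at packing fraction eta (Carnahan-Starling equation of state)."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta) ** 3

def excess_helmholtz_per_particle(eta: float) -> float:
    """Excess Helmholtz free energy per particle, in units of kB*T,
    obtained analytically from the Carnahan-Starling form."""
    return eta * (4 - 3 * eta) / (1 - eta) ** 2

if __name__ == "__main__":
    for eta in (0.10, 0.30, 0.45):
        print(f"eta={eta:.2f}  Z={carnahan_starling_Z(eta):6.3f}  "
              f"F_ex/(N kT)={excess_helmholtz_per_particle(eta):6.3f}")
```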

  10. Distinct Topological Crystalline Phases in Models for the Strongly Correlated Topological Insulator SmB_{6}.

    Science.gov (United States)

    Baruselli, Pier Paolo; Vojta, Matthias

    2015-10-09

    SmB_{6} was recently proposed to be both a strong topological insulator and a topological crystalline insulator. For this and related cubic topological Kondo insulators, we prove the existence of four different topological phases, distinguished by the sign of mirror Chern numbers. We characterize these phases in terms of simple observables, and we provide concrete tight-binding models for each phase. Based on theoretical and experimental results for SmB_{6} we conclude that it realizes the phase with C_{k_{z}=0}^{+}=+2, C_{k_{z}=π}^{+}=+1, C_{k_{x}=k_{y}}^{+}=-1, and we propose a corresponding minimal model.

  11. Modeling hysteresis curves of anisotropic SmCoFeCuZr magnets

    Energy Technology Data Exchange (ETDEWEB)

    Sampaio da Silva, Fernanda A. [Programa de Pos-Graduacao em Engenharia Metalurgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420, 27255-125 Volta Redonda, RJ (Brazil); Castro, Nicolau A. [Instituto de Pesquisas Tecnologicas, Sao Paulo, SP (Brazil); Campos, Marcos F. de, E-mail: mcampos@metal.eeimvr.uff.br [Programa de Pos-Graduacao em Engenharia Metalurgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420, 27255-125 Volta Redonda, RJ (Brazil)

    2013-02-15

    The hysteresis curves at room temperature and at 630 K of an anisotropic magnet were successfully modeled with the Stoner-Wohlfarth Callen-Liu-Cullen (SW-CLC) model. This implies that coherent rotation of domains is the reversal mechanism in this magnet. The chemical composition of the evaluated magnet is Sm(Co_{bal}Fe_{0.06}Cu_{0.108}Zr_{0.03})_{7.2}. The anisotropy field H_{A} was estimated with the model, giving μ_{0}H_{A} = 7.1 T at room temperature and 2.9 T at 630 K. For this sample, the CLC interaction parameter (1/d) is very low (near zero) and, thus, the nanocrystalline 2:17 grains are well 'magnetically decoupled'. The texture analysis using Schulz pole figure data indicated an M_{r}/M_{s} ratio of 0.96, which means that the magnet is very well aligned. The excellent alignment of the grains is one of the reasons for the high coercivity of this sample (approx. 4 T at room temperature). - Highlights: The Stoner-Wohlfarth model can describe the hysteresis curves of anisotropic Sm_{2}Co_{17} magnets, provided the Callen-Liu-Cullen correction is applied. The anisotropy field of the hard magnetic phase Sm_{2}Co_{17} can be estimated from the hysteresis curves of anisotropic magnets, provided the crystallographic texture is known. A texture study of commercial sintered Sm_{2}Co_{17}-type magnets is presented. The texture data can be used to evaluate the squareness of the second quadrant of the hysteresis curve in Sm_{2}Co_{17} high-coercivity magnets.
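
    The SW-CLC analysis above rests on the Stoner-Wohlfarth picture of coherent rotation. The sketch below traces the hysteresis loop of a single uniaxial Stoner-Wohlfarth grain by following the local minimum of the reduced energy e(θ) = sin²(θ − ψ) − 2h cos θ, where h = μ_{0}M_{s}H/(2K); the easy-axis angle and field range are illustrative choices, and the CLC interaction correction discussed in the paper is not included.

```python
import math

def relax(theta, psi, h, steps=5000, lr=0.05):
    """Follow the local minimum of the reduced Stoner-Wohlfarth energy
    e = sin^2(theta - psi) - 2*h*cos(theta) by damped gradient descent."""
    for _ in range(steps):
        grad = math.sin(2.0 * (theta - psi)) + 2.0 * h * math.sin(theta)
        theta -= lr * grad
    return theta

def hysteresis_loop(psi=math.radians(30), hmax=1.5, n=300):
    """Return (h, m_parallel) pairs for a down-and-up reduced-field sweep."""
    down = [hmax - 2.0 * hmax * i / n for i in range(n + 1)]
    fields = down + down[::-1]
    theta, loop = 0.0, []
    for h in fields:
        theta = relax(theta, psi, h)
        loop.append((h, math.cos(theta)))   # reduced magnetization along the field
    return loop

if __name__ == "__main__":
    for h, m in hysteresis_loop()[::60]:
        print(f"h = {h:+.3f}   m_parallel = {m:+.3f}")
```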

  12. $H^0 \rightarrow Z^0 \gamma$ channel in ATLAS: A study of the Standard Model and Minimal Supersymmetric SM case

    CERN Document Server

    Kiourkos, S

    1999-01-01

    One of the potentially accessible decay modes of the Higgs boson in the mass region $100 < m_H < 180$ GeV is the $H^0 \rightarrow Z^0 \gamma$ channel. The work presented in this note examines the Standard Model and Minimal Supersymmetric Standard Model predictions for the observability of this channel using particle level simulation as well as the ATLAS fast simulation (ATLFAST). It compares present estimates for the signal observability with previously reported ones [unal], specifying the changes arising from the assumed energy of the colliding protons and the improvements in the treatment of theoretical predictions. With the present estimates, the expected significance for the SM Higgs does not exceed, in terms of $S/\sqrt{B}$, 1.5 $\sigma$ (including $Z^0 \rightarrow e^+ e^-$ and $Z^0 \rightarrow \mu^+ \mu^-$) for an integrated luminosity of $10^5$ pb$^{-1}$, therefore not favouring this channel for SM Higgs searches. Comparable discovery potential is expected at most for the MSSM ...

  13. Drell-Yan, ZZ, W+W- production in SM & ADD model to NLO+PS accuracy at the LHC

    CERN Document Server

    Frederix, R; Mathews, P; Ravindran, V; Seth, S

    2014-01-01

    In this paper, we present the next-to-leading order QCD corrections for di-lepton and di-electroweak boson (ZZ, W+W-) production in both the SM and the ADD model, matched to the HERWIG parton shower using the aMC@NLO framework. A selection of results at the 8 TeV LHC, which exhibit deviations from the SM as a result of the large extra-dimension scenario, is presented.

  14. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
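
    The abstract distinguishes an interim model that produces uncorrelated hourly speeds from a stochastic model that also reproduces correlations. One standard way to obtain both a target speed distribution and autocorrelation is to drive a Gaussian AR(1) process and map it through the inverse CDF of the target distribution; the sketch below does this with a Weibull marginal, a common assumption for wind speeds. The parameters are made up for illustration and are not the Goldstone statistics.

```python
import math
import random

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def weibull_inv_cdf(u: float, k: float, lam: float) -> float:
    """Inverse CDF of a Weibull(k, lam) distribution."""
    return lam * (-math.log(1.0 - u)) ** (1.0 / k)

def correlated_wind_speeds(n_hours=24 * 30, phi=0.85, k=2.0, lam=7.0, seed=0):
    """Hourly wind-speed samples with a Weibull marginal and AR(1)-type
    autocorrelation: a unit-variance Gaussian AR(1) series is mapped through
    the normal CDF and then the inverse Weibull CDF."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)
    innov_sd = math.sqrt(1.0 - phi * phi)   # keeps the latent process at unit variance
    speeds = []
    for _ in range(n_hours):
        x = phi * x + rng.gauss(0.0, innov_sd)
        u = min(max(norm_cdf(x), 1e-12), 1.0 - 1e-12)  # guard the CDF tails
        speeds.append(weibull_inv_cdf(u, k, lam))
    return speeds

if __name__ == "__main__":
    v = correlated_wind_speeds()
    print("mean speed [m/s]:", sum(v) / len(v))
```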

  15. Coordinated Hard Sphere Mixture (CHaSM): A fast approximate model for oxide and silicate melts at extreme conditions

    Science.gov (United States)

    Wolf, A. S.; Asimow, P. D.; Stevenson, D. J.

    2015-12-01

    Recent first-principles calculations (e.g. Stixrude, 2009; de Koker, 2013), shock-wave experiments (Mosenfelder, 2009), and diamond-anvil cell investigations (Sanloup, 2013) indicate that silicate melts undergo complex structural evolution at high pressure. The observed increase in cation coordination (e.g. Karki, 2006; 2007) induces higher compressibilities and lower adiabatic thermal gradients in melts as compared with their solid counterparts. These properties are crucial for understanding the evolution of impact-generated magma oceans, which are dominated by the poorly understood behavior of silicates at mantle pressures and temperatures (e.g. Stixrude et al. 2009). Probing these conditions is difficult for both theory and experiment, especially given the large compositional space (MgO-SiO2-FeO-Al2O3-etc.). We develop a new model to understand and predict the behavior of oxide and silicate melts at extreme P-T conditions (Wolf et al., 2015). The Coordinated Hard Sphere Mixture (CHaSM) extends the hard sphere mixture model, accounting for the range of coordination states available to each cation in the liquid. Using approximate analytic expressions for the hard sphere model, this fast statistical method complements classical and first-principles methods, providing accurate thermodynamic and structural property predictions for melts. This framework is applied to the MgO system, where model parameters are trained on a collection of crystal polymorphs, producing realistic predictions of coordination evolution and the equation of state of MgO melt over a wide P-T range. Typical Mg-coordination numbers are predicted to evolve continuously from 5.25 (0 GPa) to 8.5 (250 GPa), comparing favorably with first-principles molecular dynamics (MD) simulations. We begin extending the model to a simplified mantle chemistry using empirical potentials (generally accurate over moderate pressure ranges) ...

  16. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. It is still too early to say how this so-called 'machine intelligence' will evolve, and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  17. Shell model yrast states in the N=79 nuclei ^{141}Sm and ^{143}Gd

    Energy Technology Data Exchange (ETDEWEB)

    Lach, M. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); Kleinheinz, P. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); Blomqvist, J. (Manne Siegbahninstitutet foer Fysik, Stockholm (Sweden)); Ercan, A. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); Haehn, H.J. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); Wahner, D. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); Julin, R. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); Zupancic, M. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); Cigoroglu, F. (Institut fuer Kernphysik, Forschungszentrum Juelich (Germany)); De Angelis, G.

    1993-06-01

    In (α,5n) in-beam γ-ray and electron measurements we have established high-spin states in the three-neutron-hole nuclei ^{141}Sm and ^{143}Gd. Observed energy levels below 4 MeV are interpreted in terms of shell model configurations where the odd h_{11/2} neutron hole couples to the N=80 core excitations containing one h_{11/2} proton particle or one h_{11/2} neutron hole. Higher-lying yrast states are built on a 23/2^{-} level at approximately 3.2 MeV, probably involving the πh^{2}_{11/2} νh^{-1}_{11/2} configuration. (orig.)

  18. Safety and feasibility of percutaneous vertebroplasty with radioactive ^{153}Sm PMMA in an animal model

    Energy Technology Data Exchange (ETDEWEB)

    Lu Jun [Department of Radiotherapy, Xijing Hospital, Fourth Military Medical University, 15 West Changle Road, Xi'an 710032, Shaanxi Province (China); Deng Jinglan, E-mail: dengjinglan@gmail.com [Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, 15 West Changle Road, Xi'an 710032, Shaanxi Province (China); Zhao Haitao [Department of Radiology, Xijing Hospital, Fourth Military Medical University, 15 West Changle Road, Xi'an 710032, Shaanxi Province (China); Shi Mei [Department of Radiotherapy, Xijing Hospital, Fourth Military Medical University, 15 West Changle Road, Xi'an 710032, Shaanxi Province (China); Wang Jing [Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, 15 West Changle Road, Xi'an 710032, Shaanxi Province (China); Zhao Lina [Department of Radiotherapy, Xijing Hospital, Fourth Military Medical University, 15 West Changle Road, Xi'an 710032, Shaanxi Province (China)

    2011-05-15

    Purpose: We investigated the safety and feasibility of the combination of samarium-153-ethylenediamine tetramethylene phosphonate (^{153}Sm-EDTMP)-incorporated bone cement (BC) with percutaneous vertebroplasty (PVP) in dogs. Methods and materials: ^{153}Sm-EDTMP-incorporated BC was prepared by combining solid ^{153}Sm-EDTMP and polymethylmethacrylate (PMMA) immediately before PVP. It was then injected into the vertebrae of four healthy mongrel dogs (two males and two females) by PVP under CT guidance. Each dog was subjected to five PVP sessions at a ^{153}Sm-EDTMP dose of 30-70 mCi. The suppressive effect of local injection of ^{153}Sm-EDTMP on the hematopoietic system was evaluated through counting of peripheral blood cells. Distribution of ^{153}Sm-EDTMP-incorporated BC and the status of tissues adjacent to the injected vertebrae were evaluated with SPECT, CT and MRI. Histopathology was carried out to assess the influence of PVP on the vertebra and adjacent tissues at the microscopic level. Results: PVP was done successfully, and all dogs exhibited normal behavior and stable physical signs after the procedures. ^{153}Sm-EDTMP-incorporated BC was concentrated mainly in the target vertebrae, and the peripheral blood cells remained within the normal range. The spinal cord and tissues around the BC did not exhibit signs of injury even when the dosage of ^{153}Sm-EDTMP increased from 30 mCi to 70 mCi. Conclusion: A dose lower than 70 mCi of ^{153}Sm is safe when injected into vertebrae. ^{153}Sm-EDTMP-incorporated BC did not influence the effect of PVP. This approach might strengthen local anti-tumor activity in vertebrae with osseous metastasis without damaging adjacent tissues.

  19. Determining age of Pan African metamorphism using Sm-Nd garnet-whole rock geochronology and phase equilibria modeling in the Tasriwine ophiolite, Sirwa, Anti-Atlas Morocco

    Science.gov (United States)

    Inglis, Jeremy D.; Hefferan, Kevin; Samson, Scott D.; Admou, Hassan; Saquaque, Ali

    2017-03-01

    Sm-Nd garnet-whole rock geochronology and phase equilibria modeling have been used to determine the age and conditions of regional metamorphism within the Tasriwine ophiolite complex, Sirwa, Morocco. Pressure and temperature estimates obtained using a NaCaKFMASHT phase diagram (pseudosection) and garnet core and rim compositions predict that garnet growth began at ∼0.72 GPa and ∼615 °C and ended at ∼0.8 GPa and ∼640 °C. A bulk garnet Sm-Nd age of 647.2 ± 1.7 Ma, calculated from a four-point isochron that combines whole rock, garnet full dissolution and two successively more aggressive partial dissolutions, provides a precise date for garnet formation and regional metamorphism. The age is over 15 million years younger than a previous age estimate of regional metamorphism of 663 ± 13 Ma based upon a SHRIMP U-Pb date from rims on zircon from the Iriri migmatite. The new data provide further constraints on the age and nature of regional metamorphism in the Anti-Atlas mountains and emphasize that garnet growth during regional metamorphism may not necessarily coincide with the magmatism/anatexis that dominates the signature witnessed by previous U-Pb studies. The ability to couple P-T estimates for garnet formation with high-precision Sm-Nd geochronology highlights the utility of garnet studies for uncovering the detailed metamorphic history of the Anti-Atlas mountain belt.

  20. Early Start Denver Model - intervention for very young children with autism

    DEFF Research Database (Denmark)

    Brynskov, Cecilia

    2015-01-01

    The Early Start Denver Model (ESDM) is an autism-specific intervention method developed for very young children with autism (0-4 years). The method focuses on strengthening early contact and the child's motivation, and it works systematically with the socio-communicative precursors of language and with the early...

  1. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.
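
    As a language-neutral companion to the Matlab-based introduction mentioned above, the sketch below implements one unconstrained linear MPC design for a discrete-time double integrator: the prediction equations over a finite horizon are stacked, the quadratic cost is minimized in closed form, and only the first move of the optimal input sequence is applied at each step. The plant, horizon and weights are arbitrary illustrative choices.

```python
import numpy as np

def mpc_gain(A, B, Q, R, N):
    """Build prediction matrices for x_{k+1} = A x_k + B u_k over horizon N and
    return the first-move gain K such that u_0 = -K @ x0 minimizes
    sum_{i=1..N} x_i' Q x_i + sum_{i=0..N-1} u_i' R u_i (no constraints)."""
    n, m = B.shape
    F = np.zeros((N * n, n))
    G = np.zeros((N * n, N * m))
    Apow = np.eye(n)
    for i in range(N):
        Apow = Apow @ A                              # A^(i+1)
        F[i*n:(i+1)*n, :] = Apow
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar
    K_all = np.linalg.solve(H, G.T @ Qbar @ F)       # full-horizon feedback
    return K_all[:m, :]                              # receding horizon: keep the first move

if __name__ == "__main__":
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])            # double integrator
    B = np.array([[0.5 * dt**2], [dt]])
    Q = np.diag([10.0, 1.0])
    R = np.array([[0.1]])
    K = mpc_gain(A, B, Q, R, N=20)
    x = np.array([1.0, 0.0])                         # initial position offset
    for _ in range(50):
        u = -K @ x                                   # closed-loop receding-horizon control
        x = A @ x + B @ u
    print("state after 50 steps:", x)
```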

  2. Present Limits on the Precision of SM Predictions for Jet Energies

    Energy Technology Data Exchange (ETDEWEB)

    Paramonov, A.A.; /Argonne; Canelli, F.; /Chicago U., EFI; D' Onofrio, M.; /Liverpool U.; Frisch, H.J.; /Chicago U., EFI; Mrenna, S.; /Fermilab

    2010-08-01

    We investigate the impact of theoretical uncertainties on the accuracy of measurements involving hadronic jets. The analysis is performed using events with a Z boson and a single jet observed in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV in 4.6 fb$^{-1}$ of data from the Collider Detector at Fermilab (CDF). The transverse momenta ($p_T$) of the jet and the boson should balance each other due to momentum conservation in the plane transverse to the direction of the $p$ and $\bar{p}$ beams. We evaluate the dependence of the measured $p_T$-balance on theoretical uncertainties associated with initial and final state radiation, choice of renormalization and factorization scales, parton distribution functions, jet-parton matching, calculations of matrix elements, and parton showering. We find that the uncertainty caused by parton showering at large angles is the largest amongst the listed uncertainties. The proposed method can be re-applied at the LHC experiments to investigate and evaluate the uncertainties on the predicted jet energies. The distributions produced at the CDF environment are intended for comparison to those from modern event generators and new tunes of parton showering.

  3. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    Science.gov (United States)

    Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-01-01

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
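
    The core methodological comparison in this study is between a linear GBLUP kernel and a nonlinear Gaussian kernel used inside otherwise identical models. The sketch below reproduces that contrast in its simplest form, as kernel ridge regression on simulated marker data: the linear kernel is XX'/p (a GBLUP-style genomic relationship matrix) and the Gaussian kernel uses squared Euclidean distances scaled by their median. The simulated data, bandwidth and regularization are illustrative assumptions, not the HEL/USP analysis of the paper.

```python
import numpy as np

def linear_kernel(X):
    """GBLUP-style genomic relationship: K = X X' / p for centred markers."""
    return X @ X.T / X.shape[1]

def gaussian_kernel(X, h=1.0):
    """Gaussian kernel exp(-d_ij^2 / (h * median(d^2))) on marker profiles."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (h * np.median(d2[d2 > 0])))

def kernel_ridge_predict(K, y, train, test, lam=1.0):
    """Fit alpha on the training block of K and predict the test block."""
    alpha = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(len(train)), y[train])
    return K[np.ix_(test, train)] @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, p = 300, 500
    X = rng.choice([-1.0, 0.0, 1.0], size=(n, p))        # simulated biallelic markers
    X -= X.mean(axis=0)
    beta = rng.normal(0.0, 0.1, size=p)
    y = X @ beta + 0.3 * np.sin(2.0 * X[:, 0]) + rng.normal(0.0, 0.5, size=n)
    idx = rng.permutation(n)
    train, test = idx[:240], idx[240:]
    for name, K in [("GB (linear)", linear_kernel(X)), ("GK (Gaussian)", gaussian_kernel(X))]:
        pred = kernel_ridge_predict(K, y, train, test)
        acc = np.corrcoef(pred, y[test])[0, 1]
        print(f"{name}: prediction accuracy (correlation) = {acc:.3f}")
```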

  4. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  5. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  6. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  7. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  8. Theoretical investigation of antiferroelectric (SmCA*) subphases by hydrodynamical approach

    Science.gov (United States)

    Lahiri, T.; Pal Majumder, T.

    2011-12-01

    We provide a hydrodynamical approach utilizing the time-dependent Landau-Ginzburg (L-G) model and the Cahn-Hilliard (C-H) model to investigate antiferroelectric liquid crystals (AFLCs) exhibiting different chiral phases between the paraelectric smectic A (SmA*) phase and the antiferroelectric smectic CA* phase (SmCA*). Introducing conserved and non-conserved order parameters in the C-H and L-G models, we have predicted the appearance of a chiral smectic C (SmC*) phase and a ferrielectric SmCFI1* phase (three-layer SmCA*) in an antiferroelectric phase sequence. The three-layer periodicity of the SmCFI1* phase is studied in detail with non-uniform interactions among smectic layers, with strong experimental support. Finally, we provide some theoretical basis for the non-uniformity of our proposed layer interactions.

  9. Low-temperature (1 K) angle-resolved photoemission investigation of the predicted topological Kondo insulator behavior of SmB6

    Science.gov (United States)

    Rader, Oliver; Hlawenka, Peter; Rienks, Emile; Siemensmeyer, Konrad; Weschke, Eugen; Varykhalov, Andrei; Shitsevalova, Natalya; Gabani, Slavomir; Flachbart, Karol

    2015-03-01

    The system SmB6 is known for its unusual resistivity, which increases exponentially with decreasing temperature and saturates below 3 K. This has recently been attributed to topological-Kondo-insulator behavior where a topological surface state is created by Sm 4f-5d hybridization and is responsible for the transport. Local-density-approximation + Gutzwiller calculations of the (100) surface predict the appearance of three Dirac cones in the surface Brillouin zone. We perform angle-resolved photoemission at temperatures below 1 K and reveal surface states at Γ and X. Bulk conduction band states near X appear at higher temperature. These findings will be discussed in detail vis-à-vis the theoretical and experimental literature.

  10. A tunneling model for afterglow suppression in CsI:Tl,Sm scintillation materials

    Energy Technology Data Exchange (ETDEWEB)

    Kappers, L.A., E-mail: lawrence.kappers@uconn.ed [Department of Physics, University of Connecticut, 2152 Hillside Road, Storrs, CT 06269-3046 (United States); Bartram, R.H.; Hamilton, D.S. [Department of Physics, University of Connecticut, 2152 Hillside Road, Storrs, CT 06269-3046 (United States); Lempicki, A.; Brecher, C. [ALEM Associates, 303 Commonwealth Avenue, Boston, MA 02115 (United States); Gaysinskiy, V.; Ovechkina, E.E.; Thacker, S.; Nagarkar, V.V. [Radiation Monitoring Devices (RMD) Inc., 44 Hunt St., Watertown, MA 02472 (United States)

    2010-03-15

    Combined radioluminescence, afterglow and thermoluminescence experiments on single-crystal samples of co-doped CsI:Tl,Sm suggest that samarium electron traps scavenge electrons from thallium traps and that electrons subsequently released by samarium recombine non-radiatively with trapped holes, thus suppressing afterglow. Experiments on single crystals support the inference that electrons tunnel freely between samarium ions and are trapped preferentially as substitutional Sm^{+} near V_{KA}(Tl^{+}) centers where non-radiative recombination is the rate-limiting step. Afterglow in microcolumnar films of CsI:Tl,Sm is enhanced by inhomogeneities which impede tunneling between samarium ions, but is partly suppressed by annealing.

  11. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  12. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were...
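
    The melanoma risk score (MRS) in the study is derived from the fitted logistic regression model. As a hedged illustration of how such a score is assembled, the sketch below converts a few of the odds ratios quoted above into log-odds coefficients and sums them over a binary risk-factor profile; the intercept and the 0/1 encoding of each factor are invented for the example, so the resulting numbers are not the published MRS.

```python
import math

# Log-odds weights derived from a subset of the odds ratios quoted in the abstract.
# The intercept and the binary encoding of each factor are illustrative assumptions.
COEFFS = {
    "sometimes_used_sunbeds":   math.log(4.018),
    "severe_solar_damage":      math.log(8.274),
    "light_brown_blond_hair":   math.log(3.222),
    "over_100_common_naevi":    math.log(3.570),
    "over_10_dysplastic_naevi": math.log(6.487),
}
INTERCEPT = -3.0   # hypothetical baseline log-odds, not taken from the paper

def melanoma_risk(profile: dict) -> float:
    """Return the logistic-regression probability for a 0/1 risk-factor profile."""
    z = INTERCEPT + sum(COEFFS[k] * profile.get(k, 0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    patient = {"severe_solar_damage": 1, "over_10_dysplastic_naevi": 1}
    print(f"illustrative predicted risk: {melanoma_risk(patient):.2%}")
```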

  13. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  14. Updated NLL Results for $B \to (X_s, X_d) \gamma$ in and beyond the SM

    CERN Document Server

    Hurth, Tobias; Porod, Werner

    2004-01-01

    We present general model-independent formulae for the branching ratios and the direct tagged CP asymmetries for the inclusive B -> Xd gamma and B -> Xs gamma modes. We also update the corresponding SM predictions.

  15. Simplified DM models with the full SM gauge symmetry: the case of $t$-channel colored scalar mediators

    CERN Document Server

    Ko, P; Park, Myeonghun; Yokoya, Hiroshi

    2016-01-01

    The general strategy for dark matter (DM) searches at colliders currently relies on simplified models. In this paper, we propose a new $t$-channel UV-complete simplified model that improves the existing simplified DM models in two important respects: (i) we impose the full SM gauge symmetry including the fact that the left-handed and the right-handed fermions have two independent mediators with two independent couplings, and (ii) we include the renormalization group evolution when we derive the effective Lagrangian for DM-nucleon scattering from the underlying UV complete models by integrating out the $t$-channel mediators. The first improvement will introduce a few more new parameters compared with the existing simplified DM models. In this study we look at the effect this broader set of free parameters has on direct detection and the mono-$X$ + MET ($X$=jet,$W,Z$) signatures at 13 TeV LHC while maintaining gauge invariance of the simplified model under the full SM gauge group. We find that the direct detect...

  16. SmEdA vibro-acoustic modelling in the mid-frequency range including the effect of dissipative treatments

    Science.gov (United States)

    Hwang, H. D.; Maxit, L.; Ege, K.; Gerges, Y.; Guyader, J.-L.

    2017-04-01

    Vibro-acoustic simulation in the mid-frequency range is of interest for automotive and truck constructors. The dissipative treatments used for noise and vibration control such as viscoelastic patches and acoustic absorbing materials must be taken into account in the problem. The Statistical modal Energy distribution Analysis (SmEdA) model consists in extending Statistical Energy Analysis (SEA) to the mid-frequency range by establishing power balance equations between the modes of the different subsystems. The modal basis of uncoupled-subsystems that can be estimated by the finite element method in the mid-frequency range is used as input data. SmEdA was originally developed by considering constant modal damping factors for each subsystem. However, this means that it cannot describe the local distribution of dissipative materials. To overcome this issue, a methodology is proposed here to take into account the effect of these materials. This methodology is based on the finite element models of the subsystems that include well-known homogenized material models of dissipative treatments. The Galerkin method with subsystem normal modes is used to estimate the modal damping loss factors. Cross-modal coupling terms which appear in the formulation due to the dissipative materials are assumed to be negligible. An approximation of the energy sharing between the subsystems damped by dissipative materials is then described by SmEdA. The different steps of the method are validated experimentally by applying it to a laboratory test case composed of a plate-cavity system with different configurations of dissipative treatments. The comparison between the experimental and the simulation results shows good agreement in the mid-frequency range.
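
    SmEdA writes a power balance for each pair of modes; the classical SEA balance it generalizes can be stated in a few lines. The sketch below solves the steady-state SEA equations P_i = ω[(η_i + Σ_j η_ij)E_i − Σ_j η_ji E_j] for a two-subsystem (plate-cavity) example; the damping loss factors, coupling loss factors and injected powers are arbitrary illustrative values.

```python
import numpy as np

def sea_energies(omega, eta, eta_c, P):
    """Solve the steady-state SEA power balance for the subsystem energies E.

    omega : analysis (angular) frequency
    eta   : damping loss factor of each subsystem, shape (n,)
    eta_c : coupling loss factors, eta_c[i, j] = eta_ij (i -> j), zero diagonal
    P     : power injected into each subsystem, shape (n,)
    """
    n = len(eta)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = eta[i] + eta_c[i].sum()
        for j in range(n):
            if j != i:
                A[i, j] = -eta_c[j, i]      # power received from subsystem j
    return np.linalg.solve(omega * A, P)

if __name__ == "__main__":
    omega = 2 * np.pi * 500.0               # 500 Hz band centre
    eta = np.array([0.02, 0.01])            # plate, cavity damping (illustrative)
    eta_c = np.array([[0.0, 0.004],
                      [0.001, 0.0]])        # coupling loss factors (illustrative)
    P = np.array([1.0, 0.0])                # power injected into the plate only
    print("subsystem energies [J]:", sea_energies(omega, eta, eta_c, P))
```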

  17. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  18. Community health workers speak out about the Kin KeeperSM model.

    Science.gov (United States)

    Mousa, Shimaa M; Brooks, Emily; Dietrich, Monika; Henderson, Aisha; McLean, Casey; Patricia Williams, Karen

    2010-06-01

    Community health workers (CHWs) informed students and researchers alike on the Kin Keeper(SM) Cancer Prevention Intervention. Students interested in medicine, guided by faculty, conducted a focus group session with 13 CHWs to find out if the intervention was effective for delivering breast and cervical cancer education. Strengths reported were (1) cultural appropriateness, (2) home visits, (3) CHW resource kits, and (4) increased awareness. The barriers were privacy perceptions and scheduling home visits. Overall, the CHWs indicated that the intervention was effective and flexible enough to accommodate the African American, Latina, and Arab groups of women.

  19. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  20. Combined SM Higgs Limits at the Tevatron

    Energy Technology Data Exchange (ETDEWEB)

    Krumnack, N.

    2009-10-01

    We combine results from CDF and D0 on direct searches for a standard model (SM) Higgs boson (H) in $p\bar{p}$ collisions at the Fermilab Tevatron at $\sqrt{s} = 1.96$ TeV. Compared to the previous Higgs Tevatron combination, more data and new channels $WH \to \tau\nu b\bar{b}$, $VH \to \tau\tau b\bar{b}/jj\tau\tau$, $VH \to jjb\bar{b}$, $t\bar{t}H \to t\bar{t}b\bar{b}$ have been added. Most previously used channels have been reanalyzed to gain sensitivity. We use the latest parton distribution functions and $gg \to H$ theoretical cross sections when comparing our limits to the SM predictions. With 2.0-3.6 fb$^{-1}$ of data analyzed at CDF, and 0.9-4.2 fb$^{-1}$ at D0, the 95% C.L. upper limits on Higgs boson production are a factor of 2.5 (0.86) times the SM cross section for a Higgs boson mass of $m_H$ = 115 (165) GeV/c$^2$. Based on simulation, the corresponding median expected upper limits are 2.4 (1.1). The mass range excluded at 95% C.L. for a SM Higgs has been extended to $160 < m_H < 170$ GeV/c$^2$.

  1. SM*A*S*H

    CERN Document Server

    Ringwald, Andreas

    2016-01-01

    We present a minimal model for particle physics and cosmology. The Standard Model (SM) particle content is extended by three right-handed SM-singlet neutrinos N_i and a vector-like quark Q, all of them being charged under a global lepton number and Peccei-Quinn (PQ) U(1) symmetry which is spontaneously broken by the vacuum expectation value v_sigma around 10^{11} GeV of a SM-singlet complex scalar field sigma. Five fundamental problems -- neutrino oscillations, baryogenesis, dark matter, inflation, strong CP problem -- are solved at one stroke in this model, dubbed "SM*A*S*H" (Standard Model*Axion*Seesaw*Higgs portal inflation). It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.

  2. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  3. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, wreaking havoc on human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  4. Prediction of Henry's law constants of triazine derived herbicides from quantum chemical continuum solvation models.

    Science.gov (United States)

    Delgado, Eduardo J; Alderete, Joel B

    2003-01-01

    The Henry's law constants (H) for triazine-derived herbicides are calculated using the quantum chemical solvation models SM2, SM3, PCM-DFT, and CPCM-DFT, and their performances are discussed. The results show considerable differences in performance among the different levels of theory. The values of H calculated by the semiempirical methods agree much better with the experimental values than those obtained at the DFT level. The differences are discussed in terms of the different contributions, electrostatic and non-electrostatic, to the Gibbs free energy of solvation. In addition, the Henry's law constants of some triazine-derived herbicides whose values have not been reported earlier are predicted as well.
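
    The link between a continuum-solvation calculation and H is a short thermodynamic conversion: in the fixed-concentration (Ben-Naim) standard state, the dimensionless air-water partition coefficient is K_aw = c_gas/c_liquid = exp(ΔG*_solv/RT). The sketch below applies that relation; the example free energy is made up, and the standard-state convention is an assumption that should be checked against the one used for the SM2/SM3/PCM outputs.

```python
import math

R = 8.314462618  # J / (mol K)

def dimensionless_henry_constant(dG_solv_kcal: float, T: float = 298.15) -> float:
    """Air-water partition coefficient K_aw = c_gas / c_liquid computed from the
    solvation free energy (kcal/mol) in the fixed-concentration standard state."""
    dG_J = dG_solv_kcal * 4184.0            # kcal/mol -> J/mol
    return math.exp(dG_J / (R * T))

if __name__ == "__main__":
    dG = -11.0   # illustrative hydration free energy in kcal/mol (not from the paper)
    K_aw = dimensionless_henry_constant(dG)
    print(f"K_aw = {K_aw:.3e}  (log10 K_aw = {math.log10(K_aw):.2f})")
```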

  5. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence ... Predictions are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...
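
    The conditional parametric models mentioned above are estimated by local polynomial regression. The sketch below shows the basic estimator, locally weighted linear regression with a Gaussian kernel, applied to a synthetic power-curve relationship; the bandwidth and data are illustrative, and the adaptive, time-varying extension described in the abstract is not included.

```python
import numpy as np

def local_linear_fit(x, y, x0, bandwidth):
    """Locally weighted linear regression: fit y ~ a + b*(x - x0) with Gaussian
    kernel weights centred at x0 and return the local estimate a of E[y | x = x0]."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    wind_speed = rng.uniform(0.0, 25.0, size=400)             # NWP-predicted speed (synthetic)
    power = np.tanh((wind_speed - 8.0) / 4.0) * 0.5 + 0.5      # idealized power curve
    power += rng.normal(0.0, 0.05, size=wind_speed.size)       # measurement noise
    for x0 in (5.0, 10.0, 15.0):
        est = local_linear_fit(wind_speed, power, x0, bandwidth=2.0)
        print(f"estimated normalized power at {x0:4.1f} m/s: {est:.3f}")
```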

  7. Biological characteristics of phage SM1 for Stenotrophomonas maltophilia and its effect in an animal infection model

    Institute of Scientific and Technical Information of China (English)

    张劼; 李秀娟

    2013-01-01

    Objective: To investigate the biological characteristics of phage SM1 for Stenotrophomonas maltophilia (Sm) and its effect in an animal infection model. Methods: Phage SM1, isolated from raw hospital sewage, was identified by the plaque method. The morphology of phage SM1 was observed by electron microscopy with negative staining. Extraction and electrophoresis of phage SM1 DNA were performed. The optimal multiplicity of infection, resistant mutation rate, one-step growth curve and the effectiveness of phage SM1 in animal models were determined. Results: One Sm-specific double-stranded DNA phage of the family Myoviridae was identified and named SM1. Electrophoresis of DNA demonstrated that the size of the phage SM1 genome was about 50 kb. The growth curve of phage SM1 showed that the durations of the incubation and burst periods were 15 min and 50 min, respectively, and the burst size was 187. The resistant mutation rate of phage SM1 was 6 × 10^{-10}. All mice treated with phage SM1 survived after 7 d of infection with Stenotrophomonas maltophilia. Conclusions: Phage SM1 has a relatively broad host range, a short incubation period, a large burst size and a low resistant mutation rate. Phage SM1 therapy of Sm infection in mice is effective.

  8. Probing lipid-cholesterol interactions in DOPC/eSM/Chol and DOPC/DPPC/Chol model lipid rafts with DSC and (13)C solid-state NMR.

    Science.gov (United States)

    Fritzsching, Keith J; Kim, Jihyun; Holland, Gregory P

    2013-08-01

    The interaction between cholesterol (Chol) and phospholipids in bilayers was investigated for the ternary model lipid rafts, DOPC/eSM/Chol and DOPC/DPPC/Chol, with differential scanning calorimetry (DSC) and (13)C cross polarization magic angle spinning (CP-MAS) solid-state NMR. The enthalpy and transition temperature (Tm) of the Lα liquid crystalline phase transition from DSC was used to probe the thermodynamics of the different lipids in the two systems as a function of Chol content. The main chain (13)C (CH2)n resonance is resolved in the (13)C CP-MAS NMR spectra for the unsaturated (DOPC) and saturated (eSM or DPPC) chain lipid in the ternary lipid raft mixtures. The (13)C chemical shift of this resonance can be used to detect differences in chain ordering and overall interactions with Chol for the different lipid constituents in the ternary systems. The combination of DSC and (13)C CP-MAS NMR results indicate that there is a preferential interaction between SM and Chol below Tm for the DOPC/eSM/Chol system when the Chol content is ≤20mol%. In contrast, no preferential interaction between Chol and DPPC is observed in the DOPC/DPPC/Chol system above or below Tm. Finally, (13)C CP-MAS NMR resolves two Chol environments in the DOPC/eSM/Chol system below Tm at Chol contents >20mol% while, a single Chol environment is observed for DOPC/DPPC/Chol at all compositions.

  9. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...

  10. Distinguishing a SM-like MSSM Higgs boson from SM Higgs boson at muon collider

    Indian Academy of Sciences (India)

    Jai Kumar Singhal; Sardar Singh; Ashok K Nagawat

    2007-06-01

    We explore the possibility of distinguishing the SM-like MSSM Higgs boson from the SM Higgs boson via Higgs boson pair production at a future muon collider. We study the behavior of the production cross-section in the SM and the MSSM as a function of the Higgs boson mass for various MSSM parameters tan β and A. We observe that at fixed CM energy, in the SM, the total cross-section increases with increasing Higgs boson mass, whereas this trend is reversed for the MSSM. The changes that occur for the MSSM in comparison to the SM predictions are quantified in terms of the relative percentage deviation in the cross-section. The observed deviations in the cross-section for different choices of Higgs boson masses suggest that measurements of the cross-section could possibly distinguish the SM-like MSSM Higgs boson from the SM Higgs boson.

  11. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  12. Search for SM deviations in top precision studies at CMS

    CERN Document Server

    Skovpen, Kirill

    2017-01-01

    Precision studies of top quark properties provide a unique playground to test the predictions of the standard model and to search for new physics. The reviewed results from the CMS experiment, obtained with the data collected at 8 TeV, include studies of anomalous Wtb and FCNC couplings of the top quark, polarization, CP violation and spin correlation effects. No significant deviations from the SM predictions are observed.

  13. Micromagnetic Simulation of Magnetization Reversal in SmCo5/Sm2Co17 Magnets

    Institute of Scientific and Technical Information of China (English)

    荣传兵; 张宏伟; 杜晓波; 张健; 张绍英; 沈保根

    2004-01-01

    A three-dimensional finite element micromagnetic algorithm was developed to study the magnetization reversal of SmCo5/Sm2Co17-based magnets. The influences of the microstructure and magnetic parameters on the coercivity were studied with a model consisting of 64 irregular cells constructed according to the experimental microstructure. Numerical results show that the coercivity increases with increasing 2:17-type cell size, while a large cell boundary thickness leads to a small coercivity. A decrease in the anisotropy constant of the 1:5 phase reduces the coercivity, whereas the exchange constant of the 1:5 phase affects the coercivity in the opposite sense. The calculated field dependence of the coercivity can be described by an inhomogeneous domain-wall pinning model, and the microstructure parameter of that model was analyzed by comparison with the calculated coercivity.

  14. Experimental and modeling studies on phase stability of nanocrystalline magnetic Sm{sub 2}Co{sub 7}

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Wenwu; Song, Xiaoyan, E-mail: xysong@bjut.edu.cn; Zhang, Zhexu; Liang, Haining

    2013-09-01

    Highlights: • Low-Co Sm–Co alloy as new candidate for permanent magnets was proposed. • Relationship between the phase stability and grain size was quantified for Sm{sub 2}Co{sub 7}. • The work contributes to the development of nanostructured Sm–Co permanent magnetic materials. -- Abstract: In contrast to the conventional polycrystalline low-Co Sm–Co alloys that have very weak permanent magnetic properties, the Sm{sub 2}Co{sub 7} alloy has been found to have fairly promising permanent magnetic performance when its grain size is reduced to the nanoscale. It was discovered that the crystal structure of the nanocrystalline Sm{sub 2}Co{sub 7} has a strong nanograin-size-dependent stability. The rhombohedral structure of Sm{sub 2}Co{sub 7} phase which is metastable at temperatures lower than 1435 K in conventional polycrystalline system can exist stably at room temperature in the nanocrystalline system. To understand the phase stability of the nanocrystalline Sm{sub 2}Co{sub 7}, the experimental and nanothermodynamic analyses were combined to describe quantitatively the phase transformation behavior of Sm{sub 2}Co{sub 7} on the nanoscale. The results are important for the development of nanostructured Sm–Co permanent magnets.

  15. Estimate of the anisotropy field in isotropic SmCo 2:17 magnets with the Stoner-Wohlfarth CLC model

    Energy Technology Data Exchange (ETDEWEB)

    De Campos, M F [PUVR- Universidade Federal Fluminense, Av dos Trabalhadores 420, Vila Santa Cecilia, Volta Redonda, RJ, 27255-125 (Brazil); Romero, S A [Instituto de Fisica, Universidade de Sao Paulo, Sao Paulo, SP (Brazil); Landgraf, F J G [Escola Politecnica, Universidade de Sao Paulo, Sao Paulo, SP (Brazil); Missell, F P, E-mail: fmissell@yahoo.com, E-mail: mcampos@metal.eeimvr.uff.br [Centro de Ciencias Exatas e Tecnologia, Universidade de Caxias do Sul, Caxias do Sul, RS, 95070-560 Brazil (Brazil)

    2011-07-06

    The Callen-Liu-Cullen (CLC) modification of the Stoner-Wohlfarth (SW) model was found to properly describe the hysteresis curves of isotropic Sm(CoFeCuZr)z magnets. The SW-CLC model uses three parameters, all with physical meaning: one is related to the saturation magnetization, another to the anisotropy field, and the third is 1/d, which evaluates the interaction between grains or particles. The model was applied to several magnets, indicating an anisotropy field of 6-7 T, which is compatible with other methods for anisotropy field determination. The model also gives insight into the abnormal temperature dependence of the coercivity found in SmCo 2:17 magnets. For compositions with a low z, the parameter 1/d is significant; these compositions with a low z are those showing the most abnormal coercivity behavior with temperature.

  16. Casting light on BSM physics with SM standard candles

    Science.gov (United States)

    Curtin, David; Jaiswal, Prerit; Meade, Patrick; Tien, Pin-Ju

    2013-08-01

    The Standard Model (SM) has had resounding success in describing almost every measurement performed by the ATLAS and CMS experiments. In particular, these experiments have put many beyond-the-SM models of natural Electroweak Symmetry Breaking into tension with the data. It is therefore remarkable that it is still the LEP experiment, and not the LHC, which often sets the gold standard for understanding the possibility of new color-neutral states at the electroweak (EW) scale. Recently, ATLAS and CMS have started to push beyond LEP in bounding heavy new EW states, but a gap between the exclusions of LEP and the LHC typically remains. In this paper we show that measurements of SM Standard Candles can be repurposed to set entirely complementary constraints on new physics. To demonstrate this, we use W+W- cross-section measurements to set bounds on a set of slepton-based simplified models which fill in the gaps left by LEP and dedicated LHC searches. Having demonstrated the sensitivity of the W+W- measurement to light sleptons, we also find regions where sleptons can improve the fit to the data compared to the NLO SM W+W- prediction alone. Remarkably, in those regions the sleptons also provide the right relic density of Bino-like Dark Matter and an explanation for the longstanding 3σ discrepancy in the measurement of (g-2)_μ.

  17. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed. Methods: From 1989 to 2008, 3383 patients were treated with I

  18. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  19. Småhuse

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

    This SBI Direction (SBI-anvisning) provides guidance on the dimensioning and construction of ordinary small houses of up to two storeys, in accordance with the requirements of the Danish Building Regulations for Small Dwellings (Bygningsreglement for småhuse), 1998. The Direction covers thermal insulation, moisture protection, sound insulation, fire safety, wet rooms, indoor climate, as well as strength and...

  20. Probing metastable Sm2+ and optically stimulated tunnelling emission in YPO4: Ce, Sm

    DEFF Research Database (Denmark)

    Prasad, Amit Kumar; Kook, Myung Ho; Jain, Mayank

    2017-01-01

    When the model dosimetry system YPO4: Ce3+, Sm3+ is exposed to X-rays, the charge states of the dopants change to Ce4+ and Sm2+ via hole and electron trapping, respectively; these states are metastable, and the original charge states can be recovered through electron transfer back from Sm2+ to Ce4+ via...

  1. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
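
    The EPPES loop described above (draw parameter values from a proposal distribution, run one forecast per ensemble member, and feed the likelihood-weighted merits back into the proposal) can be sketched in a few lines. The snippet below is only a toy illustration under stated assumptions: a trivial surrogate `forecast` model, a Gaussian proposal and likelihood, and synthetic observations. It is not the operational EPPES code.

```python
# Toy EPPES-style parameter estimation loop: steps (i)-(ii) from the abstract,
# with an invented surrogate "forecast" model standing in for an NWP system.
import numpy as np

rng = np.random.default_rng(0)

def forecast(state, theta, steps=10):
    # Stand-in for an NWP model: damping and forcing play the role of
    # tunable parameterization coefficients.
    damping, forcing = theta
    x = state.copy()
    for _ in range(steps):
        x = x - damping * x + forcing + 0.01 * rng.standard_normal(x.shape)
    return x

true_theta = np.array([0.2, 0.5])
n_cycles, n_members = 30, 20
mean, cov = np.array([0.5, 0.0]), 0.25 * np.eye(2)   # proposal distribution

state = rng.standard_normal(5)
for _ in range(n_cycles):
    obs = forecast(state, true_theta)                 # verifying "observations"
    # (i) each ensemble member runs with its own parameter draw
    thetas = rng.multivariate_normal(mean, cov, size=n_members)
    preds = np.array([forecast(state, th) for th in thetas])
    # (ii) feed back relative merits via a Gaussian likelihood
    logw = -0.5 * np.sum((preds - obs) ** 2, axis=1) / 0.05
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mean = w @ thetas                                 # importance-weighted update
    diff = thetas - mean
    cov = (w[:, None] * diff).T @ diff + 1e-4 * np.eye(2)
    state = obs                                       # move to the next cycle

print("estimated parameters:", mean.round(3), "true:", true_theta)
```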

  2. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  3. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    ... critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and ... (Figure 5: boxplots of various scores for the patent data count regressions; Table 1: four predictive models for larynx cancer counts in Germany, 1998–2002.)

  4. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics under different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
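
    The core of the adaptive local-model idea (reconstruct a phase space from delay vectors, find the dynamical neighbours of the current state, and predict from their successors) can be illustrated with a short sketch. Everything below, including the embedding dimension, delay, inverse-distance weighting and the synthetic series, is an assumption made for illustration, not the authors' implementation.

```python
# Minimal phase-space / local-model prediction sketch for a scalar time series.
import numpy as np

def embed(series, dim=3, delay=1):
    # Reconstruct the phase space with delay vectors [s(t), s(t+d), ..., s(t+(dim-1)d)].
    n = len(series) - (dim - 1) * delay
    return np.array([series[i:i + dim * delay:delay] for i in range(n)])

def predict_next(series, dim=3, delay=1, k=5):
    X = embed(series, dim, delay)
    targets = series[(dim - 1) * delay + 1:]        # value following each delay vector
    X_hist, y_hist, query = X[:-1], targets, X[-1]
    dist = np.linalg.norm(X_hist - query, axis=1)   # dynamical neighbours of the last state
    idx = np.argsort(dist)[:k]
    w = 1.0 / (dist[idx] + 1e-12)
    return np.sum(w * y_hist[idx]) / np.sum(w)      # locally weighted prediction

# Multi-step prediction: feed one-step predictions back into the series.
t = np.arange(2000) * 0.05
series = np.sin(t) + 0.5 * np.sin(2.2 * t)          # synthetic stand-in for a surge record
s = list(series[:1500])
for _ in range(20):
    s.append(predict_next(np.array(s)))
print("20-step forecast:", np.round(s[-20:], 3))
```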

  5. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  6. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  7. Draft Genome Sequences of Stenotrophomonas maltophilia Strains Sm32COP, Sm41DVV, Sm46PAILV, SmF3, SmF22, SmSOFb1, and SmCVFa1, Isolated from Different Manures in France

    Science.gov (United States)

    Bodilis, Josselin; Youenou, Benjamin; Briolay, Jérome; Brothier, Elisabeth; Favre-Bonté, Sabine

    2016-01-01

    Stenotrophomonas maltophilia is a major opportunistic human pathogen responsible for nosocomial infections. Here, we report the draft genome sequences of Sm32COP, Sm41DVV, Sm46PAILV, SmF3, SmF22, SmSOFb1, and SmCVFa1, isolated from different manures in France, which provide insights into the genetic determinism of intrinsic or acquired antibiotic resistance in this species. PMID:27540065

  8. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
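
    As a minimal illustration of the develop-then-validate workflow summarized above, the sketch below fits a logistic regression on a synthetic development set and checks discrimination and, crudely, calibration on a held-out validation set. The predictors and the "true" risk model are invented for the example and are not taken from the review.

```python
# Illustrative develop-and-validate sketch with two easy-to-obtain predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(50, 10, n)
biomarker = rng.normal(1.0, 0.3, n)
logit = -8 + 0.12 * age + 2.0 * biomarker            # assumed "true" risk process
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
X = np.column_stack([age, biomarker])

# dataset selection / handling variables / model generation
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# model evaluation and validation: discrimination and a crude calibration check
p_val = model.predict_proba(X_val)[:, 1]
print("validation AUC:", round(roc_auc_score(y_val, p_val), 3))
print("mean predicted risk:", round(p_val.mean(), 3),
      "observed event rate:", round(y_val.mean(), 3))
```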

  9. Order Lambda**3 Parameterization of Neutrino (e,muon, tau, tau(prime)) Flavor Oscillations in a Simplified SM4 Model and Associated VCKM, VMNS, and VBIMAX Matrixes

    CERN Document Server

    Makowitz, Dr Henry

    2009-01-01

    Arguments are presented, based on particle phenomenology and the requirement of unitarity, for a postulated complex-valued four-generation CKM matrix (VCKM) within a Sequential Fourth Generation Model (sometimes named SM4). A modified four-generation QCD Standard Model Lagrangian is utilized per SM4. A four-generation neutrino mass-mixing MNS matrix (VMNS) is estimated utilizing a unitary (to order Lambda**k, k = 1, 2, 3, 4, etc.) 4 x 4 bimaximal matrix, VBIMAX. The unitary VBIMAX is based on a weighted 3 x 3 VBIMAX scheme and is studied in conjunction with the postulated four-generation complex unitary VCKM. A single parameter has been utilized in our analysis, along with three complex DELTA(i,j) phases. A four-generation Wolfenstein parameterization of VCKM is deduced which is valid to order Lambda**3. Experimental implications of the model are discussed. The issues of baryogenesis in the context of leptogenesis associated with MNS-matrix neutrino mixing and baryogenesis associated with CKM Matri...

  10. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest... computational resources. The identification method is suitable for predictive control....
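
    A single-step prediction-error method of the kind compared above can be sketched for a scalar state-space model: run a Kalman filter for a candidate parameter value and minimize the negative log-likelihood of the one-step prediction errors. The model, the known noise variances and the single estimated parameter below are illustrative assumptions, not the system studied in the paper.

```python
# Single-step prediction-error (maximum likelihood) estimation via Kalman-filter
# innovations for the scalar model x(k+1) = a x(k) + w, y(k) = x(k) + v.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
a_true, Q, R, N = 0.8, 0.1, 0.2, 500

# Simulate data from the "true" system.
x, ys = 0.0, []
for _ in range(N):
    ys.append(x + rng.normal(0, np.sqrt(R)))
    x = a_true * x + rng.normal(0, np.sqrt(Q))
ys = np.array(ys)

def neg_log_lik(a):
    # Kalman filter: accumulate the likelihood of the one-step prediction errors.
    xhat, P, nll = 0.0, 1.0, 0.0
    for y in ys:
        S = P + R                          # innovation variance
        e = y - xhat                       # one-step prediction error
        nll += 0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = P / S                          # Kalman gain
        xhat, P = xhat + K * e, (1 - K) * P
        xhat, P = a * xhat, a * a * P + Q  # time update with the candidate parameter
    return nll

res = minimize_scalar(neg_log_lik, bounds=(-0.99, 0.99), method="bounded")
print("ML estimate of a:", round(res.x, 3), "(true 0.8)")
```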

  11. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  12. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  13. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
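
    The receding-horizon idea behind classical predictive control can be condensed into a short sketch: predict the state trajectory over a finite horizon as a linear function of the input sequence, minimize a quadratic cost, and apply only the first input. The plant matrices, horizon and weights below are arbitrary choices made for illustration, and only the unconstrained case is shown; the constrained, robust and stochastic formulations treated in the book require an online QP or tube-based machinery instead.

```python
# Minimal receding-horizon sketch for a linear model x(k+1) = A x(k) + B u(k),
# unconstrained case, so each step reduces to a regularized least-squares problem.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like plant (assumed)
B = np.array([[0.005], [0.1]])
N, Qw, Rw = 15, 1.0, 0.1                  # horizon and scalar state/input weights

def mpc_step(x0):
    n, m = A.shape[0], B.shape[1]
    # Prediction matrices: stacked states X = F x0 + G U over the horizon.
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i * n:(i + 1) * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - j) @ B
    # Minimize sum ||x_k||^2 * Qw + ||u_k||^2 * Rw over the input sequence U.
    H = Qw * (G.T @ G) + Rw * np.eye(N * m)
    f = Qw * (G.T @ (F @ x0))
    U = np.linalg.solve(H, -f)
    return U[:m]                           # apply only the first input (receding horizon)

x = np.array([1.0, 0.0])
for _ in range(50):
    u = mpc_step(x)
    x = A @ x + B @ u
print("state after 50 closed-loop steps:", np.round(x, 4))
```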

  14. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate... principles such as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...

  15. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
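
    The "model per entity" pattern mentioned above can be illustrated outside Oracle R Enterprise with a plain groupby-and-fit sketch, shown here in Python with synthetic per-customer data. This conveys only the idea; it is not ORE's transparency layer or embedded R execution.

```python
# One small model per entity: group the data by entity and fit independently.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
rows = []
for cust in range(100):                       # synthetic per-customer behaviour
    a, b = rng.normal(2, 0.5), rng.normal(10, 3)
    x = rng.uniform(0, 10, 50)
    rows.append(pd.DataFrame({"customer": cust, "x": x,
                              "y": a * x + b + rng.normal(0, 1, 50)}))
df = pd.concat(rows, ignore_index=True)

def fit_one(group):
    model = LinearRegression().fit(group[["x"]], group["y"])
    return pd.Series({"slope": model.coef_[0], "intercept": model.intercept_})

# In a database-resident setting this per-group map step is what gets pushed
# to the server and parallelized; here it is just a local groupby-apply.
per_entity = df.groupby("customer")[["x", "y"]].apply(fit_one)
print(per_entity.head())
```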

  16. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  9. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  10. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + a z(1)(k) = b have the identical model structure and simulation precision; moreover, unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + a x(1) = b are only close to those derived from x(0)(k) + a z(1)(k) = b provided that |a| < 0.1, and unbiased simulation of a homogeneous exponential sequence cannot be achieved with them. The above conclusions are proved and verified through theorems and examples.
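
    For concreteness, the sketch below implements the conventional GM(1,1) recipe: estimate a and b from the grey difference form x(0)(k) + a z(1)(k) = b, but generate predictions from the whitening-equation time response of dx(1)/dt + a x(1) = b. Mixing the two forms in this way is exactly why, as the abstract notes, the simulation of a homogeneous exponential sequence is only close to unbiased when |a| is small. The test series is invented.

```python
# Conventional GM(1,1) grey prediction sketch.
import numpy as np

def gm11(x0, n_ahead=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # 1-AGO: accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values z1(k), k = 2..n
    # Least squares for a, b in the grey difference form x0(k) + a z1(k) = b.
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Whitening-equation time response, extended n_ahead steps beyond the data.
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)              # inverse accumulation restores the series
    return x0_hat

x0 = 5.0 * 1.08 ** np.arange(10)              # homogeneous exponential sequence
print("fitted + 2-step forecast:", np.round(gm11(x0, n_ahead=2), 4))
print("actual sequence:         ", np.round(x0, 4))
```

    Here the estimated |a| is below 0.1, so the fitted values are close to, but not exactly equal to, the original exponential sequence, in line with the comparison drawn in the abstract.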

  11. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  12. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
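
    A minimal sketch of the hybrid idea, under illustrative assumptions: keep two of the Lorenz-63 equations mechanistic and replace the third tendency with a nonparametric (k-nearest-neighbour) fit learned from simulated data. The parameter values, the simple Euler integration and the choice of regressor are all shortcuts taken for brevity, not the authors' setup.

```python
# Hybrid mechanistic/nonparametric sketch on Lorenz-63: dz/dt is data-driven.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.01

def lorenz_rhs(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(s, rhs):
    return s + dt * rhs(s)                      # simple Euler step keeps the sketch short

# "Training data" from the full mechanistic model.
traj = [np.array([1.0, 1.0, 1.0])]
for _ in range(20000):
    traj.append(step(traj[-1], lorenz_rhs))
traj = np.array(traj)
dz = (traj[1:, 2] - traj[:-1, 2]) / dt          # observed tendencies of z

knn = KNeighborsRegressor(n_neighbors=10).fit(traj[:-1], dz)

def hybrid_rhs(s):
    x, y, z = s
    dzdt = knn.predict(s.reshape(1, -1))[0]     # nonparametric replacement for dz/dt
    return np.array([sigma * (y - x), x * (rho - z) - y, dzdt])

# Short-term prediction comparison from the last training state.
s_true = s_hyb = traj[-1]
for _ in range(200):
    s_true, s_hyb = step(s_true, lorenz_rhs), step(s_hyb, hybrid_rhs)
print("mechanistic state:", np.round(s_true, 3), " hybrid state:", np.round(s_hyb, 3))
```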

  13. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive to child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  14. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc[copyright] are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  15. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement of prediction methods has not been very significant, and the traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that can produce a large volume of cargo, and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect the regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  16. Research on the magnetic material of Sm-Fe matrix nitrides

    Institute of Scientific and Technical Information of China (English)

    CUI Chunxiang; SUN Jibing; ZHANG Ying; WANG Ru; LI Lin; LIANG Zhimei

    2005-01-01

    In this paper, the types of Sm-Fe matrix compounds and their correlations are introduced, and progress of research on the magnetic materials of Sm-Fe matrix nitrides is also reviewed. Possible research trends of future permanent magnetic materials of SmFe matrix nitrides are briefly predicted.

  17. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...

  18. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head-end and tail-end predictive models was found and the models were modified. Based on the numerical simulation results for 120 different conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing the rolled plates with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  19. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... to substantial differences in predictions. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented

  20. Little Higgs model effects in →

    Indian Academy of Sciences (India)

    S Rai Choudhury; Ashok Goyal; A S Cornell; Naveen Gaur

    2007-11-01

    Though the predictions of the standard model (SM) are in excellent agreement with experiments, there are still several theoretical problems associated with the Higgs sector of the SM, where it is widely believed that some new physics will take over at the TeV scale. One beyond the SM theory which resolves these problems is the Little Higgs (LH) model. In this work we have investigated the effects of the LH model on → scattering [1].

  1. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  2. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  3. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  4. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  5. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  6. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) - a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  7. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, using the Model Confidence Set procedure and considering eight different statistical probability distributions. The financial series used refer to the log-returns of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have a great homogeneity in making predictions, whether for the stock market of a developed country or for the stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.

  8. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
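
    The regression side of such a QSAR workflow, fitting a random forest on molecular descriptors and scoring it by leave-one-out cross-validation with a Q² statistic, can be sketched as follows on synthetic descriptors. The data, descriptor count and forest size are assumptions for the example, not the paper's dataset or settings.

```python
# Random-forest QSAR regression with leave-one-out cross-validated Q^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(4)
n, d = 120, 20
X = rng.normal(size=(n, d))                         # synthetic molecular descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] ** 2 + rng.normal(0, 0.3, n)  # pIC50-like

preds = np.empty(n)
for train, test in LeaveOneOut().split(X):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train], y[train])
    preds[test] = model.predict(X[test])

# Q^2: cross-validated coefficient of determination.
q2 = 1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)
print("leave-one-out Q^2:", round(q2, 3))
```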

  9. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Orientation: The article discusses the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit tests and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
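
    A bare-bones version of such a credit-scoring model, a logistic regression on firm-level and macroeconomic covariates evaluated out of sample, might look like the following. The synthetic TCRI, asset-growth and GDP variables and the assumed default process are stand-ins for illustration, not the study's data.

```python
# Logistic-regression credit scoring with firm-level and macroeconomic covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 10000
tcri = rng.integers(1, 10, n)                 # credit risk index (1 = best, 9 = worst)
asset_growth = rng.normal(0.05, 0.1, n)
gdp_growth = rng.normal(0.03, 0.02, n)
logit = -5 + 0.45 * tcri - 4.0 * asset_growth - 20.0 * gdp_growth   # assumed default process
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([tcri, asset_growth, gdp_growth])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

p = model.predict_proba(X_te)[:, 1]
print("overall default rate:", round(default.mean(), 3))
print("out-of-sample AUC:", round(roc_auc_score(y_te, p), 3))
```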

  10. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  11. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  13. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun's memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  14. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and the change in climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  15. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime-sky-based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6-month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  16. Activation cross sections and isomeric ratios in reactions induced by 14.5 MeV neutrons on [sup 152]Sm, [sup 154]Sm and [sup 178]Hf

    Energy Technology Data Exchange (ETDEWEB)

    Kirov, A. (Washington Univ., St. Louis, MO (United States). Dept. of Chemistry); Nenoff, N.; Georgieva, E.; Necheva, C. (Sofia Univ. (Bulgaria). Atomic Physics Dept.); Ephtimov, I. (IZR, Kostinbrod (Bulgaria))

    1993-05-01

    Cross sections for the reactions [sup 152]Sm(n, p)[sup 152g,m1,m2]Pm, [sup 154]Sm(n, p)[sup 154g,m]Pm, [sup 178]Hf(n, p)[sup 178m,g]Lu, [sup 154]Sm(n, d)[sup 153]Pm and [sup 152]Sm(n, [alpha])[sup 149]Nd were measured at 14.5 MeV neutron energy by the activation method. On the basis of these cross sections and the associated isomeric ratios in [sup 154]Pm, [sup 152]Pm and [sup 178]Lu, and from a comparison with the predictions of different compound and precompound models, conclusions are drawn about the role of preequilibrium processes in 14.5 MeV neutron-induced reactions. Calculations assuming equal angular momentum removal by equilibrium and preequilibrium emitted particles reproduced the experimental isomeric ratios better than calculations assuming higher angular momentum removal in the preequilibrium phase. The isomeric ratios may be used as a source of additional information about the spin of the isomeric states in [sup 152]Pm and [sup 154]Pm, for which the spectroscopic information is uncertain. (orig.).

  17. Activation cross sections and isomeric ratios in reactions induced by 14.5 MeV neutrons on 152Sm, 154Sm and 178Hf

    Science.gov (United States)

    Kirov, A.; Nenoff, N.; Georgieva, E.; Necheva, C.; Ephtimov, I.

    1993-09-01

    Cross sections for the reactions 152Sm(n, p)152g,m1,m2Pm, 154Sm(n, p)154g,mPm, 178Hf(n, p)178m,gLu, 154Sm(n, d)153Pm and 152Sm(n, α)149Nd were measured at 14.5 MeV neutron energy by the activation method. On the basis of these cross sections and the associated isomeric ratios in 154Pm, 152Pm and 178Lu, and from a comparison with the predictions of different compound and precompound models, conclusions are drawn about the role of preequilibrium processes in 14.5 MeV neutron-induced reactions. Calculations assuming equal angular momentum removal by equilibrium and preequilibrium emitted particles reproduced the experimental isomeric ratios better than calculations assuming higher angular momentum removal in the preequilibrium phase. The isomeric ratios may be used as a source of additional information about the spin of the isomeric states in 152Pm and 154Pm, for which the spectroscopic information is uncertain.

  18. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  19. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  20. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  1. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    In this article, the summaries of the presentations given during the 30th meeting of the Fusarium Working Group are presented. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  2. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  3. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    In this article, the summaries of the presentations given during the 30th meeting of the Fusarium Working Group are presented. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  4. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk assessment scale based on these data are discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.

  5. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  6. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  7. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  8. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  9. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    In this article, the summaries of the presentations given during the 30th meeting of the Fusarium Working Group are presented. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  10. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  11. Sm Transition Probabilities and Abundances

    CERN Document Server

    Lawler, J E; Sneden, C; Cowan, J J

    2005-01-01

    Radiative lifetimes, accurate to +/- 5%, have been measured for 212 odd-parity levels of Sm II using laser-induced fluorescence. The lifetimes are combined with branching fractions measured using Fourier-transform spectrometry to determine transition probabilities for more than 900 lines of Sm II. This work is the largest-scale laboratory study to date of Sm II transition probabilities using modern methods. This improved data set has been used to determine a new solar photospheric Sm abundance, log epsilon = 1.00 +/- 0.03, from 26 lines. The spectra of three very metal-poor, neutron-capture-rich stars also have been analyzed, employing between 55 and 72 Sm II lines per star. The abundance ratios of Sm relative to other rare earth elements in these stars are in agreement, and are consistent with ratios expected from rapid neutron-capture nucleosynthesis (the r-process).

  12. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  13. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
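
    As an illustration of the statistical comparison described above, the sketch below applies the same tests with SciPy; the risk-category arrays are invented placeholders, not the study's data.

      import numpy as np
      from scipy.stats import friedmanchisquare, wilcoxon

      # Hypothetical risk categories (0 = low, 1 = medium, 2 = high) assigned by the
      # three models to the same patients; values are made up for illustration.
      cariogram = np.array([1, 1, 2, 0, 1, 2, 1, 1, 0, 1])
      previser  = np.array([2, 1, 2, 1, 2, 2, 0, 2, 1, 2])
      cat       = np.array([2, 2, 1, 1, 2, 2, 1, 2, 2, 2])

      # Friedman test: do the three related assessments differ overall?
      chi2, p_friedman = friedmanchisquare(cariogram, previser, cat)

      # Wilcoxon signed-rank test: pairwise comparison of two of the models.
      stat, p_wilcoxon = wilcoxon(previser, cat, zero_method="zsplit")

      print(f"Friedman chi2={chi2:.2f}, p={p_friedman:.3f}")
      print(f"Wilcoxon statistic={stat:.2f}, p={p_wilcoxon:.3f}")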

  14. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  15. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...

  16. Little Higgs model effects in γγ → γγ

    Science.gov (United States)

    Choudhury, S. Rai; Goyal, Ashok; Cornell, A. S.; Gaur, Naveen

    2007-11-01

    Though the predictions of the standard model (SM) are in excellent agreement with experiments, there are still several theoretical problems associated with the Higgs sector of the SM, and it is widely believed that some new physics will take over at the TeV scale. One beyond-the-SM theory which resolves these problems is the Little Higgs (LH) model. In this work we have investigated the effects of the LH model on γγ → γγ scattering [1].

  17. SM18 Visits and Access

    CERN Multimedia

    2012-01-01

      VISITS The rules and conditions to be followed for visits in the SM18 Hall are laid out in the EDMS 1205328 document. No visit is allowed without prior reservation.   ACCESS Special access right is needed ONLY from 7 p.m. to 7 a.m. and during week-ends. From 1 December, the current SM18 access database will be closed and a new one “SM18-OWH outside normal hours” started from scratch. Requests, via EDH SM18-OWH, will have to be duly justified.   For further information, please contact Evelyne Delucinge.

  18. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al., 2012, BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows a model advancing one month at a time to perform at least as well for a 6-month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993, J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150-year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
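
    A minimal sketch of fitting a VAR(L) model by least squares on a multivariate SST-derived state vector; the random data, the number of components and the choice L = 12 are placeholders and not the authors' setup.

      import numpy as np

      def fit_var(X, L):
          """Fit x_t = sum_{l=1..L} A_l x_{t-l} by ordinary least squares.

          X : array of shape (T, d) holding d-dimensional state vectors
              (e.g. leading principal components of SST anomalies).
          Returns the stacked coefficient matrix of shape (d, d*L).
          """
          T, d = X.shape
          # Row t of the design matrix contains [x_{t-1}, x_{t-2}, ..., x_{t-L}].
          Z = np.hstack([X[L - l:T - l] for l in range(1, L + 1)])  # (T-L, d*L)
          Y = X[L:]                                                 # (T-L, d)
          A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
          return A.T                                                # (d, d*L)

      def forecast(X_hist, A, steps):
          """Iterate the fitted VAR one month at a time for a multi-month forecast."""
          d = X_hist.shape[1]
          L = A.shape[1] // d
          hist = list(X_hist[-L:])
          for _ in range(steps):
              recent = hist[-L:]                    # last L states, oldest first
              z = np.concatenate(recent[::-1])      # [x_{t-1}, ..., x_{t-L}]
              hist.append(A @ z)
          return np.array(hist[L:])

      # Hypothetical use: 50 years of monthly data, 10 components, 12 lags.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((600, 10))
      A = fit_var(X, L=12)
      print(forecast(X, A, steps=6).shape)          # (6, 10)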

  19. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  20. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  1. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  2. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Energy levels in YPO{sub 4}:Ce{sup 3+},Sm{sup 3+} studied by thermally and optically stimulated luminescence

    Energy Technology Data Exchange (ETDEWEB)

    Bos, Adrie J.J., E-mail: a.j.j.bos@tudelft.n [Delft University of Technology, Mekelweg 15, NL 2629 JB Delft (Netherlands); Poolton, Nigel R.J. [Delft University of Technology, Mekelweg 15, NL 2629 JB Delft (Netherlands); Institute of Maths and Physics, Aberystwyth University, Aberystwyth SY23 3BZ (United Kingdom); Wallinga, Jakob [Netherlands Centre for Luminescence Dating, Delft University of Technology, Mekelweg 15, NL 2629 JB Delft (Netherlands); Bessiere, Aurelie [Ecole Nat. Superieure de Chimie de Paris, 11 Rue P et M Curie, 75231 Paris Cedex 05 (France); Dorenbos, Pieter [Delft University of Technology, Mekelweg 15, NL 2629 JB Delft (Netherlands)

    2010-03-15

    Energy-resolved optically stimulated luminescence (OSL) spectra and thermoluminescence (TL) glow curves of a powder sample of YPO{sub 4}:Ce{sup 3+},Sm{sup 3+} were measured to investigate the nature of the trapping centre and to locate its energy level relative to the valence and conduction bands of the YPO{sub 4} host. The high-temperature glow peak could unequivocally be assigned to Sm{sup 2+} (thus Sm{sup 3+} acts as an electron trap). The trap depth of this centre, as derived from the OSL excitation spectra, is in good agreement with the Dorenbos model prediction. The OSL excitation spectra also reveal excited states of Sm{sup 2+} well below the conduction band. These excited states produce a broadening of the high-temperature TL glow peak and also cause the activation energy determined by the Hoogenstraten method to underestimate the trap depth.

  4. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  5. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.

  6. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Management by metrics is expected from IT service providers as a differentiator. Given a project, its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive. It is too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  7. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  8. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
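
    The generic Laplace-law relationship behind such pressure predictions can be sketched as below; the exact published equation, the fabric constants and the parameter values are assumptions for illustration only, not the authors' model.

      import math

      def garment_pressure(E, strain, area, circumference):
          """Rough Laplace-law estimate of garment interface pressure (Pa).

          E             : fabric Young's modulus at the applied strain (Pa)
          strain        : applied strain, e.g. derived from the reduction factor
          area          : fabric cross-sectional area per unit garment length (m^2/m)
          circumference : body circumference at the measurement site (m)
          """
          tension_per_length = E * strain * area      # fabric tension per unit length, N/m
          radius = circumference / (2.0 * math.pi)    # limb approximated as a cylinder
          return tension_per_length / radius          # Laplace's law: P = T / r

      # Hypothetical numbers: 20% strain around a 30 cm circumference limb;
      # result is roughly 3.1 kPa (about 24 mmHg).
      print(garment_pressure(E=1.5e6, strain=0.20, area=5.0e-4, circumference=0.30))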

  9. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.
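
    The χ² statistic described above, in which both data and model uncertainties enter, presumably takes the standard form below (notation ours, not the authors'):

      \chi^2 = (\mathbf{d} - \mathbf{m})^{\top} \left( \mathbf{C}_d + \mathbf{C}_m \right)^{-1} (\mathbf{d} - \mathbf{m}),

    where d collects the GPS-derived estimates, m the corresponding model predictions, C_d the data covariance, and C_m the model covariance built from the fitted auto- and cross-covariance functions.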

  10. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  11. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
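
    Under the first-order assumption mentioned above, the extent of ultimate degradation and the half-life follow directly, as sketched below; the rate constant is a made-up value, and the actual CATABOL-based model layers a probabilistic multi-pathway treatment on top of this simple kinetics.

      import math

      def extent_of_degradation(k, t):
          """Fraction ultimately degraded after time t (days) for rate constant k (1/day)."""
          return 1.0 - math.exp(-k * t)

      def half_life(k):
          """First-order half-life in days."""
          return math.log(2.0) / k

      k = 0.12  # hypothetical first-order rate constant, 1/day
      print(f"half-life: {half_life(k):.1f} days")                       # ~5.8 days
      print(f"degraded after 10 days: {extent_of_degradation(k, 10):.0%}")  # ~70%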

  12. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  13. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  14. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  15. 24/7 SM slavery.

    Science.gov (United States)

    Dancer, Peter L; Kleinplatz, Peggy J; Moser, Charles

    2006-01-01

    This study describes the nature of 24/7 SM slavery as practiced within the SM (sadomasochistic) community. These SM participants, who attempt to live full-time in owner-slave roles, represent a small proportion of those with SM interests. SM slaves have not been studied systematically to determine if and how they differ from other SM practitioners. An online questionnaire was used to obtain responses from individuals who self-identified as slaves. A total of 146 respondents participated, 53% female and 47% male, ranging in age from 18 to 72. We explored the depth of their relationships, how well they approximated "slavery," and how their relationships were structured to maintain distinct roles. Data showed that in long-term SM slave relationships, a power differential exists which extends beyond time-limited SM or sexual interactions. Owners and slaves often use common, daily life experiences or situations, such as the completion of household chores, money management, and morning or evening routines, to distinguish and maintain their respective roles. In addition, contrary to the perception of total submission, results revealed that slaves exercise free will when it is in their best interests to do so. These relationships were long-lasting and satisfying to the respondents.

  16. OpenSM Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    2015-04-17

    The OpenSM Monitoring System includes a collection of diagnostic and monitoring tools for use on Infiniband networks. The information this system gathers is obtained from a service, which in turn is obtained directly from the OpenSM subnet manager.

  17. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
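
    A minimal sketch of the logistic-regression half of such a prospectivity model; the feature names (distance to roads, population density, favourable-geology flag) and the training table are invented for illustration and do not come from the study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical training table: one row per map cell.
      # Columns: distance to nearest road (km), population density (people/km^2),
      # favourable-geology indicator (0/1). Label: 1 if an active quarry is present.
      X = np.array([
          [0.5, 120, 1],
          [3.2,  15, 1],
          [8.0,   5, 0],
          [1.1, 300, 1],
          [6.5,  40, 0],
          [0.8,  80, 1],
          [9.3,   2, 0],
          [2.4, 200, 0],
      ])
      y = np.array([1, 1, 0, 1, 0, 1, 0, 0])

      model = LogisticRegression().fit(X, y)

      # Posterior probability (prospectivity) for a new cell.
      new_cell = np.array([[1.5, 150, 1]])
      print(model.predict_proba(new_cell)[:, 1])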

  18. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....

  19. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  20. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

    Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  1. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
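
    For reference, the SLIP model referred to above is usually written as the hybrid system below (our notation); the paper's extended six-state controller is built on top of this baseline.

      \text{Flight (ballistic):} \quad \ddot{\mathbf{r}} = -g\,\hat{\mathbf{z}},
      \qquad
      \text{Stance:} \quad m\,\ddot{\mathbf{r}} = k\left(L_0 - \lVert \mathbf{r}-\mathbf{r}_f \rVert\right)\frac{\mathbf{r}-\mathbf{r}_f}{\lVert \mathbf{r}-\mathbf{r}_f \rVert} - m g\,\hat{\mathbf{z}},

    where r is the CoM position, r_f the fixed foot contact point, k the leg stiffness and L_0 the rest leg length; touchdown and lift-off events switch between the two phases.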

  2. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a topic widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
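
    Maximum Covariance Analysis, one of the techniques named above, amounts to a singular value decomposition of the cross-covariance between predictor and predictand anomaly fields; the sketch below uses random placeholder data and is not the S4CAST implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical anomaly matrices: time x grid points (rows share the time axis).
      sst  = rng.standard_normal((40, 500))   # SST anomalies (predictor)
      rain = rng.standard_normal((40, 200))   # rainfall anomalies (predictand)

      # Remove the time mean, then form the cross-covariance matrix.
      sst  -= sst.mean(axis=0)
      rain -= rain.mean(axis=0)
      C = sst.T @ rain / (sst.shape[0] - 1)   # (500, 200)

      # MCA modes are the singular vectors of the cross-covariance matrix.
      U, s, Vt = np.linalg.svd(C, full_matrices=False)
      expansion_sst  = sst @ U[:, 0]          # leading expansion coefficients
      expansion_rain = rain @ Vt[0]
      print(np.corrcoef(expansion_sst, expansion_rain)[0, 1])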

  3. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  4. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
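
    A sketch of the kind of Random Forest workflow described above; the feature matrix and labels are randomly generated stand-ins, and the reported sensitivity, specificity and AUC come from the paper, not from this code.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score

      # Hypothetical predictor matrix (e.g. age, prior antibiotic use, PPI use, ...),
      # one row per patient; labels: 1 = recurrence, 0 = no recurrence.
      rng = np.random.default_rng(1)
      X = rng.random((198, 6))
      y = rng.integers(0, 2, 198)

      clf = RandomForestClassifier(n_estimators=500, random_state=0)

      # Out-of-sample recurrence probabilities via cross-validation.
      proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
      print("AUC:", roc_auc_score(y, proba))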

  5. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  6. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
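
    A minimal stand-in for the back-propagation network described above, using scikit-learn's MLPRegressor rather than the authors' own implementation; the input columns (cement, water, fine and coarse aggregate contents, MAS, slump) and the data are placeholders.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Hypothetical mix-design rows: cement, water, fine agg., coarse agg. (kg/m^3),
      # maximum aggregate size (mm), slump (mm). Target: 28-day strength (MPa).
      X = np.array([
          [350, 175, 700, 1100, 20,  75],
          [400, 160, 650, 1150, 20,  50],
          [300, 180, 720, 1050, 10, 100],
          [450, 155, 600, 1200, 20,  40],
          [320, 170, 710, 1080, 10,  90],
          [380, 165, 680, 1120, 20,  60],
      ])
      y = np.array([35.0, 48.0, 28.0, 55.0, 31.0, 42.0])

      # Scale inputs, then train a small one-hidden-layer back-propagation network.
      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
      )
      model.fit(X, y)
      print(model.predict([[360, 170, 690, 1110, 20, 70]]))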

  7. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameter for an attenuation relation is peak ground acceleration or spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study new GMPMs are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPM that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
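
    Attenuation relations of the kind discussed above typically take a functional form like the one below, with coefficients fitted by regression; this generic form is illustrative and is not the specific model derived in the study.

      \ln Y = c_1 + c_2 M + c_3 \ln \sqrt{R^2 + h^2} + c_4 R + c_5 S + \varepsilon,

    where Y is PGA or SA, M the magnitude, R the source-to-site distance, h a fictitious depth term, S a site-class indicator, and ε a normally distributed residual with standard deviation σ.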

  8. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each of the four noise components' respective spectral functions, far-field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with design parameters, while the latter reveals its importance in relation to other airframe noise components.

  9. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR), since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  10. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) could not previously be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation of the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  11. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required for calibrating an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature and 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
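
    A minimal sketch of the meta-model idea follows, under assumed details (normalised parameters, a toy error score, three parameters and forty calibration runs): fit a quadratic surrogate of the forecast-error score to a small set of runs and take the surrogate's minimiser as the calibrated parameter set.

        # Quadratic meta-model calibration sketch (assumptions: toy score, 3 free parameters).
        import numpy as np
        from itertools import combinations_with_replacement

        def quad_design(P):
            """Design matrix with constant, linear and quadratic/cross terms."""
            cols = [np.ones(len(P))] + [P[:, i] for i in range(P.shape[1])]
            cols += [P[:, i] * P[:, j]
                     for i, j in combinations_with_replacement(range(P.shape[1]), 2)]
            return np.column_stack(cols)

        rng = np.random.default_rng(2)
        params = rng.uniform(-1, 1, size=(40, 3))      # 40 calibration runs, 3 free parameters
        score = ((params - 0.2) ** 2).sum(axis=1) + rng.normal(0, 0.02, 40)  # toy error score

        beta, *_ = np.linalg.lstsq(quad_design(params), score, rcond=None)

        # Optimising the surrogate on a grid is cheap compared with extra model runs.
        grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 41)] * 3)).reshape(3, -1).T
        best = grid[np.argmin(quad_design(grid) @ beta)]
        print("estimated optimal (normalised) parameters:", best.round(2))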

  12. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  13. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found to be quite adequate for the model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study show that linear MPC is an adaptable and feasible controller for continuous crystallizers.
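
    To make the receding-horizon idea concrete, the sketch below implements a generic unconstrained linear MPC step for a small state-space model; the matrices are placeholders, not the reduced third-order crystallizer model of the study.

        # Generic linear MPC sketch (placeholder A, B, C; quadratic cost, no constraints).
        import numpy as np

        A = np.array([[0.9, 0.1, 0.0],
                      [0.0, 0.8, 0.1],
                      [0.0, 0.0, 0.7]])
        B = np.array([[0.0, 0.1],
                      [0.1, 0.0],
                      [0.0, 0.2]])
        C = np.array([[1.0, 0.0, 0.0],      # e.g. mean crystal size
                      [0.0, 1.0, 0.0]])     # e.g. size-distribution variance
        N, lam = 20, 0.1                    # horizon and input penalty

        ny, nu = C.shape[0], B.shape[1]
        # Stack predictions y_k = C A^(k+1) x0 + sum_j C A^(k-j) B u_j over the horizon.
        F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
        G = np.zeros((N * ny, N * nu))
        for k in range(N):
            for j in range(k + 1):
                G[k*ny:(k+1)*ny, j*nu:(j+1)*nu] = C @ np.linalg.matrix_power(A, k - j) @ B

        x0 = np.array([1.0, -0.5, 0.2])
        r = np.tile([0.0, 0.0], N)          # output setpoints over the horizon

        # Minimise ||G u + F x0 - r||^2 + lam ||u||^2 as a regularised least-squares problem.
        H = np.vstack([G, np.sqrt(lam) * np.eye(N * nu)])
        b = np.concatenate([r - F @ x0, np.zeros(N * nu)])
        u = np.linalg.lstsq(H, b, rcond=None)[0]
        print("first control move to apply:", u[:nu].round(3))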

  14. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  15. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  16. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges n_s ∈ (0.9455, 0.9534), α ∈ (−9.741, −7.047) × 10^{-4}, r ∈ (0.1445, 0.1449), and r_iso ∈ (0.02137, 3.510) × 10^{-3} for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  17. Structure-function analysis and genetic interactions of the SmG, SmE, and SmF subunits of the yeast Sm protein ring.

    Science.gov (United States)

    Schwer, Beate; Kruchten, Joshua; Shuman, Stewart

    2016-09-01

    A seven-subunit Sm protein ring forms a core scaffold of the U1, U2, U4, and U5 snRNPs that direct pre-mRNA splicing. Using human snRNP structures to guide mutagenesis in Saccharomyces cerevisiae, we gained new insights into structure-function relationships of the SmG, SmE, and SmF subunits. An alanine scan of 19 conserved amino acids of these three proteins, comprising the Sm RNA binding sites or inter-subunit interfaces, revealed that, with the exception of Arg74 in SmF, none are essential for yeast growth. Yet, for SmG, SmE, and SmF, as for many components of the yeast spliceosome, the effects of perturbing protein-RNA and protein-protein interactions are masked by built-in functional redundancies of the splicing machine. For example, tests for genetic interactions with non-Sm splicing factors showed that many benign mutations of SmG, SmE, and SmF (and of SmB and SmD3) were synthetically lethal with null alleles of U2 snRNP subunits Lea1 and Msl1. Tests of pairwise combinations of SmG, SmE, SmF, SmB, and SmD3 alleles highlighted the inherent redundancies within the Sm ring, whereby simultaneous mutations of the RNA binding sites of any two of the Sm subunits are lethal. Our results suggest that six intact RNA binding sites in the Sm ring suffice for function but five sites may not.

  19. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses, and the risk to human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for ^{137}Cs and ^{60}Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  20. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
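
    A toy sketch of this approach is given below; the labelled fragments, alphabet handling and Laplace smoothing are assumptions, intended only to show how class-specific first-order Markov chains over the primary sequence can be trained and used for maximum-likelihood labelling.

        # Toy first-order Markov classifier for secondary structure (assumed training data).
        from collections import defaultdict
        import math

        train = {  # hypothetical labelled primary-sequence fragments
            "helix": ["AALKEAAAK", "LEEKLKALE"],
            "sheet": ["VTVTVEV", "IVIYVKV"],
            "coil":  ["GPGSGGPN", "PNGSDGSP"],
        }
        AA = "ACDEFGHIKLMNPQRSTVWY"

        def fit_markov(seqs, alpha=1.0):
            """Laplace-smoothed log P(next residue | current residue)."""
            counts = defaultdict(lambda: defaultdict(float))
            for s in seqs:
                for a, b in zip(s, s[1:]):
                    counts[a][b] += 1.0
            return {a: {b: math.log((counts[a][b] + alpha) /
                                    (sum(counts[a].values()) + alpha * len(AA)))
                        for b in AA} for a in AA}

        models = {c: fit_markov(seqs) for c, seqs in train.items()}

        def classify(segment):
            score = {c: sum(m[a][b] for a, b in zip(segment, segment[1:]))
                     for c, m in models.items()}
            return max(score, key=score.get)

        print(classify("AAKLEEA"))   # expected to favour the helix-trained chain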

  1. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.

  2. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous ... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  3. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with affine control laws associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...

  4. Absolute Cross Sections for Proton Induced Reactions on 147,149Sm Below the Coulomb Barrier

    Science.gov (United States)

    Gheorghe, I.; Filipescu, D.; Glodariu, T.; Bucurescu, D.; Cata-Danil, I.; Cata-Danil, G.; Deleanu, D.; Ghita, D.; Ivascu, M.; Lica, R.; Marginean, N.; Marginean, R.; Mihai, C.; Negret, A.; Sava, T.; Stroe, L.; Toma, S.; Sima, O.; Sin, M.

    2014-05-01

    Cross sections for 147,149Sm(p,n)147,149Eu and 147,149Sm(p, γ)148,150Eu were measured using the activation method. The results are compared to the predictions of the Hauser-Feshbach statistical model. Different γ-ray strength functions have been tested against the experimental values. In the case of 150Eu, in order to reproduce the experimental isomeric population cross sections, various scenarios for unknown branching ratios of certain discrete states have been discussed. The results provide constraints for the optical model parameters dedicated to this insufficiently known area of isotopes. Such cross sections for (p, γ) reactions at energies below the Coulomb barrier are valuable for p-process nucleosynthesis calculations.

  5. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  7. The Sm-Nd history of KREEP. [in lunar rocks

    Science.gov (United States)

    Lugmair, G. W.; Carlson, R. W.

    1978-01-01

    Sm-Nd whole rock measurements on a variety of KREEP-rich samples from different landing sites are reported. Despite a variation in Nd and Sm concentrations of almost a factor of 3, the Sm/Nd ratios, as well as the Nd-143/Nd-144 values, show an extremely close grouping. No systematic differences between samples from different landing sites are resolved. These results are taken to be indicative of a moon-wide process having been responsible for the generation of the KREEP source reservoir 4.36 plus or minus 0.06 AE ago, as estimated from model age calculations.
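
    For readers unfamiliar with the model-age calculation referred to above, the sketch below evaluates a CHUR model age from present-day isotope ratios; the sample numbers are invented, and the decay constant and CHUR parameters are the commonly quoted present-day values.

        # Sm-Nd CHUR model age sketch (hypothetical sample composition).
        import math

        LAMBDA_147SM = 6.54e-12     # decay constant of 147Sm, 1/yr
        CHUR_143_144 = 0.512638     # present-day (143Nd/144Nd) of CHUR
        CHUR_147_144 = 0.1967       # present-day (147Sm/144Nd) of CHUR

        def sm_nd_model_age(nd143_nd144, sm147_nd144):
            """CHUR model age in Gyr from measured present-day ratios."""
            slope = (nd143_nd144 - CHUR_143_144) / (sm147_nd144 - CHUR_147_144)
            return math.log(1.0 + slope) / LAMBDA_147SM / 1e9

        # Hypothetical LREE-enriched (low Sm/Nd) composition, loosely KREEP-like:
        print(round(sm_nd_model_age(0.51181, 0.168), 2), "Gyr")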

  8. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper... presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time... recovery on the track quality after tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia with a length of 57.2 km is applied for a time period of two to four years in the proposed maintenance model. The total cost can be reduced with up to 50...

  9. Octupole collectivity in the Sm isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Babilon, M. [Yale Univ., New Haven, CT (United States). Wright Nuclear Structure Lab.; Technische Univ. Darmstadt (Germany). Inst. fuer Kernphysik]; Zamfir, N.V. [Yale Univ., New Haven, CT (United States). Wright Nuclear Structure Lab.; National Inst. for Physics and Nuclear Engineering, Bucharest (Romania)]; Kusnezov, D. [Yale Univ., New Haven, CT (United States). Sloane Physics Lab.]; McCutchan, E.A. [Yale Univ., New Haven, CT (United States). Wright Nuclear Structure Lab.]; Zilges, A. [Technische Univ. Darmstadt (Germany). Inst. fuer Kernphysik]

    2005-08-27

    Microscopic models suggest the occurrence of strong octupole correlations in nuclei with N ≈ 88. To examine the signatures of octupole correlations in this region, the spdf Interacting Boson Approximation (IBA) Model is applied to the Sm isotopes with N = 86 - 92. The effects of including multiple negative parity bosons in the basis are compared to more standard one negative parity boson calculations and are analyzed in terms of signatures for strong octupole correlations. It is found that multiple negative parity bosons are needed to describe properties at medium spin. Bands with strong octupole correlations (multiple negative parity bosons) become yrast at medium spin in ^{148,150}Sm. This region shares some similarities with the light actinides, where strong octupole correlations were also found at medium spin. (orig.)

  10. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
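
    The frequency-propagation step described above can be sketched in a few lines; the fitness coefficients, mutation counts and strain frequencies below are illustrative stand-ins, not the paper's fitted values.

        # Fitness-weighted frequency propagation from one season to the next (toy numbers).
        import numpy as np

        sigma_ep, sigma_ne = 0.8, 0.5                  # assumed per-mutation coefficients
        epitope_muts = np.array([2, 0, 1, 3])          # adaptive epitope changes per strain
        nonepit_muts = np.array([1, 0, 4, 2])          # deleterious non-epitope mutations
        freq_now = np.array([0.40, 0.30, 0.20, 0.10])  # current strain frequencies

        fitness = sigma_ep * epitope_muts - sigma_ne * nonepit_muts
        freq_next = freq_now * np.exp(fitness)
        freq_next /= freq_next.sum()                   # renormalise to a frequency distribution

        for i, (x0, x1) in enumerate(zip(freq_now, freq_next)):
            print(f"strain {i}: {x0:.2f} -> {x1:.2f}")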

  11. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with δ_CP = π; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_ββ = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan β, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  12. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  13. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  14. Application of Sm0/Auxiliary and Sm0/MCln System in Organic Synthesis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yong-Min; LIU Yun-Kuia

    2001-01-01

    Though samarium metal itself has strong reducing power, in most cases certain additives are still needed when it is used as a reductant because the surface of samarium metal is inactive. Thus Sm0/auxiliary systems are proposed. The explored systems include: Sm0/I2, Sm0/TMSCl, Sm0/THF-NH4Cl (aq.), Sm0/Cp2TiCl2, Sm(Hg), Sm0/cat.KI, etc.

  15. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...
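
    As a hedged sketch of the one-step-ahead Kalman predictor on which such prediction-error criteria are built, the code below runs a standard filter on a generic second-order model; the matrices and data are placeholders, not the identified model of the paper.

        # One-step-ahead Kalman predictions and the resulting prediction errors (generic model).
        import numpy as np

        A = np.array([[0.95, 0.10], [0.0, 0.90]])
        C = np.array([[1.0, 0.0]])
        Q = 0.01 * np.eye(2)            # process-noise covariance
        R = np.array([[0.1]])           # measurement-noise covariance

        def one_step_predictions(y):
            """Return yhat[k] = C xhat[k|k-1] for each measurement y[k]."""
            x, P = np.zeros(2), np.eye(2)
            yhat = []
            for yk in y:
                yhat.append(float(C @ x))                  # prediction before seeing y[k]
                S = C @ P @ C.T + R                        # innovation covariance
                K = P @ C.T @ np.linalg.inv(S)             # Kalman gain
                x = x + K @ (np.atleast_1d(yk) - C @ x)    # measurement update
                P = (np.eye(2) - K @ C) @ P
                x, P = A @ x, A @ P @ A.T + Q              # time update to k+1
            return np.array(yhat)

        y = np.sin(0.1 * np.arange(50)) + 0.1 * np.random.default_rng(3).standard_normal(50)
        eps = y - one_step_predictions(y)                  # errors entering the PEM criterion
        print("prediction-error variance:", round(float(eps.var()), 4))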

  16. People Capability Maturity Model. SM.

    Science.gov (United States)

    1995-09-01

    [Fragmentary scanned-text excerpt from CMU/SEI-95-MM-02; the recoverable topics are: dealing with terminating employees; workforce reductions and outplacement; documenting and measuring the staffing process; units reviewing and documenting lessons learned from staffing activities; and planning workforce reduction and outplacement activities against criteria such as the unit's activities and workload and the tasks to be performed.]

  17. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
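
    An empirical version of the two criteria is easy to compute from predicted risks and outcomes, as in the sketch below on simulated data (the paper's influence-function-based inference is not reproduced here).

        # Empirical PCF(q) and PNF(p) on a simulated population.
        import numpy as np

        rng = np.random.default_rng(4)
        risk = rng.beta(1, 9, 10_000)            # predicted risks
        case = rng.random(10_000) < risk         # outcomes consistent with the risks

        order = np.argsort(-risk)                # sort population from highest to lowest risk
        cum_cases = np.cumsum(case[order]) / case.sum()

        def pcf(q):
            """Proportion of cases captured by following the top 100q% highest-risk people."""
            return float(cum_cases[int(np.ceil(q * len(risk))) - 1])

        def pnf(p):
            """Proportion of the population to follow so that 100p% of cases are covered."""
            return float(np.searchsorted(cum_cases, p) + 1) / len(risk)

        print("PCF(0.10) =", round(pcf(0.10), 3))
        print("PNF(0.80) =", round(pnf(0.80), 3))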

  18. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  19. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  20. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short duration-large amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it could be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  1. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.

  2. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  3. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.

  4. Effect of Low Diffusion Coefficient on Eutectic Instability of Al-25 wt%Sm Alloy

    Institute of Scientific and Technical Information of China (English)

    WANG Nan

    2008-01-01

    The diffusion coefficient determines the solute diffusion length and is a critical parameter in the selection of microstructure scales and in governing microstructure transitions. An Al-25 wt% Sm alloy is selected to reveal the impact of a low diffusion coefficient on eutectic instability, and the results are compared with those of Al-Cu alloys. Laser remelting experiments are performed and the transition growth velocity from eutectic to α-Al dendrite is examined. Compared with Al-Cu alloys, the eutectic instability takes place at a velocity more than one order of magnitude smaller. The theoretical calculation using the Trivedi-Magnin-Kurz (TMK) model also predicts that the eutectic becomes unstable at a smaller growth velocity for the Al-Sm alloy than for Al-Cu alloys, which is ascribed to the low diffusion coefficient.

  5. Combined first-principles and model Hamiltonian study of the perovskite series RMnO3 (R = La, Pr, Nd, Sm, Eu, and Gd)

    Science.gov (United States)

    Kováčik, Roman; Murthy, Sowmya Sathyanarayana; Quiroga, Carmen E.; Ederer, Claude; Franchini, Cesare

    2016-02-01

    We merge advanced ab initio schemes (standard density functional theory, hybrid functionals, and the GW approximation) with model Hamiltonian approaches (tight-binding and Heisenberg Hamiltonians) to study the evolution of the electronic, magnetic, and dielectric properties of the manganite family RMnO3 (R = La, Pr, Nd, Sm, Eu, and Gd). The link between first principles and tight binding is established by downfolding the physically relevant subset of 3d bands with e_g character by means of maximally localized Wannier functions (MLWFs) using the VASP2WANNIER90 interface. The MLWFs are then used to construct a general tight-binding Hamiltonian written as a sum of the kinetic term, the Hund's rule coupling, the Jahn-Teller (JT) coupling, and the electron-electron interaction. The dispersion of the tight-binding (TB) e_g bands at all levels is found to match closely the MLWFs. We provide a complete set of TB parameters which can serve as guidance for the interpretation of future studies based on many-body Hamiltonian approaches. In particular, we find that the Hund's rule coupling strength, the Jahn-Teller coupling strength, and the Hubbard interaction parameter U remain nearly constant for all the members of the RMnO3 series, whereas the nearest-neighbor hopping amplitudes show a monotonic attenuation as expected from the trend of the tolerance factor. Magnetic exchange interactions, computed by mapping a large set of hybrid functional total energies onto a Heisenberg Hamiltonian, clarify the origin of the A-type magnetic ordering observed in the early rare-earth manganite series as arising from a net negative out-of-plane interaction energy. The obtained exchange parameters are used to estimate the Néel temperature by means of Monte Carlo simulations. The resulting data capture well the monotonic decrease of the ordering temperature down the series from R = La to Gd, in agreement with experiments. This trend correlates well with the modulation of structural properties, in

  6. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing, to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing... on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  7. Orange and reddish-orange light emitting phosphors: Sm^{3+} and Sm^{3+}/Eu^{3+} doped zinc phosphate glasses

    Energy Technology Data Exchange (ETDEWEB)

    Meza-Rocha, A.N., E-mail: ameza@fis.cinvestav.mx [Departamento de Física, Universidad Autónoma Metropolitana-Iztapalapa, P.O. Box 55-534, 09340 México D.F., México (Mexico); Speghini, A. [Dipartimento di Biotecnologie, Universita di Verona and INSTM, UdR Verona, Strada Le Grazie 15, I-37314 Verona (Italy); IFAC CNR, Nello Carrara Institute of Applied Physics, MDF Lab, I-50019 Sesto Fiorentino, FI (Italy); Bettinelli, M. [Dipartimento di Biotecnologie, Universita di Verona and INSTM, UdR Verona, Strada Le Grazie 15, I-37314 Verona (Italy); Caldiño, U. [Departamento de Física, Universidad Autónoma Metropolitana-Iztapalapa, P.O. Box 55-534, 09340 México D.F., México (Mexico)

    2015-11-15

    A spectroscopy study of Sm^{3+} and Sm^{3+}/Eu^{3+} doped zinc phosphate glasses is performed through photoluminescence spectra and decay time profile measurements. Under Sm^{3+} excitation at 344 nm, the Sm^{3+} singly doped glass shows an orange global emission with x=0.579 and y=0.414 CIE1931 chromaticity coordinates, whereas the Sm^{3+}/Eu^{3+} co-doped sample exhibits orange overall emissions (x=0.581 and y=0.398, and x=0.595 and y=0.387) and reddish-orange overall emission (x=0.634 and y=0.355) upon excitations at 344, 360 and 393 nm, respectively. Such luminescence from the co-doped sample originates from the simultaneous emission of Sm^{3+} and Eu^{3+}. Under Sm^{3+} excitation at 344 and 360 nm, the Eu^{3+} emission is sensitized and enhanced by Sm^{3+} through a non-radiative energy transfer process. The non-radiative nature was inferred from the shortening of the Sm^{3+} lifetime observed in the Sm^{3+}/Eu^{3+} co-doped sample. An analysis of the Sm^{3+} emission decay time profiles using the Inokuti-Hirayama model suggests that an electric quadrupole-quadrupole interaction within Sm-Eu clusters might dominate the energy transfer process, with an efficiency of 0.17. - Highlights: • Zinc phosphate glasses are optically activated with Sm^{3+}/Eu^{3+} (ZPOSmEu). • Non-radiative energy transfer Sm^{3+}→Eu^{3+} takes place in ZPOSmEu. • ZPOSmEu overall emission can be modulated with the excitation wavelength. • ZPOSmEu might be useful as an orange/reddish-orange phosphor for UV white LEDs.
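
    The decay analysis mentioned above can be sketched as a curve fit of the Inokuti-Hirayama form with s = 10 (quadrupole-quadrupole); the decay trace below is synthetic and the time units arbitrary, so the fitted numbers are not those of the glasses studied.

        # Inokuti-Hirayama fit sketch on a synthetic donor-decay curve (s = 10 assumed).
        import numpy as np
        from scipy.optimize import curve_fit

        S = 10                                   # multipole index: 6 d-d, 8 d-q, 10 q-q

        def ih_decay(t, i0, tau0, q):
            """Donor decay in the presence of acceptors, Inokuti-Hirayama form."""
            return i0 * np.exp(-t / tau0 - q * (t / tau0) ** (3.0 / S))

        t = np.linspace(0.01, 10.0, 200)         # time, arbitrary units
        rng = np.random.default_rng(5)
        data = ih_decay(t, 1.0, 2.5, 0.6) * (1 + 0.02 * rng.standard_normal(t.size))

        (i0, tau0, q), _ = curve_fit(ih_decay, t, data, p0=[1.0, 2.0, 0.5])
        print(f"fitted tau0 = {tau0:.2f}, Q = {q:.2f}")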

  8. Remaining Useful Lifetime (RUL - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and inferring tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
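
    In the same spirit (though not the author's formulation), a constant updated failure rate gives a one-line remaining-useful-life estimate: the probability of non-failure over a further time t is exp(-lambda*t), so the RUL at a required reliability level follows directly. The rates below are illustrative.

        # RUL for a required reliability level under a constant (updated) failure rate.
        import math

        def remaining_useful_life(failure_rate_per_hr, required_reliability):
            """Time over which the probability of non-failure stays above the required level."""
            return -math.log(required_reliability) / failure_rate_per_hr

        lam_nominal = 1.0e-6      # failures per hour before the detected deviation
        lam_detected = 8.0e-6     # updated rate after the deviation (steeper wear-out)

        for lam in (lam_nominal, lam_detected):
            rul = remaining_useful_life(lam, 0.99)
            print(f"lambda = {lam:.1e}/hr -> RUL at R* = 0.99: {rul:,.0f} hours")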

  9. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  10. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  11. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  12. Energetics of nonequilibrium solidification in Al-Sm

    Science.gov (United States)

    Zhou, S. H.; Napolitano, R. E.

    2008-11-01

    Solution-based thermodynamic modeling, aided by first-principles calculations, is employed here to examine phase transformations in the Al-Sm binary system which may give rise to product phases that are metastable or have a composition that deviates substantially from equilibrium. In addition to describing the pure undercooled Al liquid with a two-state model that accounts for structural ordering, thermodynamic descriptions of the fcc phase, and intermediate compounds ( Al4Sm-β , Al11Sm3-α , Al3Sm-δ , and Al2Sm-σ ) are reanalyzed using special quasirandom structure and first-principles calculations. The possible phase compositions are presented over a range of temperatures using a “Baker-Cahn” analysis of the energetics of solidification and compared with reports of rapid solidification. The energetics associated with varying degrees of chemical partitioning are quantified and compared with experimental observations of the metastable Al11Sm3-α primary phase and reports of amorphous solids.

  13. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than using the complex numerical forecasting model, which occupies large computation resources, is time-consuming, and has a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  14. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models that use the carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interference, for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and unfriendly RFI sources.

  15. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to understand

  16. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximate nonhomogeneous index sequence and has excellent application value in settlement prediction.
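
    As a point of reference for readers unfamiliar with grey forecasting, the sketch below implements the classic GM(1,1) procedure (accumulated generating operation, background values, least-squares fit of the whitenization equation) on an invented settlement series. It is not the paper's NGM (1,1,k,c) model, whose whitenization equation carries an additional nonhomogeneous k·t + c term; the data and parameters here are purely illustrative.

      import numpy as np

      def gm11_forecast(x0, n_ahead=3):
          """Classic GM(1,1) grey forecast (illustrative; not the NGM(1,1,k,c) variant)."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                               # accumulated generating operation (AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
          B = np.column_stack([-z1, np.ones_like(z1)])     # fit x0[k] = -a*z1[k] + b
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a      # whitenization-equation solution
          return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO

      # Hypothetical settlement increments (mm) at equal time intervals
      print(gm11_forecast([12.1, 13.4, 14.9, 16.7, 18.8, 21.2], n_ahead=3).round(2))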

  17. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined by an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as the lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to be improved but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.

  18. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity for each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in five or fewer iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
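
    To make the structure of such a controller concrete, the following is a minimal single-zone sketch of the convex subproblem that a sequential convex MPC scheme would solve at each iteration, written with cvxpy. The dynamics coefficients, prices, temperature band, and the constant coefficient standing in for the linearized (temperature-dependent, nonconvex) efficiency term are all invented for illustration and are not taken from the paper.

      import numpy as np
      import cvxpy as cp

      T = 24                                                      # horizon steps (the paper uses 15-minute periods)
      price = 0.2 + 0.1 * np.sin(np.linspace(0, 2 * np.pi, T))    # hypothetical electricity prices
      load = 2.0 * np.ones(T)                                     # hypothetical thermal load (kW)
      a, b = 0.96, 0.05                                           # illustrative first-order zone dynamics

      temp = cp.Variable(T + 1)                                   # zone temperature (deg C)
      cool = cp.Variable(T, nonneg=True)                          # cooling power (kW)

      constraints = [temp[0] == 3.0]
      for t in range(T):
          constraints += [temp[t + 1] == a * temp[t] + b * (load[t] - cool[t])]
      constraints += [temp[1:] >= 1.0, temp[1:] <= 5.0, cool <= 10.0]

      # In the sequential convex method the nonconvex efficiency (COP) term is linearized
      # around the previous iterate; a constant COP stands in for that step here.
      cop = 3.0
      objective = cp.Minimize(price @ (cool / cop))
      cp.Problem(objective, constraints).solve()
      print("energy cost over the horizon:", round(float(objective.value), 3))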

  19. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the “atmospheric” and “solar” neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.

  20. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

    Full Text Available Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating properties to parameters calculated from molecular structure; such parameters are molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The results of PCA explain the interrelationships between the octane number and the different variables. Correlation coefficients were calculated using M.S. Excel to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R2 = 0.932), statistical significance (F = 53.21), and standard error (s = 7.7). The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving RCV2 = 0.942 and s = 6.328.
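
    The workflow described above (descriptor matrix, training/validation split, multiple linear regression, R² reporting) can be sketched in a few lines of Python; the descriptor values and the response below are synthetic stand-ins, since the paper's hydrocarbon data are not reproduced here.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(0)

      # Columns stand in for the descriptors M, EH, BP, logP, MR, CP, CV, CT
      X = rng.normal(size=(65, 8))
      octane = X @ rng.normal(size=8) + rng.normal(scale=5.0, size=65)   # synthetic response

      # Split mimicking the 40-compound training / 25-compound validation sets
      X_train, X_valid = X[:40], X[40:]
      y_train, y_valid = octane[:40], octane[40:]

      model = LinearRegression().fit(X_train, y_train)
      print("training R^2:  ", round(r2_score(y_train, model.predict(X_train)), 3))
      print("validation R^2:", round(r2_score(y_valid, model.predict(X_valid)), 3))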

  1. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The

  2. The QCD/SM working group: Summary report

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Matt; Frixione, S.; Laenen, E.; De Roeck, A.; Tollefson, K.; Andersen, J.; Balazs, C.; Banfi, A.; Bernreuther, W.; Binoth, T.; Brandenburg, A.; Buttar, C.; Cao, C-H.; Cruz, A.; Dawson, I.; DelDuca, V.; Drollinger, V.; Dudko, L.; Eynck, T.; Field, R.; Grazzini, M.; Guillet, J.P.; Heinrich, G.; Huston, J.; Kauer, N.; Kidonakis, N.; Kulesza, A.; Lassila-Perini, K.; Magnea, L.; Mahmoudi, F.; Maina, E.; Maltoni, F.; Nolten, M.; Moraes, A.; Moretti, S.; Mrenna, S.; Nagy, Z.; Olness, F.; Puljak, I.; Ross, D.A.; Sabio-Vera, A.; Salam, G.P.; Sherstnev, A.; Si, Z.G.; Sjostrand, T.; Skands, P.; Thome, E.; Trocsanyi, Z.; Uwer, P.; Weinzierl, S.; Yuan, C.P.; Zanderighi, G.

    2004-04-09

    Among the many physics processes at TeV hadron colliders, we look most eagerly for those that display signs of the Higgs boson or of new physics. We do so however amid an abundance of processes that proceed via Standard Model (SM) and in particular Quantum Chromodynamics (QCD) interactions, and that are interesting in their own right. Good knowledge of these processes is required to help us distinguish the new from the known. Their theoretical and experimental study teaches us at the same time more about QCD/SM dynamics, and thereby enables us to further improve such distinctions. This is important because it is becoming increasingly clear that the success of finding and exploring Higgs boson physics or other New Physics at the Tevatron and LHC will depend significantly on precise understanding of QCD/SM effects for many observables. To improve predictions and deepen the study of QCD/SM signals and backgrounds was therefore the ambition for our QCD/SM working group at this Les Houches workshop. Members of the working group made significant progress towards this on a number of fronts. A variety of tools were further developed, from methods to perform higher order perturbative calculations or various types of resummation, to improvements in the modeling of underlying events and parton showers. Furthermore, various precise studies of important specific processes were conducted. A significant part of the activities in Les Houches revolved around Monte Carlo simulation of collision events. A number of contributions in this report reflect the progress made in this area. At present a large number of Monte Carlo programs exist, each written with a different purpose and employing different techniques. Discussions in Les Houches revealed the need for an accessible primer on Monte Carlo programs, featuring a listing of various codes, each with a short description, but also providing a low-level explanation of the underlying methods. This primer has now been compiled and a

  3. The QCD/SM Working Group: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    M. Dobbs et al.

    2004-08-05

    Among the many physics processes at TeV hadron colliders, we look most eagerly for those that display signs of the Higgs boson or of new physics. We do so however amid an abundance of processes that proceed via Standard Model (SM) and in particular Quantum Chromodynamics (QCD) interactions, and that are interesting in their own right. Good knowledge of these processes is required to help us distinguish the new from the known. Their theoretical and experimental study teaches us at the same time more about QCD/SM dynamics, and thereby enables us to further improve such distinctions. This is important because it is becoming increasingly clear that the success of finding and exploring Higgs boson physics or other New Physics at the Tevatron and LHC will depend significantly on precise understanding of QCD/SM effects for many observables. To improve predictions and deepen the study of QCD/SM signals and backgrounds was therefore the ambition for our QCD/SM working group at this Les Houches workshop. Members of the working group made significant progress towards this on a number of fronts. A variety of tools were further developed, from methods to perform higher order perturbative calculations or various types of resummation, to improvements in the modeling of underlying events and parton showers. Furthermore, various precise studies of important specific processes were conducted. A significant part of the activities in Les Houches revolved around Monte Carlo simulation of collision events. A number of contributions in this report reflect the progress made in this area. At present a large number of Monte Carlo programs exist, each written with a different purpose and employing different techniques. Discussions in Les Houches revealed the need for an accessible primer on Monte Carlo programs, featuring a listing of various codes, each with a short description, but also providing a low-level explanation of the underlying methods. This primer has now been compiled and a

  4. Precise predictions for Higgs physics in the next-to-minimal supersymmetric standard model (NMSSM)

    Energy Technology Data Exchange (ETDEWEB)

    Drechsel, Peter

    2016-08-15

    Within this thesis a precise mass-prediction for the Higgs fields of the Next-to-Minimal Supersymmetric Standard Model (NMSSM) is obtained with Feynman-diagrammatic methods. The results are studied numerically for sample scenarios that are in agreement with current New Physics searches at the LHC. Furthermore a comparison between the obtained results and different calculations is performed as a first step in order to obtain an estimation for the theoretical uncertainties of the Higgs-mass prediction in the NMSSM. The precise mass-prediction includes the full NMSSM one-loop corrections supplemented with the dominant and sub-dominant two-loop corrections within the Minimal Supersymmetric Standard Model (MSSM). These include contributions at the orders O(α_t α_s, α_b α_s, α_t^2, α_t α_b), as well as a resummation of leading and subleading logarithms from the top/scalar top sector. Higher-order corrections are essential for the NMSSM in order to provide a Higgs particle that is consistent with the available data, including the observed neutral, CP-even Higgs field with a mass of about 125 GeV. We explored the validity of the applied approximation at the two-loop level and found that it is reliable for a wide range of scenarios within the NMSSM. This is especially true for the mass of the observed (MS)SM-like Higgs field. The result of this work will be included in a future extension of the program FeynHiggs. We also compared our results with the program NMSSMCalc that also performs a Feynman-diagrammatic calculation of the Higgs-masses with a slightly different renormalization scheme. The comparison reveals that for the mass of the (MS)SM-like Higgs field the genuine NMSSM-effects induced by the choice of the renormalization scheme are by far minor compared to similar effects observed in the MSSM.

  5. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  6. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  7. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

  8. The optical phonon spectrum of SmFeAsO

    OpenAIRE

    Marini, C.; Mirri, C.; Profeta, G.; Lupi, S.; Di Castro, D.; Sopracase, R.; Postorino, P.; Calvani, P.; Perucchi, A.; Massidda, S.; Tropeano, G. M.; Putti, M.; Martinelli, A.; Palenzona, A.; Dore, P.

    2008-01-01

    We measured the Raman and the Infrared phonon spectrum of SmFeAsO polycrystalline samples. We also performed Density Functional Theory calculations within the pseudopotential approximation to obtain the structural and dynamical lattice properties of both the SmFeAsO and the prototype LaFeAsO compounds. The measured Raman and Infrared phonon frequencies are well predicted by the optical phonon frequencies computed at the Gamma point, showing the capability of the employed ab-initio methods to ...

  9. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  10. Genetic, epigenetic, and gene-by-diet interaction effects underlie variation in serum lipids in a LG/JxSM/J murine model.

    Science.gov (United States)

    Lawson, Heather A; Zelle, Kathleen M; Fawcett, Gloria L; Wang, Bing; Pletscher, L Susan; Maxwell, Taylor J; Ehrich, Thomas H; Kenney-Hunt, Jane P; Wolf, Jason B; Semenkovich, Clay F; Cheverud, James M

    2010-10-01

    Variation in serum cholesterol, free-fatty acids, and triglycerides is associated with cardiovascular disease (CVD) risk factors. There is great interest in characterizing the underlying genetic architecture of these risk factors, because they vary greatly within and among human populations and between the sexes. We present results of a genome-wide scan for quantitative trait loci (QTL) affecting serum cholesterol, free-fatty acids, and triglycerides in an F(16) advanced intercross line of LG/J and SM/J (Wustl:LG,SM-G16). Half of the population was fed a high-fat diet and half was fed a relatively low-fat diet. Context-dependent genetic (additive and dominance) and epigenetic (imprinting) effects were characterized by partitioning animals into sex, diet, and sex-by-diet cohorts. Here we examine genetic, environmental, and genetic-by-environmental interactions of QTL overlapping previously identified loci associated with CVD risk factors, and we add to the serum lipid QTL landscape by identifying new loci.

  11. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  12. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  13. Search for the SM Higgs boson decaying to bb in associated production with a Z boson decaying in the invisible channel

    Directory of Open Access Journals (Sweden)

    Donato Silvio

    2013-11-01

    Full Text Available A search for the Standard Model (SM) Higgs boson decaying into two b jets, using associated production with a Z boson decaying into a pair of neutrinos, is presented at LHCP. The CMS pp collision data samples of 4.7/fb at a center-of-mass energy of 7 TeV and 19.0/fb at 8 TeV have been analyzed. The techniques employed to discriminate signal from background are explained. An upper limit of 2.3 times the SM Higgs cross section at 95% confidence level has been observed. The signal strength for mH = 125 GeV is 1.0 ± 0.8 times the SM prediction.

  14. Transient-field g-factor measurement of the first 2+ states in the N=82 nuclei 140Ce, 142Nd and 144Sm

    Energy Technology Data Exchange (ETDEWEB)

    Bazzacco, D.; Brandolini, F.; Loewenich, K.; Pavan, P.; Rossi-Alvarez, C.; Maglione, E. (Dipt. di Fisica, Padua Univ. (Italy) Ist. Nazionale di Fisica Nucleare, Padua (Italy)); De Poli, M.; Haque, A.M.I. (Ist. Nazionale di Fisica Nucleare, Lab. Nazionale di Legnaro (Italy))

    1991-10-28

    The g-factors of the first 2+ states in three stable N=82 nuclei, 140Ce, 142Nd and 144Sm, have been measured using the transient magnetic field technique. The levels under study were Coulomb excited with 110-116 MeV 32S beams, and the spin precession after passing through a thin polarized iron foil was measured. The field strength has been checked using the first 2+ state in 148Sm as an internal calibration. The obtained values were 0.97(9), 0.84(7) and 0.76(11) for 140Ce, 142Nd and 144Sm, respectively. These values, remarkably lower than shell-model predictions in a proton subspace, are explained in terms of neutron core excitation by quasiparticle random-phase-approximation calculations. (orig.).

  15. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical
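
    A minimal sketch of the comparison described above (Gaussian process classifier versus SVM and logistic regression, scored by AUC) is given below; it uses a synthetic, imbalanced data set as a stand-in for the real bankruptcy records, so the numbers it prints carry no empirical meaning.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.svm import SVC
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      X, y = make_classification(n_samples=400, n_features=10, weights=[0.8], random_state=1)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

      models = {
          "GP": GaussianProcessClassifier(random_state=1),
          "SVM": SVC(probability=True, random_state=1),
          "LR": LogisticRegression(max_iter=1000),
      }
      for name, clf in models.items():
          p = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
          print(name, "AUC:", round(roc_auc_score(y_te, p), 3))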

  16. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, used for ingesting food, is one of the most basic and important parts of the body. In this study, dental pain was predicted with a neural network model. The fitness of the resulting predictive model of dental pain factors was 80.0%. For people predicted by the neural network model to be likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release must precede any dental treatment.

  17. r-Sm14 - pRSETA efficacy in experimental animals

    Directory of Open Access Journals (Sweden)

    Ramos Celso Raul Romero

    2001-01-01

    Full Text Available Previous studies carried out with Sm14 in experimental vaccination against Schistosoma mansoni or Fasciola hepatica infections were performed with recombinant Sm14 (rSm14) produced in Escherichia coli by the pGEMEX system (Promega). The rSm14 was expressed as a 40 kDa fusion protein with the major bacteriophage T7 capsid protein. Vaccination experiments with this rSm14 in animal models resulted in consistently high protective activity against S. mansoni cercariae challenge and enabled rSm14 to be included among the vaccine antigens endorsed by the World Health Organization for phase I/II clinical trials. Since the preparation of pGEMEX-based rSm14 is time consuming and results in low yield for large-scale production, we have tested other E. coli expression systems which would be more suitable for scale-up and downstream processing. We expressed two different 6XHis-tagged Sm14 fusion proteins from T7 promoter based plasmids. The 6XHis-tag fusions allowed rapid purification of the recombinant proteins through a Ni2+-charged resin. The resulting recombinant 18 and 16 kDa proteins were recognized by anti-Sm14 antibodies and also by antiserum against adult S. mansoni soluble secreted/excreted proteins in Western blot. Both proteins were also protective against S. mansoni cercariae infection to the same extent as the rSm14 expressed by the pGEMEX system.

  18. Production of 92Nb, 92Mo, and 146Sm in the gamma-process in SNIa

    CERN Document Server

    Rauscher, T; Gallino, R; Nishimura, N; Hirschi, R

    2014-01-01

    The knowledge of the production of extinct radioactivities like 92Nb and 146Sm by photodisintegration processes in ccSN and SNIa models is essential for interpreting abundances in meteoritic material and for Galactic Chemical Evolution (GCE). The 92Mo/92Nb and 146Sm/144Sm ratios provide constraints for GCE and production sites. We present results for SNIa with emphasis on nuclear uncertainties.

  19. Dark Matter and Color Octets Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Krnjaic, Gordan Zdenko [Johns Hopkins Univ., Baltimore, MD (United States)

    2012-07-01

    Although the Standard Model (SM) of particles and interactions has survived forty years of experimental tests, it does not provide a complete description of nature. From cosmological and astrophysical observations, it is now clear that the majority of matter in the universe is not baryonic and interacts very weakly (if at all) via non-gravitational forces. The SM does not provide a dark matter candidate, so new particles must be introduced. Furthermore, recent Tevatron results suggest that SM predictions for benchmark collider observables are in tension with experimental observations. In this thesis, we will propose extensions to the SM that address each of these issues.

  20. Lubrication mechanisms in metal forming

    DEFF Research Database (Denmark)

    Bech, Jakob Ilsted; Bay, Niels; Eriksen, Morten

    1997-01-01

    In this work, the micro-plasto-hydrodynamic lubrication mechanism that occurs in plastic forming is investigated: hydrostatically entrapped lubricant in surface pockets escapes into the surrounding contact zone and creates local lubricant films between the workpiece and the tool. The lubrication mechanism is observed and recorded on video...

  1. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  2. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of lower accuracy for the prediction which causes costly maintenance. Although many researchers have developed some performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models including multivariate nonlinear regression (MNLR) model, artificial neural network (ANN) model, and Markov Chain (MC) model are tested and compared using a set of actual pavement survey data taken on interstate highway with varying design features, traffic, and climate data. It is found that MNLR model needs further recalibration, while the ANN model needs more data for training the network. MC model seems a good tool for pavement performance prediction when the data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing the performance prediction model is incorporating the advantages and disadvantages of different models to obtain better accuracy.
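
    Of the three compared approaches, the Markov chain idea is the simplest to illustrate: condition-state probabilities are propagated forward with a transition matrix. The sketch below uses an invented four-state transition matrix and is not calibrated to any pavement data.

      import numpy as np

      # Invented yearly transition matrix over faulting condition states (1 = best, 4 = worst)
      P = np.array([
          [0.85, 0.12, 0.03, 0.00],
          [0.00, 0.80, 0.15, 0.05],
          [0.00, 0.00, 0.75, 0.25],
          [0.00, 0.00, 0.00, 1.00],
      ])

      state = np.array([1.0, 0.0, 0.0, 0.0])    # all sections start in the best state
      for year in range(1, 11):
          state = state @ P                     # propagate the state distribution one year
          if year in (5, 10):
              print(f"year {year}: condition-state probabilities = {state.round(3)}")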

  3. Associated productions of the new gauge boson BH in the Littlest Higgs model with a SM gauge boson via e+e- collision

    Institute of Scientific and Technical Information of China (English)

    WANG Xue-Lei; ZENG Qing-Guo; JIN Zhen-Lan; LIU Su-Zhen

    2008-01-01

    With its high energy and luminosity, the planned ILC has considerable capability to probe the new heavy particles predicted by new physics models. In this paper, we study the potential to discover the lightest new gauge boson BH of the Littlest Higgs model via the processes e+e- → γ(Z)BH at the ILC. The results show that the production rates of these two processes are large enough to detect BH in a wide range of the parameter space, especially for the process e+e- → γBH. Furthermore, there exist some decay modes of BH which can provide a typical signal and clean backgrounds. Therefore, the new gauge boson BH should be observable via these production processes once the ILC is running, if it exists.

  4. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
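
    The two strategies compared above can be mimicked with off-the-shelf estimators: a regression model fitted on extracted features versus a nearest-neighbour ("similar patients") predictor. The sketch below uses synthetic data as a stand-in for the MIMIC-II features, so its AUC values are illustrative only.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # (1) predictive model built on extracted features
      lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      # (2) patient-similarity approach: outcome observed for the k most similar patients
      knn = KNeighborsClassifier(n_neighbors=25).fit(X_tr, y_tr)

      for name, clf in [("model-based", lr), ("similarity-based", knn)]:
          auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
          print(f"{name} AUC: {auc:.2f}")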

  5. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...

  6. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  7. Rainfall estimation from soil moisture data: crash test for SM2RAIN algorithm

    Science.gov (United States)

    Brocca, Luca; Albergel, Clement; Massari, Christian; Ciabatta, Luca; Moramarco, Tommaso; de Rosnay, Patricia

    2015-04-01

    Soil moisture governs the partitioning of mass and energy fluxes between the land surface and the atmosphere and, hence, it represents a key variable for many applications in hydrology and earth science. In recent years, it was demonstrated that soil moisture observations from ground and satellite sensors contain important information useful for improving rainfall estimation. Indeed, soil moisture data have been used for correcting rainfall estimates from state-of-the-art satellite sensors (e.g. Crow et al., 2011), and also for improving flood prediction through a dual data assimilation approach (e.g. Massari et al., 2014; Chen et al., 2014). Brocca et al. (2013; 2014) developed a simple algorithm, called SM2RAIN, which allows estimating rainfall directly from soil moisture data. SM2RAIN has been applied successfully to in situ and satellite observations. Specifically, by using three satellite soil moisture products from ASCAT (Advanced SCATterometer), AMSR-E (Advanced Microwave Scanning Radiometer for Earth Observation) and SMOS (Soil Moisture and Ocean Salinity), it was found that the SM2RAIN-derived rainfall products are as accurate as state-of-the-art products, e.g., the real-time version of the TRMM (Tropical Rainfall Measuring Mission) product. Notwithstanding these promising results, a detailed study investigating the physical basis of the SM2RAIN algorithm, its range of applicability and its limitations on a global scale has still to be carried out. In this study, we carried out a crash test for the SM2RAIN algorithm on a global scale by performing a synthetic experiment. Specifically, modelled soil moisture data are obtained from the HTESSEL model (Hydrology Tiled ECMWF Scheme for Surface Exchanges over Land) forced by ERA-Interim near-surface meteorology. Afterwards, the modelled soil moisture data are used as input to the SM2RAIN algorithm to test whether or not the resulting rainfall estimates are able to reproduce ERA-Interim rainfall data. Correlation, root
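
    For orientation, the commonly cited form of the SM2RAIN inversion estimates rainfall from the soil water balance as p(t) ≈ Z* ds/dt + a s(t)^b, neglecting runoff and evapotranspiration during rainfall. The sketch below implements that relation with illustrative parameter values; in practice Z*, a and b are calibrated, and the exact configuration used in the study above may differ.

      import numpy as np

      def sm2rain(sm, dt=1.0, Z=80.0, a=5.0, b=3.0):
          """Estimate rainfall from relative soil moisture s(t) in [0, 1] (illustrative parameters)."""
          sm = np.asarray(sm, dtype=float)
          ds_dt = np.gradient(sm, dt)          # rate of change of saturation
          p = Z * ds_dt + a * sm ** b          # water-balance inversion: storage change + drainage
          return np.clip(p, 0.0, None)         # negative estimates (drying periods) set to zero

      # Hypothetical saturation series responding to two rain events
      s = np.array([0.30, 0.32, 0.45, 0.52, 0.50, 0.48, 0.60, 0.58, 0.55])
      print(sm2rain(s).round(2))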

  8. Intelligent predictive model of ventilating capacity of imperial smelt furnace

    Institute of Scientific and Technical Information of China (English)

    唐朝晖; 胡燕瑜; 桂卫华; 吴敏

    2003-01-01

    In order to know the ventilating capacity of the imperial smelt furnace (ISF) and increase the output of plumbum, an intelligent modeling method based on gray theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed with this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant: the relative predictive error is 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of plumbum.

  9. A Prediction Model of the Capillary Pressure J-Function

    Science.gov (United States)

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model, and the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701
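
    For context, the Leverett J-function is commonly written as J(Sw) = Pc(Sw)·sqrt(k/φ)/(σ·cosθ), and a power-law dependence on Sw can be fitted on log-log axes. The sketch below does exactly that with invented data; it does not reproduce the paper's coefficients or derivation.

      import numpy as np

      def leverett_j(pc, k, phi, sigma, theta):
          """Dimensionless Leverett J-function: J = Pc * sqrt(k/phi) / (sigma * cos(theta))."""
          return pc * np.sqrt(k / phi) / (sigma * np.cos(theta))

      # Hypothetical capillary pressure data (Pa) versus water saturation Sw
      sw = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
      pc = np.array([48e3, 30e3, 21e3, 16e3, 12.5e3, 10.3e3, 8.8e3])
      J = leverett_j(pc, k=1e-13, phi=0.2, sigma=0.03, theta=0.0)   # illustrative rock/fluid values

      # Power-law fit J(Sw) = c * Sw**m on log-log axes (m is expected to be negative)
      m, log_c = np.polyfit(np.log(sw), np.log(J), 1)
      print(f"J(Sw) ~ {np.exp(log_c):.3g} * Sw^{m:.2f}")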

  10. Adaptation of Predictive Models to PDA Hand-Held Devices

    Directory of Open Access Journals (Sweden)

    Lin, Edward J

    2008-01-01

    Full Text Available Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form, and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of applying predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.

  11. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Riso National Lab., Roskilde (Denmark)]

    1997-12-31

    This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WASP (Wind Atlas Application and Analysis Program). The actual wind farm production is calculated using the Riso PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.
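
    The chain described above (numerical weather forecast, site-specific correction, power curve, farm aggregation) can be illustrated with a toy calculation; the correction factor, power curve, turbine count and array-loss figure below are invented and merely stand in for the WASP-derived matrix and the PARK wake model.

      import numpy as np

      forecast_ws = np.array([6.2, 7.8, 9.5, 11.0, 12.4])        # forecast wind speed at grid point (m/s)
      site_factor = 1.12                                         # stands in for the WASP-derived correction

      curve_ws = np.array([3, 5, 7, 9, 11, 13, 15, 25])          # power curve abscissa (m/s)
      curve_kw = np.array([0, 80, 250, 480, 640, 700, 700, 700]) # hypothetical kW per turbine

      site_ws = site_factor * forecast_ws                        # forecast made valid at the site
      per_turbine = np.interp(site_ws, curve_ws, curve_kw)       # power-curve lookup
      farm_kw = 0.92 * 30 * per_turbine                          # 30 turbines, 8% array/wake loss
      print(farm_kw.round(0))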

  12. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  13. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of new technologies

  14. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
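
    In the notation of the abstract, the two criteria and the decomposition of the second one can be written as follows; this is a sketch consistent with the description above, not the authors' exact equations, and the expectations are taken over the stated distributions of model structure, inputs and parameters.

      \mathrm{MSEP}_{\mathrm{fixed}} = \mathbb{E}\big[(y - \hat{y})^{2}\big],
      \qquad
      \mathrm{MSEP}_{\mathrm{uncertain}}(X)
        = \underbrace{\big(\mathbb{E}[\hat{y}] - y\big)^{2}}_{\text{squared bias (hindcasts)}}
        + \underbrace{\operatorname{Var}(\hat{y})}_{\text{model variance (simulation experiment)}}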

  15. Predicting Career Advancement with Structural Equation Modelling

    Science.gov (United States)

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  17. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    Accurate prediction of medical operation times is of crucial importance for cost efficient operation room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f

  18. Prediction Model of Sewing Technical Condition by Grey Neural Network

    Institute of Scientific and Technical Information of China (English)

    DONG Ying; FANG Fang; ZHANG Wei-yuan

    2007-01-01

    Grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. The prediction model was established based on the mechanical properties of different fabrics measured by the KES instrument. Grey relational degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision. The average relative error was 4.08% for needle and 4.25% for stitch.

  19. Active diagnosis of hybrid systems - A model predictive approach

    OpenAIRE

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both normal and faulty model of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty outputs constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeate...

  20. Evaluation of Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  1. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  2. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  4. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of mode

  8. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
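
    The comparison set up above can be mimicked in a few lines: fit one model on a treatment indicator plus all covariates, and one on the treatment indicator plus an estimated propensity score, then compare discrimination (AUC, standing in for the concordance index) and Brier score. The data below are synthetic and the "treatment" is an arbitrary covariate threshold, so the printed numbers are purely illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score, brier_score_loss

      X, y = make_classification(n_samples=1500, n_features=12, random_state=2)
      treat = (X[:, 0] > 0).astype(int)                       # hypothetical binary exposure
      X_tr, X_te, y_tr, y_te, t_tr, t_te = train_test_split(
          X, y, treat, test_size=0.3, random_state=2)

      # (a) full multivariable model: treatment plus all covariates
      full = LogisticRegression(max_iter=1000).fit(np.column_stack([t_tr, X_tr]), y_tr)
      p_full = full.predict_proba(np.column_stack([t_te, X_te]))[:, 1]

      # (b) propensity-score adjustment: treatment plus estimated propensity score only
      ps_model = LogisticRegression(max_iter=1000).fit(X_tr, t_tr)
      ps_tr = ps_model.predict_proba(X_tr)[:, 1]
      ps_te = ps_model.predict_proba(X_te)[:, 1]
      ps_adj = LogisticRegression(max_iter=1000).fit(np.column_stack([t_tr, ps_tr]), y_tr)
      p_ps = ps_adj.predict_proba(np.column_stack([t_te, ps_te]))[:, 1]

      for name, p in [("full covariates", p_full), ("propensity adjusted", p_ps)]:
          print(f"{name}: AUC = {roc_auc_score(y_te, p):.3f}, Brier = {brier_score_loss(y_te, p):.3f}")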

  9. The importance of context to the genetic architecture of diabetes-related traits is revealed in a genome-wide scan of a LG/J × SM/J murine model.

    Science.gov (United States)

    Lawson, Heather A; Lee, Arthur; Fawcett, Gloria L; Wang, Bing; Pletscher, L Susan; Maxwell, Taylor J; Ehrich, Thomas H; Kenney-Hunt, Jane P; Wolf, Jason B; Semenkovich, Clay F; Cheverud, James M

    2011-04-01

    Variations in diabetic phenotypes are caused by complex interactions of genetic effects, environmental factors, and the interplay between the two. We tease apart these complex interactions by examining genome-wide genetic and epigenetic effects on diabetes-related traits among different sex, diet, and sex-by-diet cohorts in a Mus musculus model. We conducted a genome-wide scan for quantitative trait loci that affect serum glucose and insulin levels and response to glucose stress in an F(16) Advanced Intercross Line of the LG/J and SM/J intercross (Wustl:LG,SM-G16). Half of each sibship was fed a high-fat diet and half was fed a relatively low-fat diet. Context-dependent genetic (additive and dominance) and epigenetic (parent-of-origin imprinting) effects were characterized by partitioning animals into sex, diet, and sex-by-diet cohorts. We found that different cohorts often have unique genetic effects at the same loci, and that genetic signals can be masked or erroneously assigned to specific cohorts if they are not considered individually. Our data demonstrate that the effects of genes on complex trait variation are highly context-dependent and that the same genomic sequence can affect traits differently depending on an individual's sex and/or dietary environment. Our results have important implications for studies of complex traits in humans.

  10. High-precision predictions for the light CP-even Higgs boson mass of the minimal supersymmetric standard model.

    Science.gov (United States)

    Hahn, T; Heinemeyer, S; Hollik, W; Rzehak, H; Weiglein, G

    2014-04-11

    For the interpretation of the signal discovered in the Higgs searches at the LHC it will be crucial in particular to discriminate between the minimal Higgs sector realized in the standard model (SM) and its most commonly studied extension, the minimal supersymmetric standard model (MSSM). The measured mass value, having already reached the level of a precision observable with an experimental accuracy of about 500 MeV, plays an important role in this context. In the MSSM the mass of the light CP-even Higgs boson, Mh, can directly be predicted from the other parameters of the model. The accuracy of this prediction should at least match the one of the experimental result. The relatively high mass value of about 126 GeV has led to many investigations where the scalar top quarks are in the multi-TeV range. We improve the prediction for Mh in the MSSM by combining the existing fixed-order result, comprising the full one-loop and leading and subleading two-loop corrections, with a resummation of the leading and subleading logarithmic contributions from the scalar top sector to all orders. In this way for the first time a high-precision prediction for the mass of the light CP-even Higgs boson in the MSSM is possible all the way up to the multi-TeV region of the relevant supersymmetric particles. The results are included in the code FEYNHIGGS.

  11. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed widely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  12. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  13. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex seamless integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  14. The Cloud2SM Project

    Science.gov (United States)

    Crinière, Antoine; Dumoulin, Jean; Mevel, Laurent; Andrade-Barosso, Guillermo; Simonin, Matthieu

    2015-04-01

    Over the past decades the monitoring of civil engineering structures has become a major field of research and development in the domains of modelling and integrated instrumentation. This increase in interest can be attributed in part to the need to control the aging of such structures and on the other hand to the need to optimize maintenance costs. From this standpoint the project Cloud2SM (Cloud architecture design for Structural Monitoring with in-line Sensors and Models tasking) has been launched to develop a robust information system able to support the long-term monitoring of civil engineering structures as well as to interface various sensors and data. The specificity of such an architecture is that it is based on the notion of data processing through physical or statistical models. Thus data processing, whether material or mathematical, can be seen here as a resource of the main architecture. The project can be divided into various items. The sensors and their measurement process: these items provide data to the main architecture and can embed storage or computational resources; depending on onboard capacity and the amount of data generated, heavy and light sensors can be distinguished. The storage resources: based on the cloud concept, this resource can store at least two types of data, raw data and processed data. The computational resources: this item includes embedded "pseudo real time" resources such as the dedicated computer cluster or computational resources. The models: used for the conversion of raw data into meaningful data; these resources inform the system of their needs and can be seen as independent blocks of the system. The user interface: this item can be divided into various HMIs to support maintenance operations on the sensors or to pop up information to the user. The demonstrators: the structures themselves. This project follows previous research works initiated in the European project ISTIMES [1]. It includes the infrared

  15. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and is occasionally simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to capture a general picture of the applicability of these models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered the years after 1986, and studies were designed retrospectively and prospectively. IVF prediction models accounted for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be applied clinically if it can be statistically evaluated and is well validated for treatment success. To achieve better results, the physician's and the couple's estimates of the treatment success rate should be based on history, examination and clinical tests. Models must be checked for their theoretical approach and appropriate validation. The advantages of applying prediction models are the decrease in cost and time, avoidance of painful treatment of patients, assessment of the treatment approach for physicians, and support for decision making by health managers. Careful selection of the approach for designing and using these models is therefore unavoidable. PMID:27141461

  16. MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION

    Directory of Open Access Journals (Sweden)

    Priyanka H U

    2016-09-01

    Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves the integration of heterogeneous clinical sources having different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous, diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize these sources and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best-single-model approach. By modelling the error of the predictive models we are able to choose a subset of models which yields accurate results. More information was modelled into the system by multi-level mining, which resulted in enhanced predictive accuracy.
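    A minimal sketch of the model-combination idea follows. It is not the paper's architecture and does not use the Framingham data; the synthetic dataset, the three classifiers, and the error threshold for keeping models are all assumptions. Several classifiers are fitted, the subset with the lowest validation error is retained, and their predicted probabilities are averaged.

```python
# Illustrative sketch only (assumed data and model choices, not the paper's
# pipeline): combine several classifiers and keep the subset whose validation
# error is lowest, then average their predicted probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1500, n_features=15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.3, random_state=1)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=1),
    "gbm": GradientBoostingClassifier(random_state=1),
}
val_err = {}
for name, m in models.items():
    m.fit(X_fit, y_fit)
    val_err[name] = 1 - accuracy_score(y_val, m.predict(X_val))

# Keep the models whose validation error is within 0.02 of the best one.
best = min(val_err.values())
chosen = [n for n, e in val_err.items() if e <= best + 0.02]
proba = np.mean([models[n].predict_proba(X_te)[:, 1] for n in chosen], axis=0)
print("chosen models:", chosen)
print("ensemble accuracy:", accuracy_score(y_te, (proba >= 0.5).astype(int)))
print("best single accuracy:",
      max(accuracy_score(y_te, m.predict(X_te)) for m in models.values()))
```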

  17. The regional prediction model of PM10 concentrations for Turkey

    Science.gov (United States)

    Güler, Nevin; Güneri İşçi, Öznur

    2016-11-01

    This study aims to build a regional prediction model for weekly PM10 concentrations measured at air pollution monitoring stations in Turkey. There are seven geographical regions in Turkey and numerous monitoring stations in each region. Building a model conventionally for each monitoring station requires a lot of labor and time, and it may lead to degraded prediction quality when the number of measurements obtained from any monitoring station is small. Besides, prediction models obtained in this way only reflect the air pollutant behavior of a small area. This study uses the Fuzzy C-Auto Regressive Model (FCARM) in order to find a prediction model that reflects the regional behavior of weekly PM10 concentrations. The strength of FCARM is its ability to simultaneously consider PM10 concentrations measured at monitoring stations in the specified region. Besides, it also works even if the number of measurements obtained from the monitoring stations is different or small. In order to evaluate the performance of FCARM, FCARM is executed for all regions in Turkey and prediction results are compared to statistical autoregressive (AR) models fitted for each station separately. According to the Mean Absolute Percentage Error (MAPE) criterion, it is observed that FCARM provides better predictions with a smaller number of models.

  18. Standard Missile-6 (SM-6)

    Science.gov (United States)

    2016-12-01

    December 2015 Selected Acquisition Report (SAR) for the Standard Missile-6 (SM-6) program. The report sections cover the program description, executive summary, threshold breaches, schedule, performance, track to budget, cost and funding, low-rate initial production, foreign military sales, nuclear costs, unit cost, cost variance, contracts, and deliveries.

  19. Gaussian mixture models as flux prediction method for central receivers

    Science.gov (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
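    As an illustration of the fitting step described above, the sketch below uses synthetic ray-hit data and an assumed three-component mixture (not the authors' measured flux maps). It fits a Gaussian mixture to scattered hit points on the receiver plane and evaluates the resulting flux-density surface on a grid.

```python
# Rough sketch under assumed data: treat ray-trace hit points on the receiver as
# samples from an unknown flux density and fit a Gaussian mixture to them, which
# generalises the single circular/elliptical Gaussian flux models.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Fake "ray hits" (metres on the receiver plane): two overlapping lobes,
# standing in for a non-Gaussian single-heliostat flux profile.
hits = np.vstack([
    rng.multivariate_normal([0.0, 0.0], [[0.04, 0.01], [0.01, 0.02]], size=4000),
    rng.multivariate_normal([0.25, 0.1], [[0.02, 0.0], [0.0, 0.05]], size=2000),
])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(hits)

# Predicted flux density (normalised, per unit area) on a receiver grid.
x = np.linspace(-1, 1, 200)
X, Y = np.meshgrid(x, x)
density = np.exp(gmm.score_samples(np.column_stack([X.ravel(), Y.ravel()]))).reshape(X.shape)
print("mixture weights:", np.round(gmm.weights_, 3))
print("peak predicted density:", density.max())
```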

  20. Absolute photoneutron cross sections of Sm isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Gheorghe, I.; Glodariu, T. [National Institute for Physics and Nuclear Engineering Horia Hulubei, str. Atomistilor nr. 407 (Romania); Utsunomiya, H. [Department of Physics, Konan University, Okamoto 8-9-1, Higashinada, Kobe 658-8501 (Japan); Filipescu, D. [Extreme Light Infrastructure - Nuclear Physics, str. Atomistilor nr. 407, Bucharest-Magurele, P.O.BOX MG6 and National Institute for Physics and Nuclear Engineering Horia Hulubei, str. Atomistilor nr. 407 (Romania); Nyhus, H.-T.; Renstrom, T. [Department of Physics, University of Oslo, N-0316 Oslo (Norway); Tesileanu, O. [Extreme Light Infrastructure - Nuclear Physics, str. Atomistilor nr. 407, Bucharest-Magurele, P.O.BOX MG6 (Romania); Shima, T.; Takahisa, K. [Research Center for Nuclear Physics, Osaka University, Suita, Osaka 567-0047 (Japan); Miyamoto, S. [Laboratory of Advanced Science and Technology for Industry, University of Hyogo, 3-1-2 Kouto, Kamigori, Hyogo 678-1205 (Japan)

    2015-02-24

    Photoneutron cross sections for seven samarium isotopes, {sup 144}Sm, {sup 147}Sm, {sup 148}Sm, {sup 149}Sm, {sup 150}Sm, {sup 152}Sm and {sup 154}Sm, have been investigated near neutron emission threshold using quasimonochromatic laser-Compton scattering γ-rays produced at the synchrotron radiation facility NewSUBARU. The results are important for nuclear astrophysics calculations and also for probing γ-ray strength functions in the vicinity of neutron threshold. Here we describe the neutron detection system and we discuss the related data analysis and the necessary method improvements for adapting the current experimental method to the working parameters of the future Gamma Beam System of Extreme Light Infrastructure - Nuclear Physics facility.

  1. Nonlinear model predictive control of a packed distillation column

    Energy Technology Data Exchange (ETDEWEB)

    Patwardhan, A.A.; Edgar, T.F. (Univ. of Texas, Austin, TX (United States). Dept. of Chemical Engineering)

    1993-10-01

    A rigorous dynamic model based on fundamental chemical engineering principles was formulated for a packed distillation column separating a mixture of cyclohexane and n-heptane. This model was simplified to a form suitable for use in on-line model predictive control calculations. A packed distillation column was operated at several operating conditions to estimate two unknown model parameters in the rigorous and simplified models. The actual column response to step changes in the feed rate, distillate rate, and reboiler duty agreed well with dynamic model predictions. One unusual characteristic observed was that the packed column exhibited gain-sign changes, which are very difficult to treat using conventional linear feedback control. Nonlinear model predictive control was used to control the distillation column at an operating condition where the process gain changed sign. An on-line, nonlinear model-based scheme was used to estimate unknown/time-varying model parameters.

  2. Application of Nonlinear Predictive Control Based on RBF Network Predictive Model in MCFC Plant

    Institute of Scientific and Technical Information of China (English)

    CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian

    2007-01-01

    This paper described a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanistic model of the output voltage of an MCFC was presented first. However, this model was too complicated to be used in a control system. Consequently, an off-line radial basis function (RBF) network was introduced to build a nonlinear predictive model. The optimal control sequences were then obtained by applying the golden mean method. The models and controller have been realized in the MATLAB environment. Simulation results indicate that the proposed algorithm exhibits a satisfactory control effect even when the current densities vary widely.

  3. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that can not be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources. Both sources are derived from image analysis techniques. The first from individual analysis and characterization of real char types using an automated program. The second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and burnout data from re-firing the chars in a drop tube furnace operating at 1300{sup o}C, 5% oxygen across several residence times. An improved agreement between ChB model and DTF experimental data proved that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  4. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel;

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...... day (using the area under the receiver operating characteristic curve (AUC) and kappa statistics) and by assessing consistency in predictions of range size changes under future climate (using cluster analysis). Results Our analyses show significant differences between predictions from different models......, with predicted changes in range size by 2030 differing in both magnitude and direction (e.g. from 92% loss to 322% gain). We explain differences with reference to two characteristics of the modelling techniques: data input requirements (presence/absence vs. presence-only approaches) and assumptions made by each...

  5. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan;

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research of prediction models, it was observed that different models have different capabilities and also no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture diverse patterns that exist in the dataset....... The conferred results show that the prediction errors can be decreased, while the computation time is reduced....

  6. Improving Environmental Model Calibration and Prediction

    Science.gov (United States)

    2011-01-18


  7. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    load shifting capabilities of the units that adapt to the given price predictions. We furthermore evaluated control performance in terms of economic savings for different control strategies and forecasts. Chapter 5 describes and compares the proposed large-scale Aggregator control strategies....... Aggregators are assumed to play an important role in the future Smart Grid and coordinate a large portfolio of units. The developed economic MPC controllers interface each unit directly to an Aggregator. We developed several MPC-based aggregation strategies that coordinate the global behavior of a portfolio
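    The price-responsive load shifting mentioned above can be illustrated with a toy economic-MPC step for a single flexible unit. The sketch below is not the thesis' formulation; the storage model, limits and price forecast are invented, and a linear program simply schedules purchases into the cheap hours of the forecast while keeping the storage state within bounds.

```python
# Toy sketch of one economic-MPC step for a single flexible unit (a small heat
# store), shifting load toward cheap hours of a price forecast. The model,
# limits and prices are invented for illustration.
import numpy as np
from scipy.optimize import linprog

N = 24                                            # prediction horizon, hours
price = 30 + 15 * np.sin(2 * np.pi * (np.arange(N) - 6) / 24)   # EUR/MWh forecast
demand = np.full(N, 0.4)                          # MWh of heat needed each hour
s0, s_max, u_max, eta = 1.0, 3.0, 1.0, 0.95       # storage state, limits, efficiency

# Decision variables: u[0..N-1] = energy bought each hour.
# Storage state s[k] = s0 + sum_{i<=k} (eta*u[i] - demand[i]) must stay in [0, s_max].
A_cum = np.tril(np.ones((N, N))) * eta
cum_demand = np.cumsum(demand)
A_ub = np.vstack([A_cum, -A_cum])                 # encodes s <= s_max and s >= 0
b_ub = np.concatenate([s_max - s0 + cum_demand, s0 - cum_demand])

res = linprog(c=price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, u_max)] * N, method="highs")
plan = res.x
print("hours where the controller buys:", np.nonzero(plan > 1e-6)[0])
print("planned cost:", round(price @ plan, 1), "EUR")
```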

  8. Combining logistic regression and neural networks to create predictive models.

    OpenAIRE

    Spackman, K. A.

    1992-01-01

    Neural networks are being used widely in medicine and other areas to create predictive models from data. The statistical method that most closely parallels neural networks is logistic regression. This paper outlines some ways in which neural networks and logistic regression are similar, shows how a small modification of logistic regression can be used in the training of neural network models, and illustrates the use of this modification for variable selection and predictive model building wit...
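    Although the full text is not included here, the parallel the paper draws can be sketched in a few lines: logistic regression is a single neuron with a sigmoid output trained by gradient descent on the cross-entropy loss. The toy data and learning rate below are assumptions, and this is not the specific modification the paper proposes.

```python
# Minimal numpy sketch of the parallel between logistic regression and neural
# networks: a one-neuron "network" with a sigmoid output trained on the
# cross-entropy loss. (Assumed toy data; not the paper's proposed modification.)
import numpy as np

rng = np.random.default_rng(7)
n, p = 500, 4
X = rng.normal(size=(n, p))
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.uniform(size=n)).astype(float)

w, b, lr = np.zeros(p), 0.0, 0.1
for _ in range(2000):                        # batch gradient descent
    z = X @ w + b
    p_hat = 1 / (1 + np.exp(-z))             # sigmoid "activation"
    grad_w = X.T @ (p_hat - y) / n           # gradient of the cross-entropy loss
    grad_b = np.mean(p_hat - y)
    w -= lr * grad_w
    b -= lr * grad_b

print("estimated weights:", np.round(w, 2))  # close to true_w for this toy data
print("training accuracy:", np.mean((p_hat >= 0.5) == y))
```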

  9. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
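    Two of the quantities discussed above can be made concrete with a short sketch on synthetic right-censored data (not the breast-cancer dataset used in the paper): Harrell's concordance index computed directly from its pairwise definition, followed by the survival-time-threshold classification with its positive and negative predictive values. The risk scores, survival times and censoring fraction are all invented.

```python
# Sketch with synthetic data: concordance index from its definition, plus the
# alternative survival-time-threshold binary classification the paper contrasts
# it with.
import numpy as np

rng = np.random.default_rng(11)
n = 300
risk = rng.normal(size=n)                          # model risk score
time = rng.exponential(scale=np.exp(-0.7 * risk))  # higher risk -> shorter survival
event = rng.uniform(size=n) < 0.7                  # ~30% right-censored

def concordance_index(time, event, risk):
    """Fraction of usable pairs where the higher-risk subject fails earlier."""
    conc = ties = usable = 0
    for i in range(len(time)):
        for j in range(len(time)):
            # a pair is usable if subject i has an observed event before time[j]
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

print("c-index:", round(concordance_index(time, event, risk), 3))

# Threshold approach: classify "survives past t0" and report predictive values.
t0 = np.median(time)
known = event | (time >= t0)                 # class is known only for these subjects
y_true = (time >= t0)[known]
y_pred = (risk < np.median(risk))[known]     # low risk predicted to survive past t0
ppv = np.mean(y_true[y_pred])
npv = np.mean(~y_true[~y_pred])
print("PPV:", round(ppv, 3), "NPV:", round(npv, 3))
```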

  10. A thermodynamic model to predict wax formation in petroleum fluids

    Energy Technology Data Exchange (ETDEWEB)

    Coutinho, J.A.P. [Universidade de Aveiro (Portugal). Dept. de Quimica. Centro de Investigacao em Quimica]. E-mail: jcoutinho@dq.ua.pt; Pauly, J.; Daridon, J.L. [Universite de Pau et des Pays de l' Adour, Pau (France). Lab. des Fluides Complexes

    2001-12-01

    Some years ago the authors proposed a model for the non-ideality of the solid phase, based on the Predictive Local Composition concept. This was first applied to the Wilson equation and later extended to NRTL and UNIQUAC models. Predictive UNIQUAC proved to be extraordinarily successful in predicting the behaviour of both model and real hydrocarbon fluids at low temperatures. This work illustrates the ability of Predictive UNIQUAC in the description of the low temperature behaviour of petroleum fluids. It will be shown that using Predictive UNIQUAC in the description of the solid phase non-ideality a complete prediction of the low temperature behaviour of synthetic paraffin solutions, fuels and crude oils is achieved. The composition of both liquid and solid phases, the amount of crystals formed and the cloud points are predicted within the accuracy of the experimental data. The extension of Predictive UNIQUAC to high pressures, by coupling it with an EOS/G{sup E} model based on the SRK EOS used with the LCVM mixing rule, is proposed and predictions of phase envelopes for live oils are compared with experimental data. (author)

  11. A THERMODYNAMIC MODEL TO PREDICT WAX FORMATION IN PETROLEUM FLUIDS

    Directory of Open Access Journals (Sweden)

    J.A.P. Coutinho

    2001-12-01

    Some years ago the authors proposed a model for the non-ideality of the solid phase, based on the Predictive Local Composition concept. This was first applied to the Wilson equation and later extended to NRTL and UNIQUAC models. Predictive UNIQUAC proved to be extraordinarily successful in predicting the behaviour of both model and real hydrocarbon fluids at low temperatures. This work illustrates the ability of Predictive UNIQUAC in the description of the low temperature behaviour of petroleum fluids. It will be shown that using Predictive UNIQUAC in the description of the solid phase non-ideality a complete prediction of the low temperature behaviour of synthetic paraffin solutions, fuels and crude oils is achieved. The composition of both liquid and solid phases, the amount of crystals formed and the cloud points are predicted within the accuracy of the experimental data. The extension of Predictive UNIQUAC to high pressures, by coupling it with an EOS/G{sup E} model based on the SRK EOS used with the LCVM mixing rule, is proposed and predictions of phase envelopes for live oils are compared with experimental data.

  12. Bioinformatics Prediction of the Structure and Function of Hexokinase from Schistosoma mansoni

    Institute of Scientific and Technical Information of China (English)

    吕刚; 赵世勇; 李静晶; 范志刚; 芦亚君

    2010-01-01

    Objective: To predict the structure and function of Schistosoma mansoni hexokinase (SmHK) using bioinformatics techniques and to provide information for further functional studies. Methods: The full-length cDNA and amino acid sequences of SmHK and of HK from other species were obtained from GenBank. Online bioinformatics resources such as NCBI and ExPASy, together with the VectorNTI software package, were used to analyse and predict the conserved functional domains and motifs, physicochemical parameters, subcellular localization, hydrophilicity, linear B-cell epitopes, secondary structure and topology, and tertiary-structure model of the obtained amino acid sequences. Results: SmHK encodes 451 amino acid residues with a theoretical molecular weight of 50,446.01 Da and contains complete HK-1 and HK-2 conserved functional domains. The sites related to structure and function are highly conserved; homology with the hosts (human and mouse) is 30%, and the protein is evolutionarily close to humans and other vertebrates. It has multiple potential antigenic epitopes, multiple phosphorylation sites and one transmembrane region. Tertiary-structure modelling showed a cleft between the two lobes of the protein; the glucose and ATP binding sites and the ATP catalytic region are located in or around this cleft, and the transmembrane region, together with the basic amino acids at its two ends, forms an anion channel. Conclusion: The sites of SmHK related to structure and function are highly conserved, and the protein is evolutionarily close to its hosts. It is speculated that the protein may be anchored to the outer mitochondrial membrane through its transmembrane region, with its main functional sites located in or around the protein cleft. The multiple phosphorylation sites suggest that it participates in the regulation of multiple cellular functions and plays an important role in regulating energy metabolism, making it a potential vaccine candidate and drug target.

  13. A systematic review of predictive modeling for bronchiolitis.

    Science.gov (United States)

    Luo, Gang; Nkoy, Flory L; Gesteland, Per H; Glasgow, Tiffany S; Stone, Bryan L

    2014-10-01

    Bronchiolitis is the most common cause of illness leading to hospitalization in young children. At present, many bronchiolitis management decisions are made subjectively, leading to significant practice variation among hospitals and physicians caring for children with bronchiolitis. To standardize care for bronchiolitis, researchers have proposed various models to predict the disease course to help determine a proper management plan. This paper reviews the existing state of the art of predictive modeling for bronchiolitis. Predictive modeling for respiratory syncytial virus (RSV) infection is covered whenever appropriate, as RSV accounts for about 70% of bronchiolitis cases. A systematic review was conducted through a PubMed search up to April 25, 2014. The literature on predictive modeling for bronchiolitis was retrieved using a comprehensive search query, which was developed through an iterative process. Search results were limited to human subjects, the English language, and children (birth to 18 years). The literature search returned 2312 references in total. After manual review, 168 of these references were determined to be relevant and are discussed in this paper. We identify several limitations and open problems in predictive modeling for bronchiolitis, and provide some preliminary thoughts on how to address them, with the hope to stimulate future research in this domain. Many problems remain open in predictive modeling for bronchiolitis. Future studies will need to address them to achieve optimal predictive models. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Climate predictability and prediction skill on seasonal time scales over South America from CHFP models

    Science.gov (United States)

    Osman, Marisol; Vera, C. S.

    2016-11-01

    This work presents an assessment of the predictability and skill of climate anomalies over South America. The study was made considering a multi-model ensemble of seasonal forecasts for surface air temperature, precipitation and regional circulation, from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio while prediction skill was assessed by computing anomaly correlation coefficients. Both indicators present higher values over the continent at the tropics than at the extratropics for both surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America, while modest but still significant values are found for extratropical precipitation at southeastern South America and the extratropical Andes. The predictability levels in ENSO years of both variables are slightly higher, although with the same spatial distribution, than those obtained considering all years. Nevertheless, predictability at the tropics for both variables and seasons diminishes in both warm and cold ENSO years with respect to that in all years. The latter can be attributed to changes in the signal rather than in the noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where maximum mean values are observed, i.e. the regions associated with the equatorial trade winds, the midlatitude westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds are located where the subtropical jet peaks. Seasonal changes in wind predictability are observed that seem to be related to
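    Both diagnostics named above have simple estimators. The sketch below uses synthetic ensemble hindcasts with assumed noise levels (not CHFP output) and computes a signal-to-total variance ratio from the ensemble spread together with the anomaly correlation coefficient of the ensemble mean against observations.

```python
# Hedged sketch of the two diagnostics on synthetic ensemble hindcasts:
# signal-to-total variance ratio (predictability) and anomaly correlation
# against observations (prediction skill).
import numpy as np

rng = np.random.default_rng(6)
n_years, n_members = 30, 10
signal = rng.normal(size=n_years)                            # predictable component
forecasts = signal[:, None] + rng.normal(0, 1.2, (n_years, n_members))  # member noise
obs = signal + rng.normal(0, 0.8, n_years)

ens_mean = forecasts.mean(axis=1)
signal_var = ens_mean.var(ddof=1)                  # variance of the ensemble mean
noise_var = forecasts.var(axis=1, ddof=1).mean()   # mean intra-ensemble variance
stn_ratio = signal_var / (signal_var + noise_var)  # signal-to-total variance ratio

acc = np.corrcoef(ens_mean - ens_mean.mean(), obs - obs.mean())[0, 1]
print(f"signal-to-total ratio = {stn_ratio:.2f}   anomaly correlation = {acc:.2f}")
```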

  15. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  16. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    Summary In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  17. Models for short term malaria prediction in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Galappaththy Gawrie NL

    2008-05-01

    Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for the ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one-month-ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
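    The class of models compared above can be exercised on a toy series. The sketch below uses synthetic monthly case counts and rainfall with assumed ARIMA orders (not the Sri Lankan district data); it fits a seasonal ARIMA model with rainfall as an exogenous covariate and scores a four-month-ahead forecast by MAPE.

```python
# Hedged sketch (synthetic monthly series, assumed orders): a seasonal ARIMA
# model for malaria case counts with rainfall as an exogenous covariate,
# evaluated by percentage error on a 1-4 month-ahead forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)
months = pd.date_range("1995-01-01", periods=120, freq="MS")
rain = 100 + 60 * np.sin(2 * np.pi * np.arange(120) / 12) + rng.normal(0, 10, 120)
cases = 50 + 0.3 * np.roll(rain, 2) + rng.normal(0, 8, 120)   # cases lag rainfall
y = pd.Series(cases, index=months)
x = pd.Series(rain, index=months)

train, h = 116, 4                                   # hold out the last 4 months
model = SARIMAX(y.iloc[:train], exog=x.iloc[:train], order=(1, 0, 1),
                seasonal_order=(1, 0, 0, 12)).fit(disp=False)
forecast = model.forecast(steps=h, exog=x.iloc[train:])

mape = np.mean(np.abs((y.iloc[train:] - forecast) / y.iloc[train:])) * 100
print(forecast.round(1))
print(f"MAPE over the {h}-month horizon: {mape:.1f}%")
```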

  18. Aggregate driver model to enable predictable behaviour

    Science.gov (United States)

    Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.

    2015-09-01

    The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. There are two possible discriminatory techniques that can be applied for such categorization: a micro-scale (event-based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensities of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone-based application used for data collection by a group of drivers. Data are primarily collected from GPS measurements, including position and speed on a second-by-second basis, for a number of trips over a two-month period. Analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present the initial results for 12 drivers. It is shown that the higher-order moments of the acceleration profile are important parameters and identifiers of journey quality. It is also observed that the kurtosis of the acceleration profiles carries major information about driving styles. Such an observation leads to two different ranking systems based on acceleration data. Such driving behaviour models can be integrated with vehicle and road models and used to generate behavioural models for real traffic scenarios.
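    The aggregate statistics mentioned above are straightforward to compute from a second-by-second speed trace. The sketch below uses a simulated 1 Hz GPS speed signal (an assumption; the study's smartphone data are not reproduced here) and summarises the derived acceleration by its higher-order moments, including kurtosis.

```python
# Small sketch (simulated GPS speed trace, assumed 1 Hz sampling): derive
# acceleration from second-by-second speed and use higher-order moments,
# notably kurtosis, as aggregate indicators of driving style.
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(2)
t = np.arange(1800)                                   # 30 minutes at 1 Hz
speed = 14 + 4 * np.sin(2 * np.pi * t / 300) + rng.normal(0, 0.4, t.size)  # m/s
speed += (rng.uniform(size=t.size) < 0.01) * rng.normal(0, 3, t.size)      # occasional harsh events

accel = np.diff(speed)                                # m/s^2 at 1 Hz sampling
profile = {
    "mean |a|": np.mean(np.abs(accel)),
    "std a": np.std(accel),
    "skewness": skew(accel),
    "kurtosis": kurtosis(accel),                      # heavy tails suggest harsh driving
}
for k, v in profile.items():
    print(f"{k:10s} {v:6.3f}")
```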

  19. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
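    As a hedged illustration of the calibrate-then-evaluate protocol described above (synthetic occurrence data and climate covariates, not the breeding-bird atlases), the sketch below fits a random-forest envelope on a first period and reports sensitivity and specificity against a later period whose range has shifted.

```python
# Hedged illustration: fit a random-forest climate envelope on period-1
# occurrences and score sensitivity and specificity against period-2
# occurrences after a simulated range shift.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(9)

def sample_period(n, shift=0.0):
    clim = rng.normal(size=(n, 3))                    # e.g. temperature, precipitation, seasonality
    presence = (clim[:, 0] + 0.5 * clim[:, 1] + shift + rng.normal(0, 0.5, n)) > 0
    return clim, presence.astype(int)

X1, y1 = sample_period(2000)             # calibration period (t1)
X2, y2 = sample_period(2000, shift=0.4)  # evaluation period (t2), range has shifted

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X1, y1)
tn, fp, fn, tp = confusion_matrix(y2, rf.predict(X2)).ravel()
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
```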

  20. EPR of Sm{sup 3+} in BaFCl single crystals

    Energy Technology Data Exchange (ETDEWEB)

    Falin, M [Department of Physical Chemistry, University of Geneva, Geneva (Switzerland); Bill, H [Department of Physical Chemistry, University of Geneva, Geneva (Switzerland); Lovy, D [Department of Physical Chemistry, University of Geneva, Geneva (Switzerland)

    2004-03-03

    BaFCl single crystals doped with Sm{sup 3+} ions were studied by using the EPR method. Several types of paramagnetic Sm{sup 3+} centres were found. The parameters of the corresponding spin Hamiltonians were determined. Structural models and ground states of the observed centres are proposed.

  1. Noncausal spatial prediction filtering based on an ARMA model

    Institute of Scientific and Technical Information of China (English)

    Liu Zhipeng; Chen Xiaohong; Li Jingye

    2009-01-01

    Conventional f-x prediction filtering methods are based on an autoregressive model. The error section is first computed as a source noise but is removed as additive noise to obtain the signal, which results in an assumption inconsistency before and after filtering. In this paper, an autoregressive moving-average (ARMA) model is employed to avoid the model inconsistency. Based on the ARMA model, a noncausal prediction filter is computed and a self-deconvolved projection filter is used for estimating additive noise in order to suppress random noise. The 1-D ARMA model is also extended to the 2-D spatial domain, which is the basis for noncausal spatial prediction filtering for random noise attenuation on 3-D seismic data. Synthetic and field data processing indicates that this method can suppress random noise more effectively while preserving the signal, and performs much better than other conventional prediction filtering methods.

  2. Performance Predictable ServiceBSP Model for Grid Computing

    Institute of Scientific and Technical Information of China (English)

    TONG Weiqin; MIAO Weikai

    2007-01-01

    This paper proposes a performance prediction model for the grid computing model ServiceBSP to support the development of high-quality applications in a grid environment. In the ServiceBSP model, the agents carrying computing tasks are dispatched to the local domain of the selected computation services. By using the IP (integer program) approach, the Service Selection Agent selects the computation services with globally optimized QoS (quality of service) consideration. The performance of a ServiceBSP application can be predicted according to the performance prediction model based on the QoS of the selected services. The performance prediction model can help users to analyze their applications and improve them by optimizing the factors which affect the performance. The experiment shows that the Service Selection Agent can provide ServiceBSP users with satisfactory application QoS.

  3. Two Predictions of a Compound Cue Model of Priming

    OpenAIRE

    Walenski, Matthew

    2003-01-01

    This paper examines two predictions of the compound cue model of priming (Ratcliff and McKoon, 1988). While this model has been used to provide an account of a wide range of priming effects, it may not actually predict priming in these or other circumstances. In order to predict priming effects, the compound cue model relies on an assumption that all items have the same number of associates. This assumption may be true in only a restricted number of cases. This paper demonstrates that when th...

  4. Purification of the spliced leader ribonucleoprotein particle from Leptomonas collosoma revealed the existence of an Sm protein in trypanosomes. Cloning the SmE homologue.

    Science.gov (United States)

    Goncharov, I; Palfi, Z; Bindereif, A; Michaeli, S

    1999-04-30

    Trans-splicing in trypanosomes involves the addition of a common spliced leader (SL) sequence, which is derived from a small RNA, the SL RNA, to all mRNA precursors. The SL RNA is present in the cell in the form of a ribonucleoprotein, the SL RNP. Using conventional chromatography and affinity selection with 2'-O-methylated RNA oligonucleotides at high ionic strength, five proteins of 70, 16, 13, 12, and 8 kDa were co-selected with the SL RNA from Leptomonas collosoma, representing the SL RNP core particle. Under conditions of lower ionic strength, additional proteins of 28 and 20 kDa were revealed. On the basis of peptide sequences, the gene coding for a protein with a predicted molecular weight of 11.9 kDa was cloned and identified as homologue of the cis-spliceosomal SmE. The protein carries the Sm motifs 1 and 2 characteristic of Sm antigens that bind to all known cis-spliceosomal uridylic acid-rich small nuclear RNAs (U snRNAs), suggesting the existence of Sm proteins in trypanosomes. This finding is of special interest because trypanosome snRNPs are the only snRNPs examined to date that are not recognized by anti-Sm antibodies. Because of the early divergence of trypanosomes from the eukaryotic lineage, the trypanosome SmE protein represents one of the primordial Sm proteins in nature.

  5. Aerodynamic Noise Prediction Using stochastic Turbulence Modeling

    Directory of Open Access Journals (Sweden)

    Arash Ahmadzadegan

    2008-01-01

    Amongst many approaches to determine the sound propagated from turbulent flows, hybrid methods, in which the turbulent noise source field is computed or modeled separately from the far-field calculation, are frequently used. For basic estimation of sound propagation, less computationally intensive methods can be developed using stochastic models of the turbulent fluctuations (the turbulent noise source field). A simple and easy-to-use stochastic model for generating turbulent velocity fluctuations, called the continuous filter white noise (CFWN) model, was used. This method is based on the use of the classical Langevin equation to model the details of the fluctuating field superimposed on averaged computed quantities. The resulting sound field due to the generated unsteady flow field was evaluated using Lighthill's acoustic analogy. A volume integral method was used for evaluating the acoustic analogy. This formulation presents an advantage, as it makes it possible to determine separately the contributions of the different integral terms and also of the integration regions to the radiated acoustic pressure. Our results were validated by comparing the directivity and the overall sound pressure level (OSPL) magnitudes with the available experimental results. Numerical results showed reasonable agreement with the experiments, both in maximum directivity and magnitude of the OSPL. This method presents a very suitable tool for the noise calculation of different engineering problems in early stages of the design process, where rough estimates using cheaper methods are needed for different geometries.
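    The Langevin-type update underlying the CFWN idea can be sketched as an Ornstein-Uhlenbeck process for one fluctuating velocity component superimposed on a mean velocity. The time scale, rms level and mean value below are illustrative assumptions, not parameters taken from the paper.

```python
# Hedged sketch of a Langevin-type update: an Ornstein-Uhlenbeck process
# generates a fluctuating velocity component with a prescribed variance and
# integral time scale, superimposed on a mean velocity. Constants are assumed.
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps = 1e-3, 20000        # time step (s) and number of steps
T_L = 0.05                       # integral (Lagrangian) time scale, s
sigma_u = 1.2                    # rms of the velocity fluctuation, m/s
u_mean = 10.0                    # mean velocity, m/s

u_prime = np.zeros(n_steps)
a = np.exp(-dt / T_L)                      # exponential decay over one step
b = sigma_u * np.sqrt(1 - a**2)            # white-noise forcing amplitude
for k in range(1, n_steps):
    u_prime[k] = a * u_prime[k - 1] + b * rng.normal()

u = u_mean + u_prime
print("sample rms of u':", round(u_prime.std(), 3), "(target", sigma_u, ")")
print("sample mean of u:", round(u.mean(), 3))
```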

  6. A Predictive Model of High Shear Thrombus Growth.

    Science.gov (United States)

    Mehrabadi, Marmar; Casa, Lauren D C; Aidun, Cyrus K; Ku, David N

    2016-08-01

    The ability to predict the timescale of thrombotic occlusion in stenotic vessels may improve patient risk assessment for thrombotic events. In blood contacting devices, thrombosis predictions can lead to improved designs to minimize thrombotic risks. We have developed and validated a model of high shear thrombosis based on empirical correlations between thrombus growth and shear rate. A mathematical model was developed to predict the growth of thrombus based on the hemodynamic shear rate. The model predicts thrombus deposition based on initial geometric and fluid mechanic conditions, which are updated throughout the simulation to reflect the changing lumen dimensions. The model was validated by comparing predictions against actual thrombus growth in six separate in vitro experiments: stenotic glass capillary tubes (diameter = 345 µm) at three shear rates, the PFA-100(®) system, two microfluidic channel dimensions (heights = 300 and 82 µm), and a stenotic aortic graft (diameter = 5.5 mm). Comparison of the predicted occlusion times to experimental results shows excellent agreement. The model is also applied to a clinical angiography image to illustrate the time course of thrombosis in a stenotic carotid artery after plaque cap rupture. Our model can accurately predict thrombotic occlusion time over a wide range of hemodynamic conditions.
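    The abstract describes an update loop in which the wall shear rate is recomputed as the thrombus narrows the lumen and the deposition rate is taken from an empirical shear-rate correlation. The sketch below reproduces only that loop structure; the growth law, flow rate and dimensions are invented placeholders, not the authors' validated correlation.

```python
# Structural sketch only: the growth law g(shear) is a made-up placeholder, NOT
# the paper's empirical correlation; only the update loop reflects the
# described algorithm of recomputing shear as the lumen shrinks.
import numpy as np

Q = 2.0e-7          # volumetric flow, m^3/s (held constant for the sketch)
r = 170e-6          # initial lumen radius, m (stenotic capillary scale)
dt = 1.0            # time step, s

def growth_rate(shear):                  # placeholder: deposition speeds up with
    return 1e-12 * shear                 # shear rate (purely illustrative), m/s

t = 0.0
while r > 20e-6 and t < 3600:            # stop near occlusion or after an hour
    shear = 4 * Q / (np.pi * r**3)       # Poiseuille wall shear rate, 1/s
    r -= growth_rate(shear) * dt         # thrombus layer thickens, lumen shrinks
    t += dt

print(f"lumen closed to {r*1e6:.0f} um after {t/60:.1f} min "
      f"(final shear {4*Q/(np.pi*r**3):.2e} 1/s)")
```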

  7. The application of modeling and prediction with MRA wavelet network

    Institute of Scientific and Technical Information of China (English)

    LU Shu-ping; YANG Xue-jing; ZHAO Xi-ren

    2004-01-01

    As there are many non-linear systems in real engineering, it is very important to do more research on the modeling and prediction of non-linear systems. Based on the multi-resolution analysis (MRA) of wavelet theory, this paper combined wavelet theory with neural networks and established an MRA wavelet network with the scaling function and wavelet function as its neurons. From the analysis in the frequency domain, the results indicated that the MRA wavelet network was better than other wavelet networks in its ability to approximate signals. An essential study was carried out on modeling and prediction with the MRA wavelet network for non-linear systems. Using the lengthwise sway data obtained from a ship model experiment, an offline prediction model was established and applied to the short-time prediction of ship motion. The simulation results indicated that the forecasting model improved the prediction precision effectively, lengthened the forecasting time and gave better prediction results than those of the AR linear model. The research indicates that it is feasible to use the MRA wavelet network in the short-time prediction of ship motion.

  8. A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    DUMITRU CIOBANU

    2013-12-01

    Time series prediction is an open problem and many researchers are trying to find new predictive methods and improvements for the existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Also, support vector machines have solved some of the problems faced by neural networks and have begun to be widely used for time series prediction. The main drawback of those two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. In this paper a comparison is presented between three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines and another based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.
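    The neural-network and support-vector parts of such a comparison can be sketched quickly. The example below uses a logistic-map series with assumed lag features and hyperparameters, and it omits the chaos-theory model; it simply compares an MLP regressor with a support vector regressor on one-step-ahead prediction.

```python
# Illustrative sketch (synthetic chaotic series, assumed lag features): a
# neural-network and a support-vector regressor compared on one-step-ahead
# prediction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Logistic map in its chaotic regime as a stand-in time series.
x = np.empty(1200)
x[0] = 0.3
for t in range(1199):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

lags = 3
X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
y = x[lags:]
split = 1000
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

for name, model in [("MLP", MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                                          random_state=0)),
                    ("SVR", SVR(C=10.0, gamma="scale"))]:
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {mse:.5f}")
```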

  9. MJO prediction skill, predictability, and teleconnection impacts in the Beijing Climate Center Atmospheric General Circulation Model

    Science.gov (United States)

    Wu, Jie; Ren, Hong-Li; Zuo, Jinqing; Zhao, Chongbo; Chen, Lijuan; Li, Qiaoping

    2016-09-01

    This study evaluates performance of Madden-Julian oscillation (MJO) prediction in the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM2.2). By using the real-time multivariate MJO (RMM) indices, it is shown that the MJO prediction skill of BCC_AGCM2.2 extends to about 16-17 days before the bivariate anomaly correlation coefficient drops to 0.5 and the root-mean-square error increases to the level of the climatological prediction. The prediction skill showed a seasonal dependence, with the highest skill occurring in boreal autumn, and a phase dependence with higher skill for predictions initiated from phases 2-4. The results of the MJO predictability analysis showed that the upper bounds of the prediction skill can be extended to 26 days by using a single-member estimate, and to 42 days by using the ensemble-mean estimate, which also exhibited an initial amplitude and phase dependence. The observed relationship between the MJO and the North Atlantic Oscillation was accurately reproduced by BCC_AGCM2.2 for most initial phases of the MJO, accompanied with the Rossby wave trains in the Northern Hemisphere extratropics driven by MJO convection forcing. Overall, BCC_AGCM2.2 displayed a significant ability to predict the MJO and its teleconnections without interacting with the ocean, which provided a useful tool for fully extracting the predictability source of subseasonal prediction.
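    The two verification measures quoted above, the bivariate anomaly correlation and RMSE of the RMM indices, have standard definitions that the sketch below evaluates on synthetic observed and forecast RMM1/RMM2 series; the data are invented, and only the formulas reflect common practice.

```python
# Sketch of the standard RMM-based skill metrics (bivariate anomaly correlation
# and bivariate RMSE) on synthetic observed and forecast RMM1/RMM2 indices.
# The 0.5 correlation level is the usual threshold quoted for useful skill.
import numpy as np

rng = np.random.default_rng(8)
n = 200                                          # verification days at one lead time
a1, a2 = rng.normal(size=n), rng.normal(size=n)  # observed RMM1, RMM2
f1 = a1 + rng.normal(0, 0.6, n)                  # forecast RMM1
f2 = a2 + rng.normal(0, 0.6, n)                  # forecast RMM2

cor = np.sum(a1 * f1 + a2 * f2) / (
    np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(f1**2 + f2**2)))
rmse = np.sqrt(np.mean((a1 - f1) ** 2 + (a2 - f2) ** 2))
print(f"bivariate COR = {cor:.2f}   bivariate RMSE = {rmse:.2f}")
```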

  10. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.

  11. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, a Bayesian neural network, a Gaussian process, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of input variables. We collected a large amount of real single-transaction data from the US stock market via the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  12. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. The prediction of Rayleigh fading channels is studied within the framework of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
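
    As a point of reference for the comparisons described above, the following is a minimal sketch of the conventional least-squares linear predictor (the LP baseline), not the authors' statistical sinusoidal or JMAS predictors; `h` is a sequence of (possibly complex) fading-channel samples, and the function names and parameters are illustrative.

```python
import numpy as np

def fit_linear_predictor(h, order, horizon):
    """Least-squares fit of weights w so that h[n + horizon] ~ sum_k w[k] * h[n - k]."""
    rows, targets = [], []
    for n in range(order - 1, len(h) - horizon):
        rows.append(h[n - order + 1:n + 1][::-1])  # h[n], h[n-1], ..., h[n-order+1]
        targets.append(h[n + horizon])
    A, y = np.array(rows), np.array(targets)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def lp_predict(h, w):
    """Extrapolate one prediction horizon ahead from the most recent samples."""
    order = len(w)
    return w @ np.asarray(h)[-order:][::-1]
```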

  13. Prediction model for spring dust weather frequency in North China

    Institute of Scientific and Technical Information of China (English)

    LANG XianMei

    2008-01-01

    It is of great social and scientific importance, and also very difficult, to make reliable predictions of dust weather frequency (DWF) in North China. In this paper, the correlations between spring DWF at the Beijing and Tianjin observation stations, taken as examples for North China, and seasonally averaged surface air temperature, precipitation, the Arctic Oscillation, the Antarctic Oscillation, the Southern Oscillation, the near-surface meridional wind and the Eurasian westerly index are calculated in order to construct a prediction model for spring DWF in North China from these climatic factors. Two prediction models, i.e. model-I and model-II, are then set up, based respectively on observed climate data and on the 32-year (1970-2001) extra-seasonal hindcast experiment data reproduced by the nine-level Atmospheric General Circulation Model developed at the Institute of Atmospheric Physics (IAP9L-AGCM). The correlation coefficient between the observed and predicted DWF reaches 0.933 for model-I, indicating high prediction skill one season ahead. The corresponding value is as high as 0.948 for model-II, which uses synchronous spring climate data reproduced by the IAP9L-AGCM in place of the observations used in model-I. Model-II not only makes more precise predictions but also extends the lead time of real-time prediction from model-I's one season to half a year. Finally, the real-time predictability of the two models is evaluated. Both models display high prediction skill for the interannual variation and the linear trend of spring DWF in North China, and each has its own advantages. For model-II, the prediction skill is much higher than that of the original approach using the IAP9L-AGCM alone. Therefore, the prediction approach put forward here could be extended to other regions of China where dust weather occurs frequently.

  14. Prediction model for spring dust weather frequency in North China

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    It is of great social and scientific importance, and also very difficult, to make reliable predictions of dust weather frequency (DWF) in North China. In this paper, the correlations between spring DWF at the Beijing and Tianjin observation stations, taken as examples for North China, and seasonally averaged surface air temperature, precipitation, the Arctic Oscillation, the Antarctic Oscillation, the Southern Oscillation, the near-surface meridional wind and the Eurasian westerly index are calculated in order to construct a prediction model for spring DWF in North China from these climatic factors. Two prediction models, i.e. model-I and model-II, are then set up, based respectively on observed climate data and on the 32-year (1970-2001) extra-seasonal hindcast experiment data reproduced by the nine-level Atmospheric General Circulation Model developed at the Institute of Atmospheric Physics (IAP9L-AGCM). The correlation coefficient between the observed and predicted DWF reaches 0.933 for model-I, indicating high prediction skill one season ahead. The corresponding value is as high as 0.948 for model-II, which uses synchronous spring climate data reproduced by the IAP9L-AGCM in place of the observations used in model-I. Model-II not only makes more precise predictions but also extends the lead time of real-time prediction from model-I's one season to half a year. Finally, the real-time predictability of the two models is evaluated. Both models display high prediction skill for the interannual variation and the linear trend of spring DWF in North China, and each has its own advantages. For model-II, the prediction skill is much higher than that of the original approach using the IAP9L-AGCM alone. Therefore, the prediction approach put forward here could be extended to other regions of China where dust weather occurs frequently.

  15. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik;

    2016-01-01

    The development of solutions for the management of urban drainage is of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored...... and controlled has thus become an essential factor for the efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona

  16. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering]

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. Good agreement between the ChB model and the experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  17. An evaluation of mathematical models for predicting skin permeability.

    Science.gov (United States)

    Lian, Guoping; Chen, Longjian; Han, Lujia

    2008-01-01

    A number of mathematical models have been proposed for predicting skin permeability, mostly empirical and very few deterministic. Early empirical models use simple lipophilicity parameters. The recent trend is to use more complicated molecular structure descriptors. There has been much debate on which models best predict skin permeability. This article evaluates various mathematical models using a comprehensive experimental dataset of skin permeability for 124 chemical compounds compiled from various sources. Of the seven models compared, the deterministic model of Mitragotri gives the best prediction. The simple quantitative structure permeability relationship (QSPR) model of Potts and Guy gives the second best prediction. The two models have many features in common. Both assume the lipid matrix as the pathway of transdermal permeation. Both use the octanol-water partition coefficient and molecular size. Even the mathematical formulae are similar. All other empirical QSPR models that use more complicated molecular structure descriptors fail to provide satisfactory prediction. The molecular structure descriptors in the more complicated QSPR models are empirically related to skin permeation. The mechanism by which these descriptors affect transdermal permeation is not clear. Mathematically, it is an ill-defined approach to use many collinearly related parameters rather than fewer independent parameters in multi-linear regression.
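
    For illustration only, the simple QSPR of Potts and Guy mentioned above reduces to a one-line function of the octanol-water partition coefficient and molecular weight; the coefficients below are the values commonly quoted in the literature and should be checked against the original publication before use.

```python
def log_kp_potts_guy(log_kow, mol_weight):
    """Potts-Guy QSPR for skin permeability (coefficients as commonly quoted):
    log10 of kp in cm/s, from octanol-water log Kow and molecular weight in g/mol."""
    return 0.71 * log_kow - 0.0061 * mol_weight - 6.3

print(log_kp_potts_guy(1.5, 150.0))  # hypothetical compound with log Kow = 1.5, MW = 150
```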

  18. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.

  19. Validation of a tuber blight (Phytophthora infestans) prediction model

    Science.gov (United States)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  20. Prediction of bypass transition with differential Reynolds stress models

    NARCIS (Netherlands)

    Westin, K.J.A.; Henkes, R.A.W.M.

    1998-01-01

    Boundary layer transition induced by high levels of free-stream turbulence (FST), so-called bypass transition, cannot be predicted with conventional stability calculations (e.g. the e^N method). The use of turbulence models for transition prediction has shown some success for this type of flow, and

  1. Prediction Models of Free-Field Vibrations from Railway Traffic

    DEFF Research Database (Denmark)

    Malmborg, Jens; Persson, Kent; Persson, Peter

    2017-01-01

    and railways close to where people work and live. Annoyance from traffic-induced vibrations and noise is expected to be a growing issue. To predict the level of vibration and noise in buildings caused by railway and road traffic, calculation models are needed. In the present paper, a simplified prediction...

  2. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in research...

  3. Space Weather: Measurements, Models and Predictions

    Science.gov (United States)

    2014-03-21

    and record high levels of cosmic ray flux. There were broad-ranging terrestrial responses to this inactivity of the Sun. BC was involved in the... techniques for converting from one coordinate system (e.g., the invariant coordinate system used for the model) to another (e.g., the latitude-radius

  4. Monotone models for prediction in data mining

    NARCIS (Netherlands)

    Velikova, M.V.

    2006-01-01

    This dissertation studies the incorporation of monotonicity constraints as a type of domain knowledge into a data mining process. Monotonicity constraints are enforced at two stages¿data preparation and data modeling. The main contributions of the research are a novel procedure to test the degree of

  5. Predicting Magazine Audiences with a Loglinear Model.

    Science.gov (United States)

    1987-07-01

    important use of e.d. estimates is in media selection (Aaker 1975; Lee 1962, 1963; Little and Lodish 1969). All advertising campaigns have a budget. It... References: Aaker, D.A. (1975), "ADMOD: An Advertising Decision Model," Journal of Marketing Research, February, 37-45

  6. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

    Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g. sentence length, structural complexity etc.) and other factors (e.g individual's reading style, age etc.). Ideally, a reading model should be similar to a language model where the model i...

  7. Better predictions when models are wrong or underspecified

    NARCIS (Netherlands)

    Ommen, Matthijs van

    2015-01-01

    Many statistical methods rely on models of reality in order to learn from data and to make predictions about future data. By necessity, these models usually do not match reality exactly, but are either wrong (none of the hypotheses in the model provides an accurate description of reality) or undersp

  8. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  9. A Multistep Chaotic Model for Municipal Solid Waste Generation Prediction.

    Science.gov (United States)

    Song, Jingwei; He, Jiaying

    2014-08-01

    In this study, a univariate local chaotic model is proposed to make one-step and multistep forecasts for daily municipal solid waste (MSW) generation in Seattle, Washington. For MSW generation prediction with long history data, this forecasting model was created based on a nonlinear dynamic method called phase-space reconstruction. Compared with other nonlinear predictive models, such as the artificial neural network (ANN) and the partial least square-support vector machine (PLS-SVM), and a commonly used linear seasonal autoregressive integrated moving average (sARIMA) model, this method demonstrated better prediction accuracy from 1-step-ahead to 14-step-ahead prediction, as assessed by both the mean absolute percentage error (MAPE) and the root mean square error (RMSE). Maximum error, MAPE, and RMSE show that the chaotic model was more reliable than the other three models. As chaotic models do not involve randomness, their performance does not vary between trials, whereas ANN and PLS-SVM produce different forecasts in each trial. Moreover, the chaotic model was less time-consuming than the ANN and PLS-SVM models.
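
    The phase-space reconstruction approach described above can be sketched as a delay embedding followed by a nearest-neighbour (local mean) forecast, iterated for multistep prediction; the embedding dimension, delay and neighbour count below are placeholders, not the values used in the study, and NumPy is assumed available.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct phase-space states from a scalar series by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_predict(series, dim=5, tau=1, k=10, steps=7):
    """Iterated one-step local-mean prediction in the reconstructed phase space."""
    x = list(series)
    for _ in range(steps):
        X = delay_embed(np.array(x[:-1]), dim, tau)     # historical states
        y = np.array(x[(dim - 1) * tau + 1:])           # their one-step successors
        query = np.array(x[-(dim - 1) * tau - 1::tau])  # current state
        idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
        x.append(float(y[idx].mean()))                  # average of the neighbours' futures
    return x[-steps:]
```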

  10. Using Pareto points for model identification in predictive toxicology.

    Science.gov (United States)

    Palczewska, Anna; Neagu, Daniel; Ridley, Mick

    2013-03-22

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration, but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology.
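
    A minimal sketch of the Pareto idea discussed above: given a collection of models, each scored on several criteria for a new compound, keep only the non-dominated ones as candidates. The two criteria and the scores in the example are hypothetical and are not the features used by the authors.

```python
def pareto_front(models):
    """Return the models not dominated on any criterion (larger scores are better)."""
    front = []
    for name, scores in models.items():
        dominated = any(
            all(o >= s for o, s in zip(other, scores)) and
            any(o > s for o, s in zip(other, scores))
            for other_name, other in models.items() if other_name != name
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical scores: (cross-validated accuracy, similarity of the query compound
# to the model's training set).
models = {"M1": (0.81, 0.40), "M2": (0.77, 0.90), "M3": (0.75, 0.35)}
print(pareto_front(models))  # -> ['M1', 'M2']; M3 is dominated by M1
```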

  11. A Composite Model Predictive Control Strategy for Furnaces

    Institute of Scientific and Technical Information of China (English)

    Hao Zang; Hongguang Li; Jingwen Huang; Jia Wang

    2014-01-01

    Tube furnaces are essential and primary energy-intensive facilities in petrochemical plants. Operational optimization of furnaces could not only help to improve product quality but also help to reduce energy consumption and exhaust emissions. Inspired by this idea, this paper presents a composite model predictive control (CMPC) strategy which, taking advantage of distributed model predictive control architectures, combines tracking nonlinear model predictive control and economic nonlinear model predictive control metrics to keep the process running smoothly and to optimize operational conditions. The controllers, connected by two kinds of communication networks, are easy to organize and maintain, and robust to process interferences. A fast solution algorithm combining interior-point solvers and Newton's method is accommodated in the CMPC realization, with reasonable CPU computing time and suitability for online applications. Simulation of an industrial case demonstrates that the proposed approach can ensure stable operation of furnaces, improve heat efficiency, and reduce emissions effectively.

  12. Submission Form for Peer-Reviewed Cancer Risk Prediction Models

    Science.gov (United States)

    If you have information about a peer-reviewed cancer risk prediction model that you would like to be considered for inclusion on this list, submit as much information as possible through the form on this page.

  13. ACCIDENT PREDICTION MODELS FOR UNSIGNALISED URBAN JUNCTIONS IN GHANA

    Directory of Open Access Journals (Sweden)

    Mohammed SALIFU, MSc., PhD, MIHT, MGhIE

    2004-01-01

    The accident prediction models developed have a potentially wide area of application and their systematic use is likely to improve considerably the quality and delivery of the engineering aspects of accident mitigation and prevention in Ghana.

  14. Using a Prediction Model to Manage Cyber Security Threats

    National Research Council Canada - National Science Library

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    .... The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security...

  15. Development of a multi-year climate prediction model | Alexander ...

    African Journals Online (AJOL)

    Development of a multi-year climate prediction model. ... The available water resources in Southern Africa are rapidly approaching the limits of economic exploitation. ... that could be attributed to climate change arising from human activities.

  16. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking compared to rating data, but overall, it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.

  17. Haskell financial data modeling and predictive analytics

    CERN Document Server

    Ryzhov, Pavel

    2013-01-01

    This book is a hands-on guide that teaches readers how to use Haskell's tools and libraries to analyze data from real-world sources in an easy-to-understand manner. This book is great for developers who are new to financial data modeling using Haskell. A basic knowledge of functional programming is not required but will be useful. An interest in high frequency finance is essential.

  18. Mesoscale Wind Predictions for Wave Model Evaluation

    Science.gov (United States)

    2016-06-07

    N0001400WX20041(B) http://www.nrlmry.navy.mil LONG TERM GOALS: The long-term goal is to demonstrate the significance and importance of high... ocean waves by an appropriate wave model. OBJECTIVES: The main objectives of this project are to: 1. Build the infrastructure to generate the... temperature for all COAMPS grids at the resolution of each of these grids. These analyses are important for the proper specification of the lower

  19. Modeling Seizure Self-Prediction: An E-Diary Study

    Science.gov (United States)

    Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.

    2013-01-01

    Purpose A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods Subjects 18 or older with LRE and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, “How likely are you to experience a seizure [time frame]?” Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log-normal random effects to adjust for multiple observations. Key Findings Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6 hrs was as high as 9.31 (1.92, 45.23) for “almost certain”. Prediction was most robust within 6 hrs of diary entry, and remained significant up to 12 hrs. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68, 4.81), favorable change in mood (0.82; 0.67, 0.99) and number of premonitory symptoms (1.11; 1.00, 1.24) were significant. Significance Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898

  20. Sierra/SM theory manual.

    Energy Technology Data Exchange (ETDEWEB)

    Crane, Nathan Karl

    2013-07-01

    Presented in this document are the theoretical aspects of capabilities contained in the Sierra/SM code. This manuscript serves as an ideal starting point for understanding the theoretical foundations of the code. For a comprehensive study of these capabilities, the reader is encouraged to explore the many references to scientific articles and textbooks contained in this manual. It is important to point out that some capabilities are still in development and may not be presented in this document. Further updates to this manuscript will be made as these capabilities come closer to production level.

  1. Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.

    Science.gov (United States)

    Ng, Kenney; Sun, Jimeng; Hu, Jianying; Wang, Fei

    2015-01-01

    Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000-patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles shows groups of patients with similar risk factors, differences in the top risk factors for different groups of patients, and differences between the individual and global risk factors.
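
    A compressed sketch of the personalized-model idea described above, assuming scikit-learn and NumPy are available: find the k patients most similar to the new patient and fit a local logistic regression, whose coefficients serve as the personalized risk-factor profile. Plain Euclidean distance stands in for the paper's learned (LSML) metric, k is arbitrary, and the neighbourhood is assumed to contain both outcome classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def personalized_predict(X_train, y_train, x_new, k=200):
    """Train a logistic model on the k patients most similar to x_new and score x_new."""
    dist = np.linalg.norm(X_train - x_new, axis=1)     # stand-in for the learned metric
    idx = np.argsort(dist)[:k]                         # the personalized training cohort
    model = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    risk = model.predict_proba(x_new.reshape(1, -1))[0, 1]
    return risk, model.coef_.ravel()                   # risk score and risk-factor profile
```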

  2. Model predictive control of P-time event graphs

    Science.gov (United States)

    Hamri, H.; Kara, R.; Amari, S.

    2016-12-01

    This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by using the dater evolution model written in the standard algebra. Then, for the control law, we use finite-horizon model predictive control. For the closed-loop control, we use infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains which ensure the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a convex linear program subject to a linear matrix inequality constraint. Finally, the proposed methodology is applied to a transportation system.

  3. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)]

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  4. Using connectome-based predictive modeling to predict individual behavior from brain connectivity.

    Science.gov (United States)

    Shen, Xilin; Finn, Emily S; Scheinost, Dustin; Rosenberg, Monica D; Chun, Marvin M; Papademetris, Xenophon; Constable, R Todd

    2017-03-01

    Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale data sets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: (i) feature selection, (ii) feature summarization, (iii) model building, and (iv) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a considerable amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than many of the existing approaches in brain-behavior prediction. As CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement these protocols. Depending on the volume of data to be processed, the protocol can take 10-100 min for model building, 1-48 h for permutation testing, and 10-20 min for visualization of results.
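
    A condensed sketch of protocol steps (i)-(iii) for a single training set, assuming NumPy and SciPy are available and combining positively and negatively correlated edges into one network-strength score; the threshold and variable names are illustrative, and the full protocol adds cross-validation and permutation testing.

```python
import numpy as np
from scipy.stats import pearsonr

def cpm_train(conn, behav, p_thresh=0.01):
    """conn: (n_subjects, n_edges) connectivity matrix; behav: (n_subjects,) scores."""
    r = np.zeros(conn.shape[1]); p = np.ones(conn.shape[1])
    for e in range(conn.shape[1]):                        # (i) edge-wise feature selection
        r[e], p[e] = pearsonr(conn[:, e], behav)
    pos, neg = (p < p_thresh) & (r > 0), (p < p_thresh) & (r < 0)
    strength = conn[:, pos].sum(1) - conn[:, neg].sum(1)  # (ii) summarize selected edges
    slope, intercept = np.polyfit(strength, behav, 1)     # (iii) single-predictor linear model
    return pos, neg, slope, intercept

def cpm_predict(conn, pos, neg, slope, intercept):
    strength = conn[:, pos].sum(1) - conn[:, neg].sum(1)
    return slope * strength + intercept                   # predicted behavior for new subjects
```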

  5. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  6. Webinar of paper 2013, Which method predicts recidivism best? A comparison of statistical, machine learning and data mining predictive models

    NARCIS (Netherlands)

    Tollenaar, N.; Van der Heijden, P.G.M.

    2013-01-01

    Using conviction history information on the criminal population, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining

  7. Webinar of paper 2013, Which method predicts recidivism best? A comparison of statistical, machine learning and data mining predictive models

    NARCIS (Netherlands)

    Tollenaar, N.; Van der Heijden, P.G.M.

    2013-01-01

    Using conviction history information on the criminal population, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining

  8. Catalytic cracking models developed for predictive control purposes

    Directory of Open Access Journals (Sweden)

    Dag Ljungqvist

    1993-04-01

    Full Text Available The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.

  9. Towards the ultimate SM fit to close in on Higgs physics

    Energy Technology Data Exchange (ETDEWEB)

    Pomarol, Alex [Departament de Física, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)]; Riva, Francesco [Institut de Théorie des Phénomènes Physiques, EPFL, 1015 Lausanne (Switzerland)]

    2014-01-27

    With the discovery of the Higgs at the LHC, experiments have finally addressed all aspects of the Standard Model (SM). At this stage, it is important to understand which windows for beyond-the-SM (BSM) physics are still open, and which are instead tightly closed. We address this question by parametrizing BSM effects with dimension-six operators and performing a global fit to the SM. We separate operators into different groups constrained at different levels, and provide independent bounds on their Wilson coefficients taking into account only the relevant experiments. Our analysis allows us to assert in a model-independent way where BSM effects can appear in Higgs physics. In particular, we show that deviations from the SM in the differential distributions of h→Vf̄f are related to other observables, such as triple gauge-boson couplings, and are therefore already constrained by present data. On the contrary, BR(h→Zγ) can still hide large deviations from the SM.

  10. Technical note: A linear model for predicting δ13 Cprotein.

    Science.gov (United States)

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model (δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7), possessing a high R-value of 0.93 (r² = 0.86, P < 0.01), and experimentally generated error terms of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as is routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
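
    The reported regression translates directly into code; the collagen and apatite-collagen spacing values in the example call are hypothetical and serve only to show the usage.

```python
def d13c_protein(d13c_collagen, spacing_ap_co):
    """Predicted delta13C of dietary protein from the regression reported above;
    the quoted uncertainty on an individual prediction is about +/-1.9."""
    return 0.78 * d13c_collagen - 0.58 * spacing_ap_co - 4.7

print(d13c_protein(-19.0, 6.5))  # hypothetical inputs -> about -23.3
```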

  11. Traffic Prediction Scheme based on Chaotic Models in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Xiangrong Feng

    2013-09-01

    Full Text Available Based on the local support vector algorithm for chaotic time series analysis, the Hannan-Quinn information criterion and SAX symbolization are introduced and a novel prediction algorithm, LSDHQ, is proposed; it is successfully applied to the prediction of wireless network traffic. For the problem of correctly predicting short-term traffic from a small data set, the weakness of existing algorithms during model construction is analyzed through a comparison with the LDK prediction algorithm. It is verified that the Hannan-Quinn information criterion can be used to calculate the number of neighbor points, replacing the previous empirical method, so that a more accurate prediction model is obtained. Finally, actual traffic data are used to confirm the accuracy of the proposed LSDHQ algorithm. Our experiments show that it also adapts better than the LDK algorithm.

  12. Toward a predictive model for elastomer seals

    Science.gov (United States)

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash

    Nitrile butadiene rubber (NBR) and hydrogenated-NBR (HNBR) are widely used elastomers, especially as seals in oil and gas applications. During exposure to well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. We use computer simulations to investigate this problem at two different length and time-scales. First, we study the solubility of gases in the elastomer using a chemically-inspired description of HNBR based on the OPLS all-atom force-field. Starting with a model of NBR, C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends for the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, we study mechanical behaviour using a coarse-grained model that overcomes some of the length and time-scale limitations of an all-atom approach. Nanoparticle fillers added to the elastomer matrix to enhance mechanical response are also included. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.

  13. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  14. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  15. Active diagnosis of hybrid systems - A model predictive approach

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh;

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both normal and faulty model of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty...... outputs constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated until the fault is detected by a passive diagnoser. It is demonstrated how the generated excitation signal...

  16. Aero-acoustic noise of wind turbines. Noise prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Maribo Pedersen, B. [ed.]

    1997-12-31

    Semi-empirical and CAA (Computational AeroAcoustics) noise prediction techniques are the subject of this expert meeting. The meeting presents and discusses models and methods. The meeting may provide answers to the following questions: Which noise sources are the most important? How are the sources best modeled? What needs to be done to make better predictions? Does it boil down to correct prediction of the unsteady aerodynamics around the rotor? Or is the difficult part to convert the aerodynamics into acoustics? (LN)

  17. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard;

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model...... connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...

  18. Modelling and prediction of non-stationary optical turbulence behaviour

    Science.gov (United States)

    Doelman, Niek; Osborn, James

    2016-07-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument installed at the Isaac Newton Telescope at La Palma. Based on an estimate of the power spectral density function, a low order stochastic model to capture the temporal variability of r0 is proposed. The impact of this type of stochastic model on the prediction of the coherence length behaviour is shown.
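
    One simple realisation of such a low-order stochastic model is a first-order autoregressive fit to log r0, sketched below; the model order and the use of the logarithm are assumptions made for illustration, not the specification derived from the Stereo-SCIDAR spectra.

```python
import numpy as np

def fit_ar1_log_r0(r0):
    """Fit x[t] = c + a * x[t-1] + e[t] to x = log(r0) by least squares."""
    x = np.log(np.asarray(r0))
    a, c = np.polyfit(x[:-1], x[1:], 1)
    resid = x[1:] - (a * x[:-1] + c)
    return a, c, resid.std()          # AR coefficient, offset, innovation scale

def forecast_r0(r0_last, a, c, steps):
    """Deterministic (conditional-mean) forecast of r0 several samples ahead."""
    x = np.log(r0_last)
    preds = []
    for _ in range(steps):
        x = a * x + c
        preds.append(float(np.exp(x)))
    return preds
```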

  19. Research on Drag Torque Prediction Model for the Wet Clutches

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Considering the surface tension effect and the centrifugal effect, a mathematical model based on the Reynolds equation for predicting the drag torque of disengaged wet clutches is presented. The model indicates that the equivalent radius is a function of clutch speed and flow rate. The drag torque reaches its peak at a critical speed. Above this speed, the drag torque drops due to the shrinking of the oil film. The model also shows the effects of viscosity and flow rate on the drag torque. Experimental results indicate that the model is reasonable and performs well in predicting the drag torque peak.

  20. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark)]; Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)]

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper, prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
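
    Of the adaptive estimators mentioned, recursive least squares with exponential forgetting is the simplest to sketch; the regressor suggested in the comment and the initialisation constants are illustrative choices, not those of the cited study.

```python
import numpy as np

class RecursiveLeastSquares:
    """Exponentially forgetting RLS for adaptive MOS correction of NWP-based forecasts."""
    def __init__(self, n_params, lam=0.999):
        self.theta = np.zeros(n_params)      # regression coefficients
        self.P = np.eye(n_params) * 1e3      # large initial covariance (weak prior)
        self.lam = lam                       # forgetting factor

    def update(self, x, y):
        """x: regressors, e.g. [1, predicted_speed, predicted_speed**2]; y: observed power."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)                        # gain vector
        self.theta = self.theta + k * (y - x @ self.theta)  # correct by prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.theta
```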

  1. Development and application of chronic disease risk prediction models.

    Science.gov (United States)

    Oh, Sun Min; Stefani, Katherine M; Kim, Hyeon Chang

    2014-07-01

    Currently, non-communicable chronic diseases are a major cause of morbidity and mortality worldwide, and a large proportion of chronic diseases are preventable through risk factor management. However, the prevention efficacy at the individual level is not yet satisfactory. Chronic disease prediction models have been developed to assist physicians and individuals in clinical decision-making. A chronic disease prediction model assesses multiple risk factors together and estimates an absolute disease risk for the individual. Accurate prediction of an individual's future risk for a certain disease enables the comparison of benefits and risks of treatment, the costs of alternative prevention strategies, and selection of the most efficient strategy for the individual. A large number of chronic disease prediction models, especially targeting cardiovascular diseases and cancers, have been suggested, and some of them have been adopted in the clinical practice guidelines and recommendations of many countries. Although few chronic disease prediction tools have been suggested in the Korean population, their clinical utility is not as high as expected. This article reviews methodologies that are commonly used for developing and evaluating a chronic disease prediction model and discusses the current status of chronic disease prediction in Korea.

  2. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)]; Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)]

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca family or a von Mises family of solutions, except for the Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions, including Barlow, maximum shear stress, Turner, and the ASME boiler code, provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
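
    For orientation, the thin-wall plasticity solutions discussed above are often quoted in the common form P = (4t/D) c^(n+1) sigma_uts, where the factor c encodes the yield criterion; the sketch below uses that form with hypothetical pipe data, and both the formula and the diameter convention should be checked against the original Zhu-Leis papers before any engineering use.

```python
import math

def burst_pressure(d, t, sigma_uts, n, criterion="zl"):
    """Thin-wall burst pressure P = (4*t/d) * c**(n+1) * sigma_uts, with the yield-criterion
    factor c = 1/2 (Tresca), 1/sqrt(3) (von Mises), or their average (Zhu-Leis form as
    commonly quoted). n is the strain-hardening exponent; d and t in consistent units."""
    c = {"tresca": 0.5,
         "mises": 1 / math.sqrt(3),
         "zl": (0.5 + 1 / math.sqrt(3)) / 2}[criterion]
    return (4 * t / d) * c ** (n + 1) * sigma_uts

# Hypothetical pipe: d = 508 mm, t = 9.5 mm, sigma_uts = 565 MPa, n = 0.10
print(burst_pressure(508.0, 9.5, 565.0, 0.10))  # burst pressure in MPa
```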

  3. Outcome Prediction in Mathematical Models of Immune Response to Infection.

    Directory of Open Access Journals (Sweden)

    Manuel Mai

    Full Text Available Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier, increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.

  4. Development of Interpretable Predictive Models for BPH and Prostate Cancer

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA

    2015-01-01

    BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician to distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without any of these pathologies. Our decision tree and logistic regression models achieve a higher AUC than the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced. PMID:25780348

  5. Model Predictive Control of Wind Turbines

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian

    the need for maintenance of the wind turbine. Either way, better total-cost-of-ownership for wind turbine operators can be achieved by improved control of the wind turbines. Wind turbine control can be improved in two ways, by improving the model on which the controller bases its design or by improving......Wind turbines play a major role in the transformation from a fossil fuel based energy production to a more sustainable production of energy. Total-cost-of-ownership is an important parameter when investors decide in which energy technology they should place their capital. Modern wind turbines...... are controlled by pitching the blades and by controlling the electro-magnetic torque of the generator, thus slowing the rotation of the blades. Improved control of wind turbines, leading to reduced fatigue loads, can be exploited by using less materials in the construction of the wind turbine or by reducing...

  6. Numerical modeling capabilities to predict repository performance

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within the industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  7. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
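
    The claim that a noise-free sum of N sinusoids is perfectly predicted by an all-pole LP model of order 2N can be checked numerically. The sketch below is illustrative only; the sample rate, frequencies and amplitudes are arbitrary choices, and the covariance-method normal equations are solved by least squares.

```python
# Minimal check of the claim that a noise-free sum of N sinusoids is perfectly
# predicted by an all-pole (LP) model of order 2N. Frequencies and amplitudes are
# arbitrary illustrative choices.
import numpy as np

fs, N = 8000, 3                      # sample rate, number of sinusoids
t = np.arange(512) / fs
freqs, amps = [440.0, 1230.0, 2750.0], [1.0, 0.6, 0.3]
x = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(amps, freqs))

p = 2 * N                            # LP order = twice the number of sinusoids
# Covariance-method normal equations: predict x[n] from its p previous samples.
rows = np.array([x[n - p:n][::-1] for n in range(p, len(x))])
a, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)

residual = x[p:] - rows @ a
print("max |prediction error| =", np.max(np.abs(residual)))  # close to machine precision
```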

  8. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    van Waterschoot Toon

    2008-01-01

    Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.

  9. Hidden Markov models for prediction of protein features

    DEFF Research Database (Denmark)

    Bystroff, Christopher; Krogh, Anders

    2008-01-01

    Hidden Markov Models (HMMs) are an extremely versatile statistical representation that can be used to model any set of one-dimensional discrete symbol data. HMMs can model protein sequences in many ways, depending on what features of the protein are represented by the Markov states. For protein...... structure prediction, states have been chosen to represent either homologous sequence positions, local or secondary structure types, or transmembrane locality. The resulting models can be used to predict common ancestry, secondary or local structure, or membrane topology by applying one of the two standard...... algorithms for comparing a sequence to a model. In this chapter, we review those algorithms and discuss how HMMs have been constructed and refined for the purpose of protein structure prediction....
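
    One of the two standard sequence-versus-model algorithms mentioned in this record is the forward algorithm. The toy sketch below scores a symbol sequence against a very small HMM; the states (helix/strand/coil) and the two-letter hydrophobic/polar alphabet are illustrative assumptions, not any published protein HMM.

```python
# Toy forward algorithm: scores a symbol sequence against a small HMM, one of the
# two standard sequence-vs-model comparisons mentioned above (the other being
# Viterbi). States (H/E/C for helix/strand/coil) and the hydrophobic/polar
# alphabet are illustrative assumptions, not a published protein HMM.
import numpy as np

states = ["H", "E", "C"]
symbols = {"h": 0, "p": 1}           # hydrophobic / polar
start = np.array([0.4, 0.2, 0.4])
trans = np.array([[0.80, 0.05, 0.15],
                  [0.05, 0.75, 0.20],
                  [0.20, 0.20, 0.60]])
emit = np.array([[0.6, 0.4],         # P(symbol | state)
                 [0.7, 0.3],
                 [0.3, 0.7]])

def forward_log_likelihood(seq):
    """Return log P(seq | model) via the scaled forward recursion."""
    obs = [symbols[c] for c in seq]
    alpha = start * emit[:, obs[0]]
    log_lik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        scale = alpha.sum()          # rescale to avoid underflow on long sequences
        log_lik += np.log(scale)
        alpha /= scale
    return log_lik + np.log(alpha.sum())

print(forward_log_likelihood("hhhpphhppphh"))
```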

  10. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical and thermodynamic property in the form of raw data or estimated values for pure compounds and mixtures are important pre-requisites for performing tasks such as, process design, simulation and optimization; computer aided molecular/mixture (product) design; and, product-process analysis...... connectivity approach. The development of these models requires measured property data and based on them, the regression of model parameters is performed. Although this class of models is empirical by nature, they do allow extrapolation from the regressed model parameters to predict properties of chemicals...... not included in the measured data-set. Therefore, they are also considered as predictive models. The paper will highlight different issues/challenges related to the role of the databases and the mathematical and thermodynamic consistency of the measured/estimated data, the predictive nature of the developed...

  11. Modeling, Prediction, and Control of Heating Temperature for Tube Billet

    Directory of Open Access Journals (Sweden)

    Yachun Mao

    2015-01-01

    Full Text Available Annular furnaces have multivariate, nonlinear, large time lag, and cross-coupling characteristics. The prediction and control of the exit temperature of a tube billet are important but difficult. We establish a prediction model for the final temperature of a tube billet through the OS-ELM-DRPLS method. We address the complex production characteristics, integrate the advantages of the PLS and ELM algorithms in establishing linear and nonlinear models, and consider model updates and data lag. Based on the proposed model, we design a prediction control algorithm for tube billet temperature. The algorithm is validated using practical production data from Baosteel Co., Ltd. Results show that the model achieves the precision required in industrial applications. The temperature of the tube billet can be controlled within the required temperature range through a compensation control method.

  12. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

    . The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. With a large ensemble...... out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model....... The study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches. Secondly, the study will develop a method for selecting the models, which give a good prediction of artificially drained areas, when used in conjunction...

  13. Experimental study on prediction model for maximum rebound ratio

    Institute of Scientific and Technical Information of China (English)

    LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong

    2007-01-01

    The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie exactly below the estimated possible maximum values, as expected, while the fourth available field-recorded PPV lies close to and slightly higher than the estimated maximum possible PPV. The comparison results show that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to the application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.

  14. Groundwater Level Prediction using M5 Model Trees

    Science.gov (United States)

    Nalarajan, Nitha Ayinippully; Mohandas, C.

    2015-01-01

    Groundwater is an important resource, readily available and having high economic value and social benefit. Recently, it has been considered a dependable source of uncontaminated water. During the past two decades, increased rates of extraction and other unsustainable human activities have resulted in a groundwater crisis, both qualitative and quantitative. Under these circumstances, the availability of predicted groundwater levels increases the value of this resource as an aid in the planning of groundwater use. For this purpose, data-driven prediction models are now widely used. The M5 model tree (MT) is a popular soft computing method and a promising approach for numeric prediction that produces understandable models. The present study discusses groundwater level prediction using MT, employing only the historical groundwater levels from a groundwater monitoring well. The results showed that MT can be used successfully for forecasting groundwater levels.
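
    A rough sketch of this kind of lag-based numeric prediction is given below. scikit-learn has no M5 model tree, so a plain regression tree is used here as a stand-in (M5 additionally fits linear models in the leaves); the groundwater series is synthetic.

```python
# Rough sketch of numeric prediction from lagged groundwater levels. scikit-learn
# has no M5 model tree, so a plain regression tree is used as a stand-in
# (M5 additionally fits linear models in the leaves). The series is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
t = np.arange(400)
levels = 10 + 2 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 0.2, t.size)  # weekly data

lags = 4                                             # predict from 4 previous observations
X = np.column_stack([levels[i:i - lags] for i in range(lags)])
y = levels[lags:]

split = 300                                          # simple chronological split
model = DecisionTreeRegressor(min_samples_leaf=10, random_state=0)
model.fit(X[:split], y[:split])
print("test MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```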

  15. A Fusion Model for CPU Load Prediction in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dayu Xu

    2013-11-01

    Full Text Available Load prediction plays a key role in cost-optimal resource allocation and datacenter energy saving. In this paper, we use real-world traces from a Cloud platform and propose a fusion model to forecast future CPU loads. First, long CPU load time series are divided into short sequences of the same length from the historical data, on the basis of the cloud control cycle. Then we use a kernel fuzzy c-means clustering algorithm to put the subsequences into different clusters. For each cluster, with the current load sequence, a genetic-algorithm-optimized wavelet Elman neural network prediction model is used to predict the CPU load in the next time interval. Finally, we obtain the optimal cloud computing CPU load prediction from the cluster, and its corresponding predictor, with the minimum forecasting error. Experimental results show that our algorithm performs better than other models reported in previous works.
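
    The cluster-then-predict skeleton of this pipeline is sketched below with deliberately simpler stand-ins: KMeans instead of kernel fuzzy c-means, and one ridge regressor per cluster instead of a GA-optimized wavelet Elman network. The load trace is synthetic.

```python
# Skeleton of the cluster-then-predict idea described above, with simpler stand-ins:
# KMeans instead of kernel fuzzy c-means and a ridge regressor per cluster instead
# of a GA-optimized wavelet Elman network. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
load = np.clip(0.5 + 0.3 * np.sin(np.arange(2000) / 20) + rng.normal(0, 0.05, 2000), 0, 1)

w = 12                                              # control-cycle window length
windows = np.lib.stride_tricks.sliding_window_view(load, w + 1)
X, y = windows[:, :w], windows[:, w]                # history -> next CPU load

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
models = {c: Ridge().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(4)}

current = load[-w:].reshape(1, -1)                  # most recent subsequence
c = km.predict(current)[0]
print("cluster", c, "-> next-interval load forecast:", models[c].predict(current)[0])
```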

  16. Modelling proteins' hidden conformations to predict antibiotic resistance

    Science.gov (United States)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.

  17. A Hybrid Neural Network Prediction Model of Air Ticket Sales

    Directory of Open Access Journals (Sweden)

    Han-Chen Huang

    2013-11-01

    Full Text Available Air ticket sales revenue is an important source of revenue for travel agencies, and if future air ticket sales revenue can be accurately forecast, travel agencies will be able to procure a sufficient number of cost-effective tickets in advance. Therefore, this study applied Artificial Neural Networks (ANN) and Genetic Algorithms (GA) to establish a prediction model for travel agency air ticket sales revenue. By verification against empirical data, this study showed that the established prediction model has accurate predictive power, with a MAPE (mean absolute percentage error) of only 9.11%. The established model can provide business operators with reliable and efficient prediction data as a reference for operational decisions.

  18. E-commerce business model mining and prediction

    Institute of Scientific and Technical Information of China (English)

    Zhou-zhou HE; Zhong-fei ZHANG; Chun-ming CHEN; Zheng-gang WANG

    2015-01-01

    We study the problem of business model mining and prediction in the e-commerce context. Unlike most existing approaches where this is typically formulated as a regression problem or a time-series prediction problem, we take a different formulation to this problem by noting that these existing approaches fail to consider the potential relationships both among the consumers (consumer influence) and among the shops (competitions or collaborations). Taking this observation into consideration, we propose a new method for e-commerce business model mining and prediction, called EBMM, which combines regression with community analysis. The challenge is that the links in the network are typically not directly observed, which is addressed by applying information diffusion theory through the consumer-shop network. Extensive evaluations using Alibaba Group e-commerce data demonstrate the promise and superiority of EBMM to the state-of-the-art methods in terms of business model mining and prediction.

  19. Model for Predicting Passage of Invasive Fish Species Through Culverts

    Science.gov (United States)

    Neary, V.

    2010-12-01

    Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
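
    The record describes the exhaustion-threshold (ET) model only in words, so the sketch below is one plausible uniform-flow reading of it: ground speed is swim speed minus current speed, endurance comes from a fatigue curve, and passage succeeds if the fish covers the culvert length before exhaustion. The fatigue-curve coefficients and fish parameters are made-up placeholders, not values from the paper.

```python
# Illustrative kinematics behind an exhaustion-threshold passage check. The
# fatigue-curve coefficients and fish parameters are hypothetical placeholders.
import numpy as np

def endurance_s(swim_speed):
    """Hypothetical fatigue curve: endurance (s) falls as a power of swim speed (m/s)."""
    return 3600.0 * swim_speed ** -3.0

def passes(culvert_length, current_velocity, swim_speed):
    ground_speed = swim_speed - current_velocity          # uniform-flow kinematics
    if ground_speed <= 0:
        return False
    max_ascent = ground_speed * endurance_s(swim_speed)   # distance covered before exhaustion
    return max_ascent >= culvert_length

# Threshold mean current velocity for a 30 m culvert and a 1.2 m/s swimmer:
length, swim = 30.0, 1.2
velocities = np.linspace(0.0, swim, 1000)
mask = np.array([passes(length, v, swim) for v in velocities])
print("passage predicted up to a mean current velocity of ~%.2f m/s" % velocities[mask].max())
```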

  20. Some Remarks on CFD Drag Prediction of an Aircraft Model

    Science.gov (United States)

    Peng, S. H.; Eliasson, P.

    Several issues observed in CFD drag predictions for the DLR-F6 aircraft model with various configurations are addressed. The emphasis is placed on the effect of turbulence modeling and grid resolution. With several different turbulence models, the predicted flow features around the aircraft are highlighted. It is shown that the prediction of the separation bubble in the wing-body junction is closely related to the inherent modeling mechanism of turbulence production. For the configuration with an additional fairing, which effectively removes the separation bubble, it is illustrated that the drag prediction may be altered even for an attached turbulent boundary layer when different turbulence models are used. Grid sensitivity studies are performed with two groups of successively refined grids. It is observed that, in contrast to the lift, the drag prediction is rather sensitive to the grid refinement, as well as to the artificial diffusion added for solving the turbulence transport equation. It is demonstrated that an effective grid refinement should drive the predicted drag components to converge monotonically and linearly to a finite value.

  1. Signature prediction for model-based automatic target recognition

    Science.gov (United States)

    Keydel, Eric R.; Lee, Shung W.

    1996-06-01

    The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state of the art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions of the module design are described.

  2. Multi-model ensemble hydrologic prediction and uncertainties analysis

    Directory of Open Access Journals (Sweden)

    S. Jiang

    2014-09-01

    Full Text Available Modelling uncertainties (i.e. input errors, parameter uncertainties and model structural errors) inevitably exist in hydrological prediction. A lot of recent attention has focused on these, of which input error modelling, parameter optimization and multi-model ensemble strategies are the three most popular methods to demonstrate the impacts of modelling uncertainties. In this paper the Xinanjiang model, the Hybrid rainfall–runoff model and the HYMOD model were applied to the Mishui Basin, south China, for daily streamflow ensemble simulation and uncertainty analysis. The three models were first calibrated by two parameter optimization algorithms, namely, the Shuffled Complex Evolution method (SCE-UA) and the Shuffled Complex Evolution Metropolis method (SCEM-UA); next, the input uncertainty was accounted for by introducing a normally distributed error multiplier; then, the simulation sets calculated from the three models were combined by Bayesian model averaging (BMA). The results show that both parameter optimization algorithms generate good streamflow simulations; specifically, SCEM-UA can characterize parameter uncertainty and give the posterior distribution of the parameters. When the precipitation input uncertainty is considered, the streamflow simulation precision does not improve very much, whereas the BMA combination not only improves the streamflow prediction precision but also gives quantitative uncertainty bounds for the simulation sets. The SCEM-UA calculated prediction interval is better than the SCE-UA calculated one. These results suggest that considering the uncertainties of the model parameters and performing multi-model ensemble simulations are very practical for streamflow prediction and flood forecasting, from which more precise predictions and more reliable uncertainty bounds can be generated.
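
    A minimal BMA combination step is sketched below: the member forecasts are mixed as a weighted sum of Gaussians, with the weights and a single common variance estimated by EM. Published BMA variants often use member-specific variances; the streamflow values here are synthetic placeholders for the three-model output.

```python
# Minimal Bayesian model averaging (BMA) sketch: member forecasts combined as a
# weighted mixture of Gaussians, with weights and a common variance fitted by EM.
# Streamflow values are synthetic placeholders for the three-model output.
import numpy as np

rng = np.random.default_rng(3)
T, K = 500, 3
obs = 50 + 10 * np.sin(np.arange(T) / 30) + rng.normal(0, 2, T)          # observed flow
forecasts = np.column_stack([obs + rng.normal(b, s, T)                   # 3 member models
                             for b, s in [(1.0, 3.0), (-2.0, 4.0), (0.5, 6.0)]])

w = np.full(K, 1.0 / K)
var = np.var(obs[:, None] - forecasts)
for _ in range(100):                                                      # EM iterations
    dens = np.exp(-0.5 * (obs[:, None] - forecasts) ** 2 / var) / np.sqrt(2 * np.pi * var)
    z = w * dens
    z /= z.sum(axis=1, keepdims=True)                                     # responsibilities
    w = z.mean(axis=0)
    var = np.sum(z * (obs[:, None] - forecasts) ** 2) / T

bma_mean = forecasts @ w
print("weights:", np.round(w, 3),
      " BMA RMSE:", round(np.sqrt(np.mean((bma_mean - obs) ** 2)), 2))
```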

  3. Model predictive torque control with an extended prediction horizon for electrical drive systems

    Science.gov (United States)

    Wang, Fengxiang; Zhang, Zhenbin; Kennel, Ralph; Rodríguez, José

    2015-07-01

    This paper presents a model predictive torque control method for electrical drive systems. A two-step prediction horizon is adopted with a view to reducing the torque ripples. The errors between the predicted electromagnetic torque and stator flux and their references, together with an over-current protection term, are considered in the cost function design. The best voltage vector is selected by minimising the value of the cost function, which aims to achieve a low torque ripple over the two intervals. The study is carried out experimentally. The results show that the proposed method achieves good performance in both steady and transient states.
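
    The selection step described above can be sketched as a finite-control-set search: each candidate inverter voltage vector is scored on predicted torque error, stator-flux error and an over-current penalty, and the cheapest one is applied. The one-step prediction below is a crude linear placeholder, not the machine model or tuning used in the paper.

```python
# Finite-control-set sketch of the cost-function selection step described above.
# The one-step prediction model is a crude placeholder, not the paper's machine model.
import numpy as np

# Eight voltage vectors of a two-level inverter (switching states of phases a, b, c).
VECTORS = [np.array(s) for s in
           [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,1,1),(0,0,1),(1,0,1),(1,1,1)]]

T_REF, FLUX_REF, I_MAX, LAMBDA = 10.0, 0.9, 15.0, 8.0

def predict(state, v):
    """Placeholder one-step prediction of (torque, flux, current) for vector v."""
    torque, flux, current = state
    u = v.sum() - 1.5                      # crude scalar proxy for the applied voltage
    return torque + 0.8 * u, flux + 0.02 * u, current + 0.5 * abs(u)

def cost(state, v):
    t, f, i = predict(state, v)
    over_current = 1e6 if i > I_MAX else 0.0   # hard penalty = over-current protection term
    return abs(T_REF - t) + LAMBDA * abs(FLUX_REF - f) + over_current

state = (9.2, 0.88, 12.0)                  # measured torque, flux, current
best = min(VECTORS, key=lambda v: cost(state, v))
print("selected switching state:", tuple(int(s) for s in best))
```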

  4. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe an improvement in the predictive value for embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, gene expression levels for the five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high-quality and low-quality embryos. These two genes were used for the construction of two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two prediction models yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.

  5. Validation of Biomarker-based risk prediction models

    OpenAIRE

    Taylor, Jeremy M.G.; Ankerst, Donna P.; Andridge, Rebecca R.

    2008-01-01

    The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and inter-laboratory variation in assays used to measure biomarkers. In this paper we distinguish between internal and external statistical validation. Inte...

  6. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  7. Exploring the Magnetic Susceptibility of a Haldane Compound Sm2BaNiO5: Optical Spectroscopy of Sm^{3+} Kramers Doublets

    Science.gov (United States)

    Galkin, A. S.; Klimin, S. A.

    2016-12-01

    An optical spectroscopic study of quasi-Haldane chain nickelate Sm2BaNiO5 is presented. A temperature-dependent splitting of the ground-state Kramers doublet of the Sm^{3+} ion due to an antiferromagnetic ordering at TN = 55 K has been obtained experimentally and used to calculate the Schottky-type anomaly in magnetic susceptibility. The value of the magnetic moment of Sm^{3+} ion at zero temperature has been estimated within the model of the ground doublet. One-dimensional magnetic behavior of the nickel subsystem is emphasized.

  8. Predicting Solar Cycle 25 using Surface Flux Transport Model

    Science.gov (United States)

    Imada, Shinsuke; Iijima, Haruhisa; Hotta, Hideyuki; Shiota, Daiko; Kusano, Kanya

    2017-08-01

    It is thought that longer-term variations in solar activity may affect the Earth's climate. Therefore, predicting the next solar cycle is crucial for forecasting the "solar-terrestrial environment", and building prediction schemes for the next solar cycle is a key element of long-term space weather studies. Recently, the relationship between the polar magnetic field at solar minimum and the next cycle's activity has been intensively discussed. Because the polar magnetic field at solar minimum can be determined roughly 3 years before the next solar maximum, the next solar cycle may be discussed 3 years in advance. Furthermore, a longer-term (~5 year) prediction might be achieved by estimating the polar magnetic field with a Surface Flux Transport (SFT) model. We are developing a prediction scheme based on an SFT model as part of PSTEP (Project for Solar-Terrestrial Environment Prediction) and applying it to the Cycle 25 prediction. The predicted polar field strength at the Cycle 24/25 minimum is several tens of percent smaller than at the Cycle 23/24 minimum. The result suggests that the amplitude of Cycle 25 will be weaker than that of the current cycle. We also aim to obtain the meridional flow, differential rotation, and turbulent diffusivity from recent modern observations (Hinode and the Solar Dynamics Observatory). These parameters will be used in the SFT model to predict the polar magnetic field strength at solar minimum. In this presentation, we explain the outline of our strategy for predicting the next solar cycle and discuss initial results for the Cycle 25 prediction.
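
    The record does not write the model equation, but a commonly used form of the surface flux transport equation that such schemes integrate for the radial field B(θ, φ, t) is shown below; the differential rotation Ω(θ), meridional flow v(θ), supergranular diffusivity η and source term S (newly emerged active regions) are exactly the quantities the abstract says are taken from observations.

```latex
\[
\frac{\partial B}{\partial t}
  = -\,\Omega(\theta)\,\frac{\partial B}{\partial \phi}
    - \frac{1}{R_\odot \sin\theta}\,
      \frac{\partial}{\partial \theta}\!\left[v(\theta)\,B\,\sin\theta\right]
    + \frac{\eta}{R_\odot^{2}}
      \left[\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\!
            \left(\sin\theta\,\frac{\partial B}{\partial \theta}\right)
          + \frac{1}{\sin^{2}\theta}\frac{\partial^{2} B}{\partial \phi^{2}}\right]
    + S(\theta,\phi,t)
\]
```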

  9. The complete genome sequence of the dominant Sinorhizobium meliloti field isolate SM11 extends the S. meliloti pan-genome.

    Science.gov (United States)

    Schneiker-Bekel, Susanne; Wibberg, Daniel; Bekel, Thomas; Blom, Jochen; Linke, Burkhard; Neuweger, Heiko; Stiens, Michael; Vorhölter, Frank-Jörg; Weidner, Stefan; Goesmann, Alexander; Pühler, Alfred; Schlüter, Andreas

    2011-08-20

    Isolates of the symbiotic nitrogen-fixing species Sinorhizobium meliloti usually contain a chromosome and two large megaplasmids encoding functions that are absolutely required for the specific interaction of the microsymbiont with corresponding host plants leading to an effective symbiosis. The complete genome sequence, including the megaplasmids pSmeSM11c (related to pSymA) and pSmeSM11d (related to pSymB), was established for the dominant, indigenous S. meliloti strain SM11 that had been isolated during a long-term field release experiment with genetically modified S. meliloti strains. The chromosome, the largest replicon of S. meliloti SM11, is 3,908,022 bp in size and codes for 3785 predicted protein coding sequences. The size of megaplasmid pSmeSM11c is 1,633,319 bp and it contains 1760 predicted protein coding sequences, whereas megaplasmid pSmeSM11d is 1,632,395 bp in size and comprises 1548 predicted coding sequences. The gene content of the SM11 chromosome is quite similar to that of the reference strain S. meliloti Rm1021. Comparison of pSmeSM11c to pSymA of the reference strain revealed that many gene regions of these replicons are variable, supporting the assessment that pSymA is a major hot-spot for intra-specific differentiation. Plasmids pSymA and pSmeSM11c both encode unique genes. Large gene regions of pSmeSM11c are closely related to corresponding parts of Sinorhizobium medicae WSM419 plasmids. Moreover, pSmeSM11c encodes further novel gene regions, e.g. additional plasmid survival genes (partition, mobilisation and conjugative transfer genes), acdS encoding 1-aminocyclopropane-1-carboxylate deaminase involved in modulation of the phytohormone ethylene level, and genes having predicted functions in degradative capabilities, stress response, amino acid metabolism and associated pathways. In contrast to Rm1021 pSymA and pSmeSM11c, megaplasmid pSymB of strain Rm1021 and pSmeSM11d are highly conserved, showing extensive synteny with only a few rearrangements

  10. Predicting soil acidification trends at Plynlimon using the SAFE model

    Directory of Open Access Journals (Sweden)

    B. Reynolds

    1997-01-01

    Full Text Available The SAFE model has been applied to an acid grassland site, located on base-poor stagnopodzol soils derived from Lower Palaeozoic greywackes. The model predicts that acidification of the soil has occurred in response to increased acid deposition following the industrial revolution. Limited recovery is predicted following the decline in sulphur deposition during the mid to late 1970s. Reducing excess sulphur and NOx deposition in 1998 to 40% and 70% of 1980 levels results in further recovery, but soil chemical conditions (base saturation, soil water pH and ANC) do not return to the values predicted for pre-industrial times. The SAFE model predicts that critical loads (expressed in terms of the critical (Ca+Mg+K):Al ratio) for six vegetation species found in acid grassland communities are not exceeded despite the increase in deposited acidity following the industrial revolution. The relative growth response of selected vegetation species characteristic of acid grassland swards has been predicted using a damage function linking growth to the soil solution base cation to aluminium ratio. The results show that very small growth reductions can be expected for 'acid tolerant' plants growing in acid upland soils. For more sensitive species such as Holcus lanatus, SAFE predicts that growth would have been reduced by about 20% between 1951 and 1983, when acid inputs were greatest. Recovery to c. 90% of normal growth (under laboratory conditions) is predicted as acidic inputs decline.

  11. Physics-Informed Machine Learning for Predictive Turbulence Modeling: A Priori Assessment of Prediction Confidence

    CERN Document Server

    Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-01-01

    Although Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows, standard RANS models are known to be unreliable in many flows of engineering relevance, including flows with separation, strong pressure gradients or mean flow curvature. With increasing amounts of 3-dimensional experimental data and high fidelity simulation data from Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), data-driven turbulence modeling has become a promising approach to increase the predictive capability of RANS simulations. Recently, a data-driven turbulence modeling approach via machine learning has been proposed to predict the Reynolds stress anisotropy of a given flow based on high fidelity data from closely related flows. In this work, the closeness of different flows is investigated to assess the prediction confidence a priori. Specifically, the Mahalanobis distance and the kernel density estimation (KDE) technique...
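
    The two a priori "closeness" measures named in this record can be sketched directly: the Mahalanobis distance of test-flow feature points from the training-flow feature cloud, and a kernel density estimate of the training features evaluated at the test points. The two-dimensional mean-flow features below are synthetic placeholders.

```python
# Sketch of the two a priori closeness measures named above: Mahalanobis distance
# from the training feature cloud, and a KDE of the training features evaluated at
# the test points. Features are synthetic placeholders.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
train = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 0.5]], size=2000)
test = rng.multivariate_normal([1.5, -0.5], [[1.0, 0.0], [0.0, 1.0]], size=500)

mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
diff = test - mean
mahal = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))   # distance per test point

kde = gaussian_kde(train.T)
density = kde(test.T)                                            # training density at test points

print("mean Mahalanobis distance:", round(mahal.mean(), 2))
print("fraction of test points in low-density regions:", np.mean(density < 0.01))
```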

  12. Extended Range Hydrological Predictions: Uncertainty Associated with Model Parametrization

    Science.gov (United States)

    Joseph, J.; Ghosh, S.; Sahai, A. K.

    2016-12-01

    The better understanding of various atmospheric processes has led to improved predictions of meteorological conditions at various temporal scales, ranging from short-term forecasts covering up to 2 days to long-term forecasts covering more than 10 days. Accurate prediction of hydrological variables can be made using these predicted meteorological conditions, which is helpful for the proper management of water resources. Extended range hydrological simulation involves the prediction of hydrological variables for a period of more than 10 days. The main sources of uncertainty in hydrological predictions include uncertainty in the initial conditions, the meteorological forcing and the model parametrization. In the present study, the Extended Range Prediction developed for the Indian monsoon by the Indian Institute of Tropical Meteorology (IITM), Pune, is used as meteorological forcing for the Variable Infiltration Capacity (VIC) model. Sensitive hydrological parameters, as derived from the literature, along with a few vegetation parameters, are assumed to be uncertain, and 1000 random values are generated within their prescribed ranges. Uncertainty bands are generated by performing Monte Carlo simulations (MCS) for the generated sets of parameters and the observed meteorological forcings. Basins with minimal human intervention within the Indian Peninsular region are identified, and validation of the results is carried out using the observed gauge discharge. Further, uncertainty bands are generated for the extended range hydrological predictions by performing MCS for the same set of parameters and the extended range meteorological predictions. The results demonstrate the uncertainty associated with model parametrization for extended range hydrological simulations. Keywords: Extended Range Prediction, Variable Infiltration Capacity model, Monte Carlo Simulation.
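
    The Monte Carlo step described here reduces to: sample the uncertain parameters uniformly within prescribed ranges, run the hydrological model once per sample, and take percentiles of the simulated discharge as the uncertainty band. The sketch below uses a toy two-parameter rainfall-runoff stand-in rather than VIC, with made-up forcing and ranges.

```python
# Minimal Monte Carlo sketch of the parameter-uncertainty band. The "model" is a
# toy two-parameter rainfall-runoff stand-in, not VIC; forcing and ranges are made up.
import numpy as np

rng = np.random.default_rng(5)
rain = rng.gamma(0.6, 8.0, 120)                      # daily forcing (placeholder)

def toy_model(rain, k, c):
    """Linear-reservoir stand-in: outflow rate k, runoff coefficient c."""
    storage, q = 0.0, []
    for r in rain:
        storage = (1.0 - k) * storage + c * r
        q.append(k * storage)
    return np.array(q)

ranges = {"k": (0.05, 0.5), "c": (0.3, 0.9)}         # prescribed parameter ranges
sims = np.array([toy_model(rain,
                           rng.uniform(*ranges["k"]),
                           rng.uniform(*ranges["c"])) for _ in range(1000)])

lower, upper = np.percentile(sims, [5, 95], axis=0)  # 90 % uncertainty band per day
print("day 60 discharge band: %.1f - %.1f" % (lower[60], upper[60]))
```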

  13. Predictive modeling of coral disease distribution within a reef system.

    Directory of Open Access Journals (Sweden)

    Gareth J Williams

    Full Text Available Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: 1) coral diseases show distinct associations with multiple environmental factors, 2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns, and 3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. The environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend that coral diseases be modeled separately, unless known to have etiologies that respond in a similar manner to
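
    A boosted-regression-tree model of the kind described here is sketched below: trees of depth greater than one let the model represent interactions among environmental predictors, which the record reports were important. The predictor names, prevalence values and settings are illustrative, not the Hawaiian survey data.

```python
# Sketch of a boosted-regression-tree disease model: trees of depth > 1 capture
# interactions among predictors. Data and settings are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
n = 400
X = np.column_stack([rng.uniform(0, 1, n),        # turbidity
                     rng.uniform(0, 60, n),       # butterflyfish abundance
                     rng.uniform(0, 80, n)])      # Porites host cover (%)
# Synthetic prevalence with an interaction between host cover and turbidity.
prevalence = 0.02 + 0.001 * X[:, 2] * (1 - X[:, 0]) + rng.normal(0, 0.005, n)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, subsample=0.5, random_state=0)
brt.fit(X, prevalence)
print("relative influence of predictors:", np.round(brt.feature_importances_, 2))
```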

  14. A CHAID Based Performance Prediction Model in Educational Data Mining

    Directory of Open Access Journals (Sweden)

    R. Bhaskaran

    2010-01-01

    Full Text Available Performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students' performance so as to identify slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey-cum-experimental methodology was adopted to generate a database, which was constructed from a primary and a secondary source. While the primary data were collected from the regular students, the secondary data were gathered from the schools and the office of the Chief Educational Officer (CEO). A total of 1000 datasets from the year 2006, from five different schools in three different districts of Tamilnadu, were collected. The raw data were preprocessed by filling in missing values, transforming values from one form into another, and selecting relevant attributes/variables. As a result, 772 student records remained, which were used for the construction of the CHAID prediction model. A set of prediction rules was extracted from the CHAID prediction model and the efficiency of the generated model was determined. The accuracy of the present model was compared with other models and found to be satisfactory.

  15. Precise methods for conducted EMI modeling, analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Focusing on state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and a double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of the EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0–10 MHz) with good practicability and generality. Finally, a new measurement approach is presented to identify the surface current of a large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  16. Precise methods for conducted EMI modeling, analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    MA WeiMing; ZHAO ZhiHua; MENG Jin; PAN QiJun; ZHANG Lei

    2008-01-01

    Focusing on state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and a double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of the EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0–10 MHz) with good practicability and generality. Finally, a new measurement approach is presented to identify the surface current of a large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  17. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    is not viewed as the sum of its component elements, but the product of their interactions. The paper starts with introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques is applied. Systemic decisions are based on common......New optimality principles are put forth based on competitive model business planning. A Generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing where predictive techniques can determine local optima. Based on a systems model an enterprise...... Loops, are applied to complex management decisions. Predictive modeling specifics are briefed. A preliminary optimal game modeling technique is presented in brief with applications to innovation and R&D management. Conducting gap and risk analysis can assist with this process. Example application areas...

  18. Predicting nucleosome positioning using a duration Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Widom Jonathan

    2010-06-01

    Full Text Available Abstract Background The nucleosome is the fundamental packing unit of DNA in eukaryotic cells. Its detailed positioning on the genome is closely related to chromosome function. Increasing evidence has shown that the genomic DNA sequence itself is highly predictive of nucleosome positioning genome-wide. Therefore, a fast software tool for predicting nucleosome positioning can help in understanding how a genome's nucleosome organization may facilitate genome function. Results We present a duration Hidden Markov model for nucleosome positioning prediction by explicitly modeling the linker DNA length. The nucleosome and linker models trained from yeast data are re-scaled when making predictions for other species to adjust for differences in base composition. A software tool named NuPoP is developed in three formats for free download. Conclusions Simulation studies show that modeling the linker length distribution and utilizing a base composition re-scaling method both improve the prediction of nucleosome positioning regarding sensitivity and false discovery rate. NuPoP provides a user-friendly software tool for predicting the nucleosome occupancy and the most probable nucleosome positioning map for genomic sequences of any size. When compared with two existing methods, NuPoP shows improved performance in sensitivity.

  19. Experience-based model predictive control using reinforcement learning

    NARCIS (Netherlands)

    Negenborn, R.R.; De Schutter, B.; Wiering, M.A.; Hellendoorn, J.

    2004-01-01

    Model predictive control (MPC) is becoming an increasingly popular method to select actions for controlling dynamic systems. Traditionally, MPC uses a model of the system to be controlled and a performance function to characterize the desired behavior of the system. The MPC agent finds actions over a

  20. Evaluation of performance of Predictive Models for Deoxynivalenol in Wheat

    NARCIS (Netherlands)

    Fels, van der H.J.

    2014-01-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields th

  1. Katz model prediction of Caenorhabditis elegans mutagenesis on STS-42

    Science.gov (United States)

    Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Badhwar, Gautam D.

    1992-01-01

    Response parameters that describe the production of recessive lethal mutations in C. elegans from ionizing radiation are obtained with the Katz track structure model. The authors used models of the space radiation environment and radiation transport to predict and discuss mutation rates for C. elegans on the IML-1 experiment aboard STS-42.

  2. A model to predict the sound reflection from forests

    NARCIS (Netherlands)

    Wunderli, J.M.; Salomons, E.M.

    2009-01-01

    A model is presented to predict the reflection of sound at forest edges. A single tree is modelled as a vertical cylinder. For the reflection at a cylinder an analytical solution is given based on the theory of scattering of spherical waves. The entire forest is represented by a line of cylinders

  3. Atmospheric modelling for seasonal prediction at the CSIR

    CSIR Research Space (South Africa)

    Landman, WA

    2014-10-01

    Full Text Available by observed monthly sea-surface temperature (SST) and sea-ice fields. The AGCM is the conformal-cubic atmospheric model (CCAM) administered by the Council for Scientific and Industrial Research. Since the model is forced with observed rather than predicted...

  4. A model to predict the sound reflection from forests

    NARCIS (Netherlands)

    Wunderli, J.M.; Salomons, E.M.

    2009-01-01

    A model is presented to predict the reflection of sound at forest edges. A single tree is modelled as a vertical cylinder. For the reflection at a cylinder an analytical solution is given based on the theory of scattering of spherical waves. The entire forest is represented by a line of cylinders pl

  5. Validation of a multi-objective, predictive urban traffic model

    NARCIS (Netherlands)

    Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.

    2013-01-01

    This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a network

  6. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...... masks degenerate to a noise vocoder....

  7. Evaluation of performance of Predictive Models for Deoxynivalenol in Wheat

    NARCIS (Netherlands)

    Fels, van der H.J.

    2014-01-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields

  8. A Climate System Model, Numerical Simulation and Climate Predictability

    Institute of Scientific and Technical Information of China (English)

    ZENG Qingcun; WANG Huijun; LIN Zhaohui; ZHOU Guangqing; YU Yongqiang

    2007-01-01

    The implementation of the project has lasted for more than 20 years. As a result, the following key innovative achievements have been obtained, ranging from the basic theory of climate dynamics, numerical model development and its related computational theory to the dynamical climate prediction using the climate system models:

  9. Prediction horizon effects on stochastic modelling hints for neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Drossu, R.; Obradovic, Z. [Washington State Univ., Pullman, WA (United States)

    1995-12-31

    The objective of this paper is to investigate the relationship between stochastic models and neural network (NN) approaches to time series modelling. Experiments on a complex real life prediction problem (entertainment video traffic) indicate that prior knowledge can be obtained through stochastic analysis both with respect to an appropriate NN architecture as well as to an appropriate sampling rate, in the case of a prediction horizon larger than one. An improvement of the obtained NN predictor is also proposed through a bias removal post-processing, resulting in much better performance than the best stochastic model.

  10. Three-model ensemble wind prediction in southern Italy

    Science.gov (United States)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
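
    The ensemble step reported above can be reduced to a minimal recipe: remove each member's bias over a training window, average the unbiased members, and compare RMSEs against observations. The wind speeds below are synthetic placeholders standing in for the RAMS, BOLAM and MOLOCH outputs, so the numbers are not the study's results.

```python
# Minimal version of the ensemble step: bias-correct each member on a training
# window, average the members, and compare RMSEs. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(7)
obs = np.abs(rng.normal(6.0, 2.5, 1000))                       # observed 10 m wind speed
members = np.column_stack([obs + rng.normal(b, s, obs.size)    # three model stand-ins
                           for b, s in [(0.8, 1.6), (-0.5, 1.8), (0.3, 2.2)]])

train = slice(0, 700)                                          # training period for the bias
bias = (members[train] - obs[train, None]).mean(axis=0)
unbiased = members - bias
ensemble = unbiased.mean(axis=1)                               # three-model ensemble mean

def rmse(pred, target):
    return np.sqrt(np.mean((pred - target) ** 2))

test = slice(700, None)
for name, series in zip(["model-1", "model-2", "model-3", "TME"],
                        list(unbiased[test].T) + [ensemble[test]]):
    print(name, "RMSE:", round(rmse(series, obs[test]), 2))
```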

  11. Predicting the Yield Stress of SCC using Materials Modelling

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm; Hasholt, Marianne Tange; Pade, Claus

    2005-01-01

    A conceptual model for predicting the Bingham rheological parameter yield stress of SCC has been established. The model used here is inspired by previous work of Oh et al. (1), predicting that the yield stress of concrete relative to the yield stress of paste is a function of the relative thickness...... of excess paste around the aggregate. The thickness of excess paste is itself a function of particle shape, particle size distribution, and particle packing. Seven types of SCC were tested at four different excess paste contents in order to verify the conceptual model. Paste composition and aggregate shape...... and distribution were varied between SCC types. The results indicate that yield stress of SCC may be predicted using the model....

  12. Stability of theoretical model for catastrophic weather prediction

    Institute of Scientific and Technical Information of China (English)

    SHI Wei-hui; WANG Yue-peng

    2007-01-01

    Stability of the theoretical models for catastrophic weather prediction, which include the non-hydrostatic perfect elastic model and the anelastic model, is discussed and analyzed in detail. It is proved that the non-hydrostatic perfect elastic equation set is stable in the class of infinitely differentiable functions. However, for the anelastic equation set, the continuity equation is changed in form because of the particular hypothesis for the fluid, so "the matching consisting of both viscosity coefficient and incompressible assumption" appears; thereby the most important equation set of this class in practical prediction shows the same instability in topological property as the Navier-Stokes equation, which should be avoided first in practical numerical prediction. In light of this, the referenced suggestions to amend the applied model are finally presented.

  13. Comparison of tropospheric scintillation prediction models of the Indonesian climate

    Science.gov (United States)

    Chen, Cheng Yee; Singh, Mandeep Jit

    2014-12-01

    Tropospheric scintillation is a phenomenon that will cause signal degradation in satellite communication with low fade margin. Few studies of scintillation have been conducted in tropical regions. To analyze tropospheric scintillation, we obtain data from a satellite link installed at Bandung, Indonesia, at an elevation angle of 64.7° and a frequency of 12.247 GHz from 1999 to 2000. The data are processed and compared with the predictions of several well-known scintillation prediction models. From the analysis, we found that the ITU-R model gives the lowest error rate when predicting the scintillation intensity for fade at 4.68%. However, the model should be further tested using data from higher-frequency bands, such as the K and Ka bands, to verify the accuracy of the model.

  14. [Predicting suicide or predicting the unpredictable in an uncertain world: Reinforcement Learning Model-Based analysis].

    Science.gov (United States)

    Desseilles, Martin

    2012-01-01

    In general, it appears that the suicidal act is highly unpredictable with the current scientific means available. In this article, the author submits the hypothesis that predicting suicide is complex because it results in predicting a choice, in itself unpredictable. The article proposes a Reinforcement learning model-based analysis. In this model, we integrate on the one hand, four ascending modulatory neurotransmitter systems (acetylcholine, noradrenalin, serotonin, and dopamine) with their regions of respective projections and afferences, and on the other hand, various observations of brain imaging identified until now in the suicidal process.

  15. Evolution of Sm2Fe17 Alloys during Hydrogenation-Disproportion Process

    Institute of Scientific and Technical Information of China (English)

    Ye Jinwen; Liu Ying; Li Meng; Gao Shengji; Tu Mingjing

    2005-01-01

    The evolution of the phase composition, phase changes and microstructure of Sm2Fe17 alloys during the hydrogenation-disproportionation (HD) process was systematically studied by XRD, SEM and EDX methods. The research indicates that the HD process of Sm2Fe17 alloys proceeds as follows: the Sm2Fe17 alloy first absorbs hydrogen in a hydrogen atmosphere at a pressure of 0.1 MPa. Disproportionation begins at T ≥ 500 ℃, and the alloy then decomposes into SmHx and α-Fe phases, which are partly microcrystalline or amorphous. With increasing temperature, the microcrystalline and amorphous structures transform into crystalline structures, and this transformation is complete at 750 ℃. The size of the resulting crystal grains is about 20-100 nm. Based on the experimental data, a microstructural transformation model of Sm2Fe17 alloys during the hydrogenation-disproportionation process was proposed.

  16. New neutron-rich isotope production in 154Sm+160Gd

    Directory of Open Access Journals (Sweden)

    Ning Wang

    2016-09-01

    Deep inelastic scattering in 154Sm+160Gd at energies above the Bass barrier is investigated for the first time with two different microscopic dynamics approaches: the improved quantum molecular dynamics (ImQMD) model and time-dependent Hartree–Fock (TDHF) theory. No fusion is observed in either model. The capture pocket disappears for this reaction due to strong Coulomb repulsion, and the contact time of the di-nuclear system formed in head-on collisions is about 700 fm/c at an incident energy of 440 MeV. The isotope distribution of fragments in the deep inelastic scattering process is predicted with simulations of the latest ImQMD-v2.2 model together with a statistical code (GEMINI) describing the secondary decay of fragments. More than 40 extremely neutron-rich unmeasured nuclei with 58≤Z≤76 are observed, and the production cross sections are on the order of μb to mb. The multi-nucleon transfer reaction Sm+Gd could be an alternative way to synthesize new neutron-rich lanthanides, which are difficult to produce with traditional fusion reactions or fission of actinides.

  17. New neutron-rich isotope production in 154Sm+160Gd

    Science.gov (United States)

    Wang, Ning; Guo, Lu

    2016-09-01

    Deep inelastic scattering in 154Sm+160Gd at energies above the Bass barrier is investigated for the first time with two different microscopic dynamics approaches: the improved quantum molecular dynamics (ImQMD) model and time-dependent Hartree-Fock (TDHF) theory. No fusion is observed in either model. The capture pocket disappears for this reaction due to strong Coulomb repulsion, and the contact time of the di-nuclear system formed in head-on collisions is about 700 fm/c at an incident energy of 440 MeV. The isotope distribution of fragments in the deep inelastic scattering process is predicted with simulations of the latest ImQMD-v2.2 model together with a statistical code (GEMINI) describing the secondary decay of fragments. More than 40 extremely neutron-rich unmeasured nuclei with 58 ≤ Z ≤ 76 are observed, and the production cross sections are on the order of μb to mb. The multi-nucleon transfer reaction Sm+Gd could be an alternative way to synthesize new neutron-rich lanthanides, which are difficult to produce with traditional fusion reactions or fission of actinides.

  18. New neutron-rich isotope production in $^{154}$Sm+$^{160}$Gd

    CERN Document Server

    Wang, Ning

    2016-01-01

    Deep inelastic scattering in $^{154}$Sm+$^{160}$Gd at energies above the Bass barrier is investigated for the first time with two different microscopic dynamics approaches: the improved quantum molecular dynamics (ImQMD) model and time-dependent Hartree-Fock (TDHF) theory. No fusion is observed in either model. The capture pocket disappears for this reaction due to strong Coulomb repulsion, and the contact time of the di-nuclear system formed in head-on collisions is about 700 fm/c at an incident energy of 440 MeV. The isotope distribution of fragments in the deep inelastic scattering process is predicted with simulations of the latest ImQMD-v2.2 model together with a statistical code (GEMINI) describing the secondary decay of fragments. More than 40 extremely neutron-rich unmeasured nuclei with $58 \le Z \le 76$ are observed, and the production cross sections are on the order of ${\rm \mu b}$ to mb. The multi-nucleon transfer reaction Sm+Gd could be an alternative way to synthesize new neutron-rich lanthanides, which are difficult to produce with traditional fusion reactions or fission of actinides.

  19. Statistical characteristics of irreversible predictability time in regional ocean models

    Directory of Open Access Journals (Sweden)

    P. C. Chu

    2005-01-01

    Probabilistic aspects of regional ocean model predictability are analyzed using the probability density function (PDF) of the irreversible predictability time (IPT), called the τ-PDF, computed from an unconstrained ensemble of stochastic perturbations in initial conditions, winds, and open boundary conditions. Two attractors (a chaotic attractor and a small-amplitude stable limit cycle) are found in the wind-driven circulation. The relationship between the attractor's residence time and the IPT determines the τ-PDF for short (up to several weeks) and intermediate (up to two months) predictions. The τ-PDF is usually non-Gaussian but not multi-modal for red-noise perturbations in initial conditions and perturbations in the wind and open boundary conditions. Bifurcation of the τ-PDF occurs as the tolerance level varies. Generally, extremely successful predictions (corresponding to the tail of the τ-PDF toward the large-IPT domain) are not outliers and share the same statistics as the whole ensemble of predictions.

  20. Automatic predictions in the Georgi-Machacek model at next-to-leading order accuracy

    CERN Document Server

    Degrande, Celine; Logan, Heather E; Peterson, Andrea D; Zaro, Marco

    2015-01-01

    We study the phenomenology of the Georgi-Machacek model at next-to-leading order (NLO) in QCD matched to parton shower, using a fully automated tool chain based on MadGraph5_aMC@NLO and FeynRules. We focus on the production of the fermiophobic custodial fiveplet scalars H_5^0, H_5^±, and H_5^±± through vector boson fusion (VBF), associated production with a vector boson (V H_5), and scalar pair production (H_5 H_5). For these production mechanisms we compute NLO corrections to production rates as well as to differential distributions. Our results demonstrate that the Standard Model (SM) overall K-factors for such processes cannot in general be directly applied to beyond-the-SM distributions, due both to differences in the scalar electroweak charges and to variation of the K-factors over the differential distributions.

  1. Determining the prediction limits of models and classifiers with applications for disruption prediction in JET

    Science.gov (United States)

    Murari, A.; Peluso, E.; Vega, J.; Gelfusa, M.; Lungaroni, M.; Gaudio, P.; Martínez, F. J.; JET Contributors

    2017-01-01

    Understanding the many aspects of tokamak physics requires the development of quite sophisticated models. Moreover, in the operation of the devices, prediction of the future evolution of discharges can be of crucial importance, particularly in the case of the prediction of disruptions, which can cause serious damage to various parts of the machine. The determination of the limits of predictability is therefore an important issue for modelling, classifying and forecasting. In all these cases, once a certain level of performance has been reached, the question typically arises as to whether all the information available in the data has been exploited, or whether there are still margins for improvement of the tools being developed. In this paper, an information-theoretic approach is proposed to address this issue. The excellent properties of the developed indicator, called the prediction factor (PF), have been proved with the help of a series of numerical tests. Its application to some typical behaviour relating to macroscopic instabilities in tokamaks has shown very positive results. The prediction factor has also been used to assess the performance of disruption predictors running in real time in the JET system, including the one systematically deployed in the feedback loop for mitigation purposes. The main conclusion is that the most advanced predictors basically exploit all the information contained in the locked mode signal on which they are based. Therefore, qualitative improvements in disruption prediction performance in JET would require the processing of additional signals, probably profiles.

  2. A predictive model of music preference using pairwise comparisons

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan

    2012-01-01

    Music recommendation is an important aspect of many streaming services and multi-media systems; however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model - and in turn predict - the pairwise music preference using a very flexible model based on Gaussian Process priors, for which we describe the required inference. We further propose a specific covariance function and evaluate the predictive performance on a novel dataset. In a recommendation-style setting we obtain a leave-one-out accuracy of 74%, compared to 50% with random predictions, showing potential for further refinement and evaluation.
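
    A minimal illustration of the pairwise-preference idea described above, using a Bradley-Terry-style logistic model on feature differences rather than the paper's Gaussian Process prior; the item features, preference direction and comparison data are all synthetic placeholders.

        # Minimal pairwise-preference sketch (logistic / Bradley-Terry style), not the
        # paper's Gaussian Process model: item utilities are learned from "a beats b"
        # comparisons by logistic regression on feature differences.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_items, n_feat = 20, 5
        X_items = rng.normal(size=(n_items, n_feat))      # hypothetical audio features
        true_w = rng.normal(size=n_feat)                  # hidden preference direction

        # Simulate pairwise comparisons: label 1 means "first item preferred".
        pairs = rng.integers(0, n_items, size=(300, 2))
        pairs = pairs[pairs[:, 0] != pairs[:, 1]]
        diff = X_items[pairs[:, 0]] - X_items[pairs[:, 1]]
        y = (diff @ true_w + 0.5 * rng.normal(size=len(diff)) > 0).astype(int)

        # Fit on feature differences with no intercept (the utility model is antisymmetric).
        clf = LogisticRegression(fit_intercept=False).fit(diff, y)
        utility = X_items @ clf.coef_.ravel()             # predicted preference score per item
        print("top-3 recommended items:", np.argsort(-utility)[:3])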

  3. The ARIC predictive model reliably predicted risk of type II diabetes in Asian populations

    Directory of Open Access Journals (Sweden)

    Chin Calvin

    2012-04-01

    Background: Identification of high-risk individuals is crucial for effective implementation of type 2 diabetes mellitus prevention programs. Several studies have shown that multivariable predictive functions perform as well as the 2-hour post-challenge glucose in identifying these high-risk individuals. The performance of these functions in Asian populations, where the rise in prevalence of type 2 diabetes mellitus is expected to be the greatest in the next several decades, is relatively unknown. Methods: Using data from three Asian populations in Singapore, we compared the performance of three multivariate predictive models in terms of their discriminatory power and calibration quality: the San Antonio Health Study model, the Atherosclerosis Risk in Communities (ARIC) model and the Framingham model. Results: The San Antonio Health Study and ARIC models had better discriminative power than using only fasting plasma glucose or the 2-hour post-challenge glucose. However, the Framingham model did not perform significantly better than fasting glucose or the 2-hour post-challenge glucose. All published models suffered from poor calibration. After recalibration, the ARIC model achieved good calibration, the San Antonio Health Study model showed a significant lack of fit in females, and the Framingham model showed a significant lack of fit in both females and males. Conclusions: We conclude that adoption of the ARIC model for Asian populations is feasible and highly recommended when local prospective data are unavailable.
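
    The recalibration step mentioned in the results can be sketched as follows: keep the published model's linear predictor and re-fit its intercept and slope on local data. The coefficients and cohort below are illustrative placeholders, not the actual ARIC equation or the Singapore data.

        # Recalibration sketch for a published logistic risk model: keep the original
        # linear predictor, then re-fit intercept and slope on local data ("logistic
        # recalibration"). The published coefficients below are placeholders, not the
        # actual ARIC equation.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X_local = rng.normal(size=(500, 4))                      # hypothetical local risk factors
        beta_pub, b0_pub = np.array([0.6, 0.3, 0.2, 0.4]), -2.5  # placeholder published model
        lp = b0_pub + X_local @ beta_pub                         # original linear predictor

        # Simulate a local cohort whose true risk is a shifted/scaled version of lp.
        y_local = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.3 * lp))))

        # Fit outcome ~ a + b * lp on the local cohort, then apply the recalibrated model.
        recal = LogisticRegression().fit(lp.reshape(-1, 1), y_local)
        a, b = recal.intercept_[0], recal.coef_[0, 0]
        risk = 1.0 / (1.0 + np.exp(-(a + b * lp)))
        print("recovered a, b:", round(a, 2), round(b, 2),
              "| mean predicted risk:", round(risk.mean(), 3),
              "| observed rate:", round(y_local.mean(), 3))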

  4. Models predicting non-sentinel node involvement also predict for regional recurrence in breast cancer patients without axillary treatment

    NARCIS (Netherlands)

    Pepels, M.J.; Vestjens, J.H.; Boer, M. de; Bult, P.; Dijck, J.A.A.M. van; Menke-Pluijmers, M.; Diest, P.J. van; Borm, G.; Tjan-Heijnen, V.C.

    2013-01-01

    BACKGROUND: Non-SN prediction models are frequently used in clinical decision making to identify patients that may not need axillary treatment, but these models still need to be validated by follow-up data. Our purpose was the validation of non-sentinel node (SN) prediction models in predicting regional recurrence in breast cancer patients without axillary treatment.

  5. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models obtained by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster the data into churner and non-churner groups and also to filter out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to the churner and non-churner groups. Finally, the correctly classified data are used to build the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
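
    A compact sketch of the cluster-then-classify idea: an unsupervised first stage filters the data, and a neural classifier is trained on what remains. KMeans stands in for the SOM/fuzzy c-means stages, the Cox survival stage is omitted, and all data are synthetic placeholders.

        # Two-stage churn sketch: stage 1 clusters customers (stand-in for SOM / fuzzy
        # c-means), stage 2 trains a neural classifier on the filtered data. The Cox
        # survival stage of the paper is omitted here. Data are synthetic placeholders.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        X = rng.normal(size=(1000, 6))                    # hypothetical usage features
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

        # Stage 1: cluster into two groups and drop points far from their centroid
        # (a crude outlier filter, mirroring the data-cleaning role of the first stage).
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
        keep = dist < np.quantile(dist, 0.95)

        # Stage 2: supervised churn classifier on the filtered data.
        X_tr, X_te, y_tr, y_te = train_test_split(X[keep], y[keep], random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
        print("hold-out accuracy:", clf.score(X_te, y_te))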

  6. Predicting nucleic acid binding interfaces from structural models of proteins.

    Science.gov (United States)

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces compared with patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.

  7. An evaporation duct prediction model coupled with the MM5

    Institute of Scientific and Technical Information of China (English)

    JIAO Lin; ZHANG Yonggang

    2015-01-01

    The evaporation duct is an abnormal refraction phenomenon in the marine atmospheric boundary layer. It is generally accepted that the evaporation duct prominently affects the performance of electronic equipment over the sea because of its wide distribution and frequent occurrence, and it has become a research focus of navies all over the world. At present, diagnostic models of the evaporation duct are all based on Monin-Obukhov similarity theory and differ only in how the fluxes and characteristic scales in the surface layer are calculated. These models are applicable to stationary and uniform open-sea areas and do not consider alongshore effects. This paper introduces the nonlinear factor a_v and the gust wind term w_g into the Babin model, and thus extends the evaporation duct diagnostic model to offshore areas under extremely low wind speed. In addition, an evaporation duct prediction model is designed and coupled with the fifth-generation mesoscale model (MM5). Tower observations and radar data collected at Pingtan Island, Fujian Province, on May 25–26, 2002 were used to validate the forecast results. The outputs of the prediction model agree with the observations from 0 to 48 h. The relative error of the predicted evaporation duct height is 19.3%, and the prediction results are consistent with the radar detection.
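
    A minimal sketch of how the duct height is typically read off a modified refractivity profile M(z): compute M from assumed near-surface profiles and take the height of its minimum. The toy profiles below stand in for the Monin-Obukhov/Babin flux calculations used in the paper.

        # Evaporation-duct sketch: build a modified refractivity profile M(z) from
        # assumed temperature/humidity/pressure profiles and take the duct height as
        # the height where M reaches its minimum. The toy exponential humidity profile
        # is a placeholder, not the paper's similarity-theory scheme.
        import numpy as np

        def modified_refractivity(T_K, P_hPa, e_hPa, z_m):
            """Standard M-unit expression: M = 77.6/T * (P + 4810*e/T) + 0.157*z."""
            return 77.6 / T_K * (P_hPa + 4810.0 * e_hPa / T_K) + 0.157 * z_m

        z = np.linspace(0.1, 40.0, 400)                 # height above the sea surface [m]
        T = 298.0 - 0.01 * z                            # placeholder temperature profile [K]
        P = 1013.0 - 0.12 * z                           # pressure [hPa]
        e = 25.0 * np.exp(-z / 8.0) + 8.0               # placeholder vapour pressure [hPa]

        M = modified_refractivity(T, P, e, z)
        duct_height = z[np.argmin(M)]                   # evaporation duct height estimate
        print(f"estimated evaporation duct height: {duct_height:.1f} m")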

  8. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

    Rapid growth and storage of biomedical data has enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, the analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selecting the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data.

  9. Predictive Models of Li-ion Battery Lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kandler; Wood, Eric; Santhanagopalan, Shriram; Kim, Gi-heon; Shi, Ying; Pesaran, Ahmad

    2015-06-15

    It remains an open question how best to predict real-world battery lifetime based on accelerated calendar and cycle aging data from the laboratory. Multiple degradation mechanisms due to (electro)chemical, thermal, and mechanical coupled phenomena influence Li-ion battery lifetime, each with different dependence on time, cycling and thermal environment. The standardization of life predictive models would benefit the industry by reducing test time and streamlining development of system controls.
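
    A commonly used empirical form for combined calendar and cycle fade, with a square-root-in-time calendar term scaled by an Arrhenius temperature factor plus a per-cycle term, can serve as a starting point; the coefficients below are illustrative and are not the report's fitted parameters.

        # Illustrative capacity-fade model: calendar fade ~ k_cal(T) * sqrt(t) plus
        # cycle fade ~ k_cyc * N. Coefficients are made up for illustration only.
        import numpy as np

        R = 8.314  # gas constant [J/(mol K)]

        def capacity_fade(t_days, n_cycles, temp_K, k_ref=0.004, Ea=50e3, T_ref=298.15,
                          k_cyc=2e-5):
            k_cal = k_ref * np.exp(-Ea / R * (1.0 / temp_K - 1.0 / T_ref))  # Arrhenius scaling
            return k_cal * np.sqrt(t_days) + k_cyc * n_cycles               # fractional loss

        years = np.arange(1, 11)
        loss = capacity_fade(t_days=365.0 * years, n_cycles=300.0 * years, temp_K=308.15)
        for y, q in zip(years, loss):
            print(f"year {y:2d}: estimated capacity loss {100 * q:.1f} %")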

  10. Preoperative prediction model of outcome after cholecystectomy for symptomatic gallstones

    DEFF Research Database (Denmark)

    Borly, L; Anderson, I B; Bardram, Linda

    1999-01-01

    BACKGROUND: After cholecystectomy for symptomatic gallstone disease 20%-30% of the patients continue to have abdominal pain. The aim of this study was to investigate whether preoperative variables could predict the symptomatic outcome after cholecystectomy. METHODS: One hundred and two patients...... and sonography evaluated gallbladder motility, gallstones, and gallbladder volume. Preoperative variables in patients with or without postcholecystectomy pain were compared statistically, and significant variables were combined in a logistic regression model to predict the postoperative outcome. RESULTS: Eighty...

  11. A predictive fatigue life model for anodized 7050 aluminium alloy

    OpenAIRE

    Chaussumier, Michel; Mabru, Catherine; Shahzad, Majid; Chieragatti, Rémy; Rezaï-Aria, Farhad

    2013-01-01

    The objective of this study is to predict the fatigue life of anodized 7050 aluminum alloy specimens. In the case of the anodized 7050-T7451 alloy, fractographic observations of fatigue-tested specimens showed that pickling pits were the predominant sites for crack nucleation and subsequent failure. It has been shown that fatigue failure was favored by the presence of multiple cracks. From these experimental results, a fatigue life predictive model has been developed including...

  12. Support vector machine-based multi-model predictive control

    Institute of Scientific and Technical Information of China (English)

    Zhejing BA; Youxian SUN

    2008-01-01

    In this paper, a support vector machine-based multi-model predictive control is proposed, in which SVM classification is combined with SVM regression. First, each working environment is modeled by SVM regression and the support vector machine network-based model predictive control (SVMN-MPC) algorithm corresponding to each environment is developed; then a multi-class SVM model is established to recognize multiple operating conditions. As for control, the current environment is identified by the multi-class SVM model and the corresponding SVMN-MPC controller is then activated at each sampling instant. The proposed modeling, switching and controller design is demonstrated in simulation results.
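
    A minimal sketch of the multi-model idea: a multi-class SVM recognizes the operating condition and the SVR model trained for that condition supplies the prediction. The receding-horizon optimization itself is not reproduced, and the two toy plant modes are invented.

        # Sketch of the multi-model idea: an SVM classifier recognises the operating
        # condition, then the SVR model trained for that condition predicts the plant
        # output. The MPC optimisation of the paper is not reproduced here.
        import numpy as np
        from sklearn.svm import SVC, SVR

        rng = np.random.default_rng(3)
        regimes = {0: lambda u: 2.0 * u + 0.1, 1: lambda u: 0.5 * u ** 2}   # toy plant modes

        # Training data: (state feature, input) pairs labelled by operating condition.
        X, y_reg, y_out = [], [], []
        for label, plant in regimes.items():
            u = rng.uniform(-1, 1, size=200)
            x = label + 0.1 * rng.normal(size=200)        # feature that separates regimes
            X.append(np.column_stack([x, u]))
            y_reg.append(np.full(200, label))
            y_out.append(plant(u))
        X, y_reg, y_out = np.vstack(X), np.concatenate(y_reg), np.concatenate(y_out)

        classifier = SVC().fit(X, y_reg)                                   # regime recognition
        models = {k: SVR().fit(X[y_reg == k], y_out[y_reg == k]) for k in regimes}

        x_now = np.array([[1.02, 0.3]])                                    # current measurement
        k = int(classifier.predict(x_now)[0])
        print("active regime:", k, "predicted output:", models[k].predict(x_now)[0])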

  13. Robust Model Predictive Control of a Wind Turbine

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory and first-principles modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by the effective wind speed, and an extended Kalman filter (EKF) is employed to estimate it. In addition ... of the uncertain system is employed and a norm-bounded uncertainty model is used to formulate a minimax model predictive control. The resulting optimization problem is simplified by semidefinite relaxation and the controller obtained is applied on a full-complexity, high-fidelity wind turbine model. Finally...

  14. A Prediction Model of MF Radiation in Environmental Assessment

    Institute of Scientific and Technical Information of China (English)

    HE-SHAN GE; YAN-FENG HONG

    2006-01-01

    Objective: To predict the impact of MF radiation on human health. Methods: The vertical distribution of field intensity was estimated by analogy on the basis of values measured in a simulation measurement. Results: An analogy method based on a geometric-proportion decay pattern is put forward in this paper. It shows that, with increasing height, the field intensity increases according to a geometric-proportion law. Conclusion: This geometric-proportion prediction model can be used to estimate the impact of MF radiation on the inhabited environment, and can serve as a reference pattern in predicting the environmental impact level of MF radiation.
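
    The geometric-proportion extrapolation described above amounts to assuming a constant ratio per height step, E(h) = E(h0) * r^((h - h0) / Δh); the sketch below evaluates this with invented numbers.

        # Geometric-proportion extrapolation sketch: if measurements show the field
        # intensity changing by a constant ratio r per height step dh, the value at
        # height h follows E(h) = E(h0) * r ** ((h - h0) / dh). Numbers are made up.
        def field_at_height(E0, h0, h, ratio_per_step, dh):
            return E0 * ratio_per_step ** ((h - h0) / dh)

        E0, h0 = 2.0, 1.5          # measured intensity (V/m) at the reference height (m)
        ratio, dh = 1.15, 3.0      # +15 % per 3 m, estimated from simulation measurements

        for h in (1.5, 4.5, 7.5, 10.5, 15.0):
            print(f"h = {h:5.1f} m -> predicted E = {field_at_height(E0, h0, h, ratio, dh):.2f} V/m")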

  15. Community monitoring for youth violence surveillance: testing a prediction model.

    Science.gov (United States)

    Henry, David B; Dymnicki, Allison; Kane, Candice; Quintana, Elena; Cartland, Jenifer; Bromann, Kimberly; Bhatia, Shaun; Wisnieski, Elise

    2014-08-01

    Predictive epidemiology is an embryonic field that involves developing informative signatures for disorder and tracking them using surveillance methods. Through such efforts assistance can be provided to the planning and implementation of preventive interventions. Believing that certain minor crimes indicative of gang activity are informative signatures for the emergence of serious youth violence in communities, in this study we aim to predict outbreaks of violence in neighborhoods from pre-existing levels and changes in reports of minor offenses. We develop a prediction equation that uses publicly available neighborhood-level data on disorderly conduct, vandalism, and weapons violations to predict neighborhoods likely to have increases in serious violent crime. Data for this study were taken from the Chicago Police Department ClearMap reporting system, which provided data on index and non-index crimes for each of the 844 Chicago census tracts. Data were available in three month segments for a single year (fall 2009, winter, spring, and summer 2010). Predicted change in aggravated battery and overall violent crime correlated significantly with actual change. The model was evaluated by comparing alternative models using randomly selected training and test samples, producing favorable results with reference to overfitting, seasonal variation, and spatial autocorrelation. A prediction equation based on winter and spring levels of the predictors had area under the curve ranging from .65 to .71 for aggravated battery, and .58 to .69 for overall violent crime. We discuss future development of such a model and its potential usefulness in violence prevention and community policing.

  16. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, and effectively managing the critical attributes that determine BISE, is necessary to improve BI implementation and ensure its success. The main findings identify the critical prediction indicators of BISE that are important for forecasting BI performance, and highlight five classification and prediction rules of BISE derived from decision-tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing the implementation of BI solutions, and cater to academics to whom theory is important.

  17. Charge transport model to predict intrinsic reliability for dielectric materials

    Energy Technology Data Exchange (ETDEWEB)

    Ogden, Sean P. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States); Borja, Juan; Plawsky, Joel L., E-mail: plawsky@rpi.edu; Gill, William N. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Lu, T.-M. [Department of Physics, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yeap, Kong Boon [GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States)

    2015-09-28

    Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.

  18. In silico modeling to predict drug-induced phospholipidosis

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov; Sadrieh, Nakissa

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications for the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.

  19. On the role of Sm in solidification of Al-Sm metallic glasses

    CERN Document Server

    Bokas, G B; Perepezko, J H; Szlufarska, I

    2016-01-01

    During the solidification of Al-Sm metallic glasses, the evolution of the supercooled-liquid atomic structure has been identified with an increasing population of icosahedral-like clusters as the Sm concentration increases. These clusters exhibit slower kinetics than the remaining clusters in the liquid, leading to enhanced amorphous-phase stability and glass forming ability (GFA). Maximum icosahedral ordering and atomic packing density have been found for the Al90Sm10 and Al85Sm15 alloys, respectively, whereas the minimum cohesive energy has been found for Al93Sm7, which is consistent with the range of compositions (from Al92Sm8 to Al84Sm16) found experimentally to have high GFA.

  20. Should we believe model predictions of future climate change? (Invited)

    Science.gov (United States)

    Knutti, R.

    2009-12-01

    As computers get faster and our understanding of the climate system improves, climate models to predict the future are getting more complex by including more and more processes, and they are run at higher and higher resolution to resolve more of the small scale processes. As a result, some of the simulated features and structures, e.g. ocean eddies or tropical cyclones look surprisingly real. But are these deceptive? A pattern can look perfectly real but be in the wrong place. So can the current global models really provide the kind of information on local scales and on the quantities (e.g. extreme events) that the decision maker would need to know to invest for example in adaptation? A closer look indicates that evaluating skill of climate models and quantifying uncertainties in predictions is very difficult. This presentation shows that while models are improving in simulating the climate features we observe (e.g. the present day mean state, or the El Nino Southern Oscillation), the spread from multiple models in predicting future changes is often not decreasing. The main problem is that (unlike with weather forecasts for example) we cannot evaluate the model on a prediction (for example for the year 2100) and we have to use the present, or past changes as metrics of skills. But there are infinite ways of testing a model, and many metrics used to test models do not clearly relate to the prediction. Therefore there is little agreement in the community on metrics to separate ‘good’ and ‘bad’ models, and there is a concern that model development, evaluation and posterior weighting or ranking of models are all using the same datasets. While models are continuously improving in representing what we believe to be the key processes, many models also share ideas, parameterizations or even pieces of model code. The current models can therefore not be considered independent. Robustness of a model simulated result is often interpreted as increasing the confidence

  1. Tank System Integrated Model: A Cryogenic Tank Performance Prediction Program

    Science.gov (United States)

    Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Sutherlin, S. G.; Schnell, A. R.; Moder, J. P.

    2017-01-01

    Accurate predictions of the thermodynamic state of the cryogenic propellants, pressurization rate, and performance of pressure control techniques in cryogenic tanks are required for development of cryogenic fluid long-duration storage technology and planning for future space exploration missions. This Technical Memorandum (TM) presents the analytical tool, Tank System Integrated Model (TankSIM), which can be used for modeling pressure control and predicting the behavior of cryogenic propellant for long-term storage for future space missions. Utilizing TankSIM, the following processes can be modeled: tank self-pressurization, boiloff, ullage venting, mixing, and condensation on the tank wall. This TM also includes comparisons of TankSIM program predictions with the test data and examples of multiphase mission calculations.

  2. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84 % and eliminates non-fixation patches with an accuracy of 84 % demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus, reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  3. Model for Predicting End User Web Page Response Time

    CERN Document Server

    Nagarajan, Sathya Narayanan

    2012-01-01

    Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely from country to country, accurately modeling and predicting the perceived responsiveness of a web page from the end user's perspective has traditionally proven very difficult. We propose a model for predicting end user web page response time based on web page, network, browser download and browser rendering characteristics. We start by understanding the key parameters that affect perceived response time. We then model each of these parameters individually using experimental tests and statistical techniques. Finally, we d...
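
    As a rough illustration of this kind of decomposition, the sketch below adds connection setup, resource downloads over a limited number of parallel connections, and render time; all parameters are hypothetical and the proposed model is considerably richer.

        # Toy end-user response-time sketch: connection setup + resource downloads over
        # a limited number of parallel connections + client render time. All numbers
        # are placeholders.
        import heapq

        def page_response_time(resource_kb, rtt_s, bandwidth_kbps, parallel=6, render_s=0.3):
            setup = 2 * rtt_s                               # rough DNS + TCP handshake cost
            # Greedy assignment of resources to the connection that frees up first.
            finish = [0.0] * parallel
            heapq.heapify(finish)
            for kb in sorted(resource_kb, reverse=True):
                start = heapq.heappop(finish)
                transfer = rtt_s + 8.0 * kb / bandwidth_kbps   # request RTT + transfer time
                heapq.heappush(finish, start + transfer)
            return setup + max(finish) + render_s

        resources = [120, 80, 45, 30, 30, 25, 15, 10, 10, 5]   # hypothetical page assets [kB]
        print("predicted response time: %.2f s" % page_response_time(resources, rtt_s=0.08,
                                                                     bandwidth_kbps=4000))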

  4. Mantis: Predicting System Performance through Program Analysis and Modeling

    CERN Document Server

    Chun, Byung-Gon; Lee, Sangmin; Maniatis, Petros; Naik, Mayur

    2010-01-01

    We present Mantis, a new framework that automatically predicts program performance with high accuracy. Mantis integrates techniques from programming language and machine learning for performance modeling, and is a radical departure from traditional approaches. Mantis extracts program features, which are information about program execution runs, through program instrumentation. It uses machine learning techniques to select features relevant to performance and creates prediction models as a function of the selected features. Through program analysis, it then generates compact code slices that compute these feature values for prediction. Our evaluation shows that Mantis can achieve more than 93% accuracy with less than 10% training data set, which is a significant improvement over models that are oblivious to program features. The system generates code slices that are cheap to compute feature values.

  5. Meteorological Drought Prediction Using a Multi-Model Ensemble Approach

    Science.gov (United States)

    Chen, L.; Mo, K. C.; Zhang, Q.; Huang, J.

    2013-12-01

    In the United States, drought is among the costliest natural hazards, with an annual average of 6 billion dollars in damage. Drought prediction from monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Starting in December 2012, the NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the National Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canada modeling centers, including the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of the meteorological drought predictability using the retrospective NMME forecasts for the period from 1982 to 2010. Before predicting SPI, monthly-mean precipitation (P) forecasts from each model were bias corrected and spatially downscaled (BCSD) to regional grids of 0.5-degree resolution over the contiguous United States based on the probability distribution functions derived from the hindcasts. The corrected P forecasts were then appended to the CPC Unified Precipitation Analysis to form a P time series for computing 3-month and 6-month SPIs. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation and root-mean-square errors against the observations, are used to evaluate forecast skill. For P forecasts, errors vary among models and skill generally is low after the second month. All model P forecasts have higher skill in winter and lower skill in summer. In wintertime, BCSD improves both P and SPI forecast skill. Most improvements are over the western mountainous regions and along the Great Lakes. Overall, SPI predictive skill is regionally and seasonally dependent. The six-month SPI forecasts are skillful out to four months. For
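
    The core SPI computation can be sketched as follows: accumulate precipitation over the chosen window, fit a gamma distribution, and map the cumulative probabilities through the standard normal inverse. Zero-precipitation handling and the operational calibration period are omitted, and the input series is synthetic.

        # SPI sketch: fit a gamma distribution to accumulated precipitation and map the
        # cumulative probabilities to standard-normal quantiles.
        import numpy as np
        from scipy import stats

        def spi(precip, window=3):
            """precip: 1-D array of monthly totals; returns SPI for each window sum."""
            kernel = np.ones(window)
            acc = np.convolve(precip, kernel, mode="valid")           # running window sums
            shape, loc, scale = stats.gamma.fit(acc, floc=0)           # fit gamma (loc fixed at 0)
            cdf = stats.gamma.cdf(acc, shape, loc=loc, scale=scale)
            return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))        # standardized index

        rng = np.random.default_rng(4)
        monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=360)    # 30 years of synthetic data
        print("last 3-month SPI value:", spi(monthly_precip, window=3)[-1])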

  6. Consumer Choice Prediction: Artificial Neural Networks versus Logistic Models

    Directory of Open Access Journals (Sweden)

    Christopher Gan

    2005-01-01

    Conventional econometric models, such as discriminant analysis and logistic regression, have been used to predict consumer choice. However, in recent years, there has been a growing interest in applying artificial neural networks (ANN) to analyse consumer behaviour and to model the consumer decision-making process. The purpose of this paper is to empirically compare the predictive power of the probability neural network (PNN), a special class of neural networks, and a MLFN with a logistic model on consumers’ choices between electronic banking and non-electronic banking. Data for this analysis was obtained through a mail survey sent to 1,960 New Zealand households. The questionnaire gathered information on the factors consumers’ use to decide between electronic banking versus non-electronic banking. The factors include service quality dimensions, perceived risk factors, user input factors, price factors, service product characteristics and individual factors. In addition, demographic variables including age, gender, marital status, ethnic background, educational qualification, employment, income and area of residence are considered in the analysis. Empirical results showed that both ANN models (MLFN and PNN) exhibit a higher overall percentage correct on consumer choice predictions than the logistic model. Furthermore, the PNN demonstrates to be the best predictive model since it has the highest overall percentage correct and a very low percentage error on both Type I and Type II errors.

  7. Mathematical models for predicting indoor air quality from smoking activity.

    Science.gov (United States)

    Ott, W R

    1999-05-01

    Much progress has been made over four decades in developing, testing, and evaluating the performance of mathematical models for predicting pollutant concentrations from smoking in indoor settings. Although largely overlooked by the regulatory community, these models provide regulators and risk assessors with practical tools for quantitatively estimating the exposure level that people receive indoors for a given level of smoking activity. This article reviews the development of the mass balance model and its application to predicting indoor pollutant concentrations from cigarette smoke and derives the time-averaged version of the model from the basic laws of conservation of mass. A simple table is provided of computed respirable particulate concentrations for any indoor location for which the active smoking count, volume, and concentration decay rate (deposition rate combined with air exchange rate) are known. Using the indoor ventilatory air exchange rate causes slightly higher indoor concentrations and therefore errs on the side of protecting health, since it excludes particle deposition effects, whereas using the observed particle decay rate gives a more accurate prediction of indoor concentrations. This table permits easy comparisons of indoor concentrations with air quality guidelines and indoor standards for different combinations of active smoking counts and air exchange rates. The published literature on mathematical models of environmental tobacco smoke also is reviewed and indicates that these models generally give good agreement between predicted concentrations and actual indoor measurements.
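
    The single-compartment mass balance underlying these models, dC/dt = S/V - kC, gives a concentration build-up toward the steady state S/(kV); the sketch below evaluates it with illustrative room, smoking-rate and emission values.

        # Single-compartment mass-balance sketch for indoor smoking, in the spirit of
        # the reviewed models: dC/dt = S/V - k*C, with S the emission rate, V the room
        # volume and k the concentration decay rate (air exchange plus deposition).
        # Parameter values below are illustrative placeholders.
        import numpy as np

        def particle_concentration(t_h, emission_mg_per_h, volume_m3, decay_per_h, C0=0.0):
            """Concentration (mg/m^3) at time t for a constant source switched on at t=0."""
            Css = emission_mg_per_h / (volume_m3 * decay_per_h)        # steady-state level
            return Css + (C0 - Css) * np.exp(-decay_per_h * t_h)

        V, k = 50.0, 1.5                        # room volume [m^3], decay rate [1/h]
        S = 2.0 * 14.0                          # 2 cigarettes/h * assumed 14 mg RSP per cigarette

        hours = np.linspace(0, 4, 9)
        for t, c in zip(hours, particle_concentration(hours, S, V, k)):
            print(f"t = {t:.1f} h  RSP = {1000 * c:.0f} ug/m^3")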

  8. Testing the Predictions of the Universal Structured GRB Jet Model

    CERN Document Server

    Nakar, E; Guetta, D; Nakar, Ehud; Granot, Jonathan; Guetta, Dafne

    2004-01-01

    The two leading models for the structure of GRB jets are (1) the uniform jet model, where the energy per solid angle, $\epsilon$, is roughly constant within some finite half-opening angle, $\theta_j$, and sharply drops outside of $\theta_j$, and (2) the universal structured jet (USJ) model, where all GRB jets are intrinsically identical, and $\epsilon$ drops as the inverse square of the angle from the jet axis. The simplicity of the USJ model gives it a strong predictive power, including a specific prediction for the observed GRB distribution as a function of both the redshift $z$ and the viewing angle $\theta$ from the jet axis. We show that the current sample of GRBs with known $z$ and estimated $\theta$ does not agree with the predictions of the USJ model. This can be best seen for a relatively narrow range in $z$, in which the USJ model predicts that most GRBs should be near the upper end of the observed range in $\theta$, while in the observed sample most GRBs are near the lower end of that range. Since ...

  9. Predicting functional brain ROIs via fiber shape models.

    Science.gov (United States)

    Zhang, Tuo; Guo, Lei; Li, Kaiming; Zhu, Dajing; Cui, Guangbin; Liu, Tianming

    2011-01-01

    The study of structural and functional connectivities of the human brain has received significant interest and effort recently. A fundamental question arises when attempting to measure the structural and/or functional connectivities of specific brain networks: how best to identify possible Regions of Interest (ROIs)? In this paper, we present a novel ROI prediction framework that localizes ROIs in individual brains based on fiber shape models learned from multimodal task-based fMRI and diffusion tensor imaging (DTI) data. In the training stage, ROIs are identified as activation peaks in task-based fMRI data. Then, shape models of white matter fibers emanating from these functional ROIs are learned. In addition, a model of the ROIs' location distribution is learned and used as an anatomical constraint. In the prediction stage, functional ROIs are predicted in individual brains based on DTI data. The ROI prediction is formulated and solved as an energy minimization problem, in which the two learned models are used as energy terms. Our experimental results show that the average ROI prediction error is 3.45 mm, in comparison with the benchmark data provided by working memory task-based fMRI. Promising results were also obtained on the ADNI-2 longitudinal DTI dataset.

  10. Land-ice modeling for sea-level prediction

    Energy Technology Data Exchange (ETDEWEB)

    Lipscomb, William H [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2010-06-11

    There has been major progress in ice sheet modeling since IPCC AR4. We will soon have efficient higher-order ice sheet models that can run at ~1 km resolution for entire ice sheets, either standalone or coupled to GCMs. These models should significantly reduce uncertainties in sea-level predictions. However, the least certain and potentially greatest contributions to 21st century sea-level rise may come from ice-ocean interactions, especially in West Antarctica. This is a coupled modeling problem that requires collaboration among ice, ocean and atmosphere modelers.

  11. Support vector regression model for complex target RCS predicting

    Institute of Scientific and Technical Information of China (English)

    Wang Gu; Chen Weishi; Miao Jungang

    2009-01-01

    Electromagnetic scattering computation has developed rapidly for many years; however, some computing problems for complex and coated targets cannot be solved with the existing theory and computing models. A computing model based on data is therefore established to make up for the insufficiency of theoretical models. Based on the support vector regression method, which is formulated on the principle of minimizing a structural risk, a data-driven model to predict the unknown radar cross section of designated targets is given. Comparison between the actual data and the results of this prediction model based on the support vector regression method shows that the method is workable and reasonably accurate.

  12. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    Science.gov (United States)

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R^2 of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R^2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
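
    For reference, the two headline monthly statistics used above have the standard definitions sketched below; the observed and simulated values are invented.

        # Standard definitions of the two headline monthly statistics:
        # Nash-Sutcliffe efficiency and the coefficient of determination (R^2).
        import numpy as np

        def nash_sutcliffe(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def r_squared(obs, sim):
            r = np.corrcoef(obs, sim)[0, 1]
            return r ** 2

        obs = np.array([12.0, 30.5, 22.1, 8.4, 15.0, 40.2])   # hypothetical monthly runoff depths
        sim = np.array([10.2, 28.0, 25.3, 9.0, 13.1, 36.8])
        print("NSE =", round(nash_sutcliffe(obs, sim), 3), " R^2 =", round(r_squared(obs, sim), 3))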

  13. The QCD/SM working group: Summary report

    Energy Technology Data Exchange (ETDEWEB)

    W. Giele et al.

    2004-01-12

    Quantum Chromo-Dynamics (QCD), and more generally the physics of the Standard Model (SM), enter in many ways in high energy processes at TeV colliders, and especially at hadron colliders (the Tevatron at Fermilab and the forthcoming LHC at CERN). First of all, at hadron colliders, QCD controls the parton luminosity, which governs the production rates of any particle or system with large invariant mass and/or large transverse momentum. Accurate predictions for any signal of possible "New Physics" sought at hadron colliders, as well as for the corresponding backgrounds, require an improvement in the control of uncertainties on the determination of PDFs and of the propagation of these uncertainties in the predictions. Furthermore, to fully exploit these new types of PDFs with uncertainties, uniform tools (computer interfaces, standardization of the PDF evolution codes used by the various groups fitting PDFs) need to be proposed and developed. The dynamics of colour also affects, both in normalization and shape, various observables of the signals of any possible "New Physics" sought at the TeV scale, such as, e.g., the production rate or the distributions in transverse momentum of the Higgs boson. Last, but not least, QCD governs many backgrounds to the searches for this "New Physics". Large and important QCD corrections may come from extra hard parton emission (and the corresponding virtual corrections), involving multi-leg and/or multi-loop amplitudes. This requires complex higher-order calculations, and new methods have to be designed to compute the required multi-leg and/or multi-loop corrections in a tractable form. In the case of semi-inclusive observables, logarithmically enhanced contributions coming from multiple soft and collinear gluon emission require sophisticated QCD resummation techniques. Resummation is a catch-all name for efforts to extend the predictive power of QCD by summing the large

  14. Numerical Weather Prediction (NWP) and hybrid ARMA/ANN model to predict global radiation

    CERN Document Server

    Voyant, Cyril; Paoli, Christophe; Nivet, Marie Laure

    2012-01-01

    We propose in this paper an original technique to predict global radiation using a hybrid ARMA/ANN model and data issued from a numerical weather prediction model (ALADIN). We particularly look at the Multi-Layer Perceptron. After optimizing our architecture with ALADIN and endogenous data previously made stationary, and using an innovative pre-input layer selection method, we combined it with an ARMA model through a rule based on the analysis of hourly data series. This model has been used to forecast the hourly global radiation for five places in the Mediterranean area. Our technique outperforms classical models for all the places. The nRMSE for our hybrid ANN/ARMA model is 14.9%, compared to 26.2% for the naïve persistence predictor. Note that in the stand-alone ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of the forecaster outputs, a complementary study concerning the confidence interval of each prediction is proposed.
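
    One common way to combine the two model classes (a sketch, not the authors' exact architecture or their pre-input selection method) is to fit an ARMA-type model and then train an MLP on its residuals, summing the two forecasts; the hourly series below is synthetic.

        # Hybrid ARMA + ANN sketch: fit an ARMA-type model to the stationary series,
        # train an MLP on its residuals using lagged residuals as inputs, and add the
        # two one-step forecasts. This is a generic scheme, not the paper's model.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        n = 600
        t = np.arange(n)
        y = 0.6 * np.sin(2 * np.pi * t / 24) + 0.2 * rng.normal(size=n)   # toy hourly clear-sky index

        arma = ARIMA(y, order=(2, 0, 1)).fit()
        resid = arma.resid

        lags = 6
        X = np.column_stack([resid[i:n - lags + i] for i in range(lags)])  # lagged residuals
        target = resid[lags:]
        mlp = MLPRegressor(hidden_layer_sizes=(12,), max_iter=2000, random_state=0).fit(X, target)

        arma_next = arma.forecast(steps=1)[0]
        mlp_next = mlp.predict(resid[-lags:].reshape(1, -1))[0]
        print("hybrid one-step forecast:", arma_next + mlp_next)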

  15. Predicting human walking gaits with a simple planar model.

    Science.gov (United States)

    Martin, Anne E; Schmiedeler, James P

    2014-04-11

    Models of human walking with moderate complexity have the potential to accurately capture both joint kinematics and whole body energetics, thereby offering more simultaneous information than very simple models and less computational cost than very complex models. This work examines four- and six-link planar biped models with knees and rigid circular feet. The two differ in that the six-link model includes ankle joints. Stable periodic walking gaits are generated for both models using a hybrid zero dynamics-based control approach. To establish a baseline of how well the models can approximate normal human walking, gaits were optimized to match experimental human walking data, ranging in speed from very slow to very fast. The six-link model well matched the experimental step length, speed, and mean absolute power, while the four-link model did not, indicating that ankle work is a critical element in human walking models of this type. Beyond simply matching human data, the six-link model can be used in an optimization framework to predict normal human walking using a torque-squared objective function. The model well predicted experimental step length, joint motions, and mean absolute power over the full range of speeds.

  16. Towards predictive food process models: A protocol for parameter estimation.

    Science.gov (United States)

    Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E

    2016-05-31

    Mathematical models, in particular, physics-based models, are essential tools to food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving desired model predictive properties. This work takes a new look into the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
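
    The estimation step itself can be illustrated with a first-order thermal inactivation model fitted by nonlinear least squares; the identifiability analysis and multistart strategy of the protocol are not reproduced, and the data are invented.

        # Minimal parameter-estimation step: fit a first-order microbial inactivation
        # model N(t) = N0 * exp(-k * t) to noisy measurements with scipy.
        import numpy as np
        from scipy.optimize import least_squares

        def model(t, log_N0, k):
            return log_N0 - k * t / np.log(10)        # log10 survivor curve

        t_data = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])            # minutes
        logN_data = np.array([6.0, 5.1, 4.3, 3.2, 2.5, 1.6])          # hypothetical log10 CFU/g

        def residuals(theta):
            return model(t_data, *theta) - logN_data

        fit = least_squares(residuals, x0=[6.0, 0.5])
        log_N0_hat, k_hat = fit.x
        print(f"estimated log10 N0 = {log_N0_hat:.2f}, inactivation rate k = {k_hat:.3f} 1/min")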

  17. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII), are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory preprocessing [Dau et al., 1997. J. Acoust. Soc. Am. 102, 2892-2905] with a simple central stage that describes the similarity of the test signal with the corresponding reference signal at a level of the internal representation of the signals. The model was compared with previous approaches, whereby a speech in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...

  18. Kondo Breakdown and Quantum Oscillations in SmB_{6}.

    Science.gov (United States)

    Erten, Onur; Ghaemi, Pouyan; Coleman, Piers

    2016-01-29

    Recent quantum oscillation experiments on SmB_{6} pose a paradox, for while the angular dependence of the oscillation frequencies suggest a 3D bulk Fermi surface, SmB_{6} remains robustly insulating to very high magnetic fields. Moreover, a sudden low temperature upturn in the amplitude of the oscillations raises the possibility of quantum criticality. Here we discuss recently proposed mechanisms for this effect, contrasting bulk and surface scenarios. We argue that topological surface states permit us to reconcile the various data with bulk transport and spectroscopy measurements, interpreting the low temperature upturn in the quantum oscillation amplitudes as a result of surface Kondo breakdown and the high frequency oscillations as large topologically protected orbits around the X point. We discuss various predictions that can be used to test this theory.

  19. Model predictive control for a thermostatic controlled system

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Rasmussen, Henrik; Stoustrup, Jakob

    2013-01-01

    This paper proposes a model predictive control scheme to provide temperature set-points to thermostatic controlled cooling units in refrigeration systems. The control problem is formulated as a convex programming problem to minimize the overall operating cost of the system.

  20. Physical/chemical modeling for photovoltaic module life prediction

    Science.gov (United States)

    Moacanin, J.; Carroll, W. F.; Gupta, A.

    1979-01-01

    The paper presents a generalized methodology for identification and evaluation of potential degradation and failure of terrestrial photovoltaic encapsulation. Failure progression modeling and an interaction matrix are utilized to complement the conventional approach to failure degradation mode identification. Comparison of the predicted performance based on these models can produce: (1) constraints on system or component design, materials or operating conditions, (2) qualification (predicted satisfactory function), and (3) uncertainty. The approach has been applied to an investigation of an unexpected delamination failure; it is being used to evaluate thermomechanical interactions in photovoltaic modules and to study corrosion of contacts and interconnects.

  1. A neural network model for olfactory glomerular activity prediction

    Science.gov (United States)

    Soh, Zu; Tsuji, Toshio; Takiguchi, Noboru; Ohtake, Hisao

    2012-12-01

    Recently, the importance of odors and methods for their evaluation have seen increased emphasis, especially in the fragrance and food industries. Although odors can be characterized by their odorant components, their chemical information cannot be directly related to the flavors we perceive. Biological research has revealed that neuronal activity related to glomeruli (which form part of the olfactory system) is closely connected to odor qualities. Here we report on a neural network model of the olfactory system that can predict glomerular activity from odorant molecule structures. We also report on the learning and prediction ability of the proposed model.

  2. Ensemble ecosystem modeling for predicting ecosystem response to predator reintroduction.

    Science.gov (United States)

    Baker, Christopher M; Gordon, Ascelin; Bode, Michael

    2017-04-01

    Introducing a new or extirpated species to an ecosystem is risky, and managers need quantitative methods that can predict the consequences for the recipient ecosystem. Proponents of keystone predator reintroductions commonly argue that the presence of the predator will restore ecosystem function, but this has not always been the case, and mathematical modeling has an important role to play in predicting how reintroductions will likely play out. We devised an ensemble modeling method that integrates species interaction networks and dynamic community simulations and used it to describe the range of plausible consequences of 2 keystone-predator reintroductions: wolves (Canis lupus) to Yellowstone National Park and dingoes (Canis dingo) to a national park in Australia. Although previous methods for predicting ecosystem responses to such interventions focused on predicting changes around a given equilibrium, we used Lotka-Volterra equations to predict changing abundances through time. We applied our method to interaction networks for wolves in Yellowstone National Park and for dingoes in Australia. Our model replicated the observed dynamics in Yellowstone National Park and produced a larger range of potential outcomes for the dingo network. However, we also found that changes in small vertebrates or invertebrates gave a good indication about the potential future state of the system. Our method allowed us to predict when the systems were far from equilibrium. Our results showed that the method can also be used to predict which species may increase or decrease following a reintroduction and can identify species that are important to monitor (i.e., species whose changes in abundance give extra insight into broad changes in the system). Ensemble ecosystem modeling can also be applied to assess the ecosystem-wide implications of other types of interventions including assisted migration, biocontrol, and invasive species eradication. © 2016 Society for Conservation Biology.
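
    A minimal sketch of the ensemble idea described above, under invented assumptions: a three-species generalized Lotka-Volterra system is simulated for many randomly drawn interaction strengths and the spread of outcomes is summarized. The network, growth rates and parameter ranges are placeholders, not the calibrated Yellowstone or dingo networks of the study.

        # Ensemble of generalized Lotka-Volterra runs: dx_i/dt = x_i*(r_i + sum_j A_ij * x_j).
        # The 3-species sign structure and parameter ranges below are invented for illustration.
        import numpy as np
        from scipy.integrate import solve_ivp

        rng = np.random.default_rng(0)
        r = np.array([0.5, 0.3, -0.2])              # intrinsic growth rates (predator last)
        signs = np.array([[-1, -1, -1],             # who benefits from / is harmed by whom
                          [0, -1, -1],
                          [1, 1, 0]])

        def glv(t, x, A, r):
            return x * (r + A @ x)

        finals = []
        for _ in range(200):                        # one plausible parameterization per member
            A = signs * rng.uniform(0.01, 0.3, size=(3, 3))
            sol = solve_ivp(glv, (0.0, 100.0), [1.0, 1.0, 0.1], args=(A, r), rtol=1e-8)
            finals.append(sol.y[:, -1])

        finals = np.array(finals)
        print("median final abundances:", np.median(finals, axis=0))
        print("5th-95th percentile, species 1:", np.percentile(finals[:, 0], [5, 95]))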

  3. Boolean network model predicts knockout mutant phenotypes of fission yeast.

    Directory of Open Access Journals (Sweden)

    Maria I Davidich

    Full Text Available Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequence of cell cycle activation patterns in yeasts S. pombe and S. cerevisiae are faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology as, for example, the cell cycle dynamics under various knockout mutants instead of the wild type dynamics, only. Here we show that a Boolean network model for the cell cycle control network of yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in the focus.
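
    To make the idea concrete, the toy sketch below (an editorial illustration, not the published S. pombe wiring) updates a small synchronous Boolean network and simulates a knockout by clamping one node to 0; the mutant "phenotype" is read off from the state the dynamics settle into.

        # Toy synchronous Boolean network with a knockout; rules and node names are invented.
        from typing import Callable, Dict, Optional

        State = Dict[str, int]

        rules: Dict[str, Callable[[State], int]] = {
            "Start": lambda s: 0,                                   # transient start signal
            "A":     lambda s: int(s["Start"] or (s["A"] and not s["C"])),
            "B":     lambda s: int(s["A"] and not s["C"]),
            "C":     lambda s: int(s["B"]),
        }

        def step(state: State, knockout: Optional[str] = None) -> State:
            nxt = {node: rule(state) for node, rule in rules.items()}
            if knockout is not None:
                nxt[knockout] = 0                                   # clamp the deleted gene to OFF
            return nxt

        def run(knockout: Optional[str] = None, steps: int = 20) -> State:
            state: State = {"Start": 1, "A": 0, "B": 0, "C": 0}
            for _ in range(steps):
                state = step(state, knockout)
            return state

        print("wild type fixed point:", run())
        print("B-knockout fixed point:", run(knockout="B"))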

  4. Boolean Network Model Predicts Knockout Mutant Phenotypes of Fission Yeast

    Science.gov (United States)

    Davidich, Maria I.; Bornholdt, Stefan

    2013-01-01

    Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequence of cell cycle activation patterns in yeasts S. pombe and S. cerevisiae are faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology as, for example, the cell cycle dynamics under various knockout mutants instead of the wild type dynamics, only. Here we show that a Boolean network model for the cell cycle control network of yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in the focus. PMID:24069138

  5. Lepton Flavor Violation in Predictive SUSY-GUT Models

    Energy Technology Data Exchange (ETDEWEB)

    Albright, Carl H.; /Northern Illinois U. /Fermilab; Chen, Mu-Chun; /UC, Irvine

    2008-02-01

    There have been many theoretical models constructed which aim to explain the neutrino masses and mixing patterns. While many of the models will be eliminated once more accurate determinations of the mixing parameters, especially sin²2θ₁₃, are obtained, charged lepton flavor violation (LFV) experiments are able to differentiate even further among the models. In this paper, they investigate various rare LFV processes, such as ℓᵢ → ℓⱼ + γ and μ–e conversion, in five predictive SUSY SO(10) models and their allowed soft SUSY breaking parameter space in the constrained minimal SUSY standard model (CMSSM). Utilizing the WMAP dark matter constraints, they obtain lower bounds on the branching ratios of these rare processes and find that at least three of the five models they consider give rise to predictions for μ → e + γ that will be tested by the MEG collaboration at PSI. In addition, the next generation μ–e conversion experiment has sensitivity to the predictions of all five models, making it an even more robust way to test these models. While generic studies have emphasized the dependence of the branching ratios of these rare processes on the reactor neutrino angle, θ₁₃, and the mass of the heaviest right-handed neutrino, M₃, they find that a very massive M₃ is more significant than a large θ₁₃ in leading to branching ratios near the present upper limits.

  6. Evaluation of Spatial Agreement of Distinct Landslide Prediction Models

    Science.gov (United States)

    Sterlacchini, Simone; Bordogna, Gloria; Frigerio, Ivan

    2013-04-01

    The aim of the study was to assess the degree of spatial agreement among the predicted patterns of several coherent landslide prediction maps with almost identical success and prediction rate curves. If two or more models have similar performance, the choice of the best one is not a trivial operation and cannot be based on success and prediction rate curves only. In fact, it may happen that two or more prediction maps with similar accuracy and predictive power do not have the same degree of agreement in terms of spatially predicted patterns. The selected study area is the high Valtellina valley, in northern Italy, covering a surface of about 450 km², where mapping of historical landslides is available. In order to assess landslide susceptibility, we applied the Weights of Evidence (WofE) modeling technique implemented by the USGS by means of the ARC-SDM tool. WofE efficiently investigates the spatial relationships among past events and multiple predisposing factors, providing useful information to identify the most probable locations of future landslide occurrences. We carried out 13 distinct experiments by changing the number of morphometric and geo-environmental explanatory variables in each experiment while keeping the same training set, thus generating distinct models of landslide prediction that compute the probability of landslide occurrence in each pixel. Expert knowledge and previous results from indirect statistically-based methods suggested slope, land use, and geology as the best "driving controlling factors". The Success Rate Curve (SRC) was used to estimate how well the results of each model fit the occurrence of landslides used for training the models. The Prediction Rate Curve (PRC) was used to estimate how well each model predicts the occurrence of landslides in the validation set. We found that the performances were very similar for the different models. Also the dendrogram of Cohen's kappa statistic and Principal Component Analysis (PCA) were

  7. Hybrid multiscale modeling and prediction of cancer cell behavior.

    Science.gov (United States)

    Zangooei, Mohammad Hossein; Habibi, Jafar

    2017-01-01

    Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means, so hybrid modeling techniques have been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development using agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this is the first hybrid vascular multiscale model of cancer cell behavior that has the capability to predict cell phenotypes individually from a self-generated dataset.

  8. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    Science.gov (United States)

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future

  9. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    Science.gov (United States)

    Edeling, W. N.; Cinnella, P.; Dwight, R. P.

    2014-10-01

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
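
    The following toy numbers (placeholders, not the boundary-layer results of the study) sketch the final averaging step: per-model, per-scenario posterior predictions of a QoI are mixed with model probabilities and scenario weights, and the mixture mean and variance give the stochastic estimate.

        # Minimal numerical sketch of the Bayesian Model-Scenario Averaging step.
        # All values are placeholders, not the paper's calibrated results.
        import numpy as np

        # posterior predictive mean/std of the QoI for 3 closure models x 2 calibration scenarios
        mean = np.array([[1.05, 1.10],
                         [0.98, 1.02],
                         [1.20, 1.15]])
        std = np.array([[0.05, 0.06],
                        [0.04, 0.05],
                        [0.08, 0.07]])

        w_model = np.array([0.4, 0.4, 0.2])        # posterior model probabilities
        w_scen = np.array([0.7, 0.3])              # scenario weights from the "sensor"

        w = np.outer(w_model, w_scen)              # joint weights, sum to 1
        mu = np.sum(w * mean)                      # mixture mean
        var = np.sum(w * (std**2 + mean**2)) - mu**2   # law of total variance for a mixture
        print(f"BMSA estimate: {mu:.3f} +/- {np.sqrt(var):.3f}")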

  10. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Edeling, W.N., E-mail: W.N.Edeling@tudelft.nl [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hospital, 75013 Paris (France); Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands); Cinnella, P., E-mail: P.Cinnella@ensam.eu [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hospital, 75013 Paris (France); Dwight, R.P., E-mail: R.P.Dwight@tudelft.nl [Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands)

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  11. Neural Network Based Model for Predicting Housing Market Performance

    Institute of Scientific and Technical Information of China (English)

    Ahmed Khalafallah

    2008-01-01

    The United States real estate market is currently facing its worst hit in two decades due to the slowdown of housing sales. The most affected by this decline are real estate investors and home developers who are currently struggling to break-even financially on their investments. For these investors, it is of utmost importance to evaluate the current status of the market and predict its performance over the short-term in order to make appropriate financial decisions. This paper presents the development of artificial neural network based models to support real estate investors and home developers in this critical task. The paper describes the decision variables, design methodology, and the implementation of these models. The models utilize historical market performance data sets to train the artificial neural networks in order to predict unforeseen future performances. An application example is analyzed to demonstrate the model capabilities in analyzing and predicting the market performance. The model testing and validation showed that the error in prediction is in the range between -2% and +2%.

  12. Neural Network Modeling to Predict Shelf Life of Greenhouse Lettuce

    Directory of Open Access Journals (Sweden)

    Wei-Chin Lin

    2009-04-01

    Full Text Available Greenhouse-grown butter lettuce (Lactuca sativa L.) can potentially be stored for 21 days at constant 0°C. When storage temperature was increased to 5°C or 10°C, shelf life was shortened to 14 or 10 days, respectively, in our previous observations. Also, commercial shelf life of 7 to 10 days is common, due to postharvest temperature fluctuations. The objective of this study was to establish neural network (NN) models to predict the remaining shelf life (RSL) under fluctuating postharvest temperatures. A box of 12–24 lettuce heads constituted a sample unit. The end of the shelf life of each head was determined when it showed initial signs of decay or yellowing. Air temperatures inside a shipping box were recorded. Daily average temperatures in storage and averaged shelf life of each box were used as inputs, and the RSL was modeled as an output. An R2 of 0.57 could be observed when a simple NN structure was employed. Since the "future" (or remaining) storage temperatures were unavailable at the time of making a prediction, a second NN model was introduced to accommodate a range of future temperatures and associated shelf lives. Using such 2-stage NN models, an R2 of 0.61 could be achieved for predicting RSL. This study indicated that NN modeling has potential for cold chain quality control and shelf life prediction.

  13. A COMPACT MODEL FOR PREDICTING ROAD TRAFFIC NOISE

    Directory of Open Access Journals (Sweden)

    R. Golmohammadi, M. Abbaspour, P. Nassiri, H. Mahjub

    2009-07-01

    Full Text Available Noise is one of the most important sources of pollution in the metropolitan areas. The recognition of road traffic noise as one of the main sources of environmental pollution has led to the development of models that enable us to predict noise level from fundamental variables. Traffic noise prediction models are required as aids in the design of roads and sometimes in the assessment of existing, or envisaged changes in, traffic noise conditions. The purpose of this study was to design a road traffic noise prediction model from traffic variables and conditions of transportation in Iran. This paper is the result of research conducted in the city of Hamadan with the ultimate objective of setting up a traffic noise model based on the traffic conditions of Iranian cities. Noise levels and other variables were measured in 282 samples to develop a statistical regression model based on the A-weighted equivalent noise level for Iranian road conditions. The results revealed that the average LAeq in all stations was 69.04 ± 4.25 dB(A), the average speed of vehicles was 44.57 ± 11.46 km/h and the average traffic load was 1231.9 ± 910.2 V/h. The developed model has seven explanatory entrance variables in order to achieve a high regression coefficient (R2 = 0.901). Comparing means of predicted and measured equivalent sound pressure levels (LAeq) showed small differences of less than -0.42 dB(A) and -0.77 dB(A) for Tehran and Hamadan cities, respectively. The suggested road traffic noise model can be effectively used as a decision support tool for predicting the equivalent sound pressure level index in the cities of Iran.
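
    As a generic illustration of this kind of regression model (synthetic data and made-up coefficients, not the published seven-variable model), the sketch below fits LAeq to a few traffic variables by ordinary least squares and reports R2.

        # Ordinary least-squares fit of LAeq to traffic variables on synthetic data.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 282
        traffic = rng.uniform(200, 3000, n)        # vehicles per hour
        speed = rng.uniform(20, 80, n)             # km/h
        heavy = rng.uniform(0.0, 0.3, n)           # fraction of heavy vehicles
        laeq = 55 + 4.0 * np.log10(traffic) + 0.05 * speed + 10 * heavy + rng.normal(0, 1.5, n)

        X = np.column_stack([np.ones(n), np.log10(traffic), speed, heavy])
        beta, *_ = np.linalg.lstsq(X, laeq, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((laeq - pred) ** 2) / np.sum((laeq - laeq.mean()) ** 2)
        print("coefficients:", np.round(beta, 3), " R2 =", round(r2, 3))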

  14. Comparing Sediment Yield Predictions from Different Hydrologic Modeling Schemes

    Science.gov (United States)

    Dahl, T. A.; Kendall, A. D.; Hyndman, D. W.

    2015-12-01

    Sediment yield, or the delivery of sediment from the landscape to a river, is a difficult process to accurately model. It is primarily a function of hydrology and climate, but influenced by landcover and the underlying soils. These additional factors make it much more difficult to accurately model than water flow alone. It is not intuitive what impact different hydrologic modeling schemes may have on the prediction of sediment yield. Here, two implementations of the Modified Universal Soil Loss Equation (MUSLE) are compared to examine the effects of hydrologic model choice. Both the Soil and Water Assessment Tool (SWAT) and the Landscape Hydrology Model (LHM) utilize the MUSLE for calculating sediment yield. SWAT is a lumped parameter hydrologic model developed by the USDA, which is commonly used for predicting sediment yield. LHM is a fully distributed hydrologic model developed primarily for integrated surface and groundwater studies at the watershed to regional scale. SWAT and LHM models were developed and tested for two large, adjacent watersheds in the Great Lakes region; the Maumee River and the St. Joseph River. The models were run using a variety of single model and ensemble downscaled climate change scenarios from the Coupled Model Intercomparison Project 5 (CMIP5). The initial results of this comparison are discussed here.
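
    For reference, the sketch below implements the MUSLE in the multiplicative form commonly quoted for SWAT (runoff volume and peak rate replacing the USLE rainfall-erosivity factor); the input values are placeholders, and in a comparison like the one above the runoff terms would come from each hydrologic model.

        # MUSLE sediment yield in the form commonly quoted for SWAT (Williams, 1975).
        # Factor values (K, C, P, LS, CFRG) are placeholders for illustration only.
        def musle_sediment_yield(q_surf_mm, q_peak_m3s, area_ha,
                                 k_usle, c_usle, p_usle, ls_usle, cfrg=1.0):
            """Sediment yield (metric tons) for one runoff event on one hydrologic unit."""
            return (11.8 * (q_surf_mm * q_peak_m3s * area_ha) ** 0.56
                    * k_usle * c_usle * p_usle * ls_usle * cfrg)

        # Example event: 25 mm of surface runoff, 2.5 m3/s peak flow on a 150 ha unit
        print(musle_sediment_yield(25.0, 2.5, 150.0,
                                   k_usle=0.3, c_usle=0.2, p_usle=1.0, ls_usle=1.2))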

  15. 2, 84, 30, 993, 560, 15456, 11962, 261485, . . .: higher dimension operators in the SM EFT

    National Research Council Canada - National Science Library

    Henning, Brian; Lu, Xiaochuan; Melia, Tom; Murayama, Hitoshi

    2017-01-01

    .... In the present work, we use this result to study the standard model effective field theory (SM EFT), determining the content and number of higher dimension operators up to dimension 12, for an arbitrary number of fermion generations...

  16. Predicting Category Intuitiveness with the Rational Model, the Simplicity Model, and the Generalized Context Model

    Science.gov (United States)

    Pothos, Emmanuel M.; Bailey, Todd M.

    2009-01-01

    Naive observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported…

  17. Predictive modeling of respiratory tumor motion for real-time prediction of baseline shifts

    Science.gov (United States)

    Balasubramanian, A.; Shamsuddin, R.; Prabhakaran, B.; Sawant, A.

    2017-03-01

    Baseline shifts in respiratory patterns can result in significant spatiotemporal changes in patient anatomy (compared to that captured during simulation), in turn, causing geometric and dosimetric errors in the administration of thoracic and abdominal radiotherapy. We propose predictive modeling of the tumor motion trajectories for predicting a baseline shift ahead of its occurrence. The key idea is to use the features of the tumor motion trajectory over a 1 min window, and predict the occurrence of a baseline shift in the 5 s that immediately follow (lookahead window). In this study, we explored a preliminary trend-based analysis with multi-class annotations as well as a more focused binary classification analysis. In both analyses, a number of different inter-fraction and intra-fraction training strategies were studied, both offline as well as online, along with data sufficiency and skew compensation for class imbalances. The performance of different training strategies was compared across multiple machine learning classification algorithms, including nearest neighbor, Naïve Bayes, linear discriminant and ensemble Adaboost. The prediction performance is evaluated using metrics such as accuracy, precision, recall and the area under the receiver operating characteristic (ROC) curve (AUC). The key results of the trend-based analysis indicate that (i) intra-fraction training strategies achieve the highest prediction accuracies (90.5-91.4%); (ii) the predictive modeling yields the lowest accuracies (50-60%) when the training data does not include any information from the test patient; (iii) the prediction latencies are as low as a few hundred milliseconds, and thus conducive for real-time prediction. The binary classification performance is promising, indicated by high AUCs (0.96-0.98). It also confirms the utility of prior data from previous patients, and also the necessity of training the classifier on some initial data from the new patient for reasonable
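
    A generic sketch of the binary task described above, under invented assumptions: window-level features stand in for the 1 min trajectory summaries, labels mark whether a baseline shift follows, and an AdaBoost classifier is scored by AUC. The data, features and labels are synthetic, not the patient traces or exact feature set of the study.

        # Synthetic lookahead baseline-shift classification scored by AUC.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        n_windows = 500
        drift = rng.normal(0.0, 1.0, n_windows)            # hidden mean drift of each window
        labels = (drift > 1.0).astype(int)                 # "baseline shift ahead" if drift is large

        # Features per 1-min window: noisy mean, spread and slope of the trajectory
        feats = np.column_stack([
            drift + rng.normal(0, 0.2, n_windows),         # window mean
            rng.uniform(0.5, 2.0, n_windows),              # window std
            0.5 * drift + rng.normal(0, 0.3, n_windows),   # fitted slope
        ])

        X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, test_size=0.3, random_state=0)
        clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"lookahead baseline-shift AUC: {auc:.2f}")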

  18. The development of U. S. soil erosion prediction and modeling

    Directory of Open Access Journals (Sweden)

    John M. Laflen

    2013-09-01

    Full Text Available Soil erosion prediction technology began over 70 years ago when Austin Zingg published a relationship between soil erosion (by water) and land slope and length, followed shortly by a relationship by Dwight Smith that expanded this equation to include conservation practices. But it was nearly 20 years before this work's expansion resulted in the Universal Soil Loss Equation (USLE), perhaps the foremost achievement in soil erosion prediction in the last century. The USLE has increased in application and complexity, and its usefulness and limitations have led to the development of additional technologies and new science in soil erosion research and prediction. Main among these new technologies is the Water Erosion Prediction Project (WEPP) model, which has helped to overcome many of the shortcomings of the USLE, and increased the scale over which erosion by water can be predicted. Areas of application of erosion prediction include almost all land types: urban, rural, cropland, forests, rangeland, and construction sites. Specialty applications of WEPP include prediction of radioactive material movement with soils at a superfund cleanup site, and near real-time daily estimation of soil erosion for the entire state of Iowa.
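
    The USLE itself is a simple multiplicative relation, A = R · K · LS · C · P; the helper below evaluates it with placeholder factor values (real values come from site-specific rainfall, soil, topography and management tables).

        # Universal Soil Loss Equation, A = R * K * LS * C * P; factor values are placeholders.
        def usle_soil_loss(r_rainfall, k_erodibility, ls_topographic, c_cover, p_practice):
            """Average annual soil loss, in the units implied by R and K (e.g. t/ha/yr)."""
            return r_rainfall * k_erodibility * ls_topographic * c_cover * p_practice

        print(usle_soil_loss(r_rainfall=1200.0, k_erodibility=0.032,
                             ls_topographic=1.5, c_cover=0.12, p_practice=0.8))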

  19. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    Science.gov (United States)

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.

  20. Markov Model Predicts Changes in STH Prevalence during Control Activities Even with a Reduced Amount of Baseline Information.

    Directory of Open Access Journals (Sweden)

    Antonio Montresor

    2016-04-01

    Full Text Available Estimating the reduction in levels of infection during implementation of soil-transmitted helminth (STH) control programmes is important to measure their performance and to plan interventions. Markov modelling techniques have been used with some success to predict changes in STH prevalence following treatment in Viet Nam. The model is stationary and, to date, the prediction has been obtained by calculating the transition probabilities between the different classes of intensity following the first year of drug distribution and assuming that these remain constant in subsequent years. However, to run this model, longitudinal parasitological data (including intensity of infection) are required for two consecutive years from at least 200 individuals. Since this amount of data is not often available from STH control programmes, the possible application of the model in control programmes is limited. The present study aimed to address this issue by adapting the existing Markov model to allow its application when a more limited amount of data is available and to test the predictive capacities of these simplified models. We analysed data from field studies conducted with different combinations of three parameters: (i) the frequency of drug administration; (ii) the drug distributed; and (iii) the target treatment population (entire population or school-aged children only). This analysis allowed us to define 10 sets of standard transition probabilities to be used to predict prevalence changes when only baseline data are available (simplified model 1). We also formulated three equations (one for each STH parasite) to calculate the predicted prevalence of the different classes of intensity from the total prevalence. These equations allowed us to design a simplified model (SM2) to obtain predictions when the classes of intensity at baseline were not known. To evaluate the performance of the simplified models, we collected data from the scientific literature on changes in
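
    A minimal numerical sketch of the stationary Markov step described above: a vector of population proportions by intensity class is propagated through a fixed transition matrix, one treatment round at a time. The matrix and baseline distribution below are invented and are not one of the 10 published sets of standard transition probabilities.

        # Propagate prevalence-by-intensity classes through a stationary transition matrix.
        import numpy as np

        # rows: state before a treatment round, columns: state after (rows sum to 1)
        P = np.array([
            [0.90, 0.08, 0.02],   # uninfected
            [0.55, 0.35, 0.10],   # light intensity
            [0.25, 0.45, 0.30],   # moderate/heavy intensity
        ])

        state = np.array([0.40, 0.40, 0.20])     # baseline distribution of the population

        for year in range(1, 6):
            state = state @ P
            print(f"year {year}: prevalence = {1 - state[0]:.2%}, "
                  f"moderate/heavy = {state[2]:.2%}")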

  1. Optimal model-free prediction from multivariate time series.

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.

  2. Risk models to predict hypertension: a systematic review.

    Directory of Open Access Journals (Sweden)

    Justin B Echouffo-Tcheugui

    Full Text Available BACKGROUND: As well as being a risk factor for cardiovascular disease, hypertension is also a health condition in its own right. Risk prediction models may be of value in identifying those individuals at risk of developing hypertension who are likely to benefit most from interventions. METHODS AND FINDINGS: To synthesize existing evidence on the performance of these models, we searched MEDLINE and EMBASE; examined bibliographies of retrieved articles; contacted experts in the field; and searched our own files. Dual review of identified studies was conducted. Included studies had to report on the development, validation, or impact analysis of a hypertension risk prediction model. For each publication, information was extracted on study design and characteristics, predictors, model discrimination, calibration and reclassification ability, validation and impact analysis. Eleven studies reporting on 15 different hypertension prediction risk models were identified. Age, sex, body mass index, diabetes status, and blood pressure variables were the most common predictor variables included in models. Most risk models had acceptable-to-good discriminatory ability (C-statistic > 0.70) in the derivation sample. Calibration was less commonly assessed, but overall acceptable. Two hypertension risk models, the Framingham and Hopkins, have been externally validated, displaying acceptable-to-good discrimination, with C-statistics ranging from 0.71 to 0.81. Lack of individual-level data precluded analyses of the risk models in subgroups. CONCLUSIONS: The discrimination ability of existing hypertension risk prediction tools is acceptable, but the impact of using these tools on prescriptions and outcomes of hypertension prevention is unclear.

  3. A prediction model for ocular damage - Experimental validation.

    Science.gov (United States)

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a raytracing model with a thermodynamical model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW with a spot size of 1.9 mm. Also, the measurements were taken with two different sensing systems, an infrared camera and a fibre optic probe placed within the tissue. The temperatures were measured for up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD model. To our best knowledge, this is the first model which is validated for both short-term and long-term irradiations in terms of temperature and thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. Copyright © 2015 Elsevier Ltd. All rights reserved.
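
    For orientation, the sketch below evaluates the Arrhenius damage integral, Ω(t) = ∫ A·exp(−Ea/(R·T(τ))) dτ, for an illustrative temperature history; the frequency factor A, activation energy Ea and temperature curve are placeholders, not the values used in the PMOD.

        # Arrhenius thermal-damage integral with placeholder coefficients; damage is
        # conventionally assumed once Omega reaches 1.
        import numpy as np

        R_GAS = 8.314          # J/(mol K)
        A_FREQ = 3.1e99        # 1/s    (placeholder frequency factor)
        E_ACT = 6.28e5         # J/mol  (placeholder activation energy)

        t = np.linspace(0.0, 60.0, 601)                    # s, exposure time
        T = 310.0 + 15.0 * (1.0 - np.exp(-t / 10.0))       # K, illustrative heating curve

        rate = A_FREQ * np.exp(-E_ACT / (R_GAS * T))       # damage rate at each instant
        omega = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))   # trapezoidal integral
        verdict = "damage threshold reached" if omega >= 1.0 else "below damage threshold"
        print(f"Omega after {t[-1]:.0f} s: {omega:.3g} ({verdict})")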

  4. Cs3Sm7Se12

    Directory of Open Access Journals (Sweden)

    Christof Schneck

    2012-01-01

    Full Text Available The title compound, tricaesium heptasamarium(III) dodecaselenide, is setting a new starting point for realization of the channel structure of the Cs₃M₇Se₁₂ series, now with M = Sm, Gd–Er. This Cs₃Y₇Se₁₂-type arrangement is structurally based on the Z-type sesquiselenides M₂Se₃ adopting the Sc₂S₃ structure. Thus, the structural set-up of Cs₃Sm₇Se₁₂ consists of edge- and vertex-connected [SmSe₆]⁹⁻ octahedra [dØ(Sm³⁺–Se²⁻) = 2.931 Å], forming a rock-salt-related network [Sm₇Se₁₂]³⁻ with channels along [001] that are apt to take up monovalent cations (here Cs⁺) with coordination numbers of 7 + 1 for one and of 6 for the second cation. The latter cation has a trigonal–prismatic coordination and shows half-occupancy, resulting in an impossibly short distance [2.394 (4) Å] between symmetrically coupled Cs⁺ cations of the same kind. While one Sm atom occupies Wyckoff position 2b with site symmetry ..2/m, all other 11 crystallographically different atoms (namely 2 × Cs, 3 × Sm and 6 × Se) are located at Wyckoff positions 4g with site symmetry ..m.

  5. Cs(3)Sm(7)Se(12).

    Science.gov (United States)

    Schneck, Christof; Elbe, Andreas; Schurz, Christian M; Schleid, Thomas

    2012-01-01

    The title compound, tricaesium heptasamarium(III) dodecaselenide, is setting a new starting point for realization of the channel structure of the Cs(3)M(7)Se(12) series, now with M = Sm, Gd-Er. This Cs(3)Y(7)Se(12)-type arrangement is structurally based on the Z-type sesquiselenides M(2)Se(3) adopting the Sc(2)S(3) structure. Thus, the structural set-up of Cs(3)Sm(7)Se(12) consists of edge- and vertex-connected [SmSe(6)](9-) octahedra [d(Ø)(Sm(3+) - Se(2-)) = 2.931 Å], forming a rock-salt-related network [Sm(7)Se(12)](3-) with channels along [001] that are apt to take up monovalent cations (here Cs(+)) with coordination numbers of 7 + 1 for one and of 6 for the second cation. The latter cation has a trigonal-prismatic coordination and shows half-occupancy, resulting in an impossibly short distance [2.394 (4) Å] between symmetrically coupled Cs(+) cations of the same kind. While one Sm atom occupies Wyckoff position 2b with site symmetry ..2/m, all other 11 crystallographically different atoms (namely 2 × Cs, 3 × Sm and 6 × Se) are located at Wyckoff positions 4g with site symmetry ..m.

  6. MULTIVARIATE MODEL FOR CORPORATE BANKRUPTCY PREDICTION IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Daniel BRÎNDESCU – OLARIU

    2016-06-01

    Full Text Available The current paper proposes a methodology for bankruptcy prediction applicable for Romanian companies. Low bankruptcy frequencies registered in the past have limited the importance of bankruptcy prediction in Romania. The changes in the economic environment brought by the economic crisis, as well as by the entrance into the European Union, make the availability of well-performing bankruptcy assessment tools more important than ever before. The proposed methodology is centred on a multivariate model, developed through discriminant analysis. Financial ratios are employed as explanatory variables within the model. The study has included 53,252 yearly financial statements from the period 2007 – 2010, with the state of the companies being monitored until the end of 2012. It thus employs the largest sample ever used in Romanian research in the field of bankruptcy prediction, not targeting high levels of accuracy over isolated samples, but reliability and ease of use over the entire population.

  7. Predictive Model of Energy Consumption in Beer Production

    Directory of Open Access Journals (Sweden)

    Tiecheng Pu

    2013-07-01

    Full Text Available A predictive model of energy consumption in beer production is presented, based on subtractive clustering and the Adaptive-Network-Based Fuzzy Inference System (ANFIS). By applying subtractive clustering to historical energy-consumption data, the number of fuzzy rules is determined from the data themselves, overcoming the limitations of relying on expert experience alone. The parameters of the fuzzy inference system are acquired through the adaptive network structure and a hybrid on-line learning algorithm. The method can predict and guide the energy consumption of the actual production process. A consumption-reduction scheme is provided based on the actual situation of the enterprise. Finally, concrete examples verify the feasibility of the method in comparison with a Radial Basis Function (RBF) neural network predictive model.

  8. The Next Page Access Prediction Using Markov Model

    Directory of Open Access Journals (Sweden)

    Deepti Razdan

    2011-09-01

    Full Text Available Predicting the next page to be accessed by Web users has attracted a large amount of research. In this paper, a new web usage mining approach is proposed to predict next page access. Similar access patterns are first identified from the web log using K-means clustering, and then a Markov model is used for prediction of next page accesses. The tightness of clusters is improved by setting a similarity threshold while forming clusters. In traditional recommendation models, clustering by non-sequential data decreases recommendation accuracy. The approach in this paper incorporates clustering with a low-order Markov model, which can improve the prediction accuracy. The main area of research in this paper is preprocessing and identification of useful patterns from web data using mining techniques with the help of open source software.
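
    A toy illustration of the prediction step (invented sessions, not the authors' web log): first-order transition counts are accumulated from user sessions and the most probable successor of the current page is returned; in the approach above such a model would be built per cluster of similar access patterns.

        # First-order Markov next-page prediction from toy session logs.
        from collections import defaultdict, Counter

        sessions = [
            ["home", "products", "cart", "checkout"],
            ["home", "products", "product_a", "cart"],
            ["home", "blog", "products", "product_a"],
            ["products", "product_a", "cart", "checkout"],
        ]

        transitions = defaultdict(Counter)
        for s in sessions:
            for cur, nxt in zip(s, s[1:]):
                transitions[cur][nxt] += 1

        def predict_next(page):
            """Return the most likely next page and its estimated probability."""
            counts = transitions.get(page)
            if not counts:
                return None, 0.0
            nxt, c = counts.most_common(1)[0]
            return nxt, c / sum(counts.values())

        print(predict_next("products"))   # ('product_a', 0.75) on this toy log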

  9. Nonlinear turbulence models for predicting strong curvature effects

    Institute of Scientific and Technical Information of China (English)

    XU Jing-lei; MA Hui-yang; HUANG Yu-ning

    2008-01-01

    Prediction of the characteristics of turbulent flows with strong streamline curvature, such as flows in turbomachines, curved channel flows, flows around airfoils and buildings, is of great importance in engineering applications and poses a very practical challenge for turbulence modeling. In this paper, we analyze qualitatively the curvature effects on the structure of turbulence and conduct numerical simulations of a turbulent U-duct flow with a number of turbulence models in order to assess their overall performance. The models evaluated in this work are some typical linear eddy viscosity turbulence models, nonlinear eddy viscosity turbulence models (NLEVM) (quadratic and cubic), a quadratic explicit algebraic stress model (EASM) and a Reynolds stress model (RSM) developed based on the second-moment closure. Our numerical results show that a cubic NLEVM that performs considerably well in other benchmark turbulent flows, such as the Craft, Launder and Suga model and the Huang and Ma model, is able to capture the major features of the highly curved turbulent U-duct flow, including the damping of turbulence near the convex wall, the enhancement of turbulence near the concave wall, and the subsequent turbulent flow separation. The predictions of the cubic models are quite close to those of the RSM, in relatively good agreement with the experimental data, which suggests that these models may be employed to simulate turbulent curved flows in engineering applications.

  10. Simple Predictive Models for Saturated Hydraulic Conductivity of Technosands

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Møldrup, Per

    2012-01-01

    Accurate estimation of saturated hydraulic conductivity (Ks) of technosands (gravel-free, coarse sands with negligible organic matter content) is important for irrigation and drainage management of athletic fields and golf courses. In this study, we developed two simple models for predicting Ks......-connectivity parameter (m) obtained for pure coarse sand after fitting to measured Ks data was 1.68 for both models and in good agreement with m values obtained from recent solute and gas diffusion studies. Both the modified K-C and R-C models are easy to use and require limited parameter input, and both models gave...

  11. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on an optimized unascertained classification forecast method, an unascertained measurement classifying (UMC) model to predict mining-induced goaf collapse was established. The discriminating factors of the model are influential factors including overburden layer type, overburden layer thickness, the degree of complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface, and the spatial superposition of layers in the goaf region. The unascertained measurement (UM) function of each factor was calculated. The classification center and the grade of each sample awaiting forecast were determined from the UM distance between the synthesis index of that sample and the index of every classification. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, the seven samples awaiting forecast were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.

  12. Maxent modelling for predicting the potential distribution of Thai Palms

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach

    2011-01-01

    Increasingly species distribution models are being used to address questions related to ecology, biogeography and species conservation on global and regional scales. We used the maximum entropy approach implemented in the MAXENT programme to build a habitat suitability model for Thai palms based on presence data. The aim was to identify potential hot spot areas, assess the determinants of palm distribution ranges, and provide a firmer knowledge base for future conservation actions. We focused on a relatively small number of climatic, environmental and spatial variables in order to avoid overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...

  13. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical and thermodynamic properties in the form of raw data or estimated values for pure compounds and mixtures are important pre-requisites for performing tasks such as process design, simulation and optimization; computer aided molecular/mixture (product) design; and product-process analysis. While use of experimentally measured values of the needed properties is desirable in these tasks, the experimental data of the properties of interest may not be available or may not be measurable in many cases. Therefore, property models that are reliable, predictive and easy to use are necessary. However, which models should be used to provide the reliable estimates of the required properties? And, how much measured data is necessary to regress the model parameters? How to ensure predictive capabilities in the developed models? Also, as it is necessary to know the associated uncertainties...

  14. Predictions of titanium alloy properties using thermodynamic modeling tools

    Science.gov (United States)

    Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.

    2005-12-01

    Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.

  15. Model for performance prediction in multi-axis machining

    CERN Document Server

    Lavernhe, Sylvain; Lartigue, Claire; 10.1007/s00170-007-1001-4

    2009-01-01

    This paper deals with a predictive model of kinematical performance in 5-axis milling within the context of High Speed Machining. Indeed, 5-axis high speed milling makes it possible to improve quality and productivity thanks to the degrees of freedom brought by the tool axis orientation. The tool axis orientation can be set efficiently in terms of productivity by considering kinematical constraints resulting from the set machine-tool/NC unit. Capacities of each axis as well as some NC unit functions can be expressed as limiting constraints. The proposed model relies on each axis displacement in the joint space of the machine-tool and predicts the most limiting axis for each trajectory segment. Thus, the calculation of the tool feedrate can be performed highlighting zones for which the programmed feedrate is not reached. This constitutes an indicator for trajectory optimization. The efficiency of the model is illustrated through examples. Finally, the model could be used for optimizing process planning.

  16. Probabilistic Modeling of Fatigue Damage Accumulation for Reliability Prediction

    Directory of Open Access Journals (Sweden)

    Vijay Rathod

    2011-01-01

    Full Text Available A methodology for probabilistic modeling of fatigue damage accumulation for single stress level and multistress level loading is proposed in this paper. The methodology uses linear damage accumulation model of Palmgren-Miner, a probabilistic S-N curve, and an approach for a one-to-one transformation of probability density functions to achieve the objective. The damage accumulation is modeled as a nonstationary process as both the expected damage accumulation and its variability change with time. The proposed methodology is then used for reliability prediction under single stress level and multistress level loading, utilizing dynamic statistical model of cumulative fatigue damage. The reliability prediction under both types of loading is demonstrated with examples.
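
    The deterministic core that the probabilistic treatment builds on is the Palmgren-Miner sum D = Σ nᵢ/Nᵢ with Nᵢ taken from an S-N curve; the sketch below uses a Basquin-type curve with placeholder constants and leaves out the probabilistic transformation that is the paper's actual contribution.

        # Palmgren-Miner linear damage sum with a Basquin-type S-N curve, N(S) = C * S**(-m).
        # C and m are placeholder material constants for illustration only.
        def cycles_to_failure(stress_amplitude, c=2.0e12, m=3.0):
            return c * stress_amplitude ** (-m)

        def miner_damage(blocks):
            """blocks: iterable of (applied cycles, stress amplitude) pairs."""
            return sum(n / cycles_to_failure(s) for n, s in blocks)

        loading = [(2.0e5, 120.0), (5.0e4, 180.0), (1.0e4, 250.0)]   # (cycles, MPa)
        d = miner_damage(loading)
        print(f"accumulated damage D = {d:.2f} ->", "failure predicted" if d >= 1 else "survives")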

  17. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Directory of Open Access Journals (Sweden)

    Isabel M D Rosa

    Full Text Available Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of future tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated-pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia")-the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve between 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is

  18. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of future tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated-pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia")-the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve between 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently

  19. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the

  20. Preparation, microstructure and magnetic properties of Sm(Co,Hf)₇/Co nanocomposite particles by polyol method

    Energy Technology Data Exchange (ETDEWEB)

    Bu, Shao-Jing; Duan, Xiu-Li; Han, Xu-Hao; Sun, Ji-Bing, E-mail: hbgdsjb@126.com; Chi, Xiang; Cui, Chun-Xiang

    2017-02-01

    Hard/soft Sm-Co/Co nanocomposite particles were prepared by reducing CoCl₂·6H₂O in a solution containing ball-milled Sm(Co, Hf)₇ particles by a simple polyol method with ethylene glycol as the solvent. Phase composition, microstructure and magnetic properties of the particles were analyzed by XRD, TEM (HRTEM) and VSM, respectively. It has been found that a Sm-Co/Co core/shell structure is formed in which the Co shell is 3–5 nm in thickness and mainly exists in the hcp-Co phase. At the same time, fcc-Co tends to nucleate and grow independently between Sm-Co particles. The formation mechanism of the Sm-Co/Co composite particles is discussed and a corresponding model is established. The Sm-Co/Co composite particles exhibit obvious remanence enhancement effects, especially after being heated at 450 °C for 15 min.