Optical properties of indium phosphide nanowire ensembles at various temperatures
Lohn, Andrew J; Onishi, Takehiro; Kobayashi, Nobuhiko P [Baskin School of Engineering, University of California Santa Cruz, Santa Cruz, CA 95064 (United States); Nanostructured Energy Conversion Technology and Research (NECTAR), Advanced Studies Laboratories, University of California Santa Cruz-NASA Ames Research Center, Moffett Field, CA 94035 (United States)
2010-09-03
Ensembles that contain two types (zincblende and wurtzite) of indium phosphide nanowires grown on non-single crystalline surfaces were studied by micro-photoluminescence and micro-Raman spectroscopy at various low temperatures. The obtained spectra are discussed with the emphasis on the effects of differing lattice types, geometries, and crystallographic orientations present within an ensemble of nanowires grown on non-single crystalline surfaces. In the photoluminescence spectra, a typical Varshni dependence of band gap energy on temperature was observed for emissions from zincblende nanowires and in the high temperature regime energy transfer from excitonic transitions and band-edge transitions was identified. In contrast, the photoluminescence emissions associated with wurtzite nanowires were rather insensitive to temperature. Raman spectra were collected simultaneously from zincblende and wurtzite nanowires coexisting in an ensemble. Raman peaks of the wurtzite nanowires are interpreted as those related to the zincblende nanowires by a folding of the phonon dispersion.
Combining 2-m temperature nowcasting and short range ensemble forecasting
A. Kann
2011-12-01
Full Text Available During recent years, numerical ensemble prediction systems have become an important tool for estimating the uncertainties of dynamical and physical processes as represented in numerical weather models. The latest generation of limited area ensemble prediction systems (LAM-EPSs allows for probabilistic forecasts at high resolution in both space and time. However, these systems still suffer from systematic deficiencies. Especially for nowcasting (0–6 h applications the ensemble spread is smaller than the actual forecast error. This paper tries to generate probabilistic short range 2-m temperature forecasts by combining a state-of-the-art nowcasting method and a limited area ensemble system, and compares the results with statistical methods. The Integrated Nowcasting Through Comprehensive Analysis (INCA system, which has been in operation at the Central Institute for Meteorology and Geodynamics (ZAMG since 2006 (Haiden et al., 2011, provides short range deterministic forecasts at high temporal (15 min–60 min and spatial (1 km resolution. An INCA Ensemble (INCA-EPS of 2-m temperature forecasts is constructed by applying a dynamical approach, a statistical approach, and a combined dynamic-statistical method. The dynamical method takes uncertainty information (i.e. ensemble variance from the operational limited area ensemble system ALADIN-LAEF (Aire Limitée Adaptation Dynamique Développement InterNational Limited Area Ensemble Forecasting which is running operationally at ZAMG (Wang et al., 2011. The purely statistical method assumes a well-calibrated spread-skill relation and applies ensemble spread according to the skill of the INCA forecast of the most recent past. The combined dynamic-statistical approach adapts the ensemble variance gained from ALADIN-LAEF with non-homogeneous Gaussian regression (NGR which yields a statistical mbox{correction} of the first and second moment (mean bias and dispersion for Gaussian distributed continuous
Precipitation and temperature ensemble forecasts from single-value forecasts
J. Schaake
2007-04-01
Full Text Available A procedure is presented to construct ensemble forecasts from single-value forecasts of precipitation and temperature. This involves dividing the spatial forecast domain and total forecast period into a number of parts that are treated as separate forecast events. The spatial domain is divided into hydrologic sub-basins. The total forecast period is divided into time periods, one for each model time step. For each event archived values of forecasts and corresponding observations are used to model the joint distribution of forecasts and observations. The conditional distribution of observations for a given single-value forecast is used to represent the corresponding probability distribution of events that may occur for that forecast. This conditional forecast distribution subsequently is used to create ensemble members that vary in space and time using the "Schaake Shuffle" (Clark et al, 2004. The resulting ensemble members have the same space-time patterns as historical observations so that space-time joint relationships between events that have a significant effect on hydrological response tend to be preserved.
Forecast uncertainty is space and time-scale dependent. For a given lead time to the beginning of the valid period of an event, forecast uncertainty depends on the length of the forecast valid time period and the spatial area to which the forecast applies. Although the "Schaake Shuffle" procedure, when applied to construct ensemble members from a time-series of single value forecasts, may preserve some of this scale dependency, it may not be sufficient without additional constraint. To account more fully for the time-dependent structure of forecast uncertainty, events for additional "aggregate" forecast periods are defined as accumulations of different "base" forecast periods.
The generated ensemble members can be ingested by an Ensemble Streamflow Prediction system to produce ensemble forecasts of streamflow and other
Towards constraining extreme temperature projections of the CMIP5 ensemble
Vogel, Martha-Marie; Orth, René; Isabelle Seneviratne, Sonia
2016-04-01
The frequency and intensity of heat waves is expected to change in future in response to global warming. Given the severe impacts of heat waves on ecosystems and society it is important to understand how and where they will intensify. Projections of extreme hot temperatures in the IPCC AR5 model ensemble show large uncertainties for projected changes of extreme temperatures in particular in Central Europe. In this region land-atmosphere coupling can contribute substantially to the development of heat waves. This coupling is also subject to change in future, while model projections display considerable spread. In this work we link projections of changes in extreme temperatures and of changes in land-atmosphere interactions with a particular focus on Central Europe. Uncertainties in projected extreme temperatures can be partly explained by different projected changes of the interplay between latent heat and temperature as well as soil moisture. Given the considerable uncertainty in land-atmosphere coupling representation already in the current climate, we furthermore employ observational data sets to constrain the model ensemble, and consequently the extreme temperature projections.
Probabilistic Ensemble Forecast of Summertime Temperatures in Pakistan
Muhammad Hanif
2014-01-01
Full Text Available Snowmelt flooding triggered by intense heat is a major temperature related weather hazard in northern Pakistan, and the frequency of such extreme flood events has increased during the recent years. In this study, the probabilistic temperature forecasts at seasonal and subseasonal time scales based on hindcasts simulations from three state-of-the-art models within the DEMETER project are assessed by the relative operating characteristic (ROC verification method. Results based on direct model outputs reveal significant skill for hot summers in February 3–5 (ROC area=0.707 with lower 95% confidence limit of 0.538 and February 4-5 (ROC area=0.771 with lower 95% confidence limit of 0.623 forecasts when validated against observations. Results for ERA-40 reanalysis also show skill for hot summers. Skilful probabilistic ensemble forecasts of summertime temperatures may be valuable in providing the foreknowledge of snowmelt flooding and water management in Pakistan.
Climate Prediction Center(CPC)Ensemble Canonical Correlation Analysis Forecast of Temperature
National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) temperature forecast is a 90-day (seasonal) outlook of US surface temperature anomalies. The ECCA uses Canonical...
Seong, Min-Gyu; Suh, Myoung-Seok; Kim, Chansoo
2017-08-01
This study focuses on an objective comparison of eight ensemble methods using the same data, training period, training method, and validation period. The eight ensemble methods are: BMA (Bayesian Model Averaging), HMR (Homogeneous Multiple Regression), EMOS (Ensemble Model Output Statistics), HMR+ with positive coefficients, EMOS+ with positive coefficients, PEA_ROC (Performance-based Ensemble Averaging using ROot mean square error and temporal Correlation coefficient), WEA_Tay (Weighted Ensemble Averaging based on Taylor's skill score), and MME (Multi-Model Ensemble). Forty-five years (1961-2005) of data from 14 CMIP5 models and APHRODITE (Asian Precipitation- Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources) data were used to compare the performance of the eight ensemble methods. Although some models underestimated the variability of monthly mean temperature (MMT), most of the models effectively simulated the spatial distribution of MMT. Regardless of training periods and the number of ensemble members, the prediction skills of BMA and the four multiple linear regressions (MLR) were superior to the other ensemble methods (PEA_ROC, WEA_Tay, MME) in terms of deterministic prediction. In terms of probabilistic prediction, the four MLRs showed better prediction skills than BMA. However, the differences among the four MLRs and BMA were not significant. This resulted from the similarity of BMA weights and regression coefficients. Furthermore, prediction skills of the four MLRs were very similar. Overall, the four MLRs showed the best prediction skills among the eight ensemble methods. However, more comprehensive work is needed to select the best ensemble method among the numerous ensemble methods.
Flood forecast sensitivity to temperature using ECMWF ensembles for 145 catchments in Norway
Jahr Hegdahl, Trine; Engeland, Kolbjørn; Grønbech, Bård Johan; Steinsland, Ingelin; Merete Tallaksen, Lena
2017-04-01
The Norwegian flood forecasting service is based on a flood-forecasting model run on 145 basins. The basins are located all across Norway and differ in both size and hydrological regime. Current flood forecasting system is based on deterministic meteorological forecasts, and uses an auto-regressive procedure to achieve probabilistic forecasts. An alternative approach is to use meteorological and hydrological ensemble forecasts to quantify the uncertainty in forecasted streamflow. The aim of our study is to establish and assess the performance of both meteorological and hydrological ensembles for 145 catchments in Norway, which differ in size, elevation and hydrological regime. We identify regional differences and improvements in performance for preprocessed meteorological forecasts. A separate study further investigates the sensitivity to forecasted temperature for specific snowmelt induced floods. In Norway, snowmelt and combined rain and snowmelt floods are frequent. Hence, temperature is important for correct calculations of snowmelt. Temperature and precipitation ensembles are derived from ECMWF covering a period of nearly three years (01.03.2013 to 31.12.2015). To improve the spread and reduce bias we used standard methods provided by the Norwegian Meteorological Institute. Precipitation is corrected applying a zero-adjusted gamma distribution method (correcting the spread), and temperature is bias corrected using a quantile-quantile mapping (using Hirlam (RCM) 5 km temperature grid as a reference). Observed temperature and precipitation data are station data for all of Norway, interpolated to a 1×1 km2 grid (SeNorge.no). Streamflow observations are available from the NVE database. The hydrological model is the flood-forecasting operational HBV model, run with daily catchment average values. The results show that the methods applied to meteorological ensemble data reduce the cold bias present in the ECMWF temperature ensembles. Catchments on the western coast
Temperature Dependence of Photoluminescence from Single and Ensemble InAs/GaAs Quantum Dots
DOU Xiu-Ming; SUN Bao-Quan; XIONG Yong-Hua; HUANG She-Song; NI Hai-Qiao; NIU Zhi-Chuan
2008-01-01
We investigate the temperature dependence of photoluminescence from single and ensemble InAs/GaAs quantum dots systematically. As temperature increases, the exciton emission peak for single quantum dot shows broadening and redshift. For ensemble quantum dots, however, the exciton emission peak shows narrowing and fast redshift.We use a simple steady-state rate equation model to simulate the experimental data of photoluminescence spectra.It is confirmed that carrier-phonon scattering gives the broadening of the exciton emission peak in single quantum dots while the effects of carrier thermal escape and retrapping play an important role in the narrowing and fast redshift of the exciton emission peak in ensemble quantum dots.
Baran, Sándor; Möller, Annette
2017-02-01
Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, thus they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are the Bayesian model averaging (BMA) and the ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models, incorporating dependencies between weather quantities, such as for example a bivariate distribution for wind vectors or even a more general setting allowing to combine any types of weather variables. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service and its predictive performance is compared to the performance of the bivariate BMA model and a multivariate Gaussian copula approach, post-processing the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
Multi-criterion model ensemble of CMIP5 surface air temperature over China
Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming
2017-05-01
The global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore, supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performances of GCMs. However, the simultaneous achievements of all satisfactory statistics cannot be guaranteed in using many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data covers from the period of 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to different statistics. The comparison results over historical period (1900-2005) show that the optimized solutions are superior over that obtained simple model average, as well as any single GCM output. The improvements of statistics are varying for different climatic regions over China. Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in the
Castruccio, Stefano
2015-04-02
One of the main challenges when working with modern climate model ensembles is the increasingly larger size of the data produced, and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific data sets, they do not account for the nature of climate model output and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and allows to fit a non-trivial model to a data set of one billion data points with a covariance matrix comprising of 10^18 entries.
Greybush, Steven J.; Wilson, R. John; Hoffman, Ross N.; Hoffman, Matthew J.; Miyoshi, Takemasa; Ide, Kayo; McConnochie, Timothy; Kalnay, Eugenia
2012-11-01
Thermal Emission Spectrometer (TES) retrieved temperature profiles are assimilated into the GFDL Mars Global Climate Model (MGCM) using the Local Ensemble Transform Kalman Filter (LETKF) to produce synoptic maps of temperature, winds, and surface pressure and their uncertainties over the course of a Martian year. Short-term (0.25 sol) forecasts compared to independent observations show reduced root mean square error (to 3-4 K global RMSE for a 30-sol evaluation period during the northern hemisphere autumn) and bias compared to a free running model. Several enhanced techniques result in further performance gains. A 4D-LETKF considers observations at their correct hour of occurrence rather than every 6 h. Spatially varying adaptive inflation and varying the dust distribution among ensemble members refine estimates of analysis uncertainty through the ensemble spread. Enhancing dust and water ice aerosol schemes and the application of empirical bias correction using time mean analysis increments help account for model biases. Full-year experiments using prescribed dust opacities and observed TES dust opacities show that while realistic dust distributions are essential to match observed temperatures with a free run simulation, analyses from data assimilation are more robust with respect to imperfections in aerosol distribution. The data assimilation system described here is being used to generate a new reanalysis of Mars weather and climate, which will have many scientific and engineering applications.
Measurement of the temperature of atomic ensembles via \\emph{which-way} information
León-Montiel, R de J
2011-01-01
We unveil the relationship existing between the temperature of an atomic ensemble of three-level atoms in a $\\Lambda$-configuration, and the width of the emission cone of Stokes photons that are spontaneously emitted when atoms are excited by an optical beam. This relationship, which is based on the amount of \\emph{which-way} information available about where the Stokes photon originated during the interaction, allow us to put forward a new scheme to determine the temperature of atomic clouds by measuring the width of the emission cone. Unlike the commonly used time-of-flight measurements, with this new technique, the atomic cloud is not destroyed during each measurement.
Marino, Ricardo; Majumdar, Satya N.; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let Pβ(V )(NI) be the probability that a N ×N β -ensemble of random matrices with confining potential V (x ) has NI eigenvalues inside an interval I =[a ,b ] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically Pβ(V )(NI) for large N . We show that this probability scales for large N as Pβ(V )(NI) ≈exp[-β N2ψ(V )(NI/N ) ] , where β is the Dyson index of the ensemble. The rate function ψ(V )(kI) , independent of β , is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β -Gaussian (I =[-L ,L ] ), β -Wishart (I =[1 ,L ] ), and β -Cauchy (I =[-L ,L ] ) ensembles. Expanding the rate function around its minimum, we find that generically the number variance v a r (NI) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
Electronic chemical response indexes at finite temperature in the canonical ensemble
Franco-Pérez, Marco, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx; Gázquez, José L., E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Universidad Autónoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, México, D. F. 09340, México (Mexico); Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico); Vela, Alberto, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico)
2015-07-14
Assuming that the electronic energy is given by a smooth function of the number of electrons and within the extension of density functional theory to finite temperature, the first and second order chemical reactivity response functions of the Helmholtz free energy with respect to the temperature, the number of electrons, and the external potential are derived. It is found that in all cases related to the first or second derivatives with respect to the number of electrons or the external potential, there is a term given by the average of the corresponding derivative of the electronic energy of each state (ground and excited). For the second derivatives, including those related with the temperature, there is a thermal fluctuation contribution that is zero at zero temperature. Thus, all expressions reduce correctly to their corresponding chemical reactivity expressions at zero temperature and show that, at room temperature, the corrections are very small. When the assumption that the electronic energy is given by a smooth function of the number of electrons is replaced by the straight lines behavior connecting integer values, as required by the ensemble theorem, one needs to introduce directional derivatives in most cases, so that the temperature dependent expressions reduce correctly to their zero temperature counterparts. However, the main result holds, namely, at finite temperature the thermal corrections to the chemical reactivity response functions are very small. Consequently, the present work validates the usage of reactivity indexes calculated at zero temperature to infer chemical behavior at room and even higher temperatures.
Dong, Jianzhi; Steele-Dunne, Susan C.; Ochsner, Tyson E.; van de Giesen, Nick
2015-12-01
This study investigates the potential to estimate the vertical profile of soil moisture by assimilating temperature observations at a limited number of depths into a coupled heat and moisture transport model (Hydrus-1D). The method is developed with a view to assimilating temperature data from distributed temperature sensing (DTS) to estimate soil moisture at high resolution over large areas. The correlation between temperature and soil moisture in the shallow soil (top ∼ 50 cm) ensures that soil moisture can be estimated using just soil temperature observations. Synthetic tests across a range of soil textures show that with data assimilation both modeled temperature and the moisture profile are improved considerably compared to the ensemble open loop model simulations. In addition, employing data assimilation provides a means to quantitatively account for different sources of uncertainty. This is particularly relevant in the context of DTS given the influence of spatial variability in soil texture and its impact on estimation error. The data assimilation approach could also be used to determine, the number of temperature observations required and the depths at which they should be made. Results suggest that temperature observed at two depths is typically sufficient to estimate soil moisture using this approach. The root mean square error (RMSE) in soil moisture was reduced by up to 75% in the top 20 cm. Furthermore, this approach solves many of the challenges identified in the application of an inversion approach to estimate soil moisture from DTS.
Paolo Reggiani
2016-06-01
Full Text Available The Upper Indus Basin (UIB and the Karakoram Range are the subject of ongoing hydro-glaciological studies to investigate possible glacier mass balance shifts due to climatic change. Because of the high altitude and remote location, the Karakoram Range is difficult to access and, therefore, remains scarcely monitored. In situ precipitation and temperature measurements are only available at valley locations. High-altitude observations exist only for very limited periods. Gridded precipitation and temperature data generated from the spatial interpolation of in situ observations are unreliable for this region because of the extreme topography. Besides satellite measurements, which offer spatial coverage, but underestimate precipitation in this area, atmospheric reanalyses remain one of the few alternatives. Here, we apply a proven approach to quantify the uncertainty associated with an ensemble of monthly precipitation and temperature reanalysis data for 1979–2009 in Shigar Basin, Central Karakoram. A Model-Conditional Processor (MCP of uncertainty is calibrated on precipitation and temperature in situ data measured in the proximity of the study region. An ensemble of independent reanalyses is processed to determine the predictive uncertainty of monthly observations. As to be expected, the informative gain achieved by post-processing temperature reanalyses is considerable, whereas significantly less gain is achieved for precipitation post-processing. The proposed approach indicates a systematic assessment procedure for predictive uncertainty through probabilistic weighting of multiple re-forecasts, which are bias-corrected on ground observations. The approach also supports an educated reconstruction of gap-filling for missing in situ observations.
Kjellström, Erik; Nikulin, Grigory; Rana, Arun; Fuentes Franco, Ramón
2017-04-01
In this study we investigate possible changes in temperature and precipitation on a regional scale over South America from 1961 to 2100. We use data from two ensembles of climate simulations, one global and one regional, over the South America CORDEX domain. The global ensemble includes ten coupled atmosphere ocean general circulation models (AOGCMs) from the CMIP5 project with horizontal resolution varying from about 1° to 3°, namely CanESM2, CSIRO-Mk3, CNRM-CM5, HadGEM2-ES, NorESM1-M, EC-EARTH, MIROC5, GFDL-ESM2M, MPI-ESM-LR and NorESM1-M. In the regional ensemble all 10 AOGCMs are downscaled at the Rossby Centre (SMHI) by a regional climate model - RCA4 at 0.44° resolution. Three forcing scenarios are considered: RCP2.6 (five out of ten AOGCMs); RCP4.5 and RCP8.5. The experimental setup allows us to illustrate how uncertainties in future climate change are related to forcing scenario and to forcing AOGCM at different time periods. Further, taking both AOGCM and RCM ensembles and focusing on seasonal mean temperature and precipitation over South America we i) evaluate the ability of the ensembles and their individual members to simulate the observed climatology in South America, ii) analyse similarities and differences in future climate projections between the two ensembles and iii) assess how both ensembles capture the spread of the grand CMIP5 ensemble. We also address higher-order variability by showing results for changes in temperature extremes and for changes in intensity and frequency of extreme precipitation.
Zhang, Zhigang; Duan, Zhenhao
2002-10-01
A new technique of temperature scaling method combined with the conventional Gibbs Ensemble Monte Carlo simulation was used to study liquid-vapor phase equilibria of the methane-ethane (CH 4-C 2H 6) system. With this efficient method, a new set of united-atom Lennard-Jones potential parameters for pure C 2H 6 was found to be more accurate than those of previous models in the prediction of phase equilibria. Using the optimized potentials for liquid simulations (OPLS) potential for CH 4 and the potential of this study for C 2H 6, together with a simple mixing rule, we simulated the equilibrium compositions and densities of the CH 4-C 2H 6 mixtures with accuracy close to experiments. The simulated data are supplements to experiments, and may cover a larger temperature-pressure-composition space than experiments. Compared with some well-established equations of state such as Peng-Robinson equation of state (PR-EQS), the simulated results are found to be closer to experiments, at least in some temperature and pressure ranges.
Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli
2016-01-01
To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in a MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e. smaller MMEs to be used effectively.
Ahn, Joong-Bae; Lee, Joonlee
2016-08-01
A new multimodel ensemble (MME) method that uses a genetic algorithm (GA) is developed and applied to the prediction of winter surface air temperature (SAT) and precipitation. The GA based on the biological process of natural evolution is a nonlinear method which solves nonlinear optimization problems. Hindcast data of winter SAT and precipitation from the six coupled general circulation models participating in the seasonal MME prediction system of the Asia-Pacific Economic Conference Climate Center are used. Three MME methods using GA (MME/GAs) are examined in comparison with a simple composite MME strategy (MS0): MS1 which applies GA to single-model ensembles (SMEs), MS2 which applies GA to each ensemble member and then performs a simple composite method for MME, and MS3 which applies GA to both MME and SME. MS3 shows the highest predictability compared to MS0, MS1, and MS2 for both winter SAT and precipitation. These results indicate that biases of ensemble members of each model and model ensemble are more reduced with MS3 than with other MME/GAs and MS0. The predictability of the MME/GAs shows a greater improvement than that of MS0, particularly in higher-latitude land areas. The reason for the more improved increase of predictability over the land area, particularly in MS3, seems to be the fact that GA is more efficient in finding an optimum solution in a complex region where nonlinear physical properties are evident.
Keppenne, Christian L.
2013-01-01
A two-step ensemble recentering Kalman filter (ERKF) analysis scheme is introduced. The algorithm consists of a recentering step followed by an ensemble Kalman filter (EnKF) analysis step. The recentering step is formulated such as to adjust the prior distribution of an ensemble of model states so that the deviations of individual samples from the sample mean are unchanged but the original sample mean is shifted to the prior position of the most likely particle, where the likelihood of each particle is measured in terms of closeness to a chosen subset of the observations. The computational cost of the ERKF is essentially the same as that of a same size EnKF. The ERKF is applied to the assimilation of Argo temperature profiles into the OGCM component of an ensemble of NASA GEOS-5 coupled models. Unassimilated Argo salt data are used for validation. A surprisingly small number (16) of model trajectories is sufficient to significantly improve model estimates of salinity over estimates from an ensemble run without assimilation. The two-step algorithm also performs better than the EnKF although its performance is degraded in poorly observed regions.
Karspeck, A. R.; Sain, S.; Kaplan, A.
2008-12-01
the climate research community via the internet. The presentation of an ensemble of possible realizations of sea surface temperature is especially important in data-poor regions of the ocean. It is a natural consequence of Bayesian inference that the expected value of the reconstruction in unobserved areas will relax towards the mean of the prior distribution. When considered outside the context of the full covariance information, data users can falsely interpret these places in the data record as less energetic; a proper interpretation, in contrast, would be that there is little constraint on the possible states of the system. The sparse and irregular nature of this historical data thus makes ensemble presentation an important contribution to the research community. For demonstration purposes, this work focuses on a limited domain in the mid-latitude Atlantic. However, the method employed here can be extended to global reconstructions.
Castruccio, Stefano
2016-01-01
One of the main challenges when working with modern climate model ensembles is the increasingly larger size of the data produced, and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific datasets, they do not account for the nature of climate model output and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and allows to fit a nontrivial model to a dataset of 1 billion data points with a covariance matrix comprising of 10^{18} entries. Supplementary materials for this article are available online.
Lee, C.; Richardson, M. I.
2010-12-01
Direct observations of the Martian atmosphere are used to constrain the evolution of a Martian General Circulation Model (MarsWRF) using an ensemble Kalman filter data assimilation framework (DART). We use radiance observations from the Thermal Emission Spectrometer (TES) and temperature profiles from TES and the Mars Climate Sounder (MCS) to constrain the evolution of the simulated Martian atmosphere during similar seasons of each mission. We describe the observations being ingested into the model and the preprocessing necessary to ingest these observations efficiently and accurately into the assimilation system. We test the sensitivity of the assimilation system by including surface visual albedo and infra-red emissivity, and atmospheric total dust loading, in the state vector. We allow DART to modify these unobserved state vector components using only the temperature or radiance observations and information gained from the ensemble of simulated circulations. Finally, we identify and discuss the biases and model limitations revealed by the assimilation, and describe the modifications made to the GCM to improve its ensemble mean skill (accuracy) and ensemble variance to better assimilate the available observations.
Dosio, Alessandro
2017-07-01
The most severe effects of global warning will be related to the frequency and severity of extreme events. We provide an analysis of projections of temperature and related extreme events for Africa based on a large ensemble of Regional Climate Models from the COordinated Regional climate Downscaling EXperiment (CORDEX). Results are presented not only by means of widely used indices but also with a recently developed Heat Wave Magnitude Index-daily (HWMId), which takes into account both heat wave duration and intensity. Results show that under RCP8.5, warming of more than 3.5 °C is projected in JFM over most of the continent, whereas in JAS temperatures over large part of Northern Africa, the Sahara and the Arabian peninsula are projected to increase up to 6 °C. Large increase in in the number of warm days (Tx90p) is found over sub equatorial Africa, with values up to more than 90 % in JAS, and more than 80 % in JFM over e.g., the gulf of Guinea, Central African Republic, South Sudan and Ethiopia. Changes in Tn90p (warm nights) are usually larger, with some models projecting Tn90p reaching 95 % starting from around 2060 even under RCP4.5 over the Gulf of Guinea and the Sahel. Results also show that the total length of heat spells projected to occur normally (i.e. once every 2 years) under RCP8.5 may be longer than those occurring once every 30 years under the lower emission scenario. By employing the recently developed HWMId index, it is possible to investigate the relationship between heat wave length ad intensity; in particular it is shown that very intense heat waves such as that occurring over the Horn of Africa may have values of HWMId larger than that of longer, but relatively weak, heat waves over West Africa.
JIA BingHao; XIE ZhengHui; TIAN XiangJun; SHI ChunXiang
2009-01-01
This study presents a soil moisture assimilation scheme,which could assimilate microwave brightness temperature directly,based on the ensemble Kalman filter and the shuffled complex evolution method (SCE-UA).It uses the soil water model of the land surface model CLM3.0 as the forecast operator,and a radiative transfer model (RTM) as the observation operator in the assimilation system.The assimilation scheme is implemented in two phases:the parameter calibration phase and the pure soil moisture assimilation phase.The vegetation optical thickness and surface roughness parameters in the RTM are calibrated by SCE-UA method and the optimal parameters are used as the final model parameters of the observation operator in the assimilation phase.The ideal experiments with synthetic data indicate that this scheme could significantly improve the simulation of soil moisture at the surface layer.Furthermore,the estimation of soil moisture in the deeper layers could also be improved to a certain extent.The real assimilation experiments with AMSR-E brightness temperature at 10.65 GHz (vertical polarization) show that the root mean square error (RMSE) of soil moisture in the top layer (0-10 cm) by asms.msimilation is 0.03355 m~3·m~(-3),which is reduced by 33.6% compared with that by simulation (0.05052m~3·m~(-3)).The mean RMSE by assimilation for the deeper layers (10-50 cm) is also reduced by 20.9%.All these experiments demonstrate the reasonability of the assimilation scheme developed in this study.
2009-01-01
This study presents a soil moisture assimilation scheme, which could assimilate microwave brightness temperature directly, based on the ensemble Kalman filter and the shuffled complex evolution method (SCE-UA). It uses the soil water model of the land surface model CLM3.0 as the forecast operator, and a radiative transfer model (RTM) as the observation operator in the assimilation system. The assimilation scheme is implemented in two phases: the parameter calibration phase and the pure soil moisture assimilation phase. The vegetation optical thickness and surface roughness parameters in the RTM are calibrated by SCE-UA method and the optimal parameters are used as the final model parameters of the observation operator in the assimilation phase. The ideal experiments with synthetic data indicate that this scheme could significantly improve the simulation of soil moisture at the surface layer. Further- more, the estimation of soil moisture in the deeper layers could also be improved to a certain extent. The real assimilation experiments with AMSR-E brightness temperature at 10.65 GHz (vertical polariza- tion) show that the root mean square error (RMSE) of soil moisture in the top layer (0―10 cm) by as- similation is 0.03355 m3·m-3, which is reduced by 33.6% compared with that by simulation (0.05052 m3·m-3). The mean RMSE by assimilation for the deeper layers (10―50 cm) is also reduced by 20.9%. All these experiments demonstrate the reasonability of the assimilation scheme developed in this study.
Dosio, Alessandro
2016-09-01
The most severe effects of global warning will be related to the frequency and severity of extreme events. We provide an analysis of projections of temperature and related extreme events for Africa based on a large ensemble of Regional Climate Models from the COordinated Regional climate Downscaling EXperiment (CORDEX). Results are presented not only by means of widely used indices but also with a recently developed Heat Wave Magnitude Index-daily (HWMId), which takes into account both heat wave duration and intensity. Results show that under RCP8.5, warming of more than 3.5 °C is projected in JFM over most of the continent, whereas in JAS temperatures over large part of Northern Africa, the Sahara and the Arabian peninsula are projected to increase up to 6 °C. Large increase in in the number of warm days (Tx90p) is found over sub equatorial Africa, with values up to more than 90 % in JAS, and more than 80 % in JFM over e.g., the gulf of Guinea, Central African Republic, South Sudan and Ethiopia. Changes in Tn90p (warm nights) are usually larger, with some models projecting Tn90p reaching 95 % starting from around 2060 even under RCP4.5 over the Gulf of Guinea and the Sahel. Results also show that the total length of heat spells projected to occur normally (i.e. once every 2 years) under RCP8.5 may be longer than those occurring once every 30 years under the lower emission scenario. By employing the recently developed HWMId index, it is possible to investigate the relationship between heat wave length ad intensity; in particular it is shown that very intense heat waves such as that occurring over the Horn of Africa may have values of HWMId larger than that of longer, but relatively weak, heat waves over West Africa.
Maiorano, Andrea; Martre, Pierre; Asseng, Senthold
2017-01-01
To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of mo...
C. L. Keppenne
2005-01-01
Full Text Available To compensate for a poorly known geoid, satellite altimeter data is usually analyzed in terms of anomalies from the time mean record. When such anomalies are assimilated into an ocean model, the bias between the climatologies of the model and data is problematic. An ensemble Kalman filter (EnKF is modified to account for the presence of a forecast-model bias and applied to the assimilation of TOPEX/Poseidon (T/P altimeter data. The online bias correction (OBC algorithm uses the same ensemble of model state vectors to estimate biased-error and unbiased-error covariance matrices. Covariance localization is used but the bias covariances have different localization scales from the unbiased-error covariances, thereby accounting for the fact that the bias in a global ocean model could have much larger spatial scales than the random error.The method is applied to a 27-layer version of the Poseidon global ocean general circulation model with about 30-million state variables. Experiments in which T/P altimeter anomalies are assimilated show that the OBC reduces the RMS observation minus forecast difference for sea-surface height (SSH over a similar EnKF run in which OBC is not used. Independent in situ temperature observations show that the temperature field is also improved. When the T/P data and in situ temperature data are assimilated in the same run and the configuration of the ensemble at the end of the run is used to initialize the ocean component of the GMAO coupled forecast model, seasonal SSH hindcasts made with the coupled model are generally better than those initialized with optimal interpolation of temperature observations without altimeter data. The analysis of the corresponding sea-surface temperature hindcasts is not as conclusive.
Hu, Yue; Hong, Wei; Shi, Yunyu; Liu, Haiyan
2012-10-09
In molecular simulations, accelerated sampling can be achieved efficiently by raising the temperature of a small number of coordinates. For collective coordinates, the temperature-accelerated molecular dynamics method or TAMD has been previously proposed, in which the system is extended by introducing virtual variables that are coupled to these coordinates and simulated at higher temperatures (Maragliano, L.; Vanden-Eijnden, E. Chem. Phys. Lett.2005, 426, 168-175). In such accelerated simulations, steady state or equilibrium distributions may exist but deviate from the canonical Boltzmann one. We show that by assuming adiabatic decoupling between the subsystems simulated at different temperatures, correct canonical distributions and ensemble averages can be obtained through reweighting. The method makes use of the low-dimensional free energy surfaces that are estimated as Gaussian mixture probability densities through maximum likelihood and expectation maximization. Previously, we proposed the amplified collective motion method or ACM. The method employs the coarse-grained elastic network model or ANM to extract collective coordinates for accelerated sampling. Here, we combine the ideas of ACM and of TAMD to develop a general technique that can achieve canonical sampling through reweighting under the adiabatic approximation. To test the validity and accuracy of adiabatic reweighting, first we consider a single n-butane molecule in a canonical stochastic heat bath. Then, we use explicitly solvated alanine dipeptide and GB1 peptide as model systems to demonstrate the proposed approaches. With alanine dipeptide, it is shown that sampling can be accelerated by more than an order of magnitude with TAMD while correct distributions and canonical ensemble averages can be recovered, necessarily through adiabatic reweighting. For the GB1 peptide, the conformational distribution sampled by ACM-TAMD, after adiabatic reweighting, suggested that a normal simulation suffered
A Note on NCOM Temperature Forecast Error Calibration Using the Ensemble Transform
2009-01-01
spatial distributions of the E Coelhn et a\\. ,’journal of Marine Systems 78 (2009) S272-S281 S27S t e IS ii 15 If ii .i s 51 s^ I! i| 1* 2 3...International Conference on Mathematics and Continuum Mechanics. Pub. Centro Internacional de Matematica (C1M). ISBN: 978-989-95011- 2-6, pp. 207-217. Porto...Hagerdorn. R., Palmer, T.N., 2005. The rationale behind the success of multi-model ensembles in seasonal forecasting - II . Calibration and combination
Hofer, Marlis; Marzeion, Ben; Mölg, Thomas
2012-10-01
It is well known from previous research that significant differences exist amongst reanalysis products from different institutions. Here, we compare the skill of NCEP-R (reanalyses by the National Centers for Environmental Prediction, NCEP), ERA-int (the European Centre of Medium-range Weather Forecasts Interim), JCDAS (the Japanese Meteorological Agency Climate Data Assimilation System reanalyses), MERRA (the Modern Era Retrospective-Analysis for Research and Applications by the National Aeronautics and Space Administration), CFSR (the Climate Forecast System Reanalysis by the NCEP), and ensembles thereof as predictors for daily air temperature on a high-altitude glaciated mountain site in Peru. We employ a skill estimation method especially suited for short-term, high-resolution time series. First, the predictors are preprocessed using simple linear regression models calibrated individually for each calendar month. Then, cross-validation under consideration of persistence in the time series is performed. This way, the skill of the reanalyses with focus on intra-seasonal and inter-annual variability is quantified. The most important findings are: (1) ERA-int, CFSR, and MERRA show considerably higher skill than NCEP-R and JCDAS; (2) differences in skill appear especially during dry and intermediate seasons in the Cordillera Blanca; (3) the optimum horizontal scales largely vary between the different reanalyses, and horizontal grid resolutions of the reanalyses are poor indicators of this optimum scale; and (4) using reanalysis ensembles efficiently improves the performance of individual reanalyses.
Najafi, Husain; Massah Bavani, Ali Reza; Wanders, Niko; Wood, Eric; Irannejad, Parviz; Robertson, Andrew
2017-04-01
Water resource managers can utilize reliable seasonal forecasts for allocating water between different users within a water year. In the west of Iran where a decline of renewable water resources has been observed, basin-wide water management has been the subject of many inter-provincial conflicts in recent years. The problem is exacerbated when the environmental water requirements is not provided leaving the Hoor-al-Azim marshland in the downstream dry. It has been argued that information on total seasonal rainfall can support the Iranian Ministry of Energy within the water year. This study explores the skill of the North America Multi Model Ensemble for Karkheh River Basin in the of west Iran. NMME seasonal precipitation and temperature forecasts from eight models are evaluated against PERSIANN-CDR and Climate Research Unit (CRU) datasets. Analysis suggests that anomaly correlation for both precipitation and temperature is greater than 0.4 for all individual models. Lead time-dependent seasonal forecasts are improved when a multi-model ensemble is developed for the river basin using stepwise linear regression model. MME R-squared exceeds 0.6 for temperature for almost all initializations suggesting high skill of NMME in Karkheh river basin. The skill of MME for rainfall forecasts is high for 1-month lead time for October, February, March and October initializations. However, for months when the amount of rainfall accounts for a significant proportion of total annual rainfall, the skill of NMME is limited a month in advance. It is proposed that operational regional water companies incorporate NMME seasonal forecasts into water resource planning and management, especially during growing seasons that are essential for agricultural risk management.
Lorenz, Ruth; Argüeso, Daniel; Donat, Markus G.; Pitman, Andrew J.; Hurk, Bart; Berg, Alexis; Lawrence, David M.; Chéruy, Frédérique; Ducharne, Agnès.; Hagemann, Stefan; Meier, Arndt; Milly, P. C. D.; Seneviratne, Sonia I.
2016-01-01
We examine how soil moisture variability and trends affect the simulation of temperature and precipitation extremes in six global climate models using the experimental protocol of the Global Land-Atmosphere Coupling Experiment of the Coupled Model Intercomparison Project, Phase 5 (GLACE-CMIP5). This protocol enables separate examinations of the influences of soil moisture variability and trends on the intensity, frequency, and duration of climate extremes by the end of the 21st century under a business-as-usual (Representative Concentration Pathway 8.5) emission scenario. Removing soil moisture variability significantly reduces temperature extremes over most continental surfaces, while wet precipitation extremes are enhanced in the tropics. Projected drying trends in soil moisture lead to increases in intensity, frequency, and duration of temperature extremes by the end of the 21st century. Wet precipitation extremes are decreased in the tropics with soil moisture trends in the simulations, while dry extremes are enhanced in some regions, in particular the Mediterranean and Australia. However, the ensemble results mask considerable differences in the soil moisture trends simulated by the six climate models. We find that the large differences between the models in soil moisture trends, which are related to an unknown combination of differences in atmospheric forcing (precipitation, net radiation), flux partitioning at the land surface, and how soil moisture is parameterized, imply considerable uncertainty in future changes in climate extremes.
Erna Apriliani; Dieky Adzkiya; Arief Baihaqi
2011-01-01
Kalman filter is an algorithm to estimate the state variable of dynamical stochastic system. The square root ensemble Kalman filter is an modification of Kalman filter. The square root ensemble Kalman filter is proposed to keep the computational stability and reduce the computational time. In this paper we study the efficiency of the reduced rank ensemble Kalman filter. We apply this algorithm to the non isothermal continue stirred tank reactor problem. We decompose the covariance of the ense...
Kysely, Jan [Institute of Atmospheric Physics AS CR, Prague 4 (Czech Republic); Plavcova, Eva [Institute of Atmospheric Physics AS CR, Prague 4 (Czech Republic); Charles University, Faculty of Mathematics and Physics, Prague (Czech Republic)
2012-09-15
The study examines how regional climate models (RCMs) reproduce the diurnal temperature range (DTR) in their control simulations over Central Europe. We evaluate 30-year runs driven by perfect boundary conditions (the ERA40 reanalysis, 1961-1990) and a global climate model (ECHAM5) of an ensemble of RCMs with 25-km resolution from the ENSEMBLES project. The RCMs' performance is compared against the dataset gridded from a high-density stations network. We find that all RCMs underestimate DTR in all seasons, notwithstanding whether driven by ERA40 or ECHAM5. Underestimation is largest in summer and smallest in winter in most RCMs. The relationship of the models' errors to indices of atmospheric circulation and cloud cover is discussed to reveal possible causes of the biases. In all seasons and all simulations driven by ERA40 and ECHAM5, underestimation of DTR is larger under anticyclonic circulation and becomes smaller or negligible for cyclonic circulation. In summer and transition seasons, underestimation tends to be largest for the southeast to south flow associated with warm advection, while in winter it does not depend on flow direction. We show that the biases in DTR, which seem common to all examined RCMs, are also related to cloud cover simulation. However, there is no general tendency to overestimate total cloud amount under anticyclonic conditions in the RCMs, which suggests the large negative bias in DTR for anticyclonic circulation cannot be explained by a bias in cloudiness. Errors in simulating heat and moisture fluxes between land surface and atmosphere probably contribute to the biases in DTR as well. (orig.)
A. J. Pitman
2012-11-01
Full Text Available The impact of historical land use induced land cover change (LULCC on regional-scale climate extremes is examined using four climate models within the Land Use and Climate, IDentification of robust impacts project. To assess those impacts, multiple indices based on daily maximum and minimum temperatures and daily precipitation were used. We contrast the impact of LULCC on extremes with the impact of an increase in atmospheric CO_{2} from 280 ppmv to 375 ppmv. In general, consistent changes in both high and low temperature extremes are similar to the simulated change in mean temperature caused by LULCC and are restricted to regions of intense modification. The impact of LULCC on both means and on most temperature extremes is statistically significant. While the magnitude of the LULCC-induced change in the extremes can be of similar magnitude to the response to the change in CO_{2}, the impacts of LULCC are much more geographically isolated. For most models, the impacts of LULCC oppose the impact of the increase in CO_{2} except for one model where the CO_{2}-caused changes in the extremes are amplified. While we find some evidence that individual models respond consistently to LULCC in the simulation of changes in rainfall and rainfall extremes, LULCC's role in affecting rainfall is much less clear and less commonly statistically significant, with the exception of a consistent impact over South East Asia. Since the simulated response of mean and extreme temperatures to LULCC is relatively large, we conclude that unless this forcing is included, we risk erroneous conclusions regarding the drivers of temperature changes over regions of intense LULCC.
Pirttioja, N.; Carter, T.R.; Fronzek, S.; Bindi, M.; Hoffmann, H.; Palosuo, T.; Ruiz-Ramos, M.; Tao, F.; Trnka, M.; Acutis, M.; Supit, I.
2015-01-01
This study explored the utility of the impact response surface (IRS) approach for investigating model ensemble crop yield responses under a large range of changes in climate. IRSs of spring and winter wheat Triticum aestivum yields were constructed from a 26-member ensemble of process-based crop sim
Erna Apriliani
2011-01-01
Full Text Available Kalman filter is an algorithm to estimate the state variable of dynamical stochastic system. The square root ensemble Kalman filter is an modification of Kalman filter. The square root ensemble Kalman filter is proposed to keep the computational stability and reduce the computational time. In this paper we study the efficiency of the reduced rank ensemble Kalman filter. We apply this algorithm to the non isothermal continue stirred tank reactor problem. We decompose the covariance of the ensemble estimation by using the singular value decomposition (the SVD, and then we reduced the rank of the diagonal matrix of those singular values. We make a simulation by using Matlab program. We took some the number of ensemble such as 100, 200 and 500. We compared the computational time and the accuracy between the square root ensemble Kalman filter and the ensemble Kalman filter. The reduced rank ensemble Kalman filter can’t be applied in this problem because the dimension of state variable is too less.
Nonclassical Photon Pairs Generated from a Room-temperature Atomic Ensemble
JIANG Wei; HAN Chao; XUE Peng; DUAN L M; GUO G C
2004-01-01
@@ We report experimental generation of non-classically correlated photon pairs from collective emission in a room temperature atomic vapor cell.The nonclassical feature of the emission is demonstrated by observing a violation of the Cauchy-Schwarz inequality.Each pair of correlated photons are separated by a controllable time delay up to 2 microseconds.This experiment demonstrates an important step towards the realization of the Duan-Lukin-Cirac-Zoller scheme for scalable long-distance quantum communication.
Quantum Gas of Polar Molecules Ensembles at Ultralow Temperatures: f-wave Superfluids
Boudjemâa, Abdelâali
2017-07-01
We investigate novel f-wave superfluids of fermionic polar molecules in a two-dimensional bilayer system with dipole moments polarized perpendicular to the layers and in opposite directions in different layers. The solution of the BCS gap equation reveals that these unconventional superfluids emerge at temperatures on the level of femtokelvin which opens up new possibilities to explore the topological f+i f phase, quantum interferometry and Majorana fermions in experiments with ultracold polar molecules. The experimental realization of such interesting novel f-wave pairings is discussed.
Projected changes to winter temperature characteristics over Canada based on an RCM ensemble
Jeong, Dae Il; Sushama, Laxmi; Diro, Gulilat Tefera; Khaliq, M. Naveed
2016-09-01
Cold temperature and associated extremes often impact adversely human health and environment and bring disruptions in economic activities during winter over Canada. This study investigates projected changes in winter (December to March) period cold extreme days (i.e., cold nights, cold days, frost days, and ice days) and cold spells over Canada based on 11 regional climate model (RCM) simulations for the future 2040-2069 period with respect to the current 1970-1999 period. These simulations, available from the North American Regional Climate Change Assessment Program, were obtained with six different RCMs, when driven by four different Atmosphere-Ocean General Circulation Models, under the Special Report on Emissions Scenarios A2 scenario. Based on the reanalysis boundary conditions, the RCM simulations reproduce spatial patterns of observed mean values of the daily minimum and maximum temperatures and inter-annual variability of the number of cold nights over different Canadian climatic regions considered in the study. A comparison of current and future period simulations suggests decreases in the frequency of cold extreme events (i.e., cold nights, cold days and cold spells) and in selected return levels of maximum duration of cold spells over the entire study domain. Important regional differences are noticed as the simulations generally indicate smaller decreases in the characteristics of extreme cold events over western Canada compared to the other regions. The analysis also suggests an increase in the frequency of midwinter freeze-thaw events, due mainly to a decrease in the number of frost days and ice days for all Canadian regions. Especially, densely populated southern and coastal Canadian regions will require in depth studies to facilitate appropriate adaptation strategies as these regions are clearly expected to experience large increases in the frequency of freeze-thaw events.
Brüggemann, R; Nesheva, D; Meier, S; Bineva, I
2011-02-01
Amorphous SiO(x) thin films with three different oxygen contents (x = 1.3, 1.5, and 1.7) have been deposited by thermal evaporation of SiO in vacuum. Partial phase separation in the films has been induced by annealing at 773 or 973 K in argon for 60 and 120 min and thus Si-SiO(x) composite films have been prepared containing amorphous Si nanoparticles of various sizes (Photoluminescence from the films has been measured in the temperature range 20-296 K. The single Gauss band observed in the photoluminescence spectra of the samples with x = 1.3 and centered in the range 1.55-1.75 eV has been related to radiative recombination in Si nanoparticles. Two bands, a red-orange one (related to radiative recombination in Si nanoparticles) and a green band peaked at approximately 2.3 eV (related to radiative recombination via defects) have been resolved in the photoluminescence spectra of the films with x = 1.5 and 1.7. The band in the spectra of the x = 1.3 samples has shown a relative strong thermal quenching but it is significantly weaker than the photoluminescence quenching in bulk a-Si. Besides, the higher the initial oxygen content, the weaker is the photoluminescence thermal quenching. These observations have been related to carrier confinement which is stronger in smaller nanoparticles. The thermally induced photoluminescence decrease with increasing temperature in the samples with x = 1.3 obeys the relation that is characteristic for bulk a-Si:H while the photoluminescence decrease in x = 1.5 and 1.7 samples is of Arrhenius type. We suggest that in nanoparticles larger than 2 nm recombination via band tail states is the dominating photoluminescence mechanism while in smaller nanoparticles exciton-like recombination dominates.
Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent
2015-09-14
A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
Dadgostar, S.; Mogilatenko, A.; Masselink, W. T.; Hatami, F. [Department of Physics, Humboldt-Universität zu Berlin, Newton-Str. 15, D-12489 Berlin (Germany); Schmidtbauer, J.; Boeck, T. [Leibniz-Institut für Kristallzüchtung, Max-Born-Str. 2, D-12489 Berlin (Germany); Torres, A.; Martínez, O.; Jiménez, J. [Departamento de Fisica de la Materia Condensada, E.T.S.I.I., 47011 Valladolid GdSOptronlab, Dpto. Física de la Materia Condensada, Universidad de Valladolid, Ed. I+D, Paseo de Belén, 11, 47011, Valladolid (Spain); Tomm, J. W. [Max-Born-Institut für Nichtlineare Optik und Kurzzeitspektroskopie, Max-Born-Str. 2A, 12489 Berlin (Germany)
2016-03-07
We describe the optical emission and the carrier dynamics of an ensemble of self-assembled GaAs quantum dots embedded in GaP(001). The QD formation is driven by the 3.6% lattice mismatch between GaAs and GaP in the Stranski-Krastanow mode after deposition of more than 1.2 monolayers of GaAs. The quantum dots have an areal density between 6 and 7.6 × 10{sup 10} per cm{sup −2} and multimodal size distribution. The luminescence spectra show two peaks in the range of 1.7 and 2.1 eV. The samples with larger quantum dots have red emission and show less thermal quenching compared with the samples with smaller QDs. The large QDs luminescence up to room temperature. We attribute the high energy emission to indirect carrier recombination in the thin quantum wells or small strained quantum dots, whereas the low energy red emission is due to the direct electron-hole recombination in the relaxed quantum dots.
Projected changes to high temperature events for Canada based on a regional climate model ensemble
Jeong, Dae Il; Sushama, Laxmi; Diro, Gulilat Tefera; Khaliq, M. Naveed; Beltrami, Hugo; Caya, Daniel
2016-05-01
Extreme hot spells can have significant impacts on human society and ecosystems, and therefore it is important to assess how these extreme events will evolve in a changing climate. In this study, the impact of climate change on hot days, hot spells, and heat waves, over 10 climatic regions covering Canada, based on 11 regional climate model (RCM) simulations from the North American Regional Climate Change Assessment Program for the June to August summer period is presented. These simulations were produced with six RCMs driven by four Atmosphere-Ocean General Circulation Models (AOGCM), for the A2 emission scenario, for the current 1970-1999 and future 2040-2069 periods. Two types of hot days, namely HD-1 and HD-2, defined respectively as days with only daily maximum temperature (Tmax) and both Tmax and daily minimum temperature (Tmin) exceeding their respective thresholds (i.e., period-of-record 90th percentile of Tmax and Tmin values), are considered in the study. Analogous to these hot days, two types of hot spells, namely HS-1 and HS-2, are identified as spells of consecutive HD-1 and HD-2 type hot days. In the study, heat waves are defined as periods of three or more consecutive days, with Tmax above 32 °C threshold. Results suggest future increases in the number of both types of hot days and hot spell events for the 10 climatic regions considered. However, the projected changes show high spatial variability and are highly dependent on the RCM and driving AOGCM combination. Extreme hot spell events such as HS-2 type hot spells of longer duration are expected to experience relatively larger increases compared to hot spells of moderate duration, implying considerable heat related environmental and health risks. Regionally, the Great Lakes, West Coast, Northern Plains, and Maritimes regions are found to be more affected due to increases in the frequency and severity of hot spells and/or heat wave characteristics, requiring more in depth studies for these regions
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall “ensemble” decision is rooted in our culture at least from the classical age of ancient Greece, and it has been formalized during the Enlightenment with the Condorcet Jury Theorem[45]), which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of “votes” (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community on ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
2002-01-01
NYYD Ensemble'i duost Traksmann - Lukk E.-S. Tüüri teosega "Symbiosis", mis on salvestatud ka hiljuti ilmunud NYYD Ensemble'i CDle. 2. märtsil Rakvere Teatri väikeses saalis ja 3. märtsil Rotermanni Soolalaos, kavas Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho, Buckinx
Zenkevich, Eduard I.; Stupak, Alexander P.; Kowerko, Danny; Borczyskowski, Christian von
2012-10-01
Optical spectroscopy on ensembles and single CdSe/ZnS semiconductor quantum dots (QDs) demonstrates a competition of trap and near band edge photoluminescence (PL). This competition can be markedly influenced by a few surface attached pyridyl functionalized dye molecules (porphyrins or perylene diimides) forming nanoassemblies with well defined geometries. Temperature variation and related changes in absorption and emission reveal sharp changes of the ligand shell structure in a narrow temperature range for organic (TOPO and amine) surfactants (phase transition). The effects on QD PL at this transition become considerably pronounced upon attachment of only a few dye molecules to QD surface. Moreover, under ambient conditions amine capped QDs are photodegraded in the course of time. This process is enhanced by attached dye molecules both on the ensemble and single particle/dye level. This investigation elaborates the importance of (switchable) surface states for the characterization of the PL of QDs.
I. Diallo
2012-01-01
Full Text Available Reliable climate change scenarios are critical for West Africa, whose economy relies mostly on agriculture and, in this regard, multimodel ensembles are believed to provide the most robust climate change information. Toward this end, we analyze and intercompare the performance of a set of four regional climate models (RCMs driven by two global climate models (GCMs (for a total of 4 different GCM-RCM pairs in simulating present day and future climate over West Africa. The results show that the individual RCM members as well as their ensemble employing the same driving fields exhibit different biases and show mixed results in terms of outperforming the GCM simulation of seasonal temperature and precipitation, indicating a substantial sensitivity of RCMs to regional and local processes. These biases are reduced and GCM simulations improved upon by averaging all four RCM simulations, suggesting that multi-model RCM ensembles based on different driving GCMs help to compensate systematic errors from both the nested and the driving models. This confirms the importance of the multi-model approach for improving robustness of climate change projections. Illustrative examples of such ensemble reveal that the western Sahel undergoes substantial drying in future climate projections mostly due to a decrease in peak monsoon rainfall.
Zenkevich, Eduard I., E-mail: zenkev@tut.by [National Technical University of Belarus, Department of Information Technologies and Robotics, Nezavisimosti Ave., 65, Minsk 220013 (Belarus); Stupak, Alexander P. [B.I. Stepanov Institute of Physics, National Academy of Science of Belarus, Nezavisimosti Ave., 70, 220072 Minsk (Belarus); Kowerko, Danny; Borczyskowski, Christian von [Institute of Physics and Center for Nanostructured Materials and Analytics (nanoMA), Chemnitz University of Technology, 09107 Chemnitz (Germany)
2012-10-08
Highlights: Black-Right-Pointing-Pointer Ensemble and single assembly optical experiments for CdSe/ZnS QD-dye nanocomposites. Black-Right-Pointing-Pointer Temperature lowering or dye attachment leads to a phase transition of capping layer. Black-Right-Pointing-Pointer It changes the distribution and energy of surface traps and QD band edge emission. Black-Right-Pointing-Pointer QD photodegradation in the course of time is enlarged by attached dye molecules. Black-Right-Pointing-Pointer Phase transition has impact on QD core structure and exciton-phonon coupling. -- Abstract: Optical spectroscopy on ensembles and single CdSe/ZnS semiconductor quantum dots (QDs) demonstrates a competition of trap and near band edge photoluminescence (PL). This competition can be markedly influenced by a few surface attached pyridyl functionalized dye molecules (porphyrins or perylene diimides) forming nanoassemblies with well defined geometries. Temperature variation and related changes in absorption and emission reveal sharp changes of the ligand shell structure in a narrow temperature range for organic (TOPO and amine) surfactants (phase transition). The effects on QD PL at this transition become considerably pronounced upon attachment of only a few dye molecules to QD surface. Moreover, under ambient conditions amine capped QDs are photodegraded in the course of time. This process is enhanced by attached dye molecules both on the ensemble and single particle/dye level. This investigation elaborates the importance of (switchable) surface states for the characterization of the PL of QDs.
Margot Bador
2015-09-01
Full Text Available Reducing the dimensionality of the complex spatio-temporal variables associated with climate modeling, especially ensembles of climate models, is a challenging and important objective. For studies of detection and attribution, it is especially important to maintain information related to the extreme values of the atmospheric processes. Typical methods for data reduction involve summarizing climate model output information through means and variances, which does not preserve any information about the extremes. In order to help solve this challenge, a dependence summary measure appropriate for extreme values must be inferred. Here, we adapt one such measure from a recent study to a larger domain with a different variable and gridded data from observations and climate model ensembles, i.e. E-OBS observations and the CNRM-CM5 model. The handling of such ensembles of data is proposed, as well as a comparison of the spatial clusterings between two different ensembles, here a present-day and a future ensemble of climate simulations. This method yields valid information concerning extremes, while greatly reducing the data set.
Takuji Waseda
2013-06-01
Full Text Available We develop an assimilation method of high horizontal resolution sea surface temperature data, provided from the Moderate Resolution Imaging Spectroradiometer (MODIS-SST sensors boarded on the Aqua and Terra satellites operated by National Aeronautics and Space Administration (NASA, focusing on the reproducibility of the Kuroshio front variations south of Japan in February 2010. Major concerns associated with the development are (1 negative temperature bias due to the cloud effects, and (2 the representation of error covariance for detection of highly variable phenomena. We treat them by utilizing an advanced data assimilation method allowing use of spatiotemporally varying error covariance: the Local Ensemble Transformation Kalman Filter (LETKF. It is found that the quality control, by comparing the model forecast variable with the MODIS-SST data, is useful to remove the negative temperature bias and results in the mean negative bias within −0.4 °C. The additional assimilation of MODIS-SST enhances spatial variability of analysis SST over 50 km to 25 km scales. The ensemble spread variance is effectively utilized for excluding the erroneous temperature data from the assimilation process.
Low-temperature tapered-fiber probing of diamond NV ensembles coupled to GaP microcavities
Fu, K -M C; Santori, C; Faraon, A; Beausoleil, R G
2011-01-01
In this work we present a platform for testing the device performance of a cavity-emitter system, using an ensemble of emitters and a tapered optical fiber. This method provides high-contrast spectra of the cavity modes, selective detection of emitters coupled to the cavity, and an estimate of the device performance in the single- emitter case. Using nitrogen-vacancy (NV) centers in diamond and a GaP optical microcavity, we are able to tune the cavity onto the NV resonance at 10 K, couple the cavity-coupled emission to a tapered fiber, and measure the fiber-coupled NV spontaneous emission decay. Theoretically we show that the fiber-coupled average Purcell factor is 2-3 times greater than that of free-space collection; although due to ensemble averaging it is still a factor of 3 less than the Purcell factor of a single, ideally placed center.
Carmelo, J. M. P.; Prosen, T.
2017-01-01
Whether in the thermodynamic limit, vanishing magnetic field h → 0, and nonzero temperature the spin stiffness of the spin-1/2 XXX Heisenberg chain is finite or vanishes within the grand-canonical ensemble remains an unsolved and controversial issue, as different approaches yield contradictory results. Here we provide an upper bound on the stiffness and show that within that ensemble it vanishes for h → 0 in the thermodynamic limit of chain length L → ∞, at high temperatures T → ∞. Our approach uses a representation in terms of the L physical spins 1/2. For all configurations that generate the exact spin-S energy and momentum eigenstates such a configuration involves a number 2S of unpaired spins 1/2 in multiplet configurations and L - 2 S spins 1/2 that are paired within Msp = L / 2 - S spin-singlet pairs. The Bethe-ansatz strings of length n = 1 and n > 1 describe a single unbound spin-singlet pair and a configuration within which n pairs are bound, respectively. In the case of n > 1 pairs this holds both for ideal and deformed strings associated with n complex rapidities with the same real part. The use of such a spin 1/2 representation provides useful physical information on the problem under investigation in contrast to often less controllable numerical studies. Our results provide strong evidence for the absence of ballistic transport in the spin-1/2 XXX Heisenberg chain in the thermodynamic limit, for high temperatures T → ∞, vanishing magnetic field h → 0 and within the grand-canonical ensemble.
J.M.P. Carmelo
2017-01-01
Full Text Available Whether in the thermodynamic limit, vanishing magnetic field h→0, and nonzero temperature the spin stiffness of the spin-1/2 XXX Heisenberg chain is finite or vanishes within the grand-canonical ensemble remains an unsolved and controversial issue, as different approaches yield contradictory results. Here we provide an upper bound on the stiffness and show that within that ensemble it vanishes for h→0 in the thermodynamic limit of chain length L→∞, at high temperatures T→∞. Our approach uses a representation in terms of the L physical spins 1/2. For all configurations that generate the exact spin-S energy and momentum eigenstates such a configuration involves a number 2S of unpaired spins 1/2 in multiplet configurations and L−2S spins 1/2 that are paired within Msp=L/2−S spin–singlet pairs. The Bethe-ansatz strings of length n=1 and n>1 describe a single unbound spin–singlet pair and a configuration within which n pairs are bound, respectively. In the case of n>1 pairs this holds both for ideal and deformed strings associated with n complex rapidities with the same real part. The use of such a spin 1/2 representation provides useful physical information on the problem under investigation in contrast to often less controllable numerical studies. Our results provide strong evidence for the absence of ballistic transport in the spin-1/2 XXX Heisenberg chain in the thermodynamic limit, for high temperatures T→∞, vanishing magnetic field h→0 and within the grand-canonical ensemble.
Diurnal Ensemble Surface Meteorology Statistics
U.S. Environmental Protection Agency — Excel file containing diurnal ensemble statistics of 2-m temperature, 2-m mixing ratio and 10-m wind speed. This Excel file contains figures for Figure 2 in the...
Tapiador, Francisco J.; Sanchez, Enrique; Romera, Raquel (Inst. of Environmental Sciences, Univ. of Castilla-La Mancha (UCLM), 45071 Toledo (Spain)). e-mail: francisco.tapiador@uclm.es
2009-07-01
Regional climate models (RCMs) are dynamical downscaling tools aimed to improve the modelling of local physical processes. Ensembles of RCMs are widely used to improve the coarse-grain estimates of global climate models (GCMs) since the use of several RCMs helps to palliate uncertainties arising from different dynamical cores and numerical schemes methods. In this paper, we analyse the differences and similarities in the climate change response for an ensemble of heterogeneous RCMs forced by one GCM (HadAM3H), and one emissions scenario (IPCC's SRES-A2 scenario). As a difference with previous approaches using PRUDENCE database, the statistical description of climate characteristics is made through the spatial and temporal aggregation of the RCMs outputs into probability distribution functions (PDF) of monthly values. This procedure is a complementary approach to conventional seasonal analyses. Our results provide new, stronger evidence on expected marked regional differences in Europe in the A2 scenario in terms of precipitation and temperature changes. While we found an overall increase in the mean temperature and extreme values, we also found mixed regional differences for precipitation
Aken, Bronwen L.; Achuthan, Premanand; Akanni, Wasiu; Amode, M. Ridwan; Bernsdorff, Friederike; Bhai, Jyothish; Billis, Konstantinos; Carvalho-Silva, Denise; Cummins, Carla; Clapham, Peter; Gil, Laurent; Girón, Carlos García; Gordon, Leo; Hourlier, Thibaut; Hunt, Sarah E.; Janacek, Sophie H.; Juettemann, Thomas; Keenan, Stephen; Laird, Matthew R.; Lavidas, Ilias; Maurel, Thomas; McLaren, William; Moore, Benjamin; Murphy, Daniel N.; Nag, Rishi; Newman, Victoria; Nuhn, Michael; Ong, Chuang Kee; Parker, Anne; Patricio, Mateus; Riat, Harpreet Singh; Sheppard, Daniel; Sparrow, Helen; Taylor, Kieron; Thormann, Anja; Vullo, Alessandro; Walts, Brandon; Wilder, Steven P.; Zadissa, Amonida; Kostadima, Myrto; Martin, Fergal J.; Muffato, Matthieu; Perry, Emily; Ruffier, Magali; Staines, Daniel M.; Trevanion, Stephen J.; Cunningham, Fiona; Yates, Andrew; Zerbino, Daniel R.; Flicek, Paul
2017-01-01
Ensembl (www.ensembl.org) is a database and genome browser for enabling research on vertebrate genomes. We import, analyse, curate and integrate a diverse collection of large-scale reference data to create a more comprehensive view of genome biology than would be possible from any individual dataset. Our extensive data resources include evidence-based gene and regulatory region annotation, genome variation and gene trees. An accompanying suite of tools, infrastructure and programmatic access methods ensure uniform data analysis and distribution for all supported species. Together, these provide a comprehensive solution for large-scale and targeted genomics applications alike. Among many other developments over the past year, we have improved our resources for gene regulation and comparative genomics, and added CRISPR/Cas9 target sites. We released new browser functionality and tools, including improved filtering and prioritization of genome variation, Manhattan plot visualization for linkage disequilibrium and eQTL data, and an ontology search for phenotypes, traits and disease. We have also enhanced data discovery and access with a track hub registry and a selection of new REST end points. All Ensembl data are freely released to the scientific community and our source code is available via the open source Apache 2.0 license. PMID:27899575
Ferraro, Angus J.; Griffiths, Hannah G.
2016-03-01
The reduction in global-mean precipitation when stratospheric aerosol geoengineering is used to counterbalance global warming from increasing carbon dioxide (CO2) concentrations has been mainly attributed to the temperature-independent effect of CO2 on atmospheric radiative cooling. We demonstrate here that stratospheric sulphate aerosol itself also acts to reduce global-mean precipitation independent of its effects on temperature. The temperature-independent effect of stratospheric aerosol geoenginering on global-mean precipitation is calculated by removing temperature-dependent effects from climate model simulations of the Geoengineering Model Intercomparison Project (GeoMIP). When sulphate aerosol is injected into the stratosphere at a rate of 5 Tg SO2 per year the aerosol reduces global-mean precipitation by approximately 0.2 %, though multiple ensemble members are required to separate this effect from internal variability. For comparison, the precipitation reduction from the temperature-independent effect of increasing CO2 concentrations under the RCP4.5 scenario of the future is approximately 0.5 %. The temperature-independent effect of stratospheric sulphate aerosol arises from the aerosol’s effect on tropospheric radiative cooling. Radiative transfer calculations show this is mainly due to increasing downward emission of infrared radiation by the aerosol, but there is also a contribution from the stratospheric warming the aerosol causes. Our results suggest climate model simulations of solar dimming can capture the main features of the global-mean precipitation response to stratospheric aerosol geoengineering.
Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide.
Kadoura, Ahmad Salim
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system\\'s potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide. © 2014 Elsevier Inc.
Effects of Work Rate and Temperature on Work/Rest Cycles When Wearing the Chemical Defense Ensemble
Wilmore, Jack
1997-01-01
Phase I was comprised of three study periods, Phase IB and Phase IC, where subjects walked on a treadmill at ambient temperatures of 70 degrees Farenheit, 80 degrees Farenheit, 90 degrees Farenheit...
Composed ensembles of random unitary ensembles
Pozniak, M; Kus, M; Pozniak, Marcin; Zyczkowski, Karol; Kus, Marek
1997-01-01
Composed ensembles of random unitary matrices are defined via products of matrices, each pertaining to a given canonical circular ensemble of Dyson. We investigate statistical properties of spectra of some composed ensembles and demonstrate their physical relevance. We discuss also the methods of generating random matrices distributed according to invariant Haar measure on the orthogonal and unitary group.
Xu, T.; Bateni, S. M.; Liu, S.
2015-12-01
Accurate estimation of turbulent heat fluxes is important for water resources planning and management, irrigation scheduling, and weather forecast. Land surface models (LSMs) can be used to simulate turbulent heat fluxes over large-scale domains. However, the application of LSMs is hindered due to the high uncertainty in model parameters and state variables. In this study, a dual-pass ensemble-based data assimilation (DA) approach is developed to estimate turbulent heat fluxes. Initially, the common land model (CoLM) is used as the LSM (open-loop), and thereafter the ensemble Kalman filter is employed to optimize the CoLM parameters and variables. The first pass of the DA scheme optimizes vegetation parameters of CoLM (which are related to the leaf stomatal conductance) on a weekly-basis by assimilating the MODIS land surface temperature (LST) data. The second pass optimizes the soil moisture state of CoLM on a daily-basis by assimilating soil moisture observations from Cosmic-ray instrument. The ultimate goal is to improve turbulent heat fluxes estimates from CoLM by optimizing its vegetation parameters and soil moisture state via assimilation of LST and soil moisture data into the proposed DA system. The DA approach is tested over a wet and densely vegetated site, called Daman in northwest of China. Results indicate that the CoLM (open-loop) model typically underestimates latent heat flux and overestimates sensible heat flux. By assimilation of LST in the first pass, the turbulent heat fluxes are improved compared to those of the open-loop. These fluxes become even more accurate by assimilation of soil moisture in the second pass of the DA approach. These findings illustrate that the introduced DA approach can successfully extract information in LST and soil moisture data to optimize the CoLM parameters and states and improve the turbulent heat fluxes estimates.
Huang, Yong; Wang, Fengyou; Li, Yi; Cai, Tijiu
2014-11-01
This paper evaluates the performance of the Coupled Model Intercomparison Project phase 5 (CMIP5) in simulating annual and decadal temperature in the Mekong River Basin from 1950 to 2005. By use of Bayesian multi-model averaging method, the future projection of temperature variation under different scenarios are also analyzed. The results show, the performances of climate model are more accurate in space than time, the model can catch the warming characteristics in the Mekong river Basin, but the accuracy of simulation is not good enough. Bayesian multi-model averaging method can improve the annual and decadal temperature simulation when compared to a single result. The projected temperature in Mekong River will increase by 0.88 °C/100 year, 2.15 °C/100 year and 4.96 °C/100 year for the RCP2.6, RCP4.5, and RCP8.5 scenarios, respectively, over the twenty-first century. The findings will be beneficial for local people and policy-maker to formulate regional strategies against the potential menaces of warming scenarios.
Ozturk, Tugba; Tufan Turp, M.; Türkeş, Murat; Kurnaz, M. Levent
2014-05-01
In this study, we conducted a multi-model ensemble mean approach in order to investigate the projected changes in surface air temperatures and precipitation totals over Central Asia. Even though there are totally sixty seven different models of thirty modeling groups all around the world participating in the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP5), forty four models among them were used due to data availability. Central Asia (known as Region 8), which is one of twelve domains of the Coordinated Regional Climate Downscaling Experiment (CORDEX), was chosen as a domain of study. In this respect, we focused on two distinct scenarios (i.e. RCP4.5 and RCP8.5) for three different future periods (i.e. 2010-2040, 2040 - 2070 and 2070 - 2100) to examine accurately the foreseen changes in two fundamental climate variables (surface air temperature and precipitation total) for the Central Asia region. This work has been supported by Bogazici University BAP under project number 7362. One of the authors (MLK) was partially supported by Mercator-IPC Fellowship Program.
El Kenawy, Ahmed M.
2014-12-18
We employ a suite of regional climate models (RCMs) to assess future changes in summer (JJA) maximum temperature (Tmax) over the Ebro basin, the largest hydrological division in the Iberian Peninsula. Under the A1B emission scenario, future changes in both mean values and their corresponding time varying percentiles were examined by comparing the control period (1971-2000) with two future time slices: 2021-2050 and 2071-2100. Here, the rationale is to assess how lower/upper tails of temperature distributions will change in the future and whether these changes will be consistent with those of the mean. The model validation results demonstrate significant differences among the models in terms of their capability to representing the statistical characteristics (e.g., mean, skewness and asymmetry) of the observed climate. The results also indicate that the current substantial warming observed in the Ebro basin is expected to continue during the 21st century, with more intense warming occurring at higher altitudes and in areas with greater distance from coastlines. All models suggest that the region will experience significant positive changes in both the cold and warm tails of temperature distributions. However, the results emphasize that future changes in the lower and upper tails of the summer Tmax distribution may not follow the same warming rate as the mean condition. In particular, the projected changes in the warm tail of the summer Tmax are shown to be significantly larger than changes in both mean values and the cold tail, especially at the end of the 21st century. The finding suggests that much of the changes in the summer Tmax percentiles will be driven by a shift in the entire distribution of temperature rather than only changes in the central tendency. Better understanding of the possible implications of future climate systems provides information useful for vulnerability assessments and the development of local adaptation strategies for multi
Balawender, Robert
2009-01-01
The formalism developed in the first paper of the series [arXiv:0901.1060v3] is applied to two thermodynamic systems: (i) of three global observables (the energy, the total electron number and the spin number), (ii) of one global observable (the internal electron energy) and two local (position-dependent) observables (the total electron density and the spin density). The two-component potential of the many-electron system of interest is constructed of a scalar external potential and a collinear magnetic field (coupled only with the spin operator). Various equilibrium characteristics of two systems are defined and investigated. Conditions for the equivalence between two systems (the same equilibrium density matrix demanded) are derived and thoroughly discussed. The applicability of the Hohenberg-Kohn theorem is extended to the thermodynamic spin-density functional theory. Obtained results provide a rigorous mathematical foundation for future derivation of the zero-temperature limit of this theory and determina...
XU Chong-Hai; XU Ying
2012-01-01
Climate changes in 21st century China are described based on the projections of 11 climate models under Representative Concentration Pathway (RCP) scenarios. The results show that warming is expected in all regions of China under the RCP scenarios, with the northern regions showing greater warming than the southern regions. The warming tendency from 2011 to 2100 is 0.06°C/10 a for RCP2.6, 0.24°C/10 a for RCP4.5, and 0.63°C/10 a for RCP8.5. The projected time series of annual temperature have similar variation tendencies as the new greenhouse gas (GHG) emission scenario pathways, and the warming under the lower emission scenarios is less than under the higher emission scenarios. The regional averaged precipitation will increase, and the increasing precipitation in the northern regions is significant and greater than in the southern regions in China. It is noted that precipitation will tend to decrease in the southern parts of China during the period of 2011–2040, especially under RCP8.5. Compared with the changes over the globe and some previous projections, the increased warming and precipitation over China is more remarkable under the higher emission scenarios. The uncertainties in the projection are unavoidable, and further analyses are necessary to develop a better understanding of the future changes over the region.
Exploring ensemble visualization
Phadke, Madhura N.; Pinto, Lifford; Alabi, Oluwafemi; Harter, Jonathan; Taylor, Russell M., II; Wu, Xunlei; Petersen, Hannah; Bass, Steffen A.; Healey, Christopher G.
2012-01-01
An ensemble is a collection of related datasets. Each dataset, or member, of an ensemble is normally large, multidimensional, and spatio-temporal. Ensembles are used extensively by scientists and mathematicians, for example, by executing a simulation repeatedly with slightly different input parameters and saving the results in an ensemble to see how parameter choices affect the simulation. To draw inferences from an ensemble, scientists need to compare data both within and between ensemble members. We propose two techniques to support ensemble exploration and comparison: a pairwise sequential animation method that visualizes locally neighboring members simultaneously, and a screen door tinting method that visualizes subsets of members using screen space subdivision. We demonstrate the capabilities of both techniques, first using synthetic data, then with simulation data of heavy ion collisions in high-energy physics. Results show that both techniques are capable of supporting meaningful comparisons of ensemble data.
Forsythe, Nathan; Fowler, Hayley; Pritchard, David
2017-04-01
High mountain Asia (HMA), including the Hindu Kush-Karakoram, Himalayas and Tibetan Plateau, constitutes one the key "water towers of the world", giving rise to river basins whose resources support hundreds of millions of people. This area is currently experiencing substantial demographic growth and socio-economic development. This evolution will likely continue for the next few decades and compound pressure on resource managements systems from inevitable climate change. In order to develop climate services to support water resources planning and facilitate adaptive capacity building, it is essential to critically characterise the skill and biases of the evaluation (reanalysis-driven) and control (historical period) components of presently available regional climate model (RCM) experiments. For mountain regions in particular, the ability of RCMs to reasonably reproduce the influence of complex topography, through lapse rates and orographic forcing, on sub-regional climate - notably temperature and precipitation - must be assessed in detail. This is vital because the spatiotemporal distribution of precipitation and temperature in mountains determine the seasonality of streamflow from the headwater reaches and of major river basins. Once the biases of individual GCM/RCM experiments have been identified methodologies can be developed for modulating (correcting) the projected patterns of change identified by comparing simulated climate sub-regional climate under specific emissions scenarios (e.g. RCP8.5) to historical representations by the same model (time-slice approach). Such methods could for example include calculating temperature change factors as a function elevation difference from present 0°C (freezing) isotherm rather than simply using the overlying RCM grid cell if for instance the RCM showed exacerbated temperature increase at snow line (i.e. albedo feedback in elevation dependent warming) but also showed a pronounced bias in the historical (vertical
Making Tree Ensembles Interpretable
Hara, Satoshi; Hayashi, Kohei
2016-01-01
Tree ensembles, such as random forest and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited. In this paper, we propose a post processing method that improves the model interpretability of tree ensembles. After learning a complex tree ensembles in a standard way, we approximate it by a simpler model that is interpretable for human. To obtain the simpler model, we derive the EM algorithm minimizing the KL divergence from the ...
Ensemble treatments of thermal pairing in nuclei
Hung, Nguyen Quang; Dang, Nguyen Dinh
2009-10-01
A systematic comparison is conducted for pairing properties of finite systems at nonzero temperature as predicted by the exact solutions of the pairing problem embedded in three principal statistical ensembles, namely the grandcanonical ensemble, canonical ensemble and microcanonical ensemble, as well as the unprojected (FTBCS1+SCQRPA) and Lipkin-Nogami projected (FTLN1+SCQRPA) theories that include the quasiparticle number fluctuation and coupling to pair vibrations within the self-consistent quasiparticle random-phase approximation. The numerical calculations are performed for the pairing gap, total energy, heat capacity, entropy, and microcanonical temperature within the doubly-folded equidistant multilevel pairing model. The FTLN1+SCQRPA predictions are found to agree best with the exact grand-canonical results. In general, all approaches clearly show that the superfluid-normal phase transition is smoothed out in finite systems. A novel formula is suggested for extracting the empirical pairing gap in reasonable agreement with the exact canonical results.
王敏; 李晓莉; 范广洲; 李泽椿
2012-01-01
采用非齐次高斯回归(NGR)技术对国家气象中心区域集合预报系统的2 m温度预报结果开展了一阶偏差和二阶离散度的校准研究.对预报结果比较详尽的检验分析表明:校准后的2 m温度预报可靠性和预报技巧均显著提高,表现为校准后集合预报成员的均方根误差与离散度更为接近；原Talagrand直方图中的“L”形分布现象得到有效改善；Brier评分、最小连续分级概率评分(CRPS)明显减小,相对作用特征(ROC)面积增大,说明校准后的2 m温度预报表现出更好的预报技能.此外,NGR技术与自适应误差订正技术的对比试验表明,NGR在消除集合平均偏差和提高集合离散度两个方面均有优势.%It's known that ensemble forecasts provide a flow-dependent sample of the probability distribution of possible future atmospheric states instead of the single and deterministic prediction. Ideally, the probability of any event could be skillfully estimated directly from the relative event frequency in the ensemble. Unfortunately, although the quality of ensemble prediction systems (EPS) has been improved greatly, the direct output of EPS generally is subject to the systematic deficiencies, especially for surface variables. They are under-dispersive and lack of reliability. Therefore, statistical post-processing methods have been developed to improve direct model output. The nonhomogeneous Gaussian regression (NGR) is used to calibrate 2 m temperature forecast of the regional EPS at NMC/CMA. The NGR is the statistical correction method with the first and the second moment (mean bias and dispersion) for Gaussian-distributed continuous variable. This method is based on the multiple linear regression technique and provides a predictive probability density function (PDF) in terms of the normal distribution. The method of minimum continuous ranked probability score (CRPS) estimation is used to fit the regression coefficients of PDF. It can be
Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique
Jan D. Keller
2008-12-01
Full Text Available The quantitative forecast of precipitation requires a probabilistic background particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information of the underlying probability density function, we built a new ensemble forecasting system (GME-EFS based on the GME model of the German Meteorological Service (DWD. For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997, which develops perturbations by estimating the regions of largest model error induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three month period in 2007. The performance of the resulting ensemble forecasts are compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of ensemble spread of free atmosphere parameters (geopotential and temperature and ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS provides reasonable forecasts with spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3 month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
Multilevel ensemble Kalman filter
Chernov, Alexey
2016-01-06
This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF). In terms of computational cost vs. approximation error the asymptotic performance of the multilevel ensemble Kalman filter (MLEnKF) is superior to the EnKF s.
The Ensembl REST API: Ensembl Data for Any Language.
Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul
2015-01-01
We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
刘建国; 谢正辉; 赵琳娜; 贾炳浩
2013-01-01
利用TIGGE (THORPEX Interactive Grand Global Ensemble)单中心集合预报系统(ECMWF、United Kingdom Meteorological Office、China Meteorological Administration和NCEP)以及由此所构成的多中心模式超级集合预报系统24小时地面日均气温预报,结合淮河流域地面观测率定贝叶斯模型平均(Bayesian model averaging,BMA)参数,从而建立地面日均气温BMA概率预报模型.由此针对淮河流域进行地面日均气温BMA概率预报及其检验与评估,结果表明BMA模型比原始集合预报效果好；单中心的BMA概率预报都有较好的预报效果,其中ECMWF最好.多中心模式超级集合比单中心BMA概率预报效果更好,采用可替换原则比普通的多中心模式超级集合BMA模型计算量小,且在上述BMA集合预报系统中效果最好.它与原始集合预报相比其平均绝对误差减少近7％,其连续等级概率评分提高近10％.基于采用可替换原则的多中心模式超级集合BMA概率预报,针对研究区域提出了极端高温预警方案,这对防范高温天气有着重要意义.%Bayesian model averaging (BMA) probability forecast models were established through calibration of their parameters using 24-h ensemble forecasts of average daily surface air temperature provided by single-center ensemble prediction systems (EPSs) from the following agencies: the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the China Meteorological Administration (CMA), and the United States National Center for Environmental Prediction (NCEP) and its multi-center model grand-ensemble (GE) EPSs in the THORPEX Interactive Grand Global Ensemble (TIGGE), and observations in the Huaihe basin. The BMA probability forecasts of average daily surface air temperature for different EPSs were assessed by comparison with observations in the Huaihe basin. The results suggest that performance was better in the BMA predictive models than
Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing
Toye, Habib
2017-05-26
We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel, based on which we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble Optimal Interpolation (EnOI) algorithm. This approach requires only single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is then seasonally selected from a climatology of long-term model outputs. Observations of remote sensing sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Center for Environmental Prediction (NCEP) and the European Center for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal-) EnOI and compare their performances for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and atmospheric forcing.
Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing
Toye, Habib; Zhan, Peng; Gopalakrishnan, Ganesh; Kartadikaria, Aditya R.; Huang, Huang; Knio, Omar; Hoteit, Ibrahim
2017-07-01
We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel, based on which we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble Optimal Interpolation (EnOI) algorithm. This approach requires only single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is then seasonally selected from a climatology of long-term model outputs. Observations of remote sensing sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Center for Environmental Prediction (NCEP) and the European Center for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal-) EnOI and compare their performances for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and atmospheric forcing.
Oza, Nikunj C.
2004-01-01
Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, Le., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...
Iba, Yukito
2000-01-01
``Extended Ensemble Monte Carlo''is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
Marin-Garcia Pablo
2010-05-01
Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.
张玲; 智协飞
2013-01-01
Based on the daily observations of surface temperature and 24 h accumulated precipitation for the period 1952 through 2008,the features of an extreme weather events with low temperature and icing conditions which occurred in the southem part of China during early 2008 have been investigated in this study.In addition,the multimodel ensemble forecasting experiments have been conducted by using the ensemble forecasts of ECMWF,JMA,NCEP and CMA taken from the TIGGE (Interactive Grand Global Ensemble) archives.The results show that more than a third of the stations in the southern part of China were covered by extremely abundant precipitation with a 50-year return period,and extremely low temperature with a 50-year return period occurred in Guizhou and western Hunan province as well.For the 24 ～ 216 h surface temperature forecasts,the bias-removed multimodel ensemble mean (EMN) with running training period (R-BREM) has the highest forecast skill among all individual models and multimodel ensemble techniques.Taking the RMSEs of the ECMWF 96 h forecasts as the criterion,the valid time of the surface temperature forecast may be prolonged to 192 h over the southeastern coast of China by using the R-BREM technique.For the sprinkle forecasts over central and southern China,the R-BREM technique has the best performance in terms of TS scores for the 24 ～ 192 h forecasts (except for the 72 h forecasts) among all individual models and multimodel ensemble techniques.For the moderate rain,the forecast skill of the R-BREM technique is superior over those of individual models and multimodel ensemble mean.%利用逐日平均气温和24小时累积降水量资料研究了2008年初中国南方低温雨雪极端天气事件的特征.还利用TIGGE(THORPEX Interactive Grand Global Ensemble)资料中欧洲中期天气预报中心(ECMWF)、日本气象厅(JMA)、美国国家环境预报中心(NCEP)以及中国气象局(CMA)提供的集合预报资料进行多模式
The semantic similarity ensemble
Andrea Ballatore
2013-12-01
Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
Wakefield, M. E.
1982-01-01
Protective garment ensemble with internally-mounted environmental- control unit contains its own air supply. Alternatively, a remote-environmental control unit or an air line is attached at the umbilical quick disconnect. Unit uses liquid air that is vaporized to provide both breathing air and cooling. Totally enclosed garment protects against toxic substances.
Music Ensemble: Course Proposal.
Kovach, Brian
A proposal is presented for a Music Ensemble course to be offered at the Community College of Philadelphia for music students who have had previous vocal or instrumental training. A standardized course proposal cover form is followed by a statement of purpose for the course, a list of major course goals, a course outline, and a bibliography. Next,…
Hansen, Lars Kai; Salamon, Peter
1990-01-01
We propose several means for improving the performance an training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....
Multilevel ensemble Kalman filtering
Hoel, Haakon
2016-01-08
The ensemble Kalman filter (EnKF) is a sequential filtering method that uses an ensemble of particle paths to estimate the means and covariances required by the Kalman filter by the use of sample moments, i.e., the Monte Carlo method. EnKF is often both robust and efficient, but its performance may suffer in settings where the computational cost of accurate simulations of particles is high. The multilevel Monte Carlo method (MLMC) is an extension of classical Monte Carlo methods which by sampling stochastic realizations on a hierarchy of resolutions may reduce the computational cost of moment approximations by orders of magnitude. In this work we have combined the ideas of MLMC and EnKF to construct the multilevel ensemble Kalman filter (MLEnKF) for the setting of finite dimensional state and observation spaces. The main ideas of this method is to compute particle paths on a hierarchy of resolutions and to apply multilevel estimators on the ensemble hierarchy of particles to compute Kalman filter means and covariances. Theoretical results and a numerical study of the performance gains of MLEnKF over EnKF will be presented. Some ideas on the extension of MLEnKF to settings with infinite dimensional state spaces will also be presented.
Nonextensivity in magnetic nanoparticle ensembles
Binek, Ch.; Polisetty, S.; He, Xi; Mukherjee, T.; Rajesh, R.; Redepenning, J.
2006-08-01
A superconducting quantum interference device and Faraday rotation technique are used to study dipolar interacting nanoparticles embedded in a polystyrene matrix. Magnetization isotherms are measured for three cylindrically shaped samples of constant diameter but various heights. Detailed analysis of the isotherms supports Tsallis’ conjecture of a magnetic equation of state that involves temperature and magnetic field variables scaled by the logarithm of the number of magnetic nanoparticles. This unusual scaling of thermodynamic variables, which are conventionally considered to be intensive, originates from the nonextensivity of the Gibbs free energy in three-dimensional dipolar interacting particle ensembles. Our experimental evidence for nonextensivity is based on the data collapse of various isotherms that require scaling of the field variable in accordance with Tsallis’ equation of state.
Basu, A.; Das, B.; Middya, T. R.; Bhattacharya, D. P.
2017-02-01
The rate of loss of energy of the non-equilibrium electrons to the acoustic mode lattice vibration in a degenerate semiconductor is obtained under the condition, when the lattice temperature is low enough, so that the traditional approximations like the elastic nature of the electron-phonon collisions and the truncation of the phonon distribution to the equipartition law are not valid any more. Using the results of the energy loss rate, the non-ohmic mobility is then calculated. Evaluating the loss rate and the non-ohmic mobility in degenerate samples of Si and Ge we find that significant changes in both the characteristics have been effected compared to that in the non-degenerate samples, in the regime of lower energy and for relatively lower fields. The effected changes are more significant the lower the lattice temperature is.
Effective Visualization of Temporal Ensembles.
Hao, Lihua; Healey, Christopher G; Bass, Steffen A
2016-01-01
An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble, but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualization. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment based ensemble analysis, to capture important shape transition time-steps, clusters groups of similar members, and identify common shape changes over time across multiple members; and (2) time-step based ensemble analysis, which assumes ensemble members are aligned in time by combining similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying matter transition from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions.
Imprinting and recalling cortical ensembles.
Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael
2016-08-12
Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single- cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. Copyright © 2016, American Association for the Advancement of Science.
Multilevel ensemble Kalman filtering
Hoel, Hakon
2016-06-14
This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
Staying Thermal with Hartree Ensemble Approximations
Salle, M; Vink, Jeroen C
2000-01-01
Using Hartree ensemble approximations to compute the real time dynamics of scalar fields in 1+1 dimension, we find that with suitable initial conditions, approximate thermalization is achieved much faster than found in our previous work. At large times, depending on the interaction strength and temperature, the particle distribution slowly changes: the Bose-Einstein distribution of the particle densities develops classical features. We also discuss variations of our method which are numerically more efficient.
Critical behavior in topological ensembles
Bulycheva, K; Nechaev, S
2014-01-01
We consider the relation between three physical problems: 2D directed lattice random walks in an external magnetic field, ensembles of torus knots, and 5d Abelian SUSY gauge theory with massless hypermultiplet in $\\Omega$ background. All these systems exhibit the critical behavior typical for the "area+length" statistics of grand ensembles of 2D directed paths. In particular, using the combinatorial description, we have found the new critical behavior in the ensembles of the torus knots and in the instanton ensemble in 5d gauge theory. The relation with the integrable model is discussed.
Decadal climate predictions improved by ocean ensemble dispersion filtering
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.type="synopsis">type="main">Plain Language SummaryDecadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean memory due to its heat capacity holds big potential skill. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble. Instead of evaluating one prediction, but the
ESPC Coupled Global Ensemble Design
2014-09-30
coupled system infrastructure and forecasting capabilities. Initial operational capability is targeted for 2018. APPROACH 1. It is recognized...provided will be the probability distribution function (PDF) of environmental conditions. It is expected that this distribution will have skill. To...system would be the initial capability for ensemble forecasts . Extensions to fully coupled ensembles would be the next step. 2. Develop an extended
Botnet analysis using ensemble classifier
Anchit Bijalwan
2016-09-01
Full Text Available This paper analyses the botnet traffic using Ensemble of classifier algorithm to find out bot evidence. We used ISCX dataset for training and testing purpose. We extracted the features of both training and testing datasets. After extracting the features of this dataset, we bifurcated these features into two classes, normal traffic and botnet traffic and provide labelling. Thereafter using modern data mining tool, we have applied ensemble of classifier algorithm. Our experimental results show that the performance for finding bot evidence using ensemble of classifiers is better than single classifier. Ensemble based classifiers perform better than single classifier by either combining powers of multiple algorithms or introducing diversification to the same classifier by varying input in bot analysis. Our results are showing that by using voting method of ensemble based classifier accuracy is increased up to 96.41% from 93.37%.
Statistical ensembles for money and debt
Viaggiu, Stefano; Lionetto, Andrea; Bargigli, Leonardo; Longo, Michele
2012-10-01
We build a statistical ensemble representation of two economic models describing respectively, in simplified terms, a payment system and a credit market. To this purpose we adopt the Boltzmann-Gibbs distribution where the role of the Hamiltonian is taken by the total money supply (i.e. including money created from debt) of a set of interacting economic agents. As a result, we can read the main thermodynamic quantities in terms of monetary ones. In particular, we define for the credit market model a work term which is related to the impact of monetary policy on credit creation. Furthermore, with our formalism we recover and extend some results concerning the temperature of an economic system, previously presented in the literature by considering only the monetary base as a conserved quantity. Finally, we study the statistical ensemble for the Pareto distribution.
Staying thermal with Hartree ensemble approximations
Salle, Mischa E-mail: msalle@science.uva.nl; Smit, Jan E-mail: jsmit@science.uva.nl; Vink, Jeroen C. E-mail: jcvink@science.uva.nl
2002-03-25
We study thermal behavior of a recently introduced Hartree ensemble approximation, which allows for non-perturbative inhomogeneous field configurations as well as for approximate thermalization, in the phi (cursive,open) Greek{sup 4} model in 1+1 dimensions. Using ensembles with a free field thermal distribution as out-of-equilibrium initial conditions we determine thermalization time scales. The time scale for which the system stays in approximate quantum thermal equilibrium is an indication of the time scales for which the approximation method stays reasonable. This time scale turns out to be two orders of magnitude larger than the time scale for thermalization, in the range of couplings and temperatures studied. We also discuss simplifications of our method which are numerically more efficient and make a comparison with classical dynamics.
Entanglement in a Solid State Spin Ensemble
Simmons, Stephanie; Riemann, Helge; Abrosimov, Nikolai V; Becker, Peter; Pohl, Hans-Joachim; Thewalt, Mike L W; Itoh, Kohei M; Morton, John J L
2010-01-01
Entanglement is the quintessential quantum phenomenon and a necessary ingredient in most emerging quantum technologies, including quantum repeaters, quantum information processing (QIP) and the strongest forms of quantum cryptography. Spin ensembles, such as those in liquid state nuclear magnetic resonance, have been powerful in the development of quantum control methods, however, these demonstrations contained no entanglement and ultimately constitute classical simulations of quantum algorithms. Here we report the on-demand generation of entanglement between an ensemble of electron and nuclear spins in isotopically engineered phosphorus-doped silicon. We combined high field/low temperature electron spin resonance (3.4 T, 2.9 K) with hyperpolarisation of the 31P nuclear spin to obtain an initial state of sufficient purity to create a non-classical, inseparable state. The state was verified using density matrix tomography based on geometric phase gates, and had a fidelity of 98% compared with the ideal state a...
On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles
Luo, Xiaodong
2010-09-19
The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It is mainly different from the KF at the prediction steps, where it is some ensembles, rather then the means and covariance matrices, of the system state that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient in implementation. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well‐established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
Long-range interacting systems in the unconstrained ensemble
Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano
2017-01-01
Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.
Long-range interacting systems in the unconstrained ensemble.
Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano
2017-01-01
Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.
Nonextensivity in Magnetic Nanocluster Ensembles
Binek, Christian; Polisetty, Srinivas; He, Xi; Mukherjee, Tathagata; Rajasekeran, Rajesh; Redepenning, Jody
2006-03-01
We study the scaling behavior of dipolar interacting nanoparticles in 3D samples of various sizes but constant particle density. Ferromagnetic γ-Fe2O3 clusters embedded in a polystyrene matrix are fabricated by thermal decomposition of metal carbonyls. Transmission electron microscopy reveals a narrow size distribution of 12 nm clusters. They are randomly dispersed in the matrix with an average separation of 80 nm. Magnetization isotherms of these single domain particle ensembles are measured by SQUID magnetometry above the blocking temperature TB =115K where non-equilibrium effects are avoided. After demagnetization corrections which convert the applied magnetic fields into internal fields, H, a data collapse is achieved when scaling the magnetic moment, m, and H by appropriate factors. The latter are theoretically predicted functions of the number of particles and determined here numerically. Scaling of H takes into account the nonextensive (NE) behavior of dipolar interacting particles. In the case of long range interactions a scaling schema has been proposed by Tsallis and confirmed by simulations. The controversial field of NE thermodynamics requires however experimental evidence provided here.
Adiabatically deformed ensemble: Engineering nonthermal states of matter
Kennes, D. M.
2017-07-01
We propose a route towards engineering nonthermal states of matter, which show largely unexplored physics. The main idea relies on the adiabatic passage of a thermal ensemble under slow variations of the system Hamiltonian. If the temperature of the initial thermal ensemble is either zero or infinite, the ensemble after the passage is a simple thermal one with the same vanishing or infinite temperature. However, for any finite nonzero temperature, intriguing nonthermal ensembles can be achieved. We exemplify this in (a) a single oscillator, (b) a dimerized interacting one-dimensional chain of spinless fermions, (c) a BCS-type superconductor, and (d) the topological Kitaev chain. We solve these models with a combination of methods: either exactly, numerically using the density matrix renormalization group, or within an approximate functional renormalization group scheme. The designed states show strongly nonthermal behavior in each of the considered models. For example, for the chain of spinless fermions we exemplify how long-ranged nonthermal power-law correlations can be stabilized, and for the Kitaev chain we elucidate how the nonthermal ensemble can largely alter the transition temperature separating topological and trivial phases.
Comparison of four ensemble methods combining regional climate simulations over Asia
Feng, Jinming; Lee, Dong-Kyou; Fu, Congbin; Tang, Jianping; Sato, Yasuo; Kato, Hisashi; McGregor, John L.; Mabuchi, Kazuo
2011-02-01
A number of uncertainties exist in climate simulation because the results of climate models are influenced by factors such as their dynamic framework, physical processes, initial and driving fields, and horizontal and vertical resolution. The uncertainties of the model results may be reduced, and the credibility can be improved by employing multi-model ensembles. In this paper, multi-model ensemble results using 10-year simulations of five regional climate models (RCMs) from December 1988 to November 1998 over Asia are presented and compared. The simulation results are derived from phase II of the Regional Climate Model Inter-comparison Project (RMIP) for Asia. Using the methods of the arithmetic mean, the weighted mean, multivariate linear regression, and singular value decomposition, the ensembles for temperature, precipitation, and sea level pressure are carried out. The results show that the multi-RCM ensembles outperform the single RCMs in many aspects. Among the four ensemble methods used, the multivariate linear regression, based on the minimization of the root mean square errors, significantly improved the ensemble results. With regard to the spatial distribution of the mean climate, the ensemble result for temperature was better than that for precipitation. With an increasing number of models used in the ensembles, the ensemble results were more accurate. Therefore, a multi-model ensemble is an efficient approach to improve the results of regional climate simulations.
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
2004-01-01
Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application use ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output...... is available via two password-protected web-pages hosted at IMM and is used daily by Elsam and E2....
Similarity measures for protein ensembles
Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper
2009-01-01
Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations...... a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single...
2004-01-01
Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application use ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output is a...... is available via two password-protected web-pages hosted at IMM and is used daily by Elsam and E2....
Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach
Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Skolkovo Inst. of Science and Technology, Moscow (Russia); Chernyak, Vladimir [Wayne State Univ., Detroit, MI (United States). Dept. of Chemistry
2017-01-17
Thermostatically Controlled Loads (TCL), e.g. air-conditioners and heaters, are by far the most wide-spread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control of temperature - changing from on to off , and vice versa, depending on temperature. Aggregation of a large group of similar devices into a statistical ensemble is considered, where the devices operate following the same dynamics subject to stochastic perturbations and randomized, Poisson on/off switching policy. We analyze, using theoretical and computational tools of statistical physics, how the ensemble relaxes to a stationary distribution and establish relation between the re- laxation and statistics of the probability flux, associated with devices' cycling in the mixed (discrete, switch on/off , and continuous, temperature) phase space. This allowed us to derive and analyze spec- trum of the non-equilibrium (detailed balance broken) statistical system. and uncover how switching policy affects oscillatory trend and speed of the relaxation. Relaxation of the ensemble is of a practical interest because it describes how the ensemble recovers from significant perturbations, e.g. forceful temporary switching o aimed at utilizing flexibility of the ensemble in providing "demand response" services relieving consumption temporarily to balance larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
J. C. Hargreaves
2011-03-01
Full Text Available We investigate the consistency of various ensembles of model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2, pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day, however, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
J. C. Hargreaves
2011-08-01
Full Text Available We investigate the consistency of various ensembles of climate model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2, pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day. However, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
Monthly Ensembles in Algal Bloom Predictions on the Baltic Sea
Roiha, Petra; Westerlund, Antti; Stipa, Tapani
2010-05-01
In this work we explore the statistical features of monthly ensembles and their capability to predict biogeochemical conditions in the Baltic Sea. Operational marine environmental modelling has been considered hard, and consequently there are very few operational ecological models. Operational modelling of harmful algal blooms is harder still, since it is difficult to separate the algal species in models, and in general, very little is known of HAB properties. We present results of an ensemble approach to HAB forecasting in the Baltic, and discuss the applicability of the forecasting method to biochemical modelling. It turns out that HABs are indeed possible to forecast with useful accuracy. For modelling the algal blooms in Baltic Sea we used FMI operational 3-dimensional biogeochemical model to produce seasonal ensemble forecasts for different physical, chemical and biological variables. The modelled variables were temperature, salinity, velocity, silicate, phosphate, nitrate, diatoms, flagellates and two species of potentially toxic filamentous cyanobacteria nodularia spumigena and aphanizomenon flos-aquae. In this work we concentrate to the latter two. Ensembles were produced by running the biogeochemical model several times and forcing it on every run with different set of seasonal weather parameters from ECMWF's mathematically perturbed ensemble prediction forecasts. The ensembles were then analysed by statistical methods and the median, quartiles, minimum and maximum values were calculated for estimating the probable amounts of algae. Validation for the forecast method was made by comparing the final results against available and valid in-situ HAB data.
Matrix product purifications for canonical ensembles and quantum number distributions
Barthel, Thomas
2016-09-01
Matrix product purifications (MPPs) are a very efficient tool for the simulation of strongly correlated quantum many-body systems at finite temperatures. When a system features symmetries, these can be used to reduce computation costs substantially. It is straightforward to compute an MPP of a grand-canonical ensemble, also when symmetries are exploited. This paper provides and demonstrates methods for the efficient computation of MPPs of canonical ensembles under utilization of symmetries. Furthermore, we present a scheme for the evaluation of global quantum number distributions using matrix product density operators (MPDOs). We provide exact matrix product representations for canonical infinite-temperature states, and discuss how they can be constructed alternatively by applying matrix product operators to vacuum-type states or by using entangler Hamiltonians. A demonstration of the techniques for Heisenberg spin-1 /2 chains explains why the difference in the energy densities of canonical and grand-canonical ensembles decays as 1 /L .
Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach.
Chertkov, Michael; Chernyak, Vladimir
2017-08-17
Thermostatically controlled loads, e.g., air conditioners and heaters, are by far the most widespread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control - changing from on to off, and vice versa, depending on temperature. We considered aggregation of a large group of similar devices into a statistical ensemble, where the devices operate following the same dynamics, subject to stochastic perturbations and randomized, Poisson on/off switching policy. Using theoretical and computational tools of statistical physics, we analyzed how the ensemble relaxes to a stationary distribution and established a relationship between the relaxation and the statistics of the probability flux associated with devices' cycling in the mixed (discrete, switch on/off, and continuous temperature) phase space. This allowed us to derive the spectrum of the non-equilibrium (detailed balance broken) statistical system and uncover how switching policy affects oscillatory trends and the speed of the relaxation. Relaxation of the ensemble is of practical interest because it describes how the ensemble recovers from significant perturbations, e.g., forced temporary switching off aimed at utilizing the flexibility of the ensemble to provide "demand response" services to change consumption temporarily to balance a larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.
Lei, Lili; Whitaker, Jeffrey S.
2017-06-01
The current NCEP operational four-dimensional ensemble-variational data assimilation system uses a control forecast at T1534 resolution coupled with an 80 member ensemble at T574 resolution. Given an increase in computing resources, and assuming the control forecast resolution is fixed, would it be better to increase the ensemble size and keep the ensemble resolution the same, or increase the ensemble resolution and keep the ensemble size the same? To answer this question, experiments are conducted at reduced resolutions. Two sets of experiments are conducted which both use approximately four times more computational resources than the control experiment that uses a control forecast at T670 and an 80 member ensemble at T254. One increases the ensemble size to 320 but keeps the ensemble resolution at T254; and the other increases the ensemble resolution to T670 but retains an 80 ensemble size. When ensemble size increases to 320, turning off the static component of the background-error covariance does not degrade performance. When the data assimilation parameters are tuned for optimal performance, increasing either ensemble size or ensemble resolution can improve the forecast performance. Increasing ensemble resolution is slightly, but significantly better than increasing ensemble size for these experiments, particularly when considering errors at smaller scales. Much of the benefit of increasing ensemble resolution comes about by eliminating the need for a deterministic control forecast and running all of the background forecasts at the same resolution. In this "single-resolution" mode, the control forecast is replaced by an ensemble average, which reduces small-scale errors significantly.
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using a randomizing and a sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ(z)(¼) as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
CME Ensemble Forecasting - A Primer
Pizzo, V. J.; de Koning, C. A.; Cash, M. D.; Millward, G. H.; Biesecker, D. A.; Codrescu, M.; Puga, L.; Odstrcil, D.
2014-12-01
SWPC has been evaluating various approaches for ensemble forecasting of Earth-directed CMEs. We have developed the software infrastructure needed to support broad-ranging CME ensemble modeling, including composing, interpreting, and making intelligent use of ensemble simulations. The first step is to determine whether the physics of the interplanetary propagation of CMEs is better described as chaotic (like terrestrial weather) or deterministic (as in tsunami propagation). This is important, since different ensemble strategies are to be pursued under the two scenarios. We present the findings of a comprehensive study of CME ensembles in uniform and structured backgrounds that reveals systematic relationships between input cone parameters and ambient flow states and resulting transit times and velocity/density amplitudes at Earth. These results clearly indicate that the propagation of single CMEs to 1 AU is a deterministic process. Thus, the accuracy with which one can forecast the gross properties (such as arrival time) of CMEs at 1 AU is determined primarily by the accuracy of the inputs. This is no tautology - it means specifically that efforts to improve forecast accuracy should focus upon obtaining better inputs, as opposed to developing better propagation models. In a companion paper (deKoning et al., this conference), we compare in situ solar wind data with forecast events in the SWPC operational archive to show how the qualitative and quantitative findings presented here are entirely consistent with the observations and may lead to improved forecasts of arrival time at Earth.
Estimating preselected and postselected ensembles
Massar, Serge [Laboratoire d' Information Quantique, C.P. 225, Universite libre de Bruxelles (U.L.B.), Av. F. D. Rooselvelt 50, B-1050 Bruxelles (Belgium); Popescu, Sandu [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ (United Kingdom)
2011-11-15
In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.
Linking neuronal ensembles by associative synaptic plasticity.
Qi Yuan
Full Text Available Synchronized activity in ensembles of neurons recruited by excitatory afferents is thought to contribute to the coding information in the brain. However, the mechanisms by which neuronal ensembles are generated and modified are not known. Here we show that in rat hippocampal slices associative synaptic plasticity enables ensembles of neurons to change by incorporating neurons belonging to different ensembles. Associative synaptic plasticity redistributes the composition of different ensembles recruited by distinct inputs such as to specifically increase the similarity between the ensembles. These results show that in the hippocampus, the ensemble of neurons recruited by a given afferent projection is fluid and can be rapidly and persistently modified to specifically include neurons from different ensembles. This linking of ensembles may contribute to the formation of associative memories.
A mollified Ensemble Kalman filter
Bergemann, Kay
2010-01-01
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high frequency adjustment processes in the model dynamics. Various methods have been devised to continuously spread out the analysis increments over a fixed time interval centered about analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
Excitation energies from ensemble DFT
Borgoo, Alex; Teale, Andy M.; Helgaker, Trygve
2015-12-01
We study the evaluation of the Gross-Oliveira-Kohn expression for excitation energies E1-E0=ɛ1-ɛ0+∂E/xc,w[ρ] ∂w | ρ =ρ0. This expression gives the difference between an excitation energy E1 - E0 and the corresponding Kohn-Sham orbital energy difference ɛ1 - ɛ0 as a partial derivative of the exchange-correlation energy of an ensemble of states Exc,w[ρ]. Through Lieb maximisation, on input full-CI density functions, the exchange-correlation energy is evaluated accurately and the partial derivative is evaluated numerically using finite difference. The equality is studied numerically for different geometries of the H2 molecule and different ensemble weights. We explore the adiabatic connection for the ensemble exchange-correlation energy. The latter may prove useful when modelling the unknown weight dependence of the exchange-correlation energy.
The Partition Ensemble Fallacy Fallacy
Nemoto, K; Nemoto, Kae; Braunstein, Samuel L.
2002-01-01
The Partition Ensemble Fallacy was recently applied to claim no quantum coherence exists in coherent states produced by lasers. We show that this claim relies on an untestable belief of a particular prior distribution of absolute phase. One's choice for the prior distribution for an unobservable quantity is a matter of `religion'. We call this principle the Partition Ensemble Fallacy Fallacy. Further, we show an alternative approach to construct a relative-quantity Hilbert subspace where unobservability of certain quantities is guaranteed by global conservation laws. This approach is applied to coherent states and constructs an approximate relative-phase Hilbert subspace.
Molecular Dynamics Simulation of Glass Transition Behavior of Polyimide Ensemble
无
2001-01-01
The effect of chromophores to the glass transition temperature of polyimide ensemble has been investigated by means of molecular dynamics simulation in conjunction with barrier analysis. Simulated Tg results indicated a good agreement with experimental value. This study showed the MD simulation could estimate the effect of chromophores to the Tg of polyimide ensemble conveniently and an estimation approach method had a surprising deviation of Tg from experiment. At the same time, a polyimide structure with higher barrier energy was designed and validated by MD simulation.
Deviations from Wick's theorem in the canonical ensemble
Schönhammer, K.
2017-07-01
Wick's theorem for the expectation values of products of field operators for a system of noninteracting fermions or bosons plays an important role in the perturbative approach to the quantum many-body problem. A finite-temperature version holds in the framework of the grand canonical ensemble, but not for the canonical ensemble appropriate for systems with fixed particle number such as ultracold quantum gases in optical lattices. Here we present formulas for expectation values of products of field operators in the canonical ensemble using a method in the spirit of Gaudin's proof of Wick's theorem for the grand canonical case. The deviations from Wick's theorem are examined quantitatively for two simple models of noninteracting fermions.
Multimodel ensembles of wheat growth
Martre, Pierre; Wallach, Daniel; Asseng, Senthold
2015-01-01
, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24...
Ensemble size impact on the decadal predictive skill assessment
Frank Sienz
2016-12-01
Full Text Available Retrospective prediction experiments have to be performed to estimate the skill of decadal prediction systems. These are necessarily restricted in the number due to the computational constraints. From weather and seasonal prediction it is known that the ensemble size is crucial to yield reliable predictions. Differences are expected for decadal predictions due to the differing time-scales of the involved processes and the longer prediction horizon. A conceptual model is applied that enables the systematic analysis of ensemble size dependencies in a framework close to that of decadal predictions. Differences are quantified in terms of the confidence intervals coverage and the power of statistical tests for prediction scores. In addition, the concepts are applied to decadal predicitions of the MiKlip Baseline1 system. It is shown that small ensemble, as well as hindcast sample sizes lead to biased test performances in a way that the detection of a present prediction skill is hampered. Experiments with ensemble sizes smaller than 10 are not recommended to evaluate decadal prediction skill or as basis for the prediction system developement. For regions with low signal-to-noise ratios much larger ensembles are required and it is shown that in this case successful decadal predictions are possible for the Central European summer temperatures.
Global Ensemble Forecast System (GEFS) [1 Deg.
National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...
Bracegirdle, Thomas J. [British Antarctic Survey, Cambridge (United Kingdom); Stephenson, David B. [University of Exeter, Mathematics Research Institute, Exeter (United Kingdom); NCAS-Climate, Reading (United Kingdom)
2012-12-15
This study presents projections of twenty-first century wintertime surface temperature changes over the high-latitude regions based on the third Coupled Model Inter-comparison Project (CMIP3) multi-model ensemble. The state-dependence of the climate change response on the present day mean state is captured using a simple yet robust ensemble linear regression model. The ensemble regression approach gives different and more precise estimated mean responses compared to the ensemble mean approach. Over the Arctic in January, ensemble regression gives less warming than the ensemble mean along the boundary between sea ice and open ocean (sea ice edge). Most notably, the results show 3 C less warming over the Barents Sea ({proportional_to} 7 C compared to {proportional_to} 10 C). In addition, the ensemble regression method gives projections that are 30 % more precise over the Sea of Okhostk, Bering Sea and Labrador Sea. For the Antarctic in winter (July) the ensemble regression method gives 2 C more warming over the Southern Ocean close to the Greenwich Meridian ({proportional_to} 7 C compared to {proportional_to} 5 C). Projection uncertainty was almost half that of the ensemble mean uncertainty over the Southern Ocean between 30 W to 90 E and 30 % less over the northern Antarctic Peninsula. The ensemble regression model avoids the need for explicit ad hoc weighting of models and exploits the whole ensemble to objectively identify overly influential outlier models. Bootstrap resampling shows that maximum precision over the Southern Ocean can be obtained with ensembles having as few as only six climate models. (orig.)
Squeezing of Collective Excitations in Spin Ensembles
Kraglund Andersen, Christian; Mølmer, Klaus
2012-01-01
We analyse the possibility to create two-mode spin squeezed states of two separate spin ensembles by inverting the spins in one ensemble and allowing spin exchange between the ensembles via a near resonant cavity field. We investigate the dynamics of the system using a combination of numerical an...
Trends in the predictive performance of raw ensemble weather forecasts
Hemri, Stephan; Scheuerer, Michael; Pappenberger, Florian; Bogner, Konrad; Haiden, Thomas
2015-04-01
Over the last two decades the paradigm in weather forecasting has shifted from being deterministic to probabilistic. Accordingly, numerical weather prediction (NWP) models have been run increasingly as ensemble forecasting systems. The goal of such ensemble forecasts is to approximate the forecast probability distribution by a finite sample of scenarios. Global ensemble forecast systems, like the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble, are prone to probabilistic biases, and are therefore not reliable. They particularly tend to be underdispersive for surface weather parameters. Hence, statistical post-processing is required in order to obtain reliable and sharp forecasts. In this study we apply statistical post-processing to ensemble forecasts of near-surface temperature, 24-hour precipitation totals, and near-surface wind speed from the global ECMWF model. Our main objective is to evaluate the evolution of the difference in skill between the raw ensemble and the post-processed forecasts. The ECMWF ensemble is under continuous development, and hence its forecast skill improves over time. Parts of these improvements may be due to a reduction of probabilistic bias. Thus, we first hypothesize that the gain by post-processing decreases over time. Based on ECMWF forecasts from January 2002 to March 2014 and corresponding observations from globally distributed stations we generate post-processed forecasts by ensemble model output statistics (EMOS) for each station and variable. Parameter estimates are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over rolling training periods that consist of the n days preceding the initialization dates. Given the higher average skill in terms of CRPS of the post-processed forecasts for all three variables, we analyze the evolution of the difference in skill between raw ensemble and EMOS forecasts. The fact that the gap in skill remains almost constant over time, especially for near
Classical and Quantum Ensembles via Multiresolution. II. Wigner Ensembles
2004-01-01
We present the application of the variational-wavelet analysis to the analysis of quantum ensembles in Wigner framework. (Naive) deformation quantization, the multiresolution representations and the variational approach are the key points. We construct the solutions of Wigner-like equations via the multiscale expansions in the generalized coherent states or high-localized nonlinear eigenmodes in the base of the compactly supported wavelets and the wavelet packets. We demonstrate the appearanc...
Hydrological Ensemble Prediction System (HEPS)
Thielen-Del Pozo, J.; Schaake, J.; Martin, E.; Pailleux, J.; Pappenberger, F.
2010-09-01
Flood forecasting systems form a key part of ‘preparedness' strategies for disastrous floods and provide hydrological services, civil protection authorities and the public with information of upcoming events. Provided the warning leadtime is sufficiently long, adequate preparatory actions can be taken to efficiently reduce the impacts of the flooding. Following on the success of the use of ensembles for weather forecasting, the hydrological community now moves increasingly towards Hydrological Ensemble Prediction Systems (HEPS) for improved flood forecasting using operationally available NWP products as inputs. However, these products are often generated on relatively coarse scales compared to hydrologically relevant basin units and suffer systematic biases that may have considerable impact when passed through the non-linear hydrological filters. Therefore, a better understanding on how best to produce, communicate and use hydrologic ensemble forecasts in hydrological short-, medium- und long term prediction of hydrological processes is necessary. The "Hydrologic Ensemble Prediction Experiment" (HEPEX), is an international initiative consisting of hydrologists, meteorologist and end-users to advance probabilistic hydrologic forecast techniques for flood, drought and water management applications. Different aspects of the hydrological ensemble processor are being addressed including • Production of useful meteorological products relevant for hydrological applications, ranging from nowcasting products to seasonal forecasts. The importance of hindcasts that are consistent with the operational weather forecasts will be discussed to support bias correction and downscaling, statistically meaningful verification of HEPS, and the development and testing of operating rules; • Need for downscaling and post-processing of weather ensembles to reduce bias before entering hydrological applications; • Hydrological model and parameter uncertainty and how to correct and
Spectral diagonal ensemble Kalman filters
Kasanický, Ivan; Vejmelka, Martin
2015-01-01
A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the aproximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields, which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
Symanzik flow on HISQ ensembles
Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-01-01
We report on a scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The lattice scale $w_0/a$, originally proposed by the BMW collaboration, is computed using Symanzik flow at four lattice spacings ranging from 0.15 to 0.06 fm. With a Taylor series ansatz, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We give a preliminary determination of the scale $w_0$ in physical units, along with associated systematic errors, and compare with results from other groups. We also present a first estimate of autocorrelation lengths as a function of flowtime for these ensembles.
Statistical Analysis of Protein Ensembles
Máté, Gabriell; Heermann, Dieter
2014-04-01
As 3D protein-configuration data is piling up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially that the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
Statistical Analysis of Protein Ensembles
Gabriell eMáté
2014-04-01
Full Text Available As 3D protein-configuration data is piling up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially that the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
Classical and Quantum Ensembles via Multiresolution. II. Wigner Ensembles
Fedorova, A N; Fedorova, Antonina N.; Zeitlin, Michael G.
2004-01-01
We present the application of the variational-wavelet analysis to the analysis of quantum ensembles in Wigner framework. (Naive) deformation quantization, the multiresolution representations and the variational approach are the key points. We construct the solutions of Wigner-like equations via the multiscale expansions in the generalized coherent states or high-localized nonlinear eigenmodes in the base of the compactly supported wavelets and the wavelet packets. We demonstrate the appearance of (stable) localized patterns (waveletons) and consider entanglement and decoherence as possible applications.
2012-01-01
Licence; En 1935, un groupe de mathématiciens français eut l'ambition de reconstruire tout l'édifice mathématique (sans S pour bien montrer l'unité) selon la pensée formaliste de Hilbert. Les membres fondateurs ont été Henri Cartan, Claude Chevalley, Jean Delsarte, Jean Dieudonné, André Weil auxquels se joindra René de Possel. En juillet 1935 fut donc créé, lors d'un séminaire en Auvergne le groupe 'Nicolas Bourbaki'. Le nom de cette association fait référence en fait à une anecdote qui se pa...
Ensemble meteorological reconstruction using circulation analogues of 1781–1785
P. Yiou
2013-09-01
Full Text Available This paper uses a method of atmospheric flow analogues to reconstruct an ensemble of atmospheric variables (namely sea-level pressure, surface temperature and wind speed between 1781 and 1785. The properties of this ensemble are investigated and tested against observations of temperature. The goal of the paper is to assess whether the atmospheric circulation during the Laki volcanic eruption (in 1783 and the subsequent winter were similar to the conditions that prevailed in the winter 2009/2010 and during spring 2010. We find that the three months following the Laki eruption in June 1783 barely have analogues in 2010. The cold winter of 1783/1784 yields circulation analogues in 2009/2010. Our analysis suggests that it is unlikely that the Laki eruption was responsible for the cold winter of 1783/1784, of the relatively short memory of the atmospheric circulation.
Non-Boltzmann Ensembles and Monte Carlo Simulations
Murthy, K. P. N.
2016-10-01
Boltzmann sampling based on Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system, is made by averaging over a large number microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy, is not easily accessible to these methods. The reason is simple. We can not assign a numerical value for entropy, to a microstate. Entropy is not a property associated with any single microstate. It is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called Umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses and we have now, multi-canonical Monte Carlo, entropic sampling, flat histogram methods, Wang-Landau algorithm etc. This class of methods generates non-Boltzmann ensembles which are un-physical. However, physical quantities can be calculated as follows. First un-weight a microstates of the entropic ensemble; then re-weight it to the desired physical ensemble. Carry out weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non- Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize phase transition in an hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of order parameter and to this end we estimate density of states g(E, M), as a function of both energy E, and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage. Employing g
Enhancing COSMO-DE ensemble forecasts by inexpensive techniques
Zied Ben Bouallègue
2013-02-01
Full Text Available COSMO-DE-EPS, a convection-permitting ensemble prediction system based on the high-resolution numerical weather prediction model COSMO-DE, is pre-operational since December 2010, providing probabilistic forecasts which cover Germany. This ensemble system comprises 20 members based on variations of the lateral boundary conditions, the physics parameterizations and the initial conditions. In order to increase the sample size in a computationally inexpensive way, COSMO-DE-EPS is combined with alternative ensemble techniques: the neighborhood method and the time-lagged approach. Their impact on the quality of the resulting probabilistic forecasts is assessed. Objective verification is performed over a six months period, scores based on the Brier score and its decomposition are shown for June 2011. The combination of the ensemble system with the alternative approaches improves probabilistic forecasts of precipitation in particular for high precipitation thresholds. Moreover, combining COSMO-DE-EPS with only the time-lagged approach improves the skill of area probabilities for precipitation and does not deteriorate the skill of 2 m-temperature and wind gusts forecasts.
Analysis of mesoscale forecasts using ensemble methods
Gross, Markus
2016-01-01
Mesoscale forecasts are now routinely performed as elements of operational forecasts and their outputs do appear convincing. However, despite their realistic appearance at times the comparison to observations is less favorable. At the grid scale these forecasts often do not compare well with observations. This is partly due to the chaotic system underlying the weather. Another key problem is that it is impossible to evaluate the risk of making decisions based on these forecasts because they do not provide a measure of confidence. Ensembles provide this information in the ensemble spread and quartiles. However, running global ensembles at the meso or sub mesoscale involves substantial computational resources. National centers do run such ensembles, but the subject of this publication is a method which requires significantly less computation. The ensemble enhanced mesoscale system presented here aims not at the creation of an improved mesoscale forecast model. Also it is not to create an improved ensemble syste...
Measuring social interaction in music ensembles.
Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano
2016-05-05
Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances.
Gibbs Ensembles of Nonintersecting Paths
Borodin, Alexei
2008-01-01
We consider a family of determinantal random point processes on the two-dimensional lattice and prove that members of our family can be interpreted as a kind of Gibbs ensembles of nonintersecting paths. Examples include probability measures on lozenge and domino tilings of the plane, some of which are non-translation-invariant. The correlation kernels of our processes can be viewed as extensions of the discrete sine kernel, and we show that the Gibbs property is a consequence of simple linear relations satisfied by these kernels. The processes depend on infinitely many parameters, which are closely related to parametrization of totally positive Toeplitz matrices.
Wind Power Prediction using Ensembles
Giebel, Gregor; Badger, Jake; Landberg, Lars
2005-01-01
offshore wind farm and the whole Jutland/Funen area. The utilities used these forecasts for maintenance planning, fuel consumption estimates and over-the-weekend trading on the Leipzig power exchange. Othernotable scientific results include the better accuracy of forecasts made up from a simple...... superposition of two NWP provider (in our case, DMI and DWD), an investigation of the merits of a parameterisation of the turbulent kinetic energy within thedelivered wind speed forecasts, and the finding that a “naïve” downscaling of each of the coarse ECMWF ensemble members with higher resolution HIRLAM did...
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
Quantum Repeaters and Atomic Ensembles
Borregaard, Johannes
a previous protocol, thereby enabling fast local processing, which greatly enhances the distribution rate. We then move on to describe our work on improving the stability of atomic clocks using entanglement. Entanglement can potentially push the stability of atomic clocks to the so-called Heisenberg limit......, which is the absolute upper limit of the stability allowed by the Heisenberg uncertainty relation. It has, however, been unclear whether entangled state’s enhanced sensitivity to noise would prevent reaching this limit. We have developed an adaptive measurement protocol, which circumvents this problem...... based on atomic ensembles....
A Localized Ensemble Kalman Smoother
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
A model for luminescence of localized state ensemble
Li, Q.; Xu, S. J.; Xie, M H; Tong, S. Y.
2004-01-01
A distribution function for localized carriers, $f(E,T)=\\frac{1}{e^{(E-E_a)/k_BT}+\\tau_{tr}/\\tau_r}$, is proposed by solving a rate equation, in which, electrical carriers' generation, thermal escape, recapture and radiative recombination are taken into account. Based on this distribution function, a model is developed for luminescence from localized state ensemble with a Gaussian-type density of states. The model reproduces quantitatively all the anomalous temperature behaviors of localized ...
Heterogeneous versus Homogeneous Machine Learning Ensembles
Petrakova Aleksandra
2015-12-01
Full Text Available The research demonstrates efficiency of the heterogeneous model ensemble application for a cancer diagnostic procedure. Machine learning methods used for the ensemble model training are neural networks, random forest, support vector machine and offspring selection genetic algorithm. Training of models and the ensemble design is performed by means of HeuristicLab software. The data used in the research have been provided by the General Hospital of Linz, Austria.
Interpreting Tree Ensembles with inTrees
Deng, Houtao
2014-01-01
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. An rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can applied to both classification an...
Analysis of peeling decoder for MET ensembles
Hinton, Ryan
2009-01-01
The peeling decoder introduced by Luby, et al. allows analysis of LDPC decoding for the binary erasure channel (BEC). For irregular ensembles, they analyze the decoder state as a Markov process and present a solution to the differential equations describing the process mean. Multi-edge type (MET) ensembles allow greater precision through specifying graph connectivity. We generalize the the peeling decoder for MET ensembles and derive analogous differential equations. We offer a new change of variables and solution to the node fraction evolutions in the general (MET) case. This result is preparatory to investigating finite-length ensemble behavior.
A "Dressed" Ensemble Kalman Filter Using the Hybrid Coordinate Ocean Model in the Pacific
WAN Liying; ZHU Jiang; WANG Hui; YAN Changxiang; Laurent BERTINO
2009-01-01
The computational cost required by the Ensemble Kalman Filter (EnKF) is much larger than that of some simpler assimilation schemes,such as Optimal Interpolation (OI) or three-dimension variational assimilation (3DVAR).Ensemble optimal interpolation (EnOI),a crudely simplified implementation of EnKF,is sometimes used as a substitute in some oceanic applications and requires much less computational time than EnKF.In this paper,to compromise between computational cost and dynamic covaxiance,we use the idea of "dressing" a small size dynamical ensemble with a larger number of static ensembles in order to form an approximate dynamic covaxiance.The term "dressing" means that a dynamical ensemble seed from model runs is perturbed by adding the anomalies of some static ensembles.This dressing EnKF (DrEnKF for short) scheme is tested in assimilation of real altimetry data in the Pacific using the HYbrid Coordinate Ocean Model (HYCOM) over a four-year period.Ten dynamical ensemble seeds are each dressed by 10 static ensemble members selected from a 100-member static ensemble.Results are compared to two EnKF assimilation runs that use 10 and 100 dynamical ensemble members.Both temperature and salinity fields from the DrEnKF and the EnKF are compared to observations from Argo floats and an OI SST dataset.The results show that the DrEnKF and the 100-member EnKF yield similar root mean square errors (RMSE)at every model level. Error covariance matrices from the DrEnKF and the 100-member EnKF are also compared and show good agreement.
Hierarchical Bayes Ensemble Kalman Filtering
Tsyrulnikov, Michael
2015-01-01
Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix ${\\bf B}$. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using an ad-hoc regularization, we adopt the idea by Myrseth and Omre (2010) and explicitly admit that the ${\\bf B}$ matrix is unknown and random and estimate it along with the state (${\\bf x}$) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components ${\\bf P}$ and ${\\bf Q}$ of the ${\\bf B}$ matrix into the extended control vector $({\\bf x},{\\bf P},{\\bf Q})$. Similarly, we break the traditional backgrou...
Visualizing ensembles in structural biology.
Melvin, Ryan L; Salsbury, Freddie R
2016-06-01
Displaying a single representative conformation of a biopolymer rather than an ensemble of states mistakenly conveys a static nature rather than the actual dynamic personality of biopolymers. However, there are few apparent options due to the fixed nature of print media. Here we suggest a standardized methodology for visually indicating the distribution width, standard deviation and uncertainty of ensembles of states with little loss of the visual simplicity of displaying a single representative conformation. Of particular note is that the visualization method employed clearly distinguishes between isotropic and anisotropic motion of polymer subunits. We also apply this method to ligand binding, suggesting a way to indicate the expected error in many high throughput docking programs when visualizing the structural spread of the output. We provide several examples in the context of nucleic acids and proteins with particular insights gained via this method. Such examples include investigating a therapeutic polymer of FdUMP (5-fluoro-2-deoxyuridine-5-O-monophosphate) - a topoisomerase-1 (Top1), apoptosis-inducing poison - and nucleotide-binding proteins responsible for ATP hydrolysis from Bacillus subtilis. We also discuss how these methods can be extended to any macromolecular data set with an underlying distribution, including experimental data such as NMR structures.
Improved customer choice predictions using ensemble methods
M.C. van Wezel (Michiel); R. Potharst (Rob)
2005-01-01
textabstractIn this paper various ensemble learning methods from machine learning and statistics are considered and applied to the customer choice modeling problem. The application of ensemble learning usually improves the prediction quality of flexible models like decision trees and thus leads to
Layered Ensemble Architecture for Time Series Forecasting.
Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin
2016-01-01
Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown and information available for forecasting is only limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both accuracy and diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets with an aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This indicates LEAs emphasis on accuracy of the networks. The proposed architecture has been tested extensively on time series data of neural network (NN)3 and NN5 competitions. It has also been tested on several standard benchmark time series data. In terms of forecasting accuracy, our experimental results have revealed clearly that LEA is better than other ensemble and nonensemble methods.
Ensemble methods for handwritten digit recognition
Hansen, Lars Kai; Liisberg, Christian; Salamon, P.
1992-01-01
. It is further shown that it is possible to estimate the ensemble performance as well as the learning curve on a medium-size database. In addition the authors present preliminary analysis of experiments on a large database and show that state-of-the-art performance can be obtained using the ensemble approach...
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-10-02
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
Perception of ensemble statistics requires attention.
Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A
2017-02-01
To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics.
Popular Ensemble Methods: An Empirical Study
Maclin, R; 10.1613/jair.614
2011-01-01
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Shapire, 1996; Shapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithm. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being exa...
Condensate fluctuations of interacting Bose gases within a microcanonical ensemble.
Wang, Jianhui; He, Jizhou; Ma, Yongli
2011-05-01
Based on counting statistics and Bogoliubov theory, we present a recurrence relation for the microcanonical partition function for a weakly interacting Bose gas with a finite number of particles in a cubic box. According to this microcanonical partition function, we calculate numerically the distribution function, condensate fraction, and condensate fluctuations for a finite and isolated Bose-Einstein condensate. For ideal and weakly interacting Bose gases, we compare the condensate fluctuations with those in the canonical ensemble. The present approach yields an accurate account of the condensate fluctuations for temperatures close to the critical region. We emphasize that the interactions between excited atoms turn out to be important for moderate temperatures.
Multi-wheat-model ensemble responses to interannual climatic variability
Ruane, A C; Hudson, N I; Asseng, S
2016-01-01
evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal...... common characteristics of yield response to climate; however models rarely share the same cluster at all four sites indicating substantial independence. Only a weak relationship (R2 ≤ 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long...
Bouallegue, Zied Ben; Theis, Susanne E; Pinson, Pierre
2015-01-01
Probabilistic forecasts in the form of ensemble of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products but the spatio-temporal structures of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost.For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach which preserves the dynamical development of the ensemble members is called dynamic ensemble copula coupling (...
An ensemble formulation of PBL fluxes in a GCM
Sud, Y. C.; Smith, W. E.
1984-01-01
An ensemble approach is applied to Planetary Boundary Layer (PBL) calculations with the bulk Richardson number identified as the key parameter. An ensemble averaging calculation was carried out to rederive the bulk friction and heat transport coefficients for the mean condition. Two simulations are carried out and compared. Significant differences in PBL fluxes low level cloudiness, land surface roughness heights, and surface evaporation are noted between the modified and unmodified simulations. Modifications to the model were: (1) the relationship between actual and potential Effective Temperature (ET) to accord with Sud and Fennessy (1982); (2) maximum permissible instantaneous ET at any time is 1.5 mm per hr; (3) moisture distribution in low level cumulus convection to be consistent with no precipitation; (4) appearance of supersaturation clouds to be consistent with supersaturation condition at that level; (5) invoking a simple function for stomatal diffusion effect in the ET calculation.
Statistical ensembles of virialized halo matter density profiles
Carron, Julien
2013-01-01
We define and study statistical ensembles of matter density profiles describing spherically symmetric, virialized dark matter haloes of finite extent with a given mass and total gravitational potential energy. We provide an exact solution for the grand canonical partition functional, and show its equivalence to that of the microcanonical ensemble. We obtain analytically the mean profiles that correspond to an overwhelming majority of micro-states. All such profiles have an infinitely deep potential well, with the singular isothermal sphere arising in the infinite temperature limit. Systems with virial radius larger than gravitational radius exhibit a localization of a finite fraction of the energy in the very center. The universal logarithmic inner slope of unity of the NFW haloes is predicted at any mass and energy if an upper bound is set to the maximal depth of the potential well. In this case, the statistically favored mean profiles compare well to the NFW profiles. For very massive haloes the agreement b...
Weighted ensemble transform Kalman filter for image assimilation
Sebastien Beyou
2013-01-01
Full Text Available This study proposes an extension of the Weighted Ensemble Kalman filter (WEnKF proposed by Papadakis et al. (2010 for the assimilation of image observations. The main focus of this study is on a novel formulation of the Weighted filter with the Ensemble Transform Kalman filter (WETKF, incorporating directly as a measurement model a non-linear image reconstruction criterion. This technique has been compared to the original WEnKF on numerical and real world data of 2-D turbulence observed through the transport of a passive scalar. In particular, it has been applied for the reconstruction of oceanic surface current vorticity fields from sea surface temperature (SST satellite data. This latter technique enables a consistent recovery along time of oceanic surface currents and vorticity maps in presence of large missing data areas and strong noise.
Impact of hybrid GSI analysis using ETR ensembles
V S Prasad; C J Johny
2016-04-01
Performance of a hybrid assimilation system combining 3D Var based NGFS (NCMRWF Global ForecastSystem) with ETR (Ensemble Transform with Rescaling) based Global Ensemble Forecast (GEFS) ofresolution T-190L28 is investigated. The experiment is conducted for a period of one week in June 2013and forecast skills over different spatial domains are compared with respect to mean analysis state.Rainfall forecast is verified over Indian region against combined observations of IMD and NCMRWF.Hybrid assimilation produced marginal improvements in overall forecast skill in comparison with 3DVar. Hybrid experiment made significant improvement in wind forecasts in all the regions on verificationagainst mean analysis. The verification of forecasts with radiosonde observations also show improvementin wind forecasts with the hybrid assimilation. On verification against observations, hybrid experimentshows more improvement in temperature and wind forecasts at upper levels. Both hybrid and operational3D Var failed in prediction of extreme rainfall event over Uttarakhand on 17 June, 2013.
A comparison of model ensembles for attributing 2012 West African rainfall
Parker, Hannah R.; Lott, Fraser C.; Cornforth, Rosalind J.; Mitchell, Daniel M.; Sparrow, Sarah; Wallom, David
2017-01-01
In 2012, heavy rainfall resulted in flooding and devastating impacts across West Africa. With many people highly vulnerable to such events in this region, this study investigates whether anthropogenic climate change has influenced such heavy precipitation events. We use a probabilistic event attribution approach to assess the contribution of anthropogenic greenhouse gas emissions, by comparing the probability of such an event occurring in climate model simulations with all known climate forcings to those where natural forcings only are simulated. An ensemble of simulations from 10 models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) is compared to two much larger ensembles of atmosphere-only simulations, from the Met Office model HadGEM3-A and from weather@home with a regional version of HadAM3P. These are used to assess whether the choice of model ensemble influences the attribution statement that can be made. Results show that anthropogenic greenhouse gas emissions have decreased the probability of high precipitation across most of the model ensembles. However, the magnitude and confidence intervals of the decrease depend on the ensemble used, with more certainty in the magnitude in the atmosphere-only model ensembles due to larger ensemble sizes from single models with more constrained simulations. Certainty is greatly decreased when considering a CMIP5 ensemble that can represent the relevant teleconnections due to a decrease in ensemble members. An increase in probability of high precipitation in HadGEM3-A using the observed trend in sea surface temperatures (SSTs) for natural simulations highlights the need to ensure that estimates of natural SSTs are consistent with observed trends in order for results to be robust. Further work is needed to establish how anthropogenic forcings are affecting the rainfall processes in these simulations in order to better understand the differences in the overall effect.
Calculations of canonical averages from the grand canonical ensemble.
Kosov, D S; Gelin, M F; Vdovin, A I
2008-02-01
Grand canonical and canonical ensembles become equivalent in the thermodynamic limit, but when the system size is finite the results obtained in the two ensembles deviate from each other. In many important cases, the canonical ensemble provides an appropriate physical description but it is often much easier to perform the calculations in the corresponding grand canonical ensemble. We present a method to compute averages in the canonical ensemble based on calculations of the expectation values in the grand canonical ensemble. The number of particles, which is fixed in the canonical ensemble, is not necessarily the same as the average number of particles in the grand canonical ensemble.
Domain walls, $Z(N)$ charge and $A_0$ condensate a canonical ensemble study
Borisenko, O A; Zinovjev, G M; Petrov, K V
1996-01-01
The deconfinement phase transition is studied in the ensemble canonical with respect to triality. Since this ensemble implies a projection to the zero triality sector of the theory we introduce a quantity which is insensitive to $Z(N_c)$ symmetry but can reveal a critical behaviour in the theory with dynamical quarks. Further, we argue that in the canonical ensemble description of full QCD there exist domains of different $Z(N_c)$ phases which are degenerate and possess normal physical properties. This contradicts the predictions of the grand canonical ensemble. We propose a new order parameter to test the realization of the discrete $Z(N_c)$ symmetry at finite temperature and calculate it for the case of $Z(2)$ gauge fields coupled to fundamental fermions.
Creation of the BMA ensemble for SST using a parallel processing technique
Kim, Kwangjin; Lee, Yang Won
2013-10-01
Despite the same purpose, each satellite product has different value because of its inescapable uncertainty. Also the satellite products have been calculated for a long time, and the kinds of the products are various and enormous. So the efforts for reducing the uncertainty and dealing with enormous data will be necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as ensemble method. The principle of the BMA is synthesizing the conditional probability density function (PDF) using posterior probability as weight. The posterior probability is estimated using EM algorithm. The BMA PDF is obtained by weighted average. As the result, the ensemble SST showed the lowest RMSE and MAE, which proves the applicability of BMA for satellite data ensemble. As future work, parallel processing techniques using Hadoop framework will be adopted for more efficient computation of very big satellite data.
Quantum leakage of collective excitations of atomic ensemble induced by spatial motion
LI; Yng(李勇); YI; Su(易俗); YOU; Li(尤力); SUN; Changpu(孙昌璞)
2003-01-01
We generalize the conception of quantum leakage for the atomic collective excitation states. By making use of the atomic coherence state approach, we study the influence of the atomic spatial motion on the symmetric collective states of 2-level atomic ensemble due to inhomogeneous coupling. In the macroscopic limit, we analyze the quantum decoherence of the collective atomic state by calculating the quantum leakage for a very large ensemble at a finite temperature. Our investigations show that the fidelity of the atomic system will not be good in the case of atom number N →∞. Therefore, quantum leakage is an inevitable problem in using the atomic ensemble as a quantum information memory. The detailed calculations shed theoretical light on quantum processing using atomic ensemble collective qubit.
Pribram-Jones, Aurora
Warm dense matter (WDM) is a high energy phase between solids and plasmas, with characteristics of both. It is present in the centers of giant planets, within the earth's core, and on the path to ignition of inertial confinement fusion. The high temperatures and pressures of warm dense matter lead to complications in its simulation, as both classical and quantum effects must be included. One of the most successful simulation methods is density functional theory-molecular dynamics (DFT-MD). Despite great success in a diverse array of applications, DFT-MD remains computationally expensive and it neglects the explicit temperature dependence of electron-electron interactions known to exist within exact DFT. Finite-temperature density functional theory (FT DFT) is an extension of the wildly successful ground-state DFT formalism via thermal ensembles, broadening its quantum mechanical treatment of electrons to include systems at non-zero temperatures. Exact mathematical conditions have been used to predict the behavior of approximations in limiting conditions and to connect FT DFT to the ground-state theory. An introduction to FT DFT is given within the context of ensemble DFT and the larger field of DFT is discussed for context. Ensemble DFT is used to describe ensembles of ground-state and excited systems. Exact conditions in ensemble DFT and the performance of approximations depend on ensemble weights. Using an inversion method, exact Kohn-Sham ensemble potentials are found and compared to approximations. The symmetry eigenstate Hartree-exchange approximation is in good agreement with exact calculations because of its inclusion of an ensemble derivative discontinuity. Since ensemble weights in FT DFT are temperature-dependent Fermi weights, this insight may help develop approximations well-suited to both ground-state and FT DFT. A novel, highly efficient approach to free energy calculations, finite-temperature potential functional theory, is derived, which has the
Hybrid Data Assimilation without Ensemble Filtering
Todling, Ricardo; Akkraoui, Amal El
2014-01-01
The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations, to compensate for sampling and modeling errors, respectively and, to maintain the small-member ensemble solution close to the variational solution; we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we have found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is ran at larger resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. Comparisons of this, so-called, filter-free hybrid procedure with an EnKF-based hybrid procedure and a control non-hybrid, traditional, scheme show both hybrid strategies to provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
4DVAR by ensemble Kalman smoother
Mandel, Jan; Gratton, Serge
2013-01-01
We propose to use the ensemble Kalman smoother (EnKS) as linear least squares solver in the Gauss-Newton method for the large nonlinear least squares in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Further, adding a regularization term results in replacing the Gauss-Newton method, which may diverge, by^M the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS.
Derivation of Mayer Series from Canonical Ensemble
Wang, Xian-Zhi
2016-02-01
Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.
Ensemble Dynamics and Bred Vectors
Balci, Nusret; Restrepo, Juan M; Sell, George R
2011-01-01
We introduce the new concept of an EBV to assess the sensitivity of model outputs to changes in initial conditions for weather forecasting. The new algorithm, which we call the "Ensemble Bred Vector" or EBV, is based on collective dynamics in essential ways. By construction, the EBV algorithm produces one or more dominant vectors. We investigate the performance of EBV, comparing it to the BV algorithm as well as the finite-time Lyapunov Vectors. We give a theoretical justification to the observed fact that the vectors produced by BV, EBV, and the finite-time Lyapunov vectors are similar for small amplitudes. Numerical comparisons of BV and EBV for the 3-equation Lorenz model and for a forced, dissipative partial differential equation of Cahn-Hilliard type that arises in modeling the thermohaline circulation, demonstrate that the EBV yields a size-ordered description of the perturbation field, and is more robust than the BV in the higher nonlinear regime. The EBV yields insight into the fractal structure of th...
Downscaling a perturbed physics ensemble over the CORDEX Africa domain
Buontempo, Carlo; Williams, Karina; McSweeney, Carol; Jones, Richard; Mathison, Camilla; Wang, Chang
2014-05-01
We present here the methodology and the results of 5-member ensemble simulation of the climate of Africa for the period 1950-2100 using climate modelling system PRECIS over the CORDEX Africa domain. The boundary conditions for the regional model simulations were selected from a 17-member perturbed physics ensemble based on the HadCM3 global climate model (Murphy et al. 2007) following the methodology described in McSweeney et al 2012. Such an approach was selected in order to provide a good representation of the overall ensemble spread over a number of sub regions while at the same time avoiding members which have demonstrate particularly unrealistic characteristics in their baseline climate. In the simulations a special attention was given to the representation of some inland water bodies, such as lake Victoria, whose impact on the regional climate was believed to be significant thus allowing for the representation of some regional processes (e.g. land-lake breezes) that were not represented in the global models. In particular the SSTs of the lakes were corrected to better represent the local climatological values. The results suggest that RCM simulations improve the fit to observations of precipitation and temperature in most of the African sub-regions (e.g. North Africa, West Sahel). Also, the range of RCM projections is often different to those from the GCMs in these regions. We discuss the reasons for and links between these results and their implications for use in informing adaptation policy at regional level.
A 4D-Ensemble-Variational System for Data Assimilation and Ensemble Initialization
Bowler, Neill; Clayton, Adam; Jardak, Mohamed; Lee, Eunjoo; Jermey, Peter; Lorenc, Andrew; Piccolo, Chiara; Pring, Stephen; Wlasak, Marek; Barker, Dale; Inverarity, Gordon; Swinbank, Richard
2016-04-01
The Met Office has been developing a four-dimensional ensemble variational (4DEnVar) data assimilation system over the past four years. The 4DEnVar system is intended both as data assimilation system in its own right and also an improved means of initializing the Met Office Global and Regional Ensemble Prediction System (MOGREPS). The global MOGREPS ensemble has been initialized by running an ensemble of 4DEnVars (En-4DEnVar). The scalability and maintainability of ensemble data assimilation methods make them increasingly attractive, and 4DEnVar may be adopted in the context of the Met Office's LFRic project to redevelop the technical infrastructure to enable its Unified Model (MetUM) to be run efficiently on massively parallel supercomputers. This presentation will report on the results of the 4DEnVar development project, including experiments that have been run using ensemble sizes of up to 200 members.
Transition from Poisson to circular unitary ensemble
Vinayak; Akhilesh Pandey
2009-09-01
Transitions to universality classes of random matrix ensembles have been useful in the study of weakly-broken symmetries in quantum chaotic systems. Transitions involving Poisson as the initial ensemble have been particularly interesting. The exact two-point correlation function was derived by one of the present authors for the Poisson to circular unitary ensemble (CUE) transition with uniform initial density. This is given in terms of a rescaled symmetry breaking parameter Λ. The same result was obtained for Poisson to Gaussian unitary ensemble (GUE) transition by Kunz and Shapiro, using the contour-integral method of Brezin and Hikami. We show that their method is applicable to Poisson to CUE transition with arbitrary initial density. Their method is also applicable to the more general ℓ CUE to CUE transition where CUE refers to the superposition of ℓ independent CUE spectra in arbitrary ratio.
Ensemble Machine Learning Methods and Applications
Ma, Yunqian
2012-01-01
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object trackingand bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...
Ensemble Learning for Free with Evolutionary Algorithms ?
Gagné, Christian; Schoenauer, Marc; Tomassini, Marco
2007-01-01
Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as final result. In the meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning for the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing the classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-art ...
Reversible Projective Measurement in Quantum Ensembles
Khitrin, Anatoly; Lee, Jae-Seung
2010-01-01
We present experimental NMR demonstration of a scheme of reversible projective measurement, which allows extracting information on outcomes and probabilities of a projective measurement in a non-destructive way, with a minimal net effect on the quantum state of an ensemble. The scheme uses reversible dynamics and weak measurement of the intermediate state. The experimental system is an ensemble of 133Cs (S = 7/2) nuclei in a liquid-crystalline matrix.
Ozone ensemble forecast with machine learning algorithms
Mallet, Vivien; Stoltz, Gilles; Mauricette, Boris
2009-01-01
International audience; We apply machine learning algorithms to perform sequential aggregation of ozone forecasts. The latter rely on a multimodel ensemble built for ozone forecasting with the modeling system Polyphemus. The ensemble simulations are obtained by changes in the physical parameterizations, the numerical schemes, and the input data to the models. The simulations are carried out for summer 2001 over western Europe in order to forecast ozone daily peaks and ozone hourly concentrati...
Cluster Ensemble-based Image Segmentation
Xiaoru Wang; Junping Du; Shuzhe Wu; Xu Li; Fu Li
2013-01-01
Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories ...
Calibrating ensemble reliability whilst preserving spatial structure
Jonathan Flowerdew
2014-03-01
Full Text Available Ensemble forecasts aim to improve decision-making by predicting a set of possible outcomes. Ideally, these would provide probabilities which are both sharp and reliable. In practice, the models, data assimilation and ensemble perturbation systems are all imperfect, leading to deficiencies in the predicted probabilities. This paper presents an ensemble post-processing scheme which directly targets local reliability, calibrating both climatology and ensemble dispersion in one coherent operation. It makes minimal assumptions about the underlying statistical distributions, aiming to extract as much information as possible from the original dynamic forecasts and support statistically awkward variables such as precipitation. The output is a set of ensemble members preserving the spatial, temporal and inter-variable structure from the raw forecasts, which should be beneficial to downstream applications such as hydrological models. The calibration is tested on three leading 15-d ensemble systems, and their aggregation into a simple multimodel ensemble. Results are presented for 12 h, 1° scale over Europe for a range of surface variables, including precipitation. The scheme is very effective at removing unreliability from the raw forecasts, whilst generally preserving or improving statistical resolution. In most cases, these benefits extend to the rarest events at each location within the 2-yr verification period. The reliability and resolution are generally equivalent or superior to those achieved using a Local Quantile-Quantile Transform, an established calibration method which generalises bias correction. The value of preserving spatial structure is demonstrated by the fact that 3×3 averages derived from grid-scale precipitation calibration perform almost as well as direct calibration at 3×3 scale, and much better than a similar test neglecting the spatial relationships. Some remaining issues are discussed regarding the finite size of the output
Liu, Li; Xu, Yue-Ping
2017-04-01
Ensemble flood forecasting driven by numerical weather prediction products is becoming more commonly used in operational flood forecasting applications.In this study, a hydrological ensemble flood forecasting system based on Variable Infiltration Capacity (VIC) model and quantitative precipitation forecasts from TIGGE dataset is constructed for Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated.The hydrological model is optimized by parallel programmed ɛ-NSGAII multi-objective algorithm and two respectively parameterized models are determined to simulate daily flows and peak flows coupled with a modular approach.The results indicatethat the ɛ-NSGAII algorithm permits more efficient optimization and rational determination on parameter setting.It is demonstrated that the multimodel ensemble streamflow mean have better skills than the best singlemodel ensemble mean (ECMWF) and the multimodel ensembles weighted on members and skill scores outperform other multimodel ensembles. For typical flood event, it is proved that the flood can be predicted 3-4 days in advance, but the flows in rising limb can be captured with only 1-2 days ahead due to the flash feature. With respect to peak flows selected by Peaks Over Threshold approach, the ensemble means from either singlemodel or multimodels are generally underestimated as the extreme values are smoothed out by ensemble process.
Minimal redefinition of the OSV ensemble
Parvizi, S; Parvizi, Shahrokh; Tavanfar, Alireza
2005-01-01
In the interesting conjecture, Z_{BH}=|Z_{top}|^2, proposed by Ooguri, Strominger and Vafa (OSV), the black hole ensemble is a mixed ensemble, and resulting degeneracy of states as obtained from the ensemble inverse-Laplace integration, suffer from prefactors which do not respect the (relevant) electric-magnetic dualities. One idea to overcome this deficiency, as claimed recently, is imposing a nontrivial measure for the ensemble sum. We address this problem and upon a redefinition of the OSV ensemble whose variables are as numerous as the electric potentials, show that for restoring the symmetry no non-Euclidean measure is needful. In detail, we rewrite the OSV free energy as a function of new variables which are combinations of the electric-potentials and the black hole charges. Subsequently the Legendre transformation which bridges between the entropy and the black hole free energy in terms of these variables, points to a generalized ensemble. In this context we will consider all the cases of relevance: sm...
Level density for deformations of the Gaussian orthogonal ensemble
Bertuola, A C; Hussein, M S; Pato, M P; Sargeant, A J
2004-01-01
Formulas are derived for the average level density of deformed, or transition, Gaussian orthogonal random matrix ensembles. After some general considerations about Gaussian ensembles we derive formulas for the average level density for (i) the transition from the Gaussian orthogonal ensemble (GOE) to the Poisson ensemble and (ii) the transition from the GOE to $m$ GOEs.
Ju Hyoung Lee
2015-12-01
Full Text Available Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV bias correction method for Soil Moisture and Ocean Salinity (SMOS soil moisture. To the best of our knowledge, this is the first paper to present the probabilistic presentation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration. In contrast, the existing method of Cumulative Distribution Function (CDF matching considerably increased the SMOS biases, due to the limitations of relying on the imperfect reference data. From the validation at two semi-arid sites, Benin (moderately wet and vegetated area and Niger (dry and sandy bare soils, it was shown that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by ensemble approaches. In Benin, the Root Mean Square Errors (RMSEs decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
A MITgcm/DART ensemble analysis and prediction system with application to the Gulf of Mexico
Hoteit, Ibrahim
2013-09-01
This paper describes the development of an advanced ensemble Kalman filter (EnKF)-based ocean data assimilation system for prediction of the evolution of the loop current in the Gulf of Mexico (GoM). The system integrates the Data Assimilation Research Testbed (DART) assimilation package with the Massachusetts Institute of Technology ocean general circulation model (MITgcm). The MITgcm/DART system supports the assimilation of a wide range of ocean observations and uses an ensemble approach to solve the nonlinear assimilation problems. The GoM prediction system was implemented with an eddy-resolving 1/10th degree configuration of the MITgcm. Assimilation experiments were performed over a 6-month period between May and October during a strong loop current event in 1999. The model was sequentially constrained with weekly satellite sea surface temperature and altimetry data. Experiments results suggest that the ensemble-based assimilation system shows a high predictive skill in the GoM, with estimated ensemble spread mainly concentrated around the front of the loop current. Further analysis of the system estimates demonstrates that the ensemble assimilation accurately reproduces the observed features without imposing any negative impact on the dynamical balance of the system. Results from sensitivity experiments with respect to the ensemble filter parameters are also presented and discussed. © 2013 Elsevier B.V.
Park, Sangwook; Kim, Dong-Joon; Lee, Seung-Woo; Lee, Kie-Woung; Kim, Jongkhun; Song, Eun-Ji; Seo, Kyong-Hwan
2017-08-01
This article describes a three way inter-comparison of forecast skill on an extended medium-range time scale using the Korea Meteorological Administration (KMA) operational ensemble numerical weather prediction (NWP) systems (i.e., atmosphere-only global ensemble prediction system (EPSG) and ocean-atmosphere coupledEPSG) and KMA operational seasonal prediction system, the Global Seasonal forecast system version 5 (GloSea5). The main motivation is to investigate whether the ensemble NWP system can provide advantage over the existing seasonal prediction system for the extended medium-range forecast (30 days) even with putting extra resources in extended integration or coupling with ocean with NWP system. Two types of evaluation statistics are examined: the basic verification statistics - the anomaly correlation and RMSE of 500-hPa geopotential height and 1.5-meter surface temperature for the global and East Asia area, and the other is the Real-time Multivariate Madden and Julian Oscillation (MJO) indices (RMM1 and RMM2) - which is used to examine the MJO prediction skill. The MJO is regarded as a main source of forecast skill in the tropics linked to the mid-latitude weather on monthly time scale. Under limited number of experiment cases, the coupled NWP extends the forecast skill of the NWP by a few more days, and thereafter such forecast skill is overtaken by that of the seasonal prediction system. At present stage, it seems there is little gain from the coupled NWP even though more resources are put into it. Considering this, the best combination of numerical product guidance for operational forecasters for an extended medium-range is extension of the forecast lead time of the current ensemble NWP (EPSG) up to 20 days and use of the seasonal prediction system (GloSea5) forecast thereafter, though there exists a matter of consistency between the two systems.
The classicality and quantumness of a quantum ensemble
Zhu, Xuanmin; Wu, Shengjun; Liu, Quanhui
2010-01-01
In this paper, we investigate the classicality and quantumness of a quantum ensemble. We define a quantity called classicality to characterize how classical a quantum ensemble is. An ensemble of commuting states that can be manipulated classically has a unit classicality, while a general ensemble has a classicality less than 1. We also study how quantum an ensemble is by defining a related quantity called quantumness. We find that the classicality of an ensemble is closely related to how perfectly the ensemble can be cloned, and that the quantumness of an ensemble is essentially responsible for the security of quantum key distribution(QKD) protocols using that ensemble. Furthermore, we show that the quantumness of an ensemble used in a QKD protocol is exactly the attainable lower bound of the error rate in the sifted key.
Ensemble postprocessing for probabilistic quantitative precipitation forecasts
Bentzien, S.; Friederichs, P.
2012-12-01
Precipitation is one of the most difficult weather variables to predict in hydrometeorological applications. In order to assess the uncertainty inherent in deterministic numerical weather prediction (NWP), meteorological services around the globe develop ensemble prediction systems (EPS) based on high-resolution NWP systems. With non-hydrostatic model dynamics and without parameterization of deep moist convection, high-resolution NWP models are able to describe convective processes in more detail and provide more realistic mesoscale structures. However, precipitation forecasts are still affected by displacement errors, systematic biases and fast error growth on small scales. Probabilistic guidance can be achieved from an ensemble setup which accounts for model error and uncertainty of initial and boundary conditions. The German Meteorological Service (Deutscher Wetterdienst, DWD) provides such an ensemble system based on the German-focused limited-area model COSMO-DE. With a horizontal grid-spacing of 2.8 km, COSMO-DE is the convection-permitting high-resolution part of the operational model chain at DWD. The COSMO-DE-EPS consists of 20 realizations of COSMO-DE, driven by initial and boundary conditions derived from 4 global models and 5 perturbations of model physics. Ensemble systems like COSMO-DE-EPS are often limited with respect to ensemble size due to the immense computational costs. As a consequence, they can be biased and exhibit insufficient ensemble spread, and probabilistic forecasts may be not well calibrated. In this study, probabilistic quantitative precipitation forecasts are derived from COSMO-DE-EPS and evaluated at more than 1000 rain gauges located all over Germany. COSMO-DE-EPS is a frequently updated ensemble system, initialized 8 times a day. We use the time-lagged approach to inexpensively increase ensemble spread, which results in more reliable forecasts especially for extreme precipitation events. Moreover, we will show that statistical
Jahr Hegdahl, Trine; Steinsland, Ingelin; Merete Tallaksen, Lena; Engeland, Kolbjørn
2016-04-01
Probabilistic flood forecasting has an added value for decision making. The Norwegian flood forecasting service is based on a flood forecasting model that run for 145 basins. Covering all of Norway the basins differ in both size and hydrological regime. Currently the flood forecasting is based on deterministic meteorological forecasts, and an auto-regressive procedure is used to achieve probabilistic forecasts. An alternative approach is to use meteorological and hydrological ensemble forecasts to quantify the uncertainty in forecasted streamflow. The hydrological ensembles are based on forcing a hydrological model with meteorological ensemble forecasts of precipitation and temperature. However, the ensembles of precipitation are often biased and the spread is too small, especially for the shortest lead times, i.e. they are not calibrated. These properties will, to some extent, propagate to hydrological ensembles, that most likely will be uncalibrated as well. Pre- and post-processing methods are commonly used to obtain calibrated meteorological and hydrological ensembles respectively. Quantitative studies showing the effect of the combined processing of the meteorological (pre-processing) and the hydrological (post-processing) ensembles are however few. The aim of this study is to evaluate the influence of pre- and post-processing on the skill of streamflow predictions, and we will especially investigate if the forecasting skill depends on lead-time, basin size and hydrological regime. This aim is achieved by applying the 51 medium-range ensemble forecast of precipitation and temperature provided by the European Center of Medium-Range Weather Forecast (ECMWF). These ensembles are used as input to the operational Norwegian flood forecasting model, both raw and pre-processed. Precipitation ensembles are calibrated using a zero-adjusted gamma distribution. Temperature ensembles are calibrated using a Gaussian distribution and altitude corrected by a constant gradient
Irvine, Peter J.; Boucher, Olivier; Kravitz, Ben; Alterskjær, Kari; Cole, Jason N. S.; Ji, Duoying; Jones, Andy; Lunt, Daniel J.; Moore, John C.; Muri, Helene; Niemeier, Ulrike; Robock, Alan; Singh, Balwinder; Tilmes, Simone; Watanabe, Shingo; Yang, Shuting; Yoon, Jin-Ho
2014-07-01
Climate model studies of the consequences of solar geoengineering are central to evaluating whether such approaches may help to reduce the harmful impacts of global warming. In this study we compare the sunshade solar geoengineering response of a perturbed parameter ensemble (PPE) of the Hadley Centre Coupled Model version 3 (HadCM3) with a multimodel ensemble (MME) by analyzing the G1 experiment from the Geoengineering Model Intercomparison Project (GeoMIP). The PPE only perturbed a small number of parameters and shares a common structure with the unperturbed HadCM3 model, and so the additional weight the PPE adds to the robustness of the common climate response features in the MME is minor. However, analysis of the PPE indicates some of the factors that drive the spread within the MME. We isolate the role of global mean temperature biases for both ensembles and find that these biases have little effect on the ensemble spread in the hydrological response but do reduce the spread in surface air temperature response, particularly at high latitudes. We investigate the role of the preindustrial climatology and find that biases here are likely a key source of ensemble spread at the zonal and grid cell level. The role of vegetation, and its response to elevated CO2 concentrations through the CO2 physiological effect and changes in plant productivity, is also investigated and proves to have a substantial effect on the terrestrial hydrological response to solar geoengineering and to be a major source of variation within the GeoMIP ensemble.
Ensemble simulations for the RCP8.5-Scenario
Friedrich-Wilhelm Gerstengarbe
2015-04-01
Full Text Available The mean climatic development for Germany was investigated within the period 2031/60 in comparison to the situation in the observational period 1981/2010. The RCP8.5-Scenario of the IPCC was used because it reflects the actual CO2-emissions very well. On this basis the temperature trend for Germany was estimated using 21 GCM runs up to the year 2100. This temperature trend was the driving force for the statistical regional climate model STARS. 100 ensemble runs of the model STARS were compared with the scenario period and with the observational period. Temperature, precipitation, climatic water balance and some additional parameters were analyzed. One important result is the change in the distribution of precipitation in Germany during the year – decrease in summer, increase in winter. Finally the future climate development leads to a negative climatic water balance over the whole year.
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.
2015-01-01
Probabilistic forecasts in the form of ensemble of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products but the spatio-temporal structures of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts...... is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost.For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast...... from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error...
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.
2016-01-01
Probabilistic forecasts in the form of ensemble of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products but the spatio-temporal structures of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts...... is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method rebuilds the multivariate aspect of the forecast from...... the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new...
Performance of a multi-RCM ensemble for South Eastern South America
Carril, A.F.; Menendez, C.G.; Salio, P. [Ciudad Universitaria, Ciudad Autonoma de Buenos Aires, Centro de Investigaciones del Mar y la Atmosfera (CIMA), CONICET-UBA, Buenos Aires (Argentina); Universidad de Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos (DCAO), FCEN, Buenos Aires (Argentina); UMI IFAECI/CNRS, Buenos Aires (Argentina); Remedio, A.R.C.; Jacob, D.; Pfeifer, S. [Max Planck Institute for Meteorology (MPI-M), Hamburg (Germany); Robledo, F.; Tencer, B. [Universidad de Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos (DCAO), FCEN, Buenos Aires (Argentina); Soerensson, A.; Zaninelli, P. [Ciudad Universitaria, Ciudad Autonoma de Buenos Aires, Centro de Investigaciones del Mar y la Atmosfera (CIMA), CONICET-UBA, Buenos Aires (Argentina); UMI IFAECI/CNRS, Buenos Aires (Argentina); Boulanger, J.P. [LOCEAN, UMR CNRS/IRD/UPMC, Paris (France); Castro, M. de; Sanchez, E. [Universidad de Castilla-La Mancha (UCLM), Toledo (Spain); Le Treut, H.; Li, L.Z.X. [Sciences de l' Environnement en Ile de France, Laboratoire de Meteorologie Dynamique (LMD), Institut-Pierre-Simon-Laplace et Ecole Doctorale, Paris (France); Penalba, O.; Rusticucci, M. [Universidad de Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos (DCAO), FCEN, Buenos Aires (Argentina); UMI IFAECI/CNRS, Buenos Aires (Argentina); Samuelsson, P. [Swedish Meteorological and Hydrological Institute (SMHI), Norrkoeping (Sweden)
2012-12-15
The ability of four regional climate models to reproduce the present-day South American climate is examined with emphasis on La Plata Basin. Models were integrated for the period 1991-2000 with initial and lateral boundary conditions from ERA-40 Reanalysis. The ensemble sea level pressure, maximum and minimum temperatures and precipitation are evaluated in terms of seasonal means and extreme indices based on a percentile approach. Dispersion among the individual models and uncertainties when comparing the ensemble mean with different climatologies are also discussed. The ensemble mean is warmer than the observations in South Eastern South America (SESA), especially for minimum winter temperatures with errors increasing in magnitude towards the tails of the distributions. The ensemble mean reproduces the broad spatial pattern of precipitation, but overestimates the convective precipitation in the tropics and the orographic precipitation along the Andes and over the Brazilian Highlands, and underestimates the precipitation near the monsoon core region. The models overestimate the number of wet days and underestimate the daily intensity of rainfall for both seasons suggesting a premature triggering of convection. The skill of models to simulate the intensity of convective precipitation in summer in SESA and the variability associated with heavy precipitation events (the upper quartile daily precipitation) is far from satisfactory. Owing to the sparseness of the observing network, ensemble and observations uncertainties in seasonal means are comparable for some regions and seasons. (orig.)
Multiscale macromolecular simulation: role of evolving ensembles.
Singharoy, A; Joshi, H; Ortoleva, P J
2012-10-22
Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs) characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin time step is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed in an earlier step, history, of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusions. It is shown that efficiency and accuracy of the OP-based simulations is increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented into our existing force-field based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers.
Entanglement in a solid-state spin ensemble.
Simmons, Stephanie; Brown, Richard M; Riemann, Helge; Abrosimov, Nikolai V; Becker, Peter; Pohl, Hans-Joachim; Thewalt, Mike L W; Itoh, Kohei M; Morton, John J L
2011-02-03
Entanglement is the quintessential quantum phenomenon. It is a necessary ingredient in most emerging quantum technologies, including quantum repeaters, quantum information processing and the strongest forms of quantum cryptography. Spin ensembles, such as those used in liquid-state nuclear magnetic resonance, have been important for the development of quantum control methods. However, these demonstrations contain no entanglement and ultimately constitute classical simulations of quantum algorithms. Here we report the on-demand generation of entanglement between an ensemble of electron and nuclear spins in isotopically engineered, phosphorus-doped silicon. We combined high-field (3.4 T), low-temperature (2.9 K) electron spin resonance with hyperpolarization of the (31)P nuclear spin to obtain an initial state of sufficient purity to create a non-classical, inseparable state. The state was verified using density matrix tomography based on geometric phase gates, and had a fidelity of 98% relative to the ideal state at this field and temperature. The entanglement operation was performed simultaneously, with high fidelity, on 10(10) spin pairs; this fulfils one of the essential requirements for a silicon-based quantum information processor.
Control and Synchronization of Neuron Ensembles
Li, Jr-Shin; Ruths, Justin
2011-01-01
Synchronization of oscillations is a phenomenon prevalent in natural, social, and engineering systems. Controlling synchronization of oscillating systems is motivated by a wide range of applications from neurological treatment of Parkinson's disease to the design of neurocomputers. In this article, we study the control of an ensemble of uncoupled neuron oscillators described by phase models. We examine controllability of such a neuron ensemble for various phase models and, furthermore, study the related optimal control problems. In particular, by employing Pontryagin's maximum principle, we analytically derive optimal controls for spiking single- and two-neuron systems, and analyze the applicability of the latter to an ensemble system. Finally, we present a robust computational method for optimal control of spiking neurons based on pseudospectral approximations. The methodology developed here is universal to the control of general nonlinear phase oscillators.
On large deviations for ensembles of distributions
Khrychev, D. A.
2013-11-01
The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each \\varepsilon>0 the nonempty set \\mathscr P_\\varepsilon of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set \\{\\mathscr P_\\varepsilon,\\,\\varepsilon>0\\}, hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.
Cavity cooling of an ensemble spin system.
Wood, Christopher J; Borneman, Troy W; Cory, David G
2014-02-07
We describe how sideband cooling techniques may be applied to large spin ensembles in magnetic resonance. Using the Tavis-Cummings model in the presence of a Rabi drive, we solve a Markovian master equation describing the joint spin-cavity dynamics to derive cooling rates as a function of ensemble size. Our calculations indicate that the coupled angular momentum subspaces of a spin ensemble containing roughly 10(11) electron spins may be polarized in a time many orders of magnitude shorter than the typical thermal relaxation time. The described techniques should permit efficient removal of entropy for spin-based quantum information processors and fast polarization of spin samples. The proposed application of a standard technique in quantum optics to magnetic resonance also serves to reinforce the connection between the two fields, which has recently begun to be explored in further detail due to the development of hybrid designs for manufacturing noise-resilient quantum devices.
Characteristic polynomials in real Ginibre ensembles
Akemann, G; Phillips, M J [Department of Mathematical Sciences and BURSt Research Centre, Brunel University West London, UB8 3PH Uxbridge (United Kingdom); Sommers, H-J [Fachbereich Physik, Universitaet Duisburg-Essen, 47048 Duisburg (Germany)], E-mail: Gernot.Akemann@brunel.ac.uk, E-mail: Michael.Phillips@brunel.ac.uk, E-mail: H.J.Sommers@uni-due.de
2009-01-09
We calculate the average of two characteristic polynomials for the real Ginibre ensemble of asymmetric random matrices, and its chiral counterpart. Considered as quadratic forms they determine a skew-symmetric kernel from which all complex eigenvalue correlations can be derived. Our results are obtained in a very simple fashion without going to an eigenvalue representation, and are completely new in the chiral case. They hold for Gaussian ensembles which are partly symmetric, with kernels given in terms of Hermite and Laguerre polynomials respectively, depending on an asymmetry parameter. This allows us to interpolate between the maximally asymmetric real Ginibre and the Gaussian orthogonal ensemble, as well as their chiral counterparts. (fast track communication)
Embedded random matrix ensembles in quantum physics
Kota, V K B
2014-01-01
Although used with increasing frequency in many branches of physics, random matrix ensembles are not always sufficiently specific to account for important features of the physical system at hand. One refinement which retains the basic stochastic approach but allows for such features consists in the use of embedded ensembles. The present text is an exhaustive introduction to and survey of this important field. Starting with an easy-to-read introduction to general random matrix theory, the text then develops the necessary concepts from the beginning, accompanying the reader to the frontiers of present-day research. With some notable exceptions, to date these ensembles have primarily been applied in nuclear spectroscopy. A characteristic example is the use of a random two-body interaction in the framework of the nuclear shell model. Yet, topics in atomic physics, mesoscopic physics, quantum information science and statistical mechanics of isolated finite quantum systems can also be addressed using these ensemb...
Total probabilities of ensemble runoff forecasts
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2017-04-01
Ensemble forecasting has a long history from meteorological modelling, as an indication of the uncertainty of the forecasts. However, it is necessary to calibrate and post-process the ensembles as the they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time, while giving a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations, which makes it unsuitable for our purpose. Our post-processing method of the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we are making forecasts for whole Europe, and based on observations from around 700 catchments. As the target is flood forecasting, we are also more interested in improving the forecast skill for high-flows rather than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to estimate the total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we are adding a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes
Circular β ensembles, CMV representation, characteristic polynomials
SU ZhongGen
2009-01-01
In this note we first briefly review some recent progress in the study of the circular β ensemble on the unit circle, where 0 > 0 is a model parameter. In the special cases β = 1,2 and 4, this ensemble describes the joint probability density of eigenvalues of random orthogonal, unitary and sympletic matrices, respectively. For general β, Killip and Nenciu discovered a five-diagonal sparse matrix model, the CMV representation. This representation is new even in the case β = 2; and it has become a powerful tool for studying the circular β ensemble. We then give an elegant derivation for the moment identities of characteristic polynomials via the link with orthogonal polynomials on the unit circle.
Statistical ensembles and fragmentation of finite nuclei
Das, P.; Mallik, S.; Chaudhuri, G.
2017-09-01
Statistical models based on different ensembles are very commonly used to describe the nuclear multifragmentation reaction in heavy ion collisions at intermediate energies. Canonical model results are more appropriate for finite nuclei calculations while those obtained from the grand canonical ones are more easily calculable. A transformation relation has been worked out for converting results of finite nuclei from grand canonical to canonical and vice versa. The formula shows that, irrespective of the particle number fluctuation in the grand canonical ensemble, exact canonical results can be recovered for observables varying linearly or quadratically with the number of particles. This result is of great significance since the baryon and charge conservation constraints can make the exact canonical calculations extremely difficult in general. This concept developed in this work can be extended in future for transformation to ensembles where analytical solutions do not exist. The applicability of certain equations (isoscaling, etc.) in the regime of finite nuclei can also be tested using this transformation relation.
Broad range of 2050 warming from an observationally constrained large climate model ensemble
Rowlands, Daniel J.; Frame, David J.; Ackerley, Duncan; Aina, Tolu; Booth, Ben B. B.; Christensen, Carl; Collins, Matthew; Faull, Nicholas; Forest, Chris E.; Grandey, Benjamin S.; Gryspeerdt, Edward; Highwood, Eleanor J.; Ingram, William J.; Knight, Sylvia; Lopez, Ana; Massey, Neil; McNamara, Frances; Meinshausen, Nicolai; Piani, Claudio; Rosier, Suzanne M.; Sanderson, Benjamin M.; Smith, Leonard A.; Stone, Dáithí A.; Thurston, Milo; Yamazaki, Kuniko; Hiro Yamazaki, Y.; Allen, Myles R.
2012-04-01
Incomplete understanding of three aspects of the climate system--equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing--and the physical processes underlying them lead to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century. Explorations of these uncertainties have so far relied on scaling approaches, large ensembles of simplified climate models, or small ensembles of complex coupled atmosphere-ocean general circulation models which under-represent uncertainties in key climate system properties derived from independent sources. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere-ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4-3K by 2050, relative to 1961-1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report, but extends towards larger warming than observed in ensembles-of-opportunity typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range `no mitigation' scenario for greenhouse-gas emissions.
Taylor, Patrick C.; Baker, Noel C.
2015-01-01
Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaption planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal weighted average, one model one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble, that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., precipitation probability distribution. This approach provides added value over the equal weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
Ensemble Enabled Weighted PageRank
Luo, Dongsheng; Hu, Renjun; Duan, Liang; Ma, Shuai
2016-01-01
This paper describes our solution for WSDM Cup 2016. Ranking the query independent importance of scholarly articles is a critical and challenging task, due to the heterogeneity and dynamism of entities involved. Our approach is called Ensemble enabled Weighted PageRank (EWPR). To do this, we first propose Time-Weighted PageRank that extends PageRank by introducing a time decaying factor. We then develop an ensemble method to assemble the authorities of the heterogeneous entities involved in scholarly articles. We finally propose to use external data sources to further improve the ranking accuracy. Our experimental study shows that our EWPR is a good choice for ranking scholarly articles.
Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project
Wallick, Michael N.; Mittman, David S.; Shams, Khawaja, S.; Bachmann, Andrew G.; Ludowise, Melissa
2013-01-01
This software simplifies the process of having to set up an Eclipse IDE programming environment for the members of the cross-NASA center project, Ensemble. It achieves this by assembling all the necessary add-ons and custom tools/preferences. This software is unique in that it allows developers in the Ensemble Project (approximately 20 to 40 at any time) across multiple NASA centers to set up a development environment almost instantly and work on Ensemble software. The software automatically has the source code repositories and other vital information and settings included. The Eclipse IDE is an open-source development framework. The NASA (Ensemble-specific) version of the software includes Ensemble-specific plug-ins as well as settings for the Ensemble project. This software saves developers the time and hassle of setting up a programming environment, making sure that everything is set up in the correct manner for Ensemble development. Existing software (i.e., standard Eclipse) requires an intensive setup process that is both time-consuming and error prone. This software is built once by a single user and tested, allowing other developers to simply download and use the software
Electron spin contrast of Purcell-enhanced nitrogen-vacancy ensembles in nanodiamonds
Bogdanov, S.; Shalaginov, M. Y.; Akimov, A.; Lagutchev, A. S.; Kapitanova, P.; Liu, J.; Woods, D.; Ferrera, M.; Belov, P.; Irudayaraj, J.; Boltasseva, A.; Shalaev, V. M.
2017-07-01
Nitrogen-vacancy centers in diamond allow for coherent spin-state manipulation at room temperature, which could bring dramatic advances to nanoscale sensing and quantum information technology. We introduce a method for the optical measurement of the spin contrast in dense nitrogen-vacancy (NV) ensembles. This method brings insight into the interplay between the spin contrast and fluorescence lifetime. We show that for improving the spin readout sensitivity in NV ensembles, one should aim at modifying the far-field radiation pattern rather than enhancing the emission rate.
Zanchettin, D.; Bothe, O.; Rubino, A.; Jungclaus, J. H.
2016-08-01
We assess internally-generated climate variability expressed by a multi-model ensemble of unperturbed climate simulations. We focus on basin-scale annual-average sea surface temperatures (SSTs) from twenty multicentennial pre-industrial control simulations contributing to the fifth phase of the Coupled Model Intercomparison Project. Ensemble spatial patterns of regional modes of variability and ensemble (cross-)wavelet-based phase-frequency diagrams of corresponding paired indices summarize the ensemble characteristics of inter-basin and regional-to-global SST interactions on a broad range of timescales. Results reveal that tropical and North Pacific SSTs are a source of simulated interannual global SST variability. The North Atlantic-average SST fluctuates in rough co-phase with the global-average SST on multidecadal timescales, which makes it difficult to discern the Atlantic Multidecadal Variability (AMV) signal from the global signal. The two leading modes of tropical and North Pacific SST variability converge towards co-phase in the multi-model ensemble, indicating that the Pacific Decadal Oscillation (PDO) results from a combination of tropical and extra-tropical processes. No robust inter- or multi-decadal inter-basin SST interaction arises from our ensemble analysis between the Pacific and Atlantic oceans, though specific phase-locked fluctuations occur between Pacific and Atlantic modes of SST variability in individual simulations and/or periods within individual simulations. The multidecadal modulation of PDO by the AMV identified in observations appears to be a recurrent but not typical feature of ensemble-simulated internal variability. Understanding the mechanism(s) and circumstances favoring such inter-basin SST phasing and related uncertainties in their simulated representation could help constraining uncertainty in decadal climate predictions.
Total probabilities of ensemble runoff forecasts
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2016-04-01
Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters which are different in space and time, but still can give a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu) We are therefore more interested in improving the forecast skill for high-flows rather than the forecast skill of lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but assuring that they have some spatial correlation, by adding a spatial penalty in the calibration process. This can in some cases have a slight negative
Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability
Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos
2016-01-01
We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981e2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however models rarely share the same cluster at all four sites indicating substantial independence. Only a weak relationship (R2 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-termwarming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.
Confinement in a correlated Instanton-Dyon Ensemble
Lopez-Ruiz, Miguel Angel; Jiang, Yin; Liao, Jinfeng
2017-07-01
Confinement is a remarkable nonperturbative phenomena emerging from QCD and QCD-like theories. A theoretical understanding of these transitions and their interrelations is of fundamental importance. While it is widely perceived that their dynamics arises from nontrivial topological configurations in Yang-Mills theories, a concrete and sophisticated realization of such idea is an outstanding challenge. We report significant progress along this direction by the construction of a new framework based on correlated ensemble of instanton-dyons, namely the constituents of the finite-temperature instantons with non-trivial holonomy. We present a comprehensive numerical study of confinement properties in SU(2) Yang-Mills theory at finite temperature, obtaining important observables such as the effective holonomy potential, the static quark potentials from Polyakov loop correlators as well as spatial Wilson loops, among others.
Hybrid quantum circuit with a superconducting qubit coupled to an electron spin ensemble
Kubo, Yuimaru; Grezes, Cecile; Vion, Denis; Esteve, Daniel; Bertet, Patrice [Quantronics Group, SPEC (CNRS URA 2464), CEA-Saclay, 91191 Gif-sur-Yvette (France); Diniz, Igor; Auffeves, Alexia [Institut Neel, CNRS, BP 166, 38042 Grenoble (France); Isoya, Jun-ichi [Research Center for Knowledge Communities, University of Tsukuba, 305-8550 Tsukuba (Japan); Jacques, Vincent; Dreau, Anais; Roch, Jean-Francois [LPQM (CNRS, UMR 8537), Ecole Normale Superieure de Cachan, 94235 Cachan (France)
2013-07-01
We report the experimental realization of a hybrid quantum circuit combining a superconducting qubit and an ensemble of electronic spins. The qubit, of the transmon type, is coherently coupled to the spin ensemble consisting of nitrogen-vacancy (NV) centers in a diamond crystal via a frequency-tunable superconducting resonator acting as a quantum bus. Using this circuit, we prepare arbitrary superpositions of the qubit states that we store into collective excitations of the spin ensemble and retrieve back into the qubit. We also report a new method for detecting the magnetic resonance of electronic spins at low temperature with a qubit using the hybrid quantum circuit, as well as our recent progress on spin echo experiments.
Multi-Model Long-Range Ensemble Forecast for Decision Support in Hydroelectric Operations
Kunkel, M. L.; Parkinson, S.; Blestrud, D.; Holbrook, V. P.
2014-12-01
Idaho Power Company (IPC) is a hydroelectric based utility serving over a million customers in southern Idaho and eastern Oregon. Hydropower makes up ~50% of our power generation and accurate predictions of streamflow and precipitation drive our long-term planning and decision support for operations. We investigate the use of a multi-model ensemble approach for mid and long-range streamflow and precipitation forecasts throughout the Snake River Basin. Forecast are prepared using an Idaho Power developed ensemble forecasting technique for 89 locations throughout the Snake River Basin for periods of 3 to 18 months in advance. A series of multivariable linear regression, multivariable non-linear regression and multivariable Kalman filter techniques are combined in an ensemble forecast based upon two data types, historical data (streamflow, precipitation, climate indices [i.e. PDO, ENSO, AO, etc…]) and single value decomposition derived values based upon atmospheric heights and sea surface temperatures.
An Ensemble Approach for Expanding Queries
2012-11-01
vincristine; thalidomide; painful; cisplatin; oxaliplatin; charcot -marie-tooth disease ; drugs; neuropathy Ensemble expansion child of, asthma, kids...system disorders; peripheral nerve diseases ; peripheral neuropathies; peripheral nervous system disorder; peripheral nervous system disease ...peripheral nerve disease ; peripheral nerve disorders, peripheral nerve disorder Relation expansion offspring, child of, of child, child find
NYYD Ensemble ja Riho Sibul / Anneli Remme
Remme, Anneli, 1968-
2001-01-01
Gavin Bryarsi teos "Jesus' Blood Never Failed Me Yet" NYYD Ensemble'i ja Riho Sibula esituses 27. detsembril Pauluse kirikus Tartus ja 28. detsembril Rootsi- Mihkli kirikus Tallinnas. Kaastegevad Tartu Ülikooli Kammerkoor (Tartus) ja kammerkoor Voces Musicales (Tallinnas). Kunstiline juht Olari Elts
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
NYYD Ensemble ja Riho Sibul / Anneli Remme
Remme, Anneli, 1968-
2001-01-01
Gavin Bryarsi teos "Jesus' Blood Never Failed Me Yet" NYYD Ensemble'i ja Riho Sibula esituses 27. detsembril Pauluse kirikus Tartus ja 28. detsembril Rootsi- Mihkli kirikus Tallinnas. Kaastegevad Tartu Ülikooli Kammerkoor (Tartus) ja kammerkoor Voces Musicales (Tallinnas). Kunstiline juht Olari Elts
Eigenstate Gibbs ensemble in integrable quantum systems
Nandy, Sourav; Sen, Arnab; Das, Arnab; Dhar, Abhishek
2016-12-01
The eigenstate thermalization hypothesis conjectures that for a thermodynamically large system in one of its energy eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of relevant conserved quantities. In a chaotic quantum system, only the energy is expected to play that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and therefore the reduced density matrix requires specification of all the corresponding parameters (generalized Gibbs ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e., requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then shown to be described by appropriately truncated generalized Gibbs ensembles. We further show that the presence of these rare eigenstates differentiates the model from the chaotic case and leads to the system being described by a generalized Gibbs ensemble at long time under a unitary dynamics following a sudden quench, even when the initial state is a typical (Gibbs-like) eigenstate of the prequench Hamiltonian.
Locally Accessible Information from Multipartite Ensembles
SONG Wei
2009-01-01
We present a universal Holevo-like upper bound on the locally accessible information for arbitrary multipartite ensembles.This bound allows us to analyze the indistinguishability of a set of orthogonal states under local operations and classical communication.We also derive the upper bound for the capacity of distributed dense coding with multipartite senders and multipartite receivers.
Canonical Ensemble Model for Black Hole Radiation
Jingyi Zhang
2014-09-01
In this paper, a canonical ensemble model for the black hole quantum tunnelling radiation is introduced. In this model the probability distribution function corresponding to the emission shell is calculated to second order. The formula of pressure and internal energy of the thermal system is modified, and the fundamental equation of thermodynamics is also discussed.
A Hierarchical Bayes Ensemble Kalman Filter
Tsyrulnikov, Michael; Rakitko, Alexander
2017-01-01
A new ensemble filter that allows for the uncertainty in the prior distribution is proposed and tested. The filter relies on the conditional Gaussian distribution of the state given the model-error and predictability-error covariance matrices. The latter are treated as random matrices and updated in a hierarchical Bayes scheme along with the state. The (hyper)prior distribution of the covariance matrices is assumed to be inverse Wishart. The new Hierarchical Bayes Ensemble Filter (HBEF) assimilates ensemble members as generalized observations and allows ordinary observations to influence the covariances. The actual probability distribution of the ensemble members is allowed to be different from the true one. An approximation that leads to a practicable analysis algorithm is proposed. The new filter is studied in numerical experiments with a doubly stochastic one-variable model of "truth". The model permits the assessment of the variance of the truth and the true filtering error variance at each time instance. The HBEF is shown to outperform the EnKF and the HEnKF by Myrseth and Omre (2010) in a wide range of filtering regimes in terms of performance of its primary and secondary filters.
Statistical theory of hierarchical avalanche ensemble
Olemskoi, Alexander I.
1999-01-01
The statistical ensemble of avalanche intensities is considered to investigate diffusion in ultrametric space of hierarchically subordinated avalanches. The stationary intensity distribution and the steady-state current are obtained. The critical avalanche intensity needed to initiate the global avalanche formation is calculated depending on noise intensity. The large time asymptotic for the probability of the global avalanche appearance is derived.
Marking up lattice QCD configurations and ensembles
Coddington, P; Maynard, C M; Pleiter, D; Yoshié, T
2007-01-01
QCDml is an XML-based markup language designed for sharing QCD configurations and ensembles world-wide via the International Lattice Data Grid (ILDG). Based on the latest release, we present key ingredients of the QCDml in order to provide some starting points for colleagues in this community to markup valuable configurations and submit them to the ILDG.
Zheng, Xiao-Tong; Hui, Chang; Yeh, Sang-Wook
2017-08-01
El Niño-Southern Oscillation (ENSO) is the dominant mode of variability in the coupled ocean-atmospheric system. Future projections of ENSO change under global warming are highly uncertain among models. In this study, the effect of internal variability on ENSO amplitude change in future climate projections is investigated based on a 40-member ensemble from the Community Earth System Model Large Ensemble (CESM-LE) project. A large uncertainty is identified among ensemble members due to internal variability. The inter-member diversity is associated with a zonal dipole pattern of sea surface temperature (SST) change in the mean along the equator, which is similar to the second empirical orthogonal function (EOF) mode of tropical Pacific decadal variability (TPDV) in the unforced control simulation. The uncertainty in CESM-LE is comparable in magnitude to that among models of the Coupled Model Intercomparison Project phase 5 (CMIP5), suggesting the contribution of internal variability to the intermodel uncertainty in ENSO amplitude change. However, the causations between changes in ENSO amplitude and the mean state are distinct between CESM-LE and CMIP5 ensemble. The CESM-LE results indicate that a large ensemble of 15 members is needed to separate the relative contributions to ENSO amplitude change over the twenty-first century between forced response and internal variability.
Morabito, Marco; Pavlinic, Daniela Z; Crisci, Alfonso; Capecchi, Valerio; Orlandini, Simone; Mekjavic, Igor B
2011-07-01
Military and civil defense personnel are often involved in complex activities in a variety of outdoor environments. The choice of appropriate clothing ensembles represents an important strategy to establish the success of a military mission. The main aim of this study was to compare the known clothing insulation of the garment ensembles worn by soldiers during two winter outdoor field trials (hike and guard duty) with the estimated optimal clothing thermal insulations recommended to maintain thermoneutrality, assessed by using two different biometeorological procedures. The overall aim was to assess the applicability of such biometeorological procedures to weather forecast systems, thereby developing a comprehensive biometeorological tool for military operational forecast purposes. Military trials were carried out during winter 2006 in Pokljuka (Slovenia) by Slovene Armed Forces personnel. Gastrointestinal temperature, heart rate and environmental parameters were measured with portable data acquisition systems. The thermal characteristics of the clothing ensembles worn by the soldiers, namely thermal resistance, were determined with a sweating thermal manikin. Results showed that the clothing ensemble worn by the military was appropriate during guard duty but generally inappropriate during the hike. A general under-estimation of the biometeorological forecast model in predicting the optimal clothing insulation value was observed and an additional post-processing calibration might further improve forecast accuracy. This study represents the first step in the development of a comprehensive personalized biometeorological forecast system aimed at improving recommendations regarding the optimal thermal insulation of military garment ensembles for winter activities.
Pelosi, Anna; Medina Gonzalez, Hanoi; Villani, Paolo; D'Urso, Guido; Battista Chirico, Giovanni
2016-04-01
This study explores the performance of an adaptive procedure for correcting the ensemble numerical weather outputs applied to the probabilistic forecast of reference evapotranspiration (ETo). This procedure is proposed as an effective forecast correction method when the available dataset is not large enough for the calibration of statistical batch procedures. The numerical weather prediction outputs are those provided by COSMO-LEPS, an ensemble-based Limited Area Model, with 16 members and 7.5 km spatial resolution, with forecast lead-time up to 5 days. ETo forecasts are computed according to the FAO Penman-Monteith (FAO-PM) equation, which requires data of five weather variables: air temperature, relative humidity, solar radiation and wind speed. The performance of the proposed procedure is evaluated at eighteen monitoring stations, located in Campania region (Southern Italy), with two alternative strategies: i) correction applied to the raw ensemble forecasts of the five weather variables prior applying the FAO-PM equation; ii) correction applied to the ensemble output of the ETo forecasts obtained with FAO-PM equation after using the raw ensemble weather forecasts as input. In both cases the suggested post-processing procedure was able to significantly increase the accuracy and reduce the uncertainty of the ETo forecasts.
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses for Sea Surface Temperature SST in the Mediterranean Sea can be improved by a statistical post-processing technique, called super-ensemble. This technique consists in a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable to prevent overfitting problems, even if best performances are achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performances depend on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: 15 days training period, an overconfident MMSE dataset (a subset with the higher quality ensemble members), and the least square algorithm being filtered a posteriori.
Data assimilation with the ensemble Kalman filter in a numerical model of the North Sea
Ponsar, Stéphanie; Luyten, Patrick; Dulière, Valérie
2016-08-01
Coastal management and maritime safety strongly rely on accurate representations of the sea state. Both dynamical models and observations provide abundant pieces of information. However, none of them provides the complete picture. The assimilation of observations into models is one way to improve our knowledge of the ocean state. Its application in coastal models remains challenging because of the wide range of temporal and spatial variabilities of the processes involved. This study investigates the assimilation of temperature profiles with the ensemble Kalman filter in 3-D North Sea simulations. The model error is represented by the standard deviation of an ensemble of model states. Parameters' values for the ensemble generation are first computed from the misfit between the data and the model results without assimilation. Then, two square root algorithms are applied to assimilate the data. The impact of data assimilation on the simulated temperature is assessed. Results show that the ensemble Kalman filter is adequate for improving temperature forecasts in coastal areas, under adequate model error specification.
A Theoretical Analysis of Why Hybrid Ensembles Work
Kuo-Wei Hsu
2017-01-01
Full Text Available Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use the mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting using different algorithms to accuracy gain. We also conduct experiments on classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
Global Ensemble Forecast System (GEFS) [2.5 Deg.
National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...
An educational model for ensemble streamflow simulation and uncertainty analysis
AghaKouchak, A; Nakhjiri, N; Habib, E
2013-01-01
...) are interconnected. The educational toolbox includes a MATLAB Graphical User Interface (GUI) and an ensemble simulation scheme that can be used for teaching uncertainty analysis, parameter estimation, ensemble simulation and model sensitivity...
Ensemble-based Kalman Filters in Strongly Nonlinear Dynamics
Zhaoxia PU; Joshua HACKER
2009-01-01
This study examines the effectiveness of ensemble Kalman filters in data assimilation with the strongly nonlinear dynamics of the Lorenz-63 model, and in particular their use in predicting the regime transition that occurs when the model jumps from one basin of attraction to the other. Four configurations of the ensemble-based Kalman filtering data assimilation techniques, including the ensemble Kalman filter, ensemble adjustment Kalman filter, ensemble square root filter and ensemble transform Kalman filter, are evaluated with their ability in predicting the regime transition (also called phase transition) and also are compared in terms of their sensitivity to both observational and sampling errors. The sensitivity of each ensemble-based filter to the size of the ensemble is also examined.
Space Applications for Ensemble Detection and Analysis Project
National Aeronautics and Space Administration — Ensemble Detection is both a measurement technique and analysis tool. Like a prism that separates light into spectral bands, an ensemble detector mixes a signal with...
Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing
Pinson, Pierre; Madsen, Henrik
methodology. In a first stage, ensemble forecasts of meteorological variables are converted to power through a suitable power curve model. The relevance and benefits of employing a newly developed orthogonal fitting method for the power curve model over the traditional least-squares one are discussed...... predictive distributions. Such a methodology has the benefit of yielding predictive distributions that are of increased reliability (in a probabilistic sense) in comparison with the raw ensemble forecasts, while taking advantage of their high resolution....... of probabilistic forecasts, the resolution of which may be maximized by using meteorological ensemble predictions as input. The paper concentrates on the test case of the Horns Rev wind farm over a period of approximately one year, in order to describe, apply and discuss a complete ensemble-based forecasting...
Theory and Practice of Phase-aware Ensemble Forecasting
Schulte, J. A.; Georgas, N.
2016-12-01
The timing of events represents a source of uncertainty in ensemble forecasting that can produce misleading ensemble statistics. A general theory is presented to overcome drawbacks of traditional ensemble forecasting statistics that perform poorly in the presence of timing disagreements among ensemble members. It was shown, in particular, that ensemble forecasts containing substantial uncertainty in timing can produce non-trivial higher-order statistical moments, rendering the ensemble mean inappropriate as a best available estimate of the future state of the forecast parameter in question. A set of theoretical experiments showed that the existence of large timing differences among ensemble members can produce negative ensemble skewness even when the ensemble members are sinusoids whose amplitudes are drawn from a normal distribution: Consistently, the ensemble mean will tend to fall on the left tail of the normal distribution representing the originally sampled amplitudes, rather than at the mean or median. To remedy the left-tail placement problem of the ensemble mean, a new generally applicable ensemble statistic - the phase-aware ensemble mean - is proposed that is more robust against ensemble skewness resulting from timing spread. The computation of the phase-aware mean involves the transformation of all ensemble members to wavelet space and the subsequent inverse wavelet transformation of the product of the ensemble mean wavelet phase and modulus back to the time domain. The new methods were applied to storm surge reforecasts for Hurricane Irene and Sandy at 8 stations located around the New York City metropolitan area. The phase-aware ensemble mean was found to perform better at detecting the magnitude of events compared to the traditional ensemble mean, consistent with the results from theoretical experiments. The ensemble mean, moreover, was found to be consistently located on the left tail of distributions representing future peak storm surge outcomes. A
Quantum canonical ensemble: A projection operator approach
Magnus, Wim; Lemmens, Lucien; Brosens, Fons
2017-09-01
Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function ZN and the Helmholtz free energy FN as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from FN+1 -FN, as illustrated for a two-dimensional fermion gas.
Flood Forecasting Based on TIGGE Precipitation Ensemble Forecast
Jinyin Ye; Yuehong Shao; Zhijia Li
2016-01-01
TIGGE (THORPEX International Grand Global Ensemble) was a major part of the THORPEX (Observing System Research and Predictability Experiment). It integrates ensemble precipitation products from all the major forecast centers in the world and provides systematic evaluation on the multimodel ensemble prediction system. Development of meteorologic-hydrologic coupled flood forecasting model and early warning model based on the TIGGE precipitation ensemble forecast can provide flood probability fo...
Ensembles of signal transduction models using Pareto Optimal Ensemble Techniques (POETs).
Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D
2010-07-01
Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical-signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
Peishu, Zong; Jianping, Tang; Shuyu, Wang; Lingyun, Xie; Jianwei, Yu; Yunqian, Zhu; Xiaorui, Niu; Chao, Li
2017-08-01
The parameterization of physical processes is one of the critical elements to properly simulate the regional climate over eastern China. It is essential to conduct detailed analyses on the effect of physical parameterization schemes on regional climate simulation, to provide more reliable regional climate change information. In this paper, we evaluate the 25-year (1983-2007) summer monsoon climate characteristics of precipitation and surface air temperature by using the regional spectral model (RSM) with different physical schemes. The ensemble results using the reliability ensemble averaging (REA) method are also assessed. The result shows that the RSM model has the capacity to reproduce the spatial patterns, the variations, and the temporal tendency of surface air temperature and precipitation over eastern China. And it tends to predict better climatology characteristics over the Yangtze River basin and the South China. The impact of different physical schemes on RSM simulations is also investigated. Generally, the CLD3 cloud water prediction scheme tends to produce larger precipitation because of its overestimation of the low-level moisture. The systematic biases derived from the KF2 cumulus scheme are larger than those from the RAS scheme. The scale-selective bias correction (SSBC) method improves the simulation of the temporal and spatial characteristics of surface air temperature and precipitation and advances the circulation simulation capacity. The REA ensemble results show significant improvement in simulating temperature and precipitation distribution, which have much higher correlation coefficient and lower root mean square error. The REA result of selected experiments is better than that of nonselected experiments, indicating the necessity of choosing better ensemble samples for ensemble.
Data assimilation in integrated hydrological modeling using ensemble Kalman filtering
Rasmussen, Jørn; Madsen, H.; Jensen, Karsten Høgh
2015-01-01
Groundwater head and stream discharge is assimilated using the ensemble transform Kalman filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members...
Exploring and Listening to Chinese Classical Ensembles in General Music
Zhang, Wenzhuo
2017-01-01
Music diversity is valued in theory, but the extent to which it is efficiently presented in music class remains limited. Within this article, I aim to bridge this gap by introducing four genres of Chinese classical ensembles--Qin and Xiao duets, Jiang Nan bamboo and silk ensembles, Cantonese ensembles, and contemporary Chinese orchestras--into the…
Data assimilation in integrated hydrological modeling using ensemble Kalman filtering
Rasmussen, Jørn; Madsen, H.; Jensen, Karsten Høgh;
2015-01-01
Groundwater head and stream discharge is assimilated using the ensemble transform Kalman filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members...
Optical hyperpolarization of 13C nuclear spins in nanodiamond ensembles
Chen, Q.; Schwarz, I.; Jelezko, F.; Retzker, A.; Plenio, M. B.
2015-11-01
Dynamical nuclear polarization holds the key for orders of magnitude enhancements of nuclear magnetic resonance signals which, in turn, would enable a wide range of novel applications in biomedical sciences. However, current implementations of DNP require cryogenic temperatures and long times for achieving high polarization. Here we propose and analyze in detail protocols that can achieve rapid hyperpolarization of 13C nuclear spins in randomly oriented ensembles of nanodiamonds at room temperature. Our protocols exploit a combination of optical polarization of electron spins in nitrogen-vacancy centers and the transfer of this polarization to 13C nuclei by means of microwave control to overcome the severe challenges that are posed by the random orientation of the nanodiamonds and their nitrogen-vacancy centers. Specifically, these random orientations result in exceedingly large energy variations of the electron spin levels that render the polarization and coherent control of the nitrogen-vacancy center electron spins as well as the control of their coherent interaction with the surrounding 13C nuclear spins highly inefficient. We address these challenges by a combination of an off-resonant microwave double resonance scheme in conjunction with a realization of the integrated solid effect which, together with adiabatic rotations of external magnetic fields or rotations of nanodiamonds, leads to a protocol that achieves high levels of hyperpolarization of the entire nuclear-spin bath in a randomly oriented ensemble of nanodiamonds even at room temperature. This hyperpolarization together with the long nuclear-spin polarization lifetimes in nanodiamonds and the relatively high density of 13C nuclei has the potential to result in a major signal enhancement in 13C nuclear magnetic resonance imaging and suggests functionalized and hyperpolarized nanodiamonds as a unique probe for molecular imaging both in vitro and in vivo.
Ensemble Theory for Stealthy Hyperuniform Disordered Ground States
S. Torquato
2015-05-01
Full Text Available It has been shown numerically that systems of particles interacting with isotropic “stealthy” bounded long-ranged pair potentials (similar to Friedel oscillations have classical ground states that are (counterintuitively disordered, hyperuniform, and highly degenerate. Disordered hyperuniform systems have received attention recently because they are distinguishable exotic states of matter poised between a crystal and liquid that are endowed with novel thermodynamic and physical properties. The task of formulating an ensemble theory that yields analytical predictions for the structural characteristics and other properties of stealthy degenerate ground states in d-dimensional Euclidean space R^{d} is highly nontrivial because the dimensionality of the configuration space depends on the number density ρ and there is a multitude of ways of sampling the ground-state manifold, each with its own probability measure for finding a particular ground-state configuration. The purpose of this paper is to take some initial steps in this direction. Specifically, we derive general exact relations for thermodynamic properties (energy, pressure, and isothermal compressibility that apply to any ground-state ensemble as a function of ρ in any d, and we show how disordered degenerate ground states arise as part of the ground-state manifold. We also derive exact integral conditions that both the pair correlation function g_{2}(r and structure factor S(k must obey for any d. We then specialize our results to the canonical ensemble (in the zero-temperature limit by exploiting an ansatz that stealthy states behave remarkably like “pseudo”-equilibrium hard-sphere systems in Fourier space. Our theoretical predictions for g_{2}(r and S(k are in excellent agreement with computer simulations across the first three space dimensions. These results are used to obtain order metrics, local number variance, and nearest-neighbor functions across dimensions. We also derive
Ensemble Theory for Stealthy Hyperuniform Disordered Ground States
Torquato, S.; Zhang, G.; Stillinger, F. H.
2015-04-01
It has been shown numerically that systems of particles interacting with isotropic "stealthy" bounded long-ranged pair potentials (similar to Friedel oscillations) have classical ground states that are (counterintuitively) disordered, hyperuniform, and highly degenerate. Disordered hyperuniform systems have received attention recently because they are distinguishable exotic states of matter poised between a crystal and liquid that are endowed with novel thermodynamic and physical properties. The task of formulating an ensemble theory that yields analytical predictions for the structural characteristics and other properties of stealthy degenerate ground states in d -dimensional Euclidean space Rd is highly nontrivial because the dimensionality of the configuration space depends on the number density ρ and there is a multitude of ways of sampling the ground-state manifold, each with its own probability measure for finding a particular ground-state configuration. The purpose of this paper is to take some initial steps in this direction. Specifically, we derive general exact relations for thermodynamic properties (energy, pressure, and isothermal compressibility) that apply to any ground-state ensemble as a function of ρ in any d , and we show how disordered degenerate ground states arise as part of the ground-state manifold. We also derive exact integral conditions that both the pair correlation function g2(r ) and structure factor S (k ) must obey for any d . We then specialize our results to the canonical ensemble (in the zero-temperature limit) by exploiting an ansatz that stealthy states behave remarkably like "pseudo"-equilibrium hard-sphere systems in Fourier space. Our theoretical predictions for g2(r ) and S (k ) are in excellent agreement with computer simulations across the first three space dimensions. These results are used to obtain order metrics, local number variance, and nearest-neighbor functions across dimensions. We also derive accurate analytical
Heinrich, Georg; Gobiet, Andreas; Mendlik, Thomas
2014-01-01
This study aims at sharpening the existing knowledge of expected seasonal mean climate change and its uncertainty over Europe for the two key climate variables air temperature and precipitation amount until the mid-twentyfirst century. For this purpose, we assess and compensate the global climate model (GCM) sampling bias of the ENSEMBLES regional climate model (RCM) projections by combining them with the full set of the CMIP3 GCM ensemble. We first apply a cross-validation in order to assess the skill of different statistical data reconstruction methods in reproducing ensemble mean and standard deviation. We then select the most appropriate reconstruction method in order to fill the missing values of the ENSEMBLES simulation matrix and further extend the matrix by all available CMIP3 GCM simulations forced by the A1B emission scenario. Cross-validation identifies a randomized scaling approach as superior in reconstructing the ensemble spread. Errors in ensemble mean and standard deviation are mostly less than 0.1 K and 1.0 % for air temperature and precipitation amount, respectively. Reconstruction of the missing values reveals that expected seasonal mean climate change of the ENSEMBLES RCM projections is not significantly biased and that the associated uncertainty is not underestimated due to sampling of only a few driving GCMs. In contrast, the spread of the extended simulation matrix is partly significantly lower, sharpening our knowledge about future climate change over Europe by reducing uncertainty in some regions. Furthermore, this study gives substantial weight to recent climate change impact studies based on the ENSEMBLES projections, since it confirms the robustness of the climate forcing of these studies concerning GCM sampling.
The role of ensemble post-processing for modeling the ensemble tail
Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
The past decades the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecast and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied on real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change of predictability of extreme events associated with ensemble post-processing and other influencing factors including the finite ensemble size, lead time and model assumption and the use of different covariates (ensemble mean, maximum, spread...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using peak-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work is investigating this question in details. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol
Demonstrating the value of larger ensembles in forecasting physical systems
Reason L. Machete
2016-12-01
Full Text Available Ensemble simulation propagates a collection of initial states forward in time in a Monte Carlo fashion. Depending on the fidelity of the model and the properties of the initial ensemble, the goal of ensemble simulation can range from merely quantifying variations in the sensitivity of the model all the way to providing actionable probability forecasts of the future. Whatever the goal is, success depends on the properties of the ensemble, and there is a longstanding discussion in meteorology as to the size of initial condition ensemble most appropriate for Numerical Weather Prediction. In terms of resource allocation: how is one to divide finite computing resources between model complexity, ensemble size, data assimilation and other components of the forecast system. One wishes to avoid undersampling information available from the model's dynamics, yet one also wishes to use the highest fidelity model available. Arguably, a higher fidelity model can better exploit a larger ensemble; nevertheless it is often suggested that a relatively small ensemble, say ~16 members, is sufficient and that larger ensembles are not an effective investment of resources. This claim is shown to be dubious when the goal is probabilistic forecasting, even in settings where the forecast model is informative but imperfect. Probability forecasts for a ‘simple’ physical system are evaluated at different lead times; ensembles of up to 256 members are considered. The pure density estimation context (where ensemble members are drawn from the same underlying distribution as the target differs from the forecasting context, where one is given a high fidelity (but imperfect model. In the forecasting context, the information provided by additional members depends also on the fidelity of the model, the ensemble formation scheme (data assimilation, the ensemble interpretation and the nature of the observational noise. The effect of increasing the ensemble size is quantified by
A statistical analysis of three ensembles of crop model responses totemperature and CO2concentration
Makowski, D; Asseng, S; Ewert, F.
2015-01-01
levels, and can thus be used to calculate temperature and [CO2] thresholds leading to yield loss or yield gain, without re-running the original complex crop models. Our approach is illustrated with three yield datasets simulated by 19 maize models, 26 wheat models, and 13 rice models. Several statistical......Ensembles of process-based crop models are increasingly used to simulate crop growth for scenarios of temperature and/or precipitation changes corresponding to different projections of atmospheric CO2 concentrations. This approach generates large datasets with thousands of simulated crop yield data...... in the simulation protocols. Here we demonstrate that statistical models based on random-coefficient regressions are able to emulate ensembles of process-based crop models. An important advantage of the proposed statistical models is that they can interpolate between temperature levels and between CO2 concentration...
Forecasting European cold waves based on subsampling strategies of CMIP5 and Euro-CORDEX ensembles
Cordero-Llana, Laura; Braconnot, Pascale; Vautard, Robert; Vrac, Mathieu; Jezequel, Aglae
2016-04-01
Forecasting future extreme events under the present changing climate represents a difficult task. Currently there are a large number of ensembles of simulations for climate projections that take in account different models and scenarios. However, there is a need for reducing the size of the ensemble to make the interpretation of these simulations more manageable for impact studies or climate risk assessment. This can be achieved by developing subsampling strategies to identify a limited number of simulations that best represent the ensemble. In this study, cold waves are chosen to test different approaches for subsampling available simulations. The definition of cold waves depends on the criteria used, but they are generally defined using a minimum temperature threshold, the duration of the cold spell as well as their geographical extend. These climate indicators are not universal, highlighting the difficulty of directly comparing different studies. As part of the of the CLIPC European project, we use daily surface temperature data obtained from CMIP5 outputs as well as Euro-CORDEX simulations to predict future cold waves events in Europe. From these simulations a clustering method is applied to minimise the number of ensembles required. Furthermore, we analyse the different uncertainties that arise from the different model characteristics and definitions of climate indicators. Finally, we will test if the same subsampling strategy can be used for different climate indicators. This will facilitate the use of the subsampling results for a wide number of impact assessment studies.
Ensemble Forecasting of Major Solar Flares
Guerra, J A; Uritsky, V M
2015-01-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm by applying a decision threshold $P_{th}$ to the combined probabilities and maximizing the Heidke Skill Score (HSS). Using the probabilities and events time series from 13 recent solar active regions (2012 - 2014), we found that a linear combination of probabilities can improve both probabilistic and categorical forecasts. Combination weights vary with the applied threshold and none of the tested individual forecasting models seem to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, a performance-based weights ...
Quantum data compression of a qubit ensemble.
Rozema, Lee A; Mahler, Dylan H; Hayat, Alex; Turner, Peter S; Steinberg, Aephraim M
2014-10-17
Data compression is a ubiquitous aspect of modern information technology, and the advent of quantum information raises the question of what types of compression are feasible for quantum data, where it is especially relevant given the extreme difficulty involved in creating reliable quantum memories. We present a protocol in which an ensemble of quantum bits (qubits) can in principle be perfectly compressed into exponentially fewer qubits. We then experimentally implement our algorithm, compressing three photonic qubits into two. This protocol sheds light on the subtle differences between quantum and classical information. Furthermore, since data compression stores all of the available information about the quantum state in fewer physical qubits, it could allow for a vast reduction in the amount of quantum memory required to store a quantum ensemble, making even today's limited quantum memories far more powerful than previously recognized.
Rotationally invariant ensembles of integrable matrices.
Yuzbashyan, Emil A; Shastry, B Sriram; Scaramazza, Jasen A
2016-05-01
We construct ensembles of random integrable matrices with any prescribed number of nontrivial integrals and formulate integrable matrix theory (IMT)-a counterpart of random matrix theory (RMT) for quantum integrable models. A type-M family of integrable matrices consists of exactly N-M independent commuting N×N matrices linear in a real parameter. We first develop a rotationally invariant parametrization of such matrices, previously only constructed in a preferred basis. For example, an arbitrary choice of a vector and two commuting Hermitian matrices defines a type-1 family and vice versa. Higher types similarly involve a random vector and two matrices. The basis-independent formulation allows us to derive the joint probability density for integrable matrices, similar to the construction of Gaussian ensembles in the RMT.
Face Recognition using Optimal Representation Ensemble
Li, Hanxi; Gao, Yongsheng
2011-01-01
Recently, the face recognizers based on linear representations have been shown to deliver state-of-the-art performance. In real-world applications, however, face images usually suffer from expressions, disguises and random occlusions. The problematic facial parts undermine the validity of the linear-subspace assumption and thus the recognition performance deteriorates significantly. In this work, we address the problem in a learning-inference-mixed fashion. By observing that the linear-subspace assumption is more reliable on certain face patches rather than on the holistic face, some Bayesian Patch Representations (BPRs) are randomly generated and interpreted according to the Bayes' theory. We then train an ensemble model over the patch-representations by minimizing the empirical risk w.r.t the "leave-one-out margins". The obtained model is termed Optimal Representation Ensemble (ORE), since it guarantees the optimality from the perspective of Empirical Risk Minimization. To handle the unknown patterns in tes...
Dysonian dynamics of the Ginibre ensemble.
Burda, Zdzislaw; Grela, Jacek; Nowak, Maciej A; Tarnowski, Wojciech; Warchoł, Piotr
2014-09-05
We study the time evolution of Ginibre matrices whose elements undergo Brownian motion. The non-Hermitian character of the Ginibre ensemble binds the dynamics of eigenvalues to the evolution of eigenvectors in a nontrivial way, leading to a system of coupled nonlinear equations resembling those for turbulent systems. We formulate a mathematical framework allowing simultaneous description of the flow of eigenvalues and eigenvectors, and we unravel a hidden dynamics as a function of a new complex variable, which in the standard description is treated as a regulator only. We solve the evolution equations for large matrices and demonstrate that the nonanalytic behavior of the Green's functions is associated with a shock wave stemming from a Burgers-like equation describing correlations of eigenvectors. We conjecture that the hidden dynamics that we observe for the Ginibre ensemble is a general feature of non-Hermitian random matrix models and is relevant to related physical applications.
Rotationally invariant ensembles of integrable matrices
Yuzbashyan, Emil A.; Shastry, B. Sriram; Scaramazza, Jasen A.
2016-05-01
We construct ensembles of random integrable matrices with any prescribed number of nontrivial integrals and formulate integrable matrix theory (IMT)—a counterpart of random matrix theory (RMT) for quantum integrable models. A type-M family of integrable matrices consists of exactly N -M independent commuting N ×N matrices linear in a real parameter. We first develop a rotationally invariant parametrization of such matrices, previously only constructed in a preferred basis. For example, an arbitrary choice of a vector and two commuting Hermitian matrices defines a type-1 family and vice versa. Higher types similarly involve a random vector and two matrices. The basis-independent formulation allows us to derive the joint probability density for integrable matrices, similar to the construction of Gaussian ensembles in the RMT.
Eigenstate Gibbs Ensemble in Integrable Quantum Systems
Nandy, Sourav; Das, Arnab; Dhar, Abhishek
2016-01-01
The Eigenstate Thermalization Hypothesis implies that for a thermodynamically large system in one of its eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of {\\it relevant} conserved quantities. In a generic system, only the energy plays that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and hence the reduced density matrix requires specification of an infinite number of parameters (Generalized Gibbs Ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density, that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e. requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then s...
ABCD of Beta Ensembles and Topological Strings
Krefl, Daniel
2012-01-01
We study beta-ensembles with Bn, Cn, and Dn eigenvalue measure and their relation with refined topological strings. Our results generalize the familiar connections between local topological strings and matrix models leading to An measure, and illustrate that all those classical eigenvalue ensembles, and their topological string counterparts, are related one to another via various deformations and specializations, quantum shifts and discrete quotients. We review the solution of the Gaussian models via Macdonald identities, and interpret them as conifold theories. The interpolation between the various models is plainly apparent in this case. For general polynomial potential, we calculate the partition function in the multi-cut phase in a perturbative fashion, beyond tree-level in the large-N limit. The relation to refined topological string orientifolds on the corresponding local geometry is discussed along the way.
Support Vector Machine Ensemble Based on Genetic Algorithm
LI Ye; YIN Ru-po; CAI Yun-ze; XU Xiao-ming
2006-01-01
Support vector machines (SVMs) have been introduced as effective methods for solving classification problems.However, due to some limitations in practical applications,their generalization performance is sometimes far from the expected level. Therefore, it is meaningful to study SVM ensemble learning. In this paper, a novel genetic algorithm based ensemble learning method, namely Direct Genetic Ensemble (DGE), is proposed. DGE adopts the predictive accuracy of ensemble as the fitness function and searches a good ensemble from the ensemble space. In essence, DGE is also a selective ensemble learning method because the base classifiers of the ensemble are selected according to the solution of genetic algorithm. In comparison with other ensemble learning methods, DGE works on a higher level and is more direct. Different strategies of constructing diverse base classifiers can be utilized in DGE.Experimental results show that SVM ensembles constructed by DGE can achieve better performance than single SVMs,bagged and boosted SVM ensembles. In addition, some valuable conclusions are obtained.
Various multistage ensembles for prediction of heating energy consumption
Radisa Jovanovic
2015-04-01
Full Text Available Feedforward neural network models are created for prediction of daily heating energy consumption of a NTNU university campus Gloshaugen using actual measured data for training and testing. Improvement of prediction accuracy is proposed by using neural network ensemble. Previously trained feed-forward neural networks are first separated into clusters, using k-means algorithm, and then the best network of each cluster is chosen as member of an ensemble. Two conventional averaging methods for obtaining ensemble output are applied; simple and weighted. In order to achieve better prediction results, multistage ensemble is investigated. As second level, adaptive neuro-fuzzy inference system with various clustering and membership functions are used to aggregate the selected ensemble members. Feedforward neural network in second stage is also analyzed. It is shown that using ensemble of neural networks can predict heating energy consumption with better accuracy than the best trained single neural network, while the best results are achieved with multistage ensemble.
Spatially Coupled Ensembles Universally Achieve Capacity under Belief Propagation
Kudekar, Shrinivas; Urbanke, Ruediger
2012-01-01
We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a-priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble which fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in that ensemble have that property. The quantifier universal refers to the single ensemble/code which is good for all channels but we assume that the channel is known at the receiver. The key technical result is a proof that under b...
Analysis and optimization of weighted ensemble sampling
Aristoff, David
2016-01-01
We give a mathematical framework for weighted ensemble (WE) sampling, a binning and resampling technique for efficiently computing probabilities in molecular dynamics. We prove that WE sampling is unbiased in a very general setting that includes adaptive binning. We show that when WE is used for stationary calculations in tandem with a Markov state model (MSM), the MSM can be used to optimize the allocation of replicas in the bins.
Quantum Data Compression of a Qubit Ensemble
Rozema, Lee A.; Mahler, Dylan H.; Hayat, Alex; Turner, Peter S.; Steinberg, Aephraim M.
2014-01-01
Data compression is a ubiquitous aspect of modern information technology, and the advent of quantum information raises the question of what types of compression are feasible for quantum data, where it is especially relevant given the extreme difficulty involved in creating reliable quantum memories. We present a protocol in which an ensemble of quantum bits (qubits) can in principle be perfectly compressed into exponentially fewer qubits. We then experimentally implement our algorithm, compre...
Multiscale ensemble filtering for reservoir engineering applications
Lawniczak, W.; Hanea, R.G.; Heemink, A.; Mclaughlin, D.
2009-01-01
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte-Carlo a...
Statistical Ensemble Theory of Gompertz Growth Model
Takuya Yamano
2009-11-01
Full Text Available An ensemble formulation for the Gompertz growth function within the framework of statistical mechanics is presented, where the two growth parameters are assumed to be statistically distributed. The growth can be viewed as a self-referential process, which enables us to use the Bose-Einstein statistics picture. The analytical entropy expression pertain to the law can be obtained in terms of the growth velocity distribution as well as the Gompertz function itself for the whole process.
Seismology of an Ensemble of ZZ Ceti Stars
Clemens, J C; Dunlap, Bart H; Hermes, J J
2016-01-01
We combine all the reliably-measured eigenperiods for hot, short-period ZZ Ceti stars onto one diagram and show that it has the features expected from evolutionary and pulsation theory. To make a more detailed comparison with theory we concentrate on a subset of 16 stars for which rotational splitting or other evidence gives clues to the spherical harmonic index (l) of the modes. The suspected l=1 periods in this subset of stars form a pattern of consecutive radial overtones that allow us to conduct ensemble seismology using published theoretical model grids. We find that the best-matching models have hydrogen layer masses most consistent with the canonically thick limit calculated from nuclear burning. We also find that the evolutionary models with masses and temperatures from spectroscopic fits cannot correctly reproduce the periods of the k=1 to 4 mode groups in these stars, and speculate that the mass of the helium layer in the models is too large.
Interplanetary magnetic field ensemble at 1 AU
Matthaeus, W.H.; Goldstein, M.L.; King, J.H.
1985-04-01
A method for calculation ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of this data, ranging from 15 hours to 15.6 days comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 x 10 to the 11th cm. The average value of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is the local mean field direction. The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
Gradient Flow Analysis on MILC HISQ Ensembles
Brown, Nathan [Washington U., St. Louis; Bazavov, Alexei [Brookhaven; Bernard, Claude [Washington U., St. Louis; DeTar, Carleton [Utah U.; Foley, Justin [Utah U.; Gottlieb, Steven [Indiana U.; Heller, Urs M. [APS, New York; Hetrick, J. E. [U. Pacific, Stockton; Komijani, Javad [Washington U., St. Louis; Laiho, Jack [Syracuse U.; Levkova, Ludmila [Utah U.; Oktay, M. B. [Utah U.; Sugar, Robert [UC, Santa Barbara; Toussaint, Doug [Arizona U.; Van de Water, Ruth S. [Fermilab; Zhou, Ran [Fermilab
2014-11-14
We report on a preliminary scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales $\\sqrt{t_0}/a$ and $w_0/a$ are computed using Symanzik flow and the cloverleaf definition of $\\langle E \\rangle$ on each ensemble. Then both scales and the meson masses $aM_\\pi$ and $aM_K$ are adjusted for mistunings in the charm mass. Using a combination of continuum chiral perturbation theory and a Taylor series ansatz in the lattice spacing, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. Our preliminary results are $\\sqrt{t_0} = 0.1422(7)$fm and $w_0 = 0.1732(10)$fm. We also find the continuum mass-dependence of $w_0$.
Cavity Cooling for Ensemble Spin Systems
Cory, David
2015-03-01
Recently there has been a surge of interest in exploring thermodynamics in quantum systems where dissipative effects can be exploited to perform useful work. One such example is quantum state engineering where a quantum state of high purity may be prepared by dissipative coupling through a cold thermal bath. This has been used to great effect in many quantum systems where cavity cooling has been used to cool mechanical modes to their quantum ground state through coupling to the resolved sidebands of a high-Q resonator. In this talk we explore how these techniques may be applied to an ensemble spin system. This is an attractive process as it potentially allows for parallel remove of entropy from a large number of quantum systems, enabling an ensemble to achieve a polarization greater than thermal equilibrium, and potentially on a time scale much shorter than thermal relaxation processes. This is achieved by the coupled angular momentum subspaces of the ensemble behaving as larger effective spins, overcoming the weak individual coupling of individual spins to a microwave resonator. Cavity cooling is shown to cool each of these subspaces to their respective ground state, however an additional algorithmic step or dissipative process is required to couple between these subspaces and enable cooling to the full ground state of the joint system.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-05-08
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
Gradient Flow Analysis on MILC HISQ Ensembles
Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Komijani, J; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2014-01-01
We report on a preliminary scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales $\\sqrt{t_0}/a$ and $w_0/a$ are computed using Symanzik flow and the cloverleaf definition of $\\langle E \\rangle$ on each ensemble. Then both scales and the meson masses $aM_\\pi$ and $aM_K$ are adjusted for mistunings in the charm mass. Using a combination of continuum chiral perturbation theory and a Taylor series ansatz in the lattice spacing, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. Our preliminary results are $\\sqrt{t_0} = 0.1422(7)$fm and $w_0 = 0.1732(10)$fm. We also find the continuum mass-dependence of $w_0$.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-12-03
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
Ensemble transform sensitivity method for adaptive observations
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
Multivariate localization methods for ensemble Kalman filtering
S. Roh
2015-05-01
Full Text Available In ensemble Kalman filtering (EnKF, the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
On large deviations for ensembles of distributions
Khrychev, D A [Moscow State Institute of Radio-Engineering, Electronics and Automation (Technical University), Moscow (Russian Federation)
2013-11-30
The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each ε>0 the nonempty set P{sub ε} of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set (P{sub ε}, ε>0), hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
Dynamic Analogue Initialization for Ensemble Forecasting
LI Shan; RONG Xingyao; LIU Yun; LIU Zhengyu; Klaus FRAEDRICH
2013-01-01
This paper introduces a new approach for the initialization of ensemble numerical forecasting:Dynamic Analogue Initialization (DAI).DAI assumes that the best model state trajectories for the past provide the initial conditions for the best forecasts in the future.As such,DAI performs the ensemble forecast using the best analogues from a full size ensemble.As a pilot study,the Lorenz63 and Lorenz96 models were used to test DAI's effectiveness independently.Results showed that DAI can improve the forecast significantly.Especially in lower-dimensional systems,DAI can reduce the forecast RMSE by ～50％ compared to the Monte Carlo forecast (MC).This improvement is because DAI is able to recognize the direction of the analysis error through the embedding process and therefore selects those good trajectories with reduced initial error.Meanwhile,a potential improvement of DAI is also proposed,and that is to find the optimal range of embedding time based on the error's growing speed.
Extended ensemble theory, spontaneous symmetry breaking, and phase transitions
Xiao, Ming-wen
2006-09-01
In this paper, as a personal review, we suppose a possible extension of Gibbs ensemble theory so that it can provide a reasonable description of phase transitions and spontaneous symmetry breaking. The extension is founded on three hypotheses, and can be regarded as a microscopic edition of the Landau phenomenological theory of phase transitions. Within its framework, the stable state of a system is determined by the evolution of order parameter with temperature according to such a principle that the entropy of the system will reach its minimum in this state. The evolution of order parameter can cause a change in representation of the system Hamiltonian; different phases will realize different representations, respectively; a phase transition amounts to a representation transformation. Physically, it turns out that phase transitions originate from the automatic interference among matter waves as the temperature is cooled down. Typical quantum many-body systems are studied with this extended ensemble theory. We regain the Bardeen Cooper Schrieffer solution for the weak-coupling superconductivity, and prove that it is stable. We find that negative-temperature and laser phases arise from the same mechanism as phase transitions, and that they are unstable. For the ideal Bose gas, we demonstrate that it will produce Bose Einstein condensation (BEC) in the thermodynamic limit, which confirms exactly Einstein's deep physical insight. In contrast, there is no BEC either within the phonon gas in a black body or within the ideal photon gas in a solid body. We prove that it is not admissible to quantize the Dirac field by using Bose Einstein statistics. We show that a structural phase transition belongs physically to the BEC happening in configuration space, and that a double-well anharmonic system will undergo a structural phase transition at a finite temperature. For the O(N)-symmetric vector model, we demonstrate that it will yield spontaneous symmetry breaking and produce
EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data
Shu, Qingya; Guo, Hanqi; Che, Limei; Yuan, Xiaoru; Liu, Junfeng; Liang, Jie
2016-04-19
We present a novel visualization framework—EnsembleGraph— for analyzing ensemble simulation data, in order to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple-linked views is provided, which enables users to explore, locate, and compare regions that have similar behaviors between and then users can investigate and analyze the selected regions in detail. The driving application of this paper is the studies on regional emission influences over tropospheric ozone, which is based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedbacks from domain experts and two case studies prove efficiency of our method.
Arctic sea ice area changes in CMIP3 and CMIP5 climate models’ ensembles
V. A. Semenov
2017-01-01
Full Text Available The shrinking Arctic sea ice cover observed during the last decades is probably the clearest manifestation of ongoing climate change. While climate models in general reproduce the sea ice retreat in the Arctic during the 20th century and simulate further sea ice area loss during the 21st century in response to anthropogenic forcing, the models suffer from large biases and the results exhibit considerable spread. Here, we compare results from the two last generations of climate models, CMIP3 and CMIP5, with respect to total and regional Arctic sea ice change. Different characteristics of sea ice area (SIA in March and September have been analysed for the Entire Arctic, Central Arctic and Barents Sea. Further, the sensitivity of SIA to changes in Northern Hemisphere (NH temperature is investigated and dynamical links between SIA and some atmospheric variability modes are assessed.CMIP3 (SRES A1B and CMIP5 (RCP8.5 models not only simulate a coherent decline of the Arctic SIA but also depict consistent changes in the SIA seasonal cycle. The spatial patterns of SIC variability improve in CMIP5 ensemble, most noticeably in summer when compared to HadISST1 data. A better simulation of summer SIA in the Entire Arctic by CMIP5 models is accompanied by a slightly increased bias for winter season in comparison to CMIP3 ensemble. SIA in the Barents Sea is strongly overestimated by the majority of CMIP3 and CMIP5 models, and projected SIA changes are characterized by a high uncertainty. Both CMIP ensembles depict a significant link between the SIA and NH temperature changes indicating that a part of inter-ensemble SIA spread comes from different temperature sensitivity to anthropogenic forcing. The results suggest that, in general, a sensitivity of SIA to external forcing is enhanced in CMIP5 models. Arctic SIA interannual variability in the end of the 20th century is on average well simulated by both ensembles. To the end of the 21st century, September
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods improved generally as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble member. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
Oh, Seok-Geun; Suh, Myoung-Seok
2016-03-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods improved generally as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble member. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
Solvang Johansen, Stian; Steinsland, Ingelin; Engeland, Kolbjørn
2016-04-01
Running hydrological models with precipitation and temperature ensemble forcing to generate ensembles of streamflow is a commonly used method in operational hydrology. Evaluations of streamflow ensembles have however revealed that the ensembles are biased with respect to both mean and spread. Thus postprocessing of the ensembles is needed in order to improve the forecast skill. The aims of this study is (i) to to evaluate how postprocessing of streamflow ensembles works for Norwegian catchments within different hydrological regimes and to (ii) demonstrate how post processed streamflow ensembles are used operationally by a hydropower producer. These aims were achieved by postprocessing forecasted daily discharge for 10 lead-times for 20 catchments in Norway by using EPS forcing from ECMWF applied the semi-distributed HBV-model dividing each catchment into 10 elevation zones. Statkraft Energi uses forecasts from these catchments for scheduling hydropower production. The catchments represent different hydrological regimes. Some catchments have stable winter condition with winter low flow and a major flood event during spring or early summer caused by snow melting. Others has a more mixed snow-rain regime, often with a secondary flood season during autumn, and in the coastal areas, the stream flow is dominated by rain, and the main flood season is autumn and winter. For post processing, a Bayesian model averaging model (BMA) close to (Kleiber et al 2011) is used. The model creates a predictive PDF that is a weighted average of PDFs centered on the individual bias corrected forecasts. The weights are here equal since all ensemble members come from the same model, and thus have the same probability. For modeling streamflow, the gamma distribution is chosen as a predictive PDF. The bias correction parameters and the PDF parameters are estimated using a 30-day sliding window training period. Preliminary results show that the improvement varies between catchments depending
Stanzel, Philipp; Kling, Harald
2017-04-01
EURO-CORDEX Regional Climate Model (RCM) data are available as result of the latest initiative of the climate modelling community to provide ever improved simulations of past and future climate in Europe. The spatial resolution of the climate models increased from 25 x 25 km in the previous coordinated initiative, ENSEMBLES, to 12 x 12 km in the CORDEX EUR-11 simulations. This higher spatial resolution might yield improved representation of the historic climate, especially in complex mountainous terrain, improving applicability in impact studies. CORDEX scenario simulations are based on Representative Concentration Pathways, while ENSEMBLES applied the SRES greenhouse gas emission scenarios. The new emission scenarios might lead to different projections of future climate. In this contribution we explore these two dimensions of development from ENSEMBLES to CORDEX - representation of the past and projections for the future - in the context of a hydrological climate change impact study for the Danube River. We replicated previous hydrological simulations that used ENSEMBLES data of 21 RCM simulations under SRES A1B emission scenario as meteorological input data (Kling et al. 2012), and now applied CORDEX EUR-11 data of 16 RCM simulations under RCP4.5 and RCP8.5 emission scenarios. The climate variables precipitation and temperature were used to drive a monthly hydrological model of the upper Danube basin upstream of Vienna (100,000 km2). RCM data was bias corrected and downscaled to the scale of hydrological model units. Results with CORDEX data were compared with results with ENSEMBLES data, analysing both the driving meteorological input and the resulting discharge projections. Results with CORDEX data show no general improvement in the accuracy of representing historic climatic features, despite the increase in spatial model resolution. The tendency of ENSEMBLES scenario projections of increasing precipitation in winter and decreasing precipitation in summer is
Ensemble data assimilation with an adjusted forecast spread
Sabrina Rainwater
2013-04-01
Full Text Available Ensemble data assimilation typically evolves an ensemble of model states whose spread is intended to represent the algorithm's uncertainty about the state of the physical system that produces the data. The analysis phase treats the forecast ensemble as a random sample from a background distribution, and it transforms the ensemble according to the background and observation error statistics to provide an appropriate sample for the next forecast phase. We find that in the presence of model nonlinearity and model error, it can be fruitful to rescale the ensemble spread prior to the forecast and then reverse this rescaling after the forecast. We call this approach forecast spread adjustment, which we discuss and test in this article using an ensemble Kalman filter and a 2005 model due to Lorenz. We argue that forecast spread adjustment provides a tunable parameter, that is, complementary to covariance inflation, which cumulatively increases ensemble spread to compensate for underestimation of uncertainty. We also show that as the adjustment parameter approaches zero, the filter approaches the extended Kalman filter if the ensemble size is sufficiently large. We find that varying the adjustment parameter can significantly reduce analysis and forecast errors in some cases. We evaluate how the improvement provided by forecast spread adjustment depends on ensemble size, observation error and model error. Our results indicate that the technique is most effective for small ensembles, small observation error and large model error, though the effectiveness depends significantly on the nature of the model error.
De praeceptis ferendis: good practice in multi-model ensembles
I. Kioutsioukis
2014-06-01
Full Text Available Ensembles of air quality models have been formally and empirically shown to outperform single models in many cases. Evidence suggests that ensemble error is reduced when the members form a diverse and accurate ensemble. Diversity and accuracy are hence two factors that should be taken care of while designing ensembles in order for them to provide better predictions. There exists a trade-off between diversity and accuracy for which one cannot be gained without expenses of the other. Theoretical aspects like the bias-variance-covariance decomposition and the accuracy-diversity decomposition are linked together and support the importance of creating ensemble that incorporates both the elements. Hence, the common practice of unconditional averaging of models without prior manipulation limits the advantages of ensemble averaging. We demonstrate the importance of ensemble accuracy and diversity through an inter-comparison of ensemble products for which a sound mathematical framework exists, and provide specific recommendations for model selection and weighting for multi model ensembles. To this end we have devised statistical tools that can be used for diagnostic evaluation of ensemble modelling products, complementing existing operational methods.
Spin storage in quantum dot ensembles and single quantum dots
Heiss, Dominik
2009-10-15
This thesis deals with the investigation of spin relaxation of electrons and holes in small ensembles of self-assembled quantum dots using optical techniques. Furthermore, a method to detect the spin orientation in a single quantum dot was developed in the framework of this thesis. A spin storage device was used to optically generate oriented electron spins in small frequency selected quantum dot ensembles using circularly polarized optical excitation. The spin orientation can be determined by the polarization of the time delayed electroluminescence signal generated by the device after a continuously variable storage time. The degree of spin polarized initialization was found to be limited to 0.6 at high magnetic fields, where anisotropic effects are compensated. The spin relaxation was directly measured as a function of magnetic field, lattice temperature and s-shell transition energy of the quantum dot by varying the spin storage time up to 30 ms. Very long spin lifetimes are obtained with a lower limit of T{sub 1}=20 ms at B=4 T and T=1 K. A strong magnetic field dependence T{sub 1}{proportional_to}B{sup -5} has been observed for low temperatures of T=1 K which weakens as the temperature is increased. In addition, the temperature dependence has been determined with T{sub 1}{proportional_to}T{sup -1}. The characteristic dependencies on magnetic field and temperature lead to the identification of the spin relaxation mechanism, which is governed by spin-orbit coupling and mediated by single phonon scattering. This finding is qualitatively supported by the energy dependent measurements. The investigations were extended to a modified device design that enabled studying the spin relaxation dynamics of heavy holes in self-assembled quantum dots. The measurements show a polarization memory effect for holes with up to 0.1 degree of polarization. Furthermore, investigations of the time dynamics of the hole spin relaxation reveal surprisingly long lifetimes T{sub 1}{sup h
Sriver, Ryan L.; Forest, Chris E.; Keller, Klaus
2015-07-01
The uncertainties surrounding the initial conditions in Earth system models can considerably influence interpretations about climate trends and variability. Here we present results from a new climate change ensemble experiment using the Community Earth System Model (CESM) to analyze the effect of internal variability on regional climate variables that are relevant for decision making. Each simulation is initialized from a unique and dynamically consistent model state sampled from a ~10,000 year fully coupled equilibrium simulation, which captures the internal unforced variability of the coupled Earth system. We find that internal variability has a sizeable contribution to the modeled ranges of temperature and precipitation. The effects increase for more localized regions. The ensemble exhibits skill in simulating key regional climate processes relevant to decision makers, such as seasonal temperature variability and extremes. The presented ensemble framework and results can provide useful resources for uncertainty quantification, integrated assessment, and climate risk management.
Díaz-Méndez, Rogelio; Mezzacapo, Fabio; Lechner, Wolfgang; Cinti, Fabio; Babaev, Egor; Pupillo, Guido
2017-02-01
At low enough temperatures and high densities, the equilibrium configuration of an ensemble of ultrasoft particles is a self-assembled, ordered, cluster crystal. In the present Letter, we explore the out-of-equilibrium dynamics for a two-dimensional realization, which is relevant to superconducting materials with multiscale intervortex forces. We find that, for small temperatures following a quench, the suppression of the thermally activated particle hopping hinders the ordering. This results in a glass transition for a monodispersed ensemble, for which we derive a microscopic explanation in terms of an "effective polydispersity" induced by multiscale interactions. This demonstrates that a vortex glass can form in clean systems of thin films of "type-1.5" superconductors. An additional setup to study this physics can be layered superconducting systems, where the shape of the effective vortex-vortex interactions can be engineered.
ZHENG Fei; ZHU Jiang
2010-01-01
The initial ensemble perturbations for an ensemble data assimilation system are expected to reasonably sample model uncertainty at the time of analysis to further reduce analysis uncertainty.Therefore,the careful choice of an initial ensemble perturbation method that dynamically cycles ensemble perturbations is required for the optimal performance of the system.Based on the multivariate empirical onhogonal function(MEOF)method,a new ensemble initialization scheme is developed to generate balanced initial perturbations for the ensemble Kalman filter(EnKF)data assimilation,with a reasonable consideration of the physical relationships between different model variables.The scheme is applied in assimilation experiments with a global spectral atmospheric model and with real observations.The proposed perturbation method is compared to the commonly used method of spatially-correlated random perturbations.The comparisons show that the model uncertainties prior to the first analysis time,which are forecasted from the balanced ensemble initial fields,maintain a much more reasonable spread and a more accurate forecast error covariance than those from the randomly perturbed initial fields.The analysis results are further improved by the balanced ensemble initialization scheme due to more accurate background information.Also,a 20-day continuous assimilation experiment shows that the ensemble spreads for each model variable are still retained in reasonable ranges without considering additional perturbations or inflations during the assimilation cycles,while the ensemble spreads from the randomly perturbed initialization scheme decrease and collapse rapidly.
J.-P. Chaboureau
2012-08-01
Full Text Available Ensemble forecasts at kilometre scale of two severe storms over the Mediterranean region are verified against satellite observations. In complement to assessing the forecasts against ground-based measurements, brightness temperature (BT images are computed from forecast fields and directly compared to BTs observed from satellite. The so-called model-to-satellite approach is very effective in identifying systematic errors in the prediction of cloud cover for BTs in the infrared window and in verifying the forecasted convective activity with BTs in the microwave range. This approach is combined with the calculation of meteorological scores for an objective evaluation of ensemble forecasts. The application of the approach is shown in the context of two Mediterranean case studies, a tropical-like storm and a heavy precipitating event. Assessment of cloud cover and convective activity using satellite observations in the infrared (10.8 μm and microwave regions (183–191 GHz provides results consistent with other traditional methods using rainfall measurements. In addition, for the tropical-like storm, differences among forecasts occur much earlier in terms of cloud cover and deep convective activity than they do in terms of deepening and track. Further, the underdispersion of the ensemble forecasts of the two high-impact weather events is easily identified with satellite diagnostics. This suggests that such an approach could be a useful method for verifying ensemble forecasts, particularly in data-sparse regions.
Switching Between the NVT and NpT Ensembles Using the Reweighting and Reconstruction Scheme
Kadoura, Ahmad Salim
2015-06-01
Recently, we have developed several techniques in order to accelerate Monte Carlo (MC) molecular simulations. For that purpose, two strategies were followed. In the first, new algorithms were proposed as a set of early rejection schemes performing faster than the conventional algorithm while preserving the accuracy of the method. On the other hand, a reweighting and reconstruction scheme was introduced that is capable of retrieving primary quantities and second derivative properties at several thermodynamic conditions from a single MC Markov chain. The latter scheme, was first developed to extrapolate quantities in NV T ensemble for struc- tureless Lennard-Jones particles. However, it is evident that for most real life applications the NpT ensemble is more convenient, as pressure and temperature are usually known. Therefore, in this paper we present an extension to the reweighting and reconstruction method to solve NpT problems utilizing the same Markov chains generated by the NV T ensemble simulations. Eventually, the new approach allows elegant switching between the two ensembles for several quantities at a wide range of neighboring thermodynamic conditions.
Quantum entanglement at ambient conditions in a macroscopic solid-state spin ensemble.
Klimov, Paul V; Falk, Abram L; Christle, David J; Dobrovitski, Viatcheslav V; Awschalom, David D
2015-11-01
Entanglement is a key resource for quantum computers, quantum-communication networks, and high-precision sensors. Macroscopic spin ensembles have been historically important in the development of quantum algorithms for these prospective technologies and remain strong candidates for implementing them today. This strength derives from their long-lived quantum coherence, strong signal, and ability to couple collectively to external degrees of freedom. Nonetheless, preparing ensembles of genuinely entangled spin states has required high magnetic fields and cryogenic temperatures or photochemical reactions. We demonstrate that entanglement can be realized in solid-state spin ensembles at ambient conditions. We use hybrid registers comprising of electron-nuclear spin pairs that are localized at color-center defects in a commercial SiC wafer. We optically initialize 10(3) identical registers in a 40-μm(3) volume (with [Formula: see text] fidelity) and deterministically prepare them into the maximally entangled Bell states (with 0.88 ± 0.07 fidelity). To verify entanglement, we develop a register-specific quantum-state tomography protocol. The entanglement of a macroscopic solid-state spin ensemble at ambient conditions represents an important step toward practical quantum technology.
Superparamagnetic blocking of an ensemble of magnetite nanoparticles upon interparticle interactions
Balaev, D. A.; Semenov, S. V.; Dubrovskiy, A. A.; Yakushkin, S. S.; Kirillov, V. L.; Martyanov, O. N.
2017-10-01
We report on the effect of interparticle magnetic interactions in an ensemble of superparamagnetic magnetite particles with an average size of 8.4 nm dispersed in the diamagnetic matrix on the blocking of this ensemble in external magnetic field. The two limit cases are investigated: the case of strongly interacting particles, when the value of magnetic dipole-dipole interaction between particles is comparable with the energy of other interactions in the ensemble (the interparticle distance is similar to the nanoparticle diameter) and the case of almost noninteracting particles distant from each other by about ten particle diameters. We demonstrate that the experimental dependence of the blocking temperature on external field is described well within the model [1], in which the density of particles in a nonmagnetic medium is taken into account and the correlation value depends on external magnetic field. The model for describing the magnetic properties of a disperse nanoparticle ensemble is proposed, which makes corrections related to the particle size and mean dipole-dipole interaction energy for the anisotropy constant. The surface magnetic anisotropy of Fe3O4 particles and parameters of the interparticle coupling are estimated.
Seasonal hydrological ensemble forecasts over Europe
Arnal, Louise; Wetterhall, Fredrik; Pappenberger, Florian
2015-04-01
Seasonal forecasts have an important socio-economic value in hydro-meteorological forecasting. The applications are for example hydropower management, spring flood prediction and water resources management. The latter includes prediction of low flows, primordial for navigation, water quality assessment, droughts and agricultural water needs. Traditionally, seasonal hydrological forecasts are done using the observed discharge from previous years, so called Ensemble Streamflow Prediction (ESP). With the recent increasing development of seasonal meteorological forecasts, the incentive for developing and improving seasonal hydrological forecasts is great. In this study, a seasonal hydrological forecast, driven by the ECMWF's System 4 (SEA), was compared with an ESP of modelled discharge using observations. The hydrological model used for both forecasts was the LISFLOOD model, run over a European domain with a spatial resolution of 5 km. The forecasts were produced from 1990 until the present time, with a daily time step. They were issued once a month with a lead time of seven months. The SEA forecasts are constituted of 15 ensemble members, extended to 51 members every three months. The ESP forecasts comprise 20 ensembles and served as a benchmark for this comparative study. The forecast systems were compared using a diverse set of verification metrics, such as continuous ranked probability scores, ROC curves, anomaly correlation coefficients and Nash-Sutcliffe efficiency coefficients. These metrics were computed over several time-scales, ranging from a weekly to a six-months basis, for each season. The evaluation enabled the investigation of several aspects of seasonal forecasting, such as limits of predictability, timing of high and low flows, as well as exceedance of percentiles. The analysis aimed at exploring the spatial distribution and timely evolution of the limits of predictability.
Seasonal hydrological ensemble forecasts over Europe
Arnal, Louise; Wetterhall, Fredrik; Stephens, Elisabeth; Cloke, Hannah; Pappenberger, Florian
2016-04-01
This study investigates the limits of predictability in dynamical seasonal discharge forecasting, in both space and time, over Europe. Seasonal forecasts have an important socioeconomic value. Applications are numerous and cover hydropower management, spring flood prediction, low flow prediction for navigation and agricultural water demands. Additionally, the constant increase in NWP skill for longer lead times and the predicted increase in the intensity and frequency of hydro-meteorological extremes, have amplified the incentive to promote and further improve hydrological forecasts on sub-seasonal to seasonal timescales. In this study, seasonal hydrological forecasts (SEA), driven by the ECMWF's System 4 in hindcast mode, were analysed against an Ensemble Streamflow Prediction (ESP) benchmark. The ESP was forced with an ensemble of resampled historical meteorological observations and started with perfect initial conditions. Both forecasts were produced by the LISFLOOD model, run on the pan-European scale with a spatial resolution of 5 by 5 km. The forecasts were issued monthly on a daily time step, from 1990 until the current time, up to a lead time of 7 months. The seasonal discharge forecasts were analysed against the ESP on a catchment scale in terms of their accuracy, skill and sharpness, using a diverse set of verification metrics (e.g. KGE, CRPSS and ROC). Additionally, a reverse-ESP was constructed by forcing the LISFLOOD model with a single perfect meteorological set of observations and initiated from an ensemble of resampled historical initial conditions. The comparison of the ESP with the reverse-ESP approach enabled the identification of the respective contribution of meteorological forcings and hydrologic initial conditions errors to seasonal discharge forecasting uncertainties in Europe. These results could help pinpoint target elements of the forecasting chain which, after being improved, could lead to substantial increase in discharge predictability
A Framework for Non-Equilibrium Statistical Ensemble Theory
BI Qiao; HE Zu-Tan; LIU Jie
2011-01-01
Since Gibbs synthesized a general equilibrium statistical ensemble theory, many theorists have attempted to generalized the Gibbsian theory to non-equilibrium phenomena domain, however the status of the theory of nonequilibrium phenomena can not be said as firm as well established as the Gibbsian ensemble theory. In this work, we present a framework for the non-equilibrium statistical ensemble formalism based on a subdynamic kinetic equation (SKE) rooted from the Brussels-Austin school and followed by some up-to-date works. The constructed key is to use a similarity transformation between Gibbsian ensembles formalism based on Liouville equation and the subdynamic ensemble formalism based on the SKE. Using this formalism, we study the spin-Boson system, as cases of weak coupling or strongly coupling, and obtain the reduced density operators for the Canonical ensembles easily.
Cluster ensembles, quantization and the dilogarithm
Fock, Vladimir; Goncharov, Alexander B.
2009-01-01
, possibly degenerate, and the space has a Poisson structure. The map is compatible with these structures. The dilogarithm together with its motivic and quantum avatars plays a central role in the cluster ensemble structure. We define a non-commutative -deformation of the -space. When is a root of unity...... group . It is an algebraic-geometric avatar of higher Teichmüller theory on related to . We suggest that there exists a duality between the and spaces. In particular, we conjecture that the tropical points of one of the spaces parametrise a basis in the space of functions on the Langlands dual space. We...
Accurate Atom Counting in Mesoscopic Ensembles
Hume, D B; Joos, M; Muessel, W; Strobel, H; Oberthaler, M K
2013-01-01
Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.
Accurate Atom Counting in Mesoscopic Ensembles
Hume, D. B.; Stroescu, I.; Joos, M.; Muessel, W.; Strobel, H.; Oberthaler, M. K.
2013-12-01
Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.
Supervised Ensemble Classification of Kepler Variable Stars
Bass, Gideon
2016-01-01
Variable star analysis and classification is an important task in the understanding of stellar features and processes. While historically classifications have been done manually by highly skilled experts, the recent and rapid expansion in the quantity and quality of data has demanded new techniques, most notably automatic classification through supervised machine learning. We present an expansion of existing work on the field by analyzing variable stars in the {\\em Kepler} field using an ensemble approach, combining multiple characterization and classification techniques to produce improved classification rates. Classifications for each of the roughly 150,000 stars observed by {\\em Kepler} are produced separating the stars into one of 14 variable star classes.
Modeling Coordination Problems in a Music Ensemble
Frimodt-Møller, Søren R.
2008-01-01
This paper considers in general terms, how musicians are able to coordinate through rational choices in a situation of (temporary) doubt in an ensemble performance. A fictitious example involving a 5-bar development in an unknown piece of music is analyzed in terms of epistemic logic, more...... specifically a multi-agent system, where it is shown that perfect coordination can only be certain to take place if the musicians have common knowledge of certain rules of the composition. We subsequently argue, however, that the musicians need not agree on the central features of the piece of music in order...
Asymptotic expansions for the Gaussian unitary ensemble
Haagerup, Uffe; Thorbjørnsen, Steen
2012-01-01
Let g : R ¿ C be a C8-function with all derivatives bounded and let trn denote the normalized trace on the n × n matrices. In Ref. 3 Ercolani and McLaughlin established asymptotic expansions of the mean value ¿{trn(g(Xn))} for a rather general class of random matrices Xn, including the Gaussian...... Unitary Ensemble (GUE). Using an analytical approach, we provide in the present paper an alternative proof of this asymptotic expansion in the GUE case. Specifically we derive for a random matrix Xn that where k is an arbitrary positive integer. Considered as mappings of g, we determine the coefficients...
Accurate atom counting in mesoscopic ensembles.
Hume, D B; Stroescu, I; Joos, M; Muessel, W; Strobel, H; Oberthaler, M K
2013-12-20
Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.
Climate change hotspots in the CMIP5 global climate model ensemble
Diffenbaugh, Noah S; Giorgi, Filippo
2012-01-01
We use a statistical metric of multi-dimensional climate change to quantify the emergence of global climate change hotspots in the CMIP5 climate model ensemble. Our hotspot metric extends previous work through the inclusion of extreme seasonal temperature and precipitation, which exert critical influence on climate change impacts. The results identify areas of the Amazon, the Sahel and tropical West Africa, Indonesia, and the Tibetan Plateau as persistent regional climate change hotspots thro...
Spin Squeezing of Atomic Ensembles via Nuclear-Electronic Spin Entanglement
Fernholz, Thomas; Krauter, Hanna; Jensen, K.
2008-01-01
We demonstrate spin squeezing in a room temperature ensemble of ≈1012 cesium atoms using their internal structure, where the necessary entanglement is created between nuclear and electronic spins of each individual atom. This state provides improvement in measurement sensitivity beyond the standard...... quantum limit for quantum memory experiments and applications in quantum metrology and is thus a complementary alternative to spin squeezing obtained via interatom entanglement. Squeezing of the collective spin is verified by quantum state tomography....
Spin Squeezing of Atomic Ensembles via Nuclear-Electronic Spin Entanglement
Fernholz, Thomas; Krauter, Hanna; Jensen, K.;
2008-01-01
We demonstrate spin squeezing in a room temperature ensemble of ≈1012 cesium atoms using their internal structure, where the necessary entanglement is created between nuclear and electronic spins of each individual atom. This state provides improvement in measurement sensitivity beyond the standard...... quantum limit for quantum memory experiments and applications in quantum metrology and is thus a complementary alternative to spin squeezing obtained via interatom entanglement. Squeezing of the collective spin is verified by quantum state tomography....
Validation of the Air Force Weather Agency Ensemble Prediction Systems
2014-03-27
to deterministic models. Results from ensemble weather input into operational risk management ( ORM ) destruction of enemy air defense simulations...growth during the analysis period (Toth and Kalnay, 1993; Toth and Kalnay, 1997). From this framework the ensemble transform bred vector, ensemble...features. Each of its 10 members is run independently using different configurations in the framework of the Weather Research and Forecasting (WRF
Unconditional two-mode squeezing of separated atomic ensembles
Parkins, A S; Solano, E
2005-01-01
We propose schemes for the unconditional preparation of a two-mode squeezed state of effective bosonic modes realized in a pair of atomic ensembles interacting collectively with optical cavity and laser fields. The scheme uses Raman transitions between stable atomic ground states and under ideal conditions produces pure entangled states in the steady state. The scheme works both for ensembles confined within a single cavity and for ensembles confined in separate, cascaded cavities.
The Moment Convergence Rates for Largest Eigenvalues of β Ensembles
Jun Shan XIE
2013-01-01
The paper focuses on the largest eigenvalues of the β-Hermite ensemble and theβ-Laguerre ensemble.In particular,we obtain the precise moment convergence rates of their largest eigenvalues.The results are motivated by the complete convergence for partial sums of i.i.d.random variables,and the proofs depend on the small deviations for largest eigenvalues of the β ensembles and tail inequalities of the general β Tracy-Widom law.
Extracting Value from Ensembles for Cloud-Free Forecasting
2011-09-01
for Medium range Weather Forecasting EMean Ensemble mean ETR Ensemble transform with rescaling EUMETSAT European Organization for the...transform method (ET) with rescaling ( ETR ) to define the initial atmospheric uncertainty (Wei et al. 2008). Adapted from the ET method devised by...variances of each grid point to further restrain the initial ensemble spread. The ETR method replaced the breeding method in GEFS during NCEP’s May
On sequential observation processing in localized ensemble Kalman filters
Nerger, Lars
2014-01-01
The different variants of current ensemble square-root Kalman filters assimilate either all observations at once or perform a sequence in which batches of observations or each single observation is assimilated. The sequential observation processing is used in filter algorithms like the ensemble adjustment Kalman filter (EAKF) and the ensemble square-root filter (EnSRF) and can result in computationally efficient algorithms because matrix inversions in the observation space are reduced to the ...
Quantifying Uncertainty of Wind Power Production Through an Analog Ensemble
Shahriari, M.; Cervone, G.
2016-12-01
The Analog Ensemble (AnEn) method is used to generate probabilistic weather forecasts that quantify the uncertainty in power estimates at hypothetical wind farm locations. The data are from the NREL Eastern Wind Dataset that includes more than 1,300 modeled wind farms. The AnEn model uses a two-dimensional grid to estimate the probability distribution of wind speed (the predictand) given the values of predictor variables such as temperature, pressure, geopotential height, U-component and V-component of wind. The meteorological data is taken from the NCEP GFS which is available on a 0.25 degree grid resolution. The methodology first divides the data into two classes: training period and verification period. The AnEn selects a point in the verification period and searches for the best matching estimates (analogs) in the training period. The predictand value at those analogs are the ensemble prediction for the point in the verification period. The model provides a grid of wind speed values and the uncertainty (probability index) associated with each estimate. Each wind farm is associated with a probability index which quantifies the degree of difficulty to estimate wind power. Further, the uncertainty in estimation is related to other factors such as topography, land cover and wind resources. This is achieved by using a GIS system to compute the correlation between the probability index and geographical characteristics. This study has significant applications for investors in renewable energy sector especially wind farm developers. Lower level of uncertainty facilitates the process of submitting bids into day ahead and real time electricity markets. Thus, building wind farms in regions with lower levels of uncertainty will reduce the real-time operational risks and create a hedge against volatile real-time prices. Further, the links between wind estimate uncertainty and factors such as topography and wind resources, provide wind farm developers with valuable
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing
Pinson, Pierre; Madsen, Henrik
. The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained...
2011-09-01
variable is appropriately sized for the region ( UCAR 2010). 4. An Isotropic Joint-Ensemble Majumdar and Finochio (2010) develop a probability circle...Forecasting, 22, 671–675. UCAR , cited 2010: NCEP Perturbation Method. [Available online at http://www.meted.ucar.edu/nwp/pcu2/ens_matrix
The MIP Ensemble Simulation: Local Ensemble Statistics in the Cosmic Web
Aragon-Calvo, M A
2012-01-01
Here we present a novel N-body simulation technique that allows us to compute ensemble statistics on a local basis, directly relating halo properties to their environment. This is achieved by the use of an ensemble simulation in which the otherwise independent realizations share the same fluctuations above a given cut-off scale. This produces a constrained ensemble where the LSS is common to all realizations while having an independent halo population. By generating a large number of semi-independent realizations we can effectively increase the local halo density by an arbitrary factor thus breaking the fundamental limit of the finite halo density (for a given halo mass range) determined by the halo mass function. This technique allows us to compute local ensemble statistics of the matter/halo distribution at a particular position in space, removing the intrinsic stochasticity in the halo formation process and directly relating halo properties to their environment. This is a major improvement over global desc...
Deformed Gaussian Orthogonal Ensemble Analysis of the Interacting Boson Model
Pato, M P; Lima, C L; Hussein, M S; Alhassid, Y
1994-01-01
A Deformed Gaussian Orthogonal Ensemble (DGOE) which interpolates between the Gaussian Orthogonal Ensemble and a Poissonian Ensemble is constructed. This new ensemble is then applied to the analysis of the chaotic properties of the low lying collective states of nuclei described by the Interacting Boson Model (IBM). This model undergoes a transition order-chaos-order from the $SU(3)$ limit to the $O(6)$ limit. Our analysis shows that the quantum fluctuations of the IBM Hamiltonian, both of the spectrum and the eigenvectors, follow the expected behaviour predicted by the DGOE when one goes from one limit to the other.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
Adiabatic Passage of Collective Excitations in Atomic Ensembles
LIYong; MIAOYuan-Xiu; SUNChang-Pu
2004-01-01
We describe a theoretical scheme that allows for transfer of quantum states of atomic collective excitation between two macroscopic atomic ensembles localized in two spatially-separated domains. The conception is based on the occurrence of double-exciton dark states due to the collective destructive quantum interference of the emissions from the two atomic ensembles. With an adiabatically coherence manipulation for the atom-field couplings by stimulated Rmann scattering, the dark states will extrapolate from an exciton state of an ensemble to that of another. This realizes the transport of quantum information among atomic ensembles.
Adiabatic Passage of Collective Excitations in Atomic Ensembles
LI Yong; MIAO Yuan-Xiu; SUN Chang-Pu
2004-01-01
We describe a theoretical scheme that allows for transfer of quantum states of atomic collective excitation between two macroscopic atomic ensembles localized in two spatially-separated domains. The conception is based on the occurrence of double-exciton dark states due to the collective destructive quantum interference of the emissions from the two atomic ensembles. With an adiabatically coherence manipulation for the atom-field couplings by stimulated Ramann scattering, the dark states will extrapolate from an exciton state of an ensemble to that of another. This realizes the transport of quantum information among atomic ensembles.
Relation between native ensembles and experimental structures of proteins
Best, R. B.; Lindorff-Larsen, Kresten; DePristo, M. A.
2006-01-01
Different experimental structures of the same protein or of proteins with high sequence similarity contain many small variations. Here we construct ensembles of "high-sequence similarity Protein Data Bank" (HSP) structures and consider the extent to which such ensembles represent the structural...... Data Bank ensembles; moreover, we show that the effects of uncertainties in structure determination are insufficient to explain the results. These results highlight the importance of accounting for native-state protein dynamics in making comparisons with ensemble-averaged experimental data and suggest...
Fractional exclusion statistics and the Random Matrix Boson Ensemble
Hernández-Quiroz, Saul; Benet, Luis; Flores, Jorge; Cocho, Germinal
2012-01-01
The k-body Gaussian Embedded Ensemble of Random Matrices is considered for N bosons distributed on two single-particle levels. When k = N, the ensemble is equivalent to the Gaussian Orthogonal Ensemble (GOE), and when k = 2 it corresponds to the Two-body Random Ensemble (TBRE) for bosons. It is shown that the energy spectrum leads to a rank function which is of the form of a discrete generalized beta distribution. The same distribution is obtained assuming N non-interacting quasiparticles that obey the fractional exclusion statistics introduced by Haldane two decades ago.
Cluster Ensemble-based Image Segmentation
Xiaoru Wang
2013-07-01
Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new\tcluster ensemble-based image\tsegmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.
Online cross-validation-based ensemble learning.
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-05-04
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Nanobiosensing with Arrays and Ensembles of Nanoelectrodes
Najmeh Karimian
2016-12-01
Full Text Available Since the first reports dating back to the mid-1990s, ensembles and arrays of nanoelectrodes (NEEs and NEAs, respectively have gained an important role as advanced electroanalytical tools thank to their unique characteristics which include, among others, dramatically improved signal/noise ratios, enhanced mass transport and suitability for extreme miniaturization. From the year 2000 onward, these properties have been exploited to develop electrochemical biosensors in which the surfaces of NEEs/NEAs have been functionalized with biorecognition layers using immobilization modes able to take the maximum advantage from the special morphology and composite nature of their surface. This paper presents an updated overview of this field. It consists of two parts. In the first, we discuss nanofabrication methods and the principles of functioning of NEEs/NEAs, focusing, in particular, on those features which are important for the development of highly sensitive and miniaturized biosensors. In the second part, we review literature references dealing the bioanalytical and biosensing applications of sensors based on biofunctionalized arrays/ensembles of nanoelectrodes, focusing our attention on the most recent advances, published in the last five years. The goal of this review is both to furnish fundamental knowledge to researchers starting their activity in this field and provide critical information on recent achievements which can stimulate new ideas for future developments to experienced scientists.
Hsaing Waing: Classical Ensemble of Myanmar
Chalermkit Kengkeaw
2013-09-01
Full Text Available Hsaing Waing is a classical music ensemble and a prominent culturalidentity of Myanmar. The Hsaing Waing ensemble consists of many instruments such as the Pat Waing, Muang Hsaing, Hne, Chauk Lon Bat, Byaung, Wa, Wallet Kok, Yakin, Si, and Mong. The earliest historical record of the Hsaing Waing is in 1544 where the Pat Waing and possibly the Hsaing Waing, was in royal service at the court of King Tabinshwehti of the Taungoo dynasty and prospered under the Kaunbaun dynasty up to colonial rule. During colonization, Hsaing Waing’s popularity declined but other innovations were introduced such as modern recording mediums and broadcasts which transferred the popularity of Hsaing Waing to a broader public audience and brought innovation to religious music, ceremonial rituals, fusion of westernmusical instruments such as the piano, violin and mandolin. The wealth of knowledge and numbers of connoisseur during the Kaunbaun dynasty led to the transfer of knowledge to many apprentices which were responsible for the development and adaptation and continuation of Hsaing Waing during colonization, socialism and independence. The transfer of knowledge was carried out by previous generations through apprentices, family members, close relatives and inspired individuals. The factors for the successful inheritance of Hsaing Waing are management, education, musicians and opportunity.
Ensemble Kalman filtering with residual nudging
Xiaodong Luo
2012-10-01
Full Text Available Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF by (in effect adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
Ensemble Kalman filtering with residual nudging
Luo, X.
2012-10-03
Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
Application of a 3-D Super Ensemble to ocean forecast
Lenartz, F.; Barth, A.; Beckers, J.-M.; Vandenbulcke, L.; Rixen, M.
2009-04-01
Super Ensemble (SE) techniques have recently allowed improving the forecast of various important oceanographic parameters, such as the significant wave height, the speed of sound or the surface drift, by correcting the prediction at a single or multiple locations, where data were available during the whole training period. However, nowadays common observation systems, such as satellite imagery or drifters, do not always provide information at the exact same locations, hence it is necessary to generalize the approach in order to take benefit of every image or track available. In this study, we try and apply a SE, fed with remote sensing and gliders data, to 3-D hydrodynamic models. The basic idea on which rely the SE methods is that a certain combination of several model runs and possibly data could yield better results than just one single model, even if it has a higher temporal or spatial resolution. As the most efficient techniques are the ones using observations, they rapidly developed and increased in complexity by copying what had been done in the data assimilation community; getting from the simple ensemble mean of the model outputs to their linear combination based on a particle filter. In our present study, we have decided to use the Kalman filter (KF) as it alleviates the need of an a priori determination of the training period length, and does not require the run of a very large ensemble of members. In addition, we apply it in a 3-D framework in order to take benefit of the spatial information contained by each source of measurements. For example, satellite images of sea surface temperature (SST) are very useful to correct the value of this parameter, but depending on the structure of the water column, it can also give a precious guess of how warm or cold is the ocean at 20 m deep. In our experiment the domain of interest is the Ligurian Sea during the last week of September, when part of the set-up for the CalVal08 campaign (SiC Charles Trees) had
Jun Kyung KAY; Hyun Mee KIM; Young-Youn PARK; Joohyung SON
2013-01-01
Using the Met Office Global and Regional Ensemble Prediction System (MOGREPS) implemented at the Korea Meteorological Administration (KMA),the effect of doubling the ensemble size on the performance of ensemble prediction in the warm season was evaluated.Because a finite ensemble size causes sampling error in the full forecast probability distribution function (PDF),ensemble size is closely related to the efficiency of the ensemble prediction system.Prediction capability according to doubling the ensemble size was evaluated by increasing the number of ensembles from 24 to 48 in MOGREPS implemented at the KMA.The initial analysis perturbations generated by the Ensemble Transform Kalman Filter (ETKF) were integrated for 10 days from 22 May to 23 June 2009.Several statistical verification scores were used to measure the accuracy,reliability,and resolution of ensemble probabilistic forecasts for 24 and 48 ensemble member forecasts.Even though the results were not significant,the accuracy of ensemble prediction improved slightly as ensemble size increased,especially for longer forecast times in the Northern Hemisphere.While increasing the number of ensemble members resulted in a slight improvement in resolution as forecast time increased,inconsistent results were obtained for the scores assessing the reliability of ensemble prediction.The overall performance of ensemble prediction in terms of accuracy,resolution,and reliability increased slightly with ensemble size,especially for longer forecast times.
Regional climate models downscaling in the Alpine area with Multimodel SuperEnsemble
D. Cane
2012-08-01
Full Text Available The climatic scenarios show a strong signal of warming in the Alpine area already for the mid XXI century. The climate simulations, however, even when obtained with Regional Climate Models (RCMs, are affected by strong errors where compared with observations, due to their difficulties in representing the complex orography of the Alps and limitations in their physical parametrization.
Therefore the aim of this work is reducing these model biases using a specific post processing statistic technique to obtain a more suitable projection of climate change scenarios in the Alpine area.
For our purposes we use a selection of RCMs runs from the ENSEMBLES project, carefully chosen in order to maximise the variety of leading Global Climate Models and of the RCMs themselves, calculated on the SRES scenario A1B. The reference observation for the Greater Alpine Area are extracted from the European dataset E-OBS produced by the project ENSEMBLES with an available resolution of 25 km. For the study area of Piedmont daily temperature and precipitation observations (1957–present were carefully gridded on a 14-km grid over Piedmont Region with an Optimal Interpolation technique.
Hence, we applied the Multimodel SuperEnsemble technique to temperature fields, reducing the high biases of RCMs temperature field compared to observations in the control period.
We propose also the first application to RCMS of a brand new probabilistic Multimodel SuperEnsemble Dressing technique to estimate precipitation fields, already applied successfully to weather forecast models, with careful description of precipitation Probability Density Functions conditioned to the model outputs. This technique reduces the strong precipitation overestimation by RCMs over the alpine chain and reproduces well the monthly behaviour of precipitation in the control period.
Ensemble models on palaeoclimate to predict India's groundwater challenge
Partha Sarathi Datta
2013-09-01
Full Text Available In many parts of the world, freshwater crisis is largely due to increasing water consumption and pollution by rapidly growing population and aspirations for economic development, but, ascribed usually to the climate. However, limited understanding and knowledge gaps in the factors controlling climate and uncertainties in the climate models are unable to assess the probable impacts on water availability in tropical regions. In this context, review of ensemble models on δ18O and δD in rainfall and groundwater, 3H- and 14C- ages of groundwater and 14C- age of lakes sediments helped to reconstruct palaeoclimate and long-term recharge in the North-west India; and predict future groundwater challenge. The annual mean temperature trend indicates both warming/cooling in different parts of India in the past and during 1901–2010. Neither the GCMs (Global Climate Models nor the observational record indicates any significant change/increase in temperature and rainfall over the last century, and climate change during the last 1200 yrs BP. In much of the North-West region, deep groundwater renewal occurred from past humid climate, and shallow groundwater renewal from limited modern recharge over the past decades. To make water management to be more responsive to climate change, the gaps in the science of climate change need to be bridged.
Ensemble of regional climate model projections for Ireland
Nolan, Paul; McGrath, Ray
2016-04-01
The method of Regional Climate Modelling (RCM) was employed to assess the impacts of a warming climate on the mid-21st-century climate of Ireland. The RCM simulations were run at high spatial resolution, up to 4 km, thus allowing a better evaluation of the local effects of climate change. Simulations were run for a reference period 1981-2000 and future period 2041-2060. Differences between the two periods provide a measure of climate change. To address the issue of uncertainty, a multi-model ensemble approach was employed. Specifically, the future climate of Ireland was simulated using three different RCMs, driven by four Global Climate Models (GCMs). To account for the uncertainty in future emissions, a number of SRES (B1, A1B, A2) and RCP (4.5, 8.5) emission scenarios were used to simulate the future climate. Through the ensemble approach, the uncertainty in the RCM projections can be partially quantified, thus providing a measure of confidence in the predictions. In addition, likelihood values can be assigned to the projections. The RCMs used in this work are the COnsortium for Small-scale MOdeling-Climate Limited-area Modelling (COSMO-CLM, versions 3 and 4) model and the Weather Research and Forecasting (WRF) model. The GCMs used are the Max Planck Institute's ECHAM5, the UK Met Office's HadGEM2-ES, the CGCM3.1 model from the Canadian Centre for Climate Modelling and the EC-Earth consortium GCM. The projections for mid-century indicate an increase of 1-1.6°C in mean annual temperatures, with the largest increases seen in the east of the country. Warming is enhanced for the extremes (i.e. hot or cold days), with the warmest 5% of daily maximum summer temperatures projected to increase by 0.7-2.6°C. The coldest 5% of night-time temperatures in winter are projected to rise by 1.1-3.1°C. Averaged over the whole country, the number of frost days is projected to decrease by over 50%. The projections indicate an average increase in the length of the growing season
Thermal Insulation Distribution Pattern of Layered Clothing Ensemble
李俊; 韦鸿发; 刘岩; 张渭源
2004-01-01
With a thermal manikin, the distribution pattern of thermal insulation in multi-layered clothing ensemble is studied. It is found that the thermal insulation of multi-layered clothing ensemble has certain statistical relationship with the thermal insulation of each layer, and the prediction equation has been established.
Building Identity in Collegiate Midlevel Choral Ensembles: The Director's Perspective
Major, Marci L.
2017-01-01
This study was designed to explore the director's perspective on the role organizational images play in social identity development in midlevel choral ensembles. Using a phenomenological methodology, I interviewed 10 current or former directors of midlevel choral ensembles from eight midwestern U.S. colleges and universities. Directors cited…
Calculation of the chemical potential in the Gibbs ensemble
Smit, B.; Frenkel, D.
1989-01-01
An expression for the chemical potential in the Gibbs ensemble is derived. For finite system sizes this expression for the chemical potential differs system-atically from Widom's test particle insertion method for the N, V, T ensemble. In order to compare these two methods for calculating the chemic
Stochastic and dynamical downscaling of ensemble precipitation forecasts
Brussolo, E.; von Hardenberg, J.; Rebora, N.
2009-04-01
Forecasting hydrogeological risk in small basins requires quantitative forecasts and an estimate of the probability of occurrence of severe, localized precipitation events at spatial scales of the order of tens of kilometers or less, significantly smaller than those currently provided by large scale, global, ensemble forecasting systems (EPS). Dynamically based forecasts at these scales can be obtained extending EPS scenarios with high-resolution, non-hydrostatic, limited area ensemble prediction systems. An alternative is represented by the direct application of stochastic downscaling techniques to the large scale ensemble forecasts. This work compares the performances of these two very different ensemble forecast downscaling approaches. To this purpose we consider ensemble forecasts provided by the ECMWF EPS, downscaled in space using the RainFARM stochastic technique [1], and ensembles of forecasts obtained from the COSMO-LEPS limited area prediction system (which also uses ECMWF EPS ensemble members as boundary conditions), for three intense precipitation events over northern Italy in 2006. The statistical properties of the fields produced with these two techniques are compared and the skill of the resulting ensembles is verified against direct precipitation measurements from a dense network of rain gauges. Reference: 1. Rebora, N., L. Ferraris, J. von Hardenberg, and A. Provenzale, 2006: The RainFARM: Rainfall Downscaling by a Filtered AutoRegressive Model. J. Hydrometeorol., 7, 724-738.
A Comparison of Ensemble Kalman Filters for Storm Surge Assimilation
Altaf, Muhammad
2014-08-01
This study evaluates and compares the performances of several variants of the popular ensembleKalman filter for the assimilation of storm surge data with the advanced circulation (ADCIRC) model. Using meteorological data from Hurricane Ike to force the ADCIRC model on a domain including the Gulf ofMexico coastline, the authors implement and compare the standard stochastic ensembleKalman filter (EnKF) and three deterministic square root EnKFs: the singular evolutive interpolated Kalman (SEIK) filter, the ensemble transform Kalman filter (ETKF), and the ensemble adjustment Kalman filter (EAKF). Covariance inflation and localization are implemented in all of these filters. The results from twin experiments suggest that the square root ensemble filters could lead to very comparable performances with appropriate tuning of inflation and localization, suggesting that practical implementation details are at least as important as the choice of the square root ensemble filter itself. These filters also perform reasonably well with a relatively small ensemble size, whereas the stochastic EnKF requires larger ensemble sizes to provide similar accuracy for forecasts of storm surge.
Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling
Vrugt, J.A.; Diks, C.G.H.; Clark, M.
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t
Conductor and Ensemble Performance Expressivity and State Festival Ratings
Price, Harry E.; Chang, E. Christina
2005-01-01
This study is the second in a series examining the relationship between conducting and ensemble performance. The purpose was to further examine the associations among conductor, ensemble performance expressivity, and festival ratings. Participants were asked to rate the expressivity of video-only conducting and parallel audio-only excerpts from a…
Ensembles and their modules as objects of cartosemiotic inquiry
Hansgeorg Schlichtmann
2010-01-01
Full Text Available The structured set of signs in a map face -- here called map-face aggregate or MFA -- and the associated marginal notes make up an ensemble of modules or components (modular ensemble. Such ensembles are recognized where groups of entries are intuitively viewed as complex units, which includes the case that entries are consulted jointly and thus are involved in the same process of sign reception. Modular ensembles are amenable to semiotic study, just as are written or pictorial stories. Four kinds (one of them mentioned above are discussed in detail, two involving single MFAs, the other two being assemblages of maps, such as atlases. In terms of their internal structure, two types are recognized: the combinate (or grouping, in which modules are directly linked by combinatorial relations (example above, and the cumulate (or collection (of documents, in which modules are indirectly related through some conceptual commonality (example: series of geological maps. The discussion then turns to basic points concerning modular ensembles (identification of a module, internal organization of an ensemble, and characteristics which establish an ensemble as a unit and further to a few general semiotic concepts as they relate to the present research. Since this paper originated as a reaction to several of A. Wolodtschenko’s recent publications, it concludes with comments on some of his arguments which pertain to modular ensembles.
An iterative ensemble Kalman filter for reservoir engineering applications
Krymskaya, M.V.; Hanea, R.G.; Verlaan, M.
2009-01-01
The study has been focused on examining the usage and the applicability of ensemble Kalman filtering techniques to the history matching procedures. The ensemble Kalman filter (EnKF) is often applied nowadays to solving such a problem. Meanwhile, traditional EnKF requires assumption of the
Ensemble Forecast: A New Approach to Uncertainty and Predictability
无
2005-01-01
Ensemble techniques have been used to generate daily numerical weather forecasts since the 1990s in numerical centers around the world due to the increase in computation ability. One of the main purposes of numerical ensemble forecasts is to try to assimilate the initial uncertainty (initial error) and the forecast uncertainty (forecast error) by applying either the initial perturbation method or the multi-model/multiphysics method. In fact, the mean of an ensemble forecast offers a better forecast than a deterministic (or control) forecast after a short lead time (3 5 days) for global modelling applications. There is about a 1-2-day improvement in the forecast skill when using an ensemble mean instead of a single forecast for longer lead-time. The skillful forecast (65% and above of an anomaly correlation) could be extended to 8 days (or longer) by present-day ensemble forecast systems. Furthermore, ensemble forecasts can deliver a probabilistic forecast to the users, which is based on the probability density function (PDF)instead of a single-value forecast from a traditional deterministic system. It has long been recognized that the ensemble forecast not only improves our weather forecast predictability but also offers a remarkable forecast for the future uncertainty, such as the relative measure of predictability (RMOP) and probabilistic quantitative precipitation forecast (PQPF). Not surprisingly, the success of the ensemble forecast and its wide application greatly increase the confidence of model developers and research communities.
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
To investigate whether the use of ensemble learning algorithms improve physical activity recognition accuracy compared to the single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one subject out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
Competitive Learning Neural Network Ensemble Weighted by Predicted Performance
Ye, Qiang
2010-01-01
Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…
Exact ensemble density-functional theory for excited states
Yang, Zeng-hui; Pribram-Jones, Aurora; Burke, Kieron; Needs, Richard J; Ullrich, Carsten A
2014-01-01
We construct exact Kohn-Sham potentials for the ensemble density-functional theory (EDFT) of excited states from the ground and excited states of helium. The exchange-correlation potential is compared with current approximations, which miss prominent features. The ensemble derivative discontinuity is tested, and the virial theorem is proven and illustrated.
An iterative ensemble Kalman filter for reservoir engineering applications
Krymskaya, M.V.; Hanea, R.G.; Verlaan, M.
2009-01-01
The study has been focused on examining the usage and the applicability of ensemble Kalman filtering techniques to the history matching procedures. The ensemble Kalman filter (EnKF) is often applied nowadays to solving such a problem. Meanwhile, traditional EnKF requires assumption of the distributi
Competitive Learning Neural Network Ensemble Weighted by Predicted Performance
Ye, Qiang
2010-01-01
Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
Kazuo Saito
2012-01-01
Full Text Available The effect of lateral boundary perturbations (LBPs on the mesoscale breeding (MBD method and the local ensemble transform Kalman filter (LETKF as the initial perturbations generators for mesoscale ensemble prediction systems (EPSs was examined. A LBPs method using the Japan Meteorological Agency's (JMA's operational one-week global ensemble prediction was developed and applied to the mesoscale EPS of the Meteorological Research Institute for the World Weather Research Programme, Beijing 2008 Olympics Research and Development Project. The amplitude of the LBPs was adjusted based on the ensemble spread statistics considering the difference of the forecast times of the JMA's one-week EPS and the associated breeding/ensemble Kalman filter (EnKF cycles. LBPs in the ensemble forecast increase the ensemble spread and improve the accuracy of the ensemble mean forecast. In the MBD method, if LBPs were introduced in its breeding cycles, the growth rate of the generated bred vectors is increased, and the ensemble spread and the root mean square errors (RMSEs of the ensemble mean are further improved in the ensemble forecast. With LBPs in the breeding cycles, positional correspondences to the meteorological disturbances and the orthogonality of the bred vectors are improved. Brier Skill Scores (BSSs also showed a remarkable effect of LBPs in the breeding cycles. LBPs showed a similar effect with the LETKF. If LBPs were introduced in the EnKF data assimilation cycles, the ensemble spread, ensemble mean accuracy, and BSSs for precipitation were improved, although the relative advantage of LETKF as the initial perturbations generator against MDB was not necessarily clear. LBPs in the EnKF cycles contribute not to the orthogonalisation but to prevent the underestimation of the forecast error near the lateral boundary.The accuracy of the LETKF analyses was compared with that of the mesoscale 4D-VAR analyses. With LBPs in the LETKF cycles, the RMSEs of the
Generalized ensemble method applied to study systems with strong first order transitions
Małolepsza, E.; Kim, J.; Keyes, T.
2015-09-01
At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.
Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble
Bicci, Alberto
2016-12-01
In the domain of so called Econophysics some attempts have been already made for applying the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is made from a different perspective, trying to model the limit order book and price formation process of a given stock by the Grand-Canonical Gibbs Ensemble for the bid and ask orders. The application of the Bose-Einstein statistics to this ensemble allows then to derive the distribution of the sell and buy orders as a function of price. As a consequence we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are a function of minimum bid, maximum ask and closure prices of the stock as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid orders temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in Technical Analysis of stock markets. Furthermore the derived distributions for aggregate bid and ask orders can be subject to well defined validations against real data, giving a falsifiable character to the model.
Data assimilation the ensemble Kalman filter
Evensen, Geir
2007-01-01
Data Assimilation comprehensively covers data assimilation and inverse methods, including both traditional state estimation and parameter estimation. This text and reference focuses on various popular data assimilation methods, such as weak and strong constraint variational methods and ensemble filters and smoothers. It is demonstrated how the different methods can be derived from a common theoretical basis, as well as how they differ and/or are related to each other, and which properties characterize them, using several examples. Rather than emphasize a particular discipline such as oceanography or meteorology, it presents the mathematical framework and derivations in a way which is common for any discipline where dynamics is merged with measurements. The mathematics level is modest, although it requires knowledge of basic spatial statistics, Bayesian statistics, and calculus of variations. Readers will also appreciate the introduction to the mathematical methods used and detailed derivations, which should b...
Predicting protein dynamics from structural ensembles
Copperman, J
2015-01-01
The biological properties of proteins are uniquely determined by their structure and dynamics. A protein in solution populates a structural ensemble of metastable configurations around the global fold. From overall rotation to local fluctuations, the dynamics of proteins can cover several orders of magnitude in time scales. We propose a simulation-free coarse-grained approach which utilizes knowledge of the important metastable folded states of the protein to predict the protein dynamics. This approach is based upon the Langevin Equation for Protein Dynamics (LE4PD), a Langevin formalism in the coordinates of the protein backbone. The linear modes of this Langevin formalism organize the fluctuations of the protein, so that more extended dynamical cooperativity relates to increasing energy barriers to mode diffusion. The accuracy of the LE4PD is verified by analyzing the predicted dynamics across a set of seven different proteins for which both relaxation data and NMR solution structures are available. Using e...
China’s First Modern Dance Ensemble
1992-01-01
After four years’hardwork by both Chineseand foreign artiststhe Guangdong ExperimentalModern Dance Ensemble,thefirst of its kind in China,wasestablished on June 6,1992,in the Friendship Theater ofGuangzhou.Ms、Yang Meiqi,a famous Chinese folk danceeducator,was chosen as headand Mr.Willy Tsao,a famousyoung Hongkong dancer,as artisticdirector.China’s Central TV Stationreported the news.Recommended by Ms.ChiangChing,a Chinese-American dancer,Yang Meiqi went to Durham,NorthCarolina,in the United States in thesummer of 1986.to attend the Amer-ican Dance Festival.The moderndances put on during the festivalfascinated her with their universal“language,”flexible movement,cho-reography and scientific training.“Isn’t this just What China’s dance
ARM Cloud Retrieval Ensemble Data Set (ACRED)
Zhao, C; Xie, S; Klein, SA; McCoy, R; Comstock, JM; Delanoë, J; Deng, M; Dunn, M; Hogan, RJ; Jensen, MP; Mace, GG; McFarlane, SA; O’Connor, EJ; Protat, A; Shupe, MD; Turner, D; Wang, Z
2011-09-12
This document describes a new Atmospheric Radiation Measurement (ARM) data set, the ARM Cloud Retrieval Ensemble Data Set (ACRED), which is created by assembling nine existing ground-based cloud retrievals of ARM measurements from different cloud retrieval algorithms. The current version of ACRED includes an hourly average of nine ground-based retrievals with vertical resolution of 45 m for 512 layers. The techniques used for the nine cloud retrievals are briefly described in this document. This document also outlines the ACRED data availability, variables, and the nine retrieval products. Technical details about the generation of ACRED, such as the methods used for time average and vertical re-grid, are also provided.
An educational model for ensemble streamflow simulation and uncertainty analysis
A. AghaKouchak
2013-02-01
Full Text Available This paper presents the hands-on modeling toolbox, HBV-Ensemble, designed as a complement to theoretical hydrology lectures, to teach hydrological processes and their uncertainties. The HBV-Ensemble can be used for in-class lab practices and homework assignments, and assessment of students' understanding of hydrological processes. Using this modeling toolbox, students can gain more insights into how hydrological processes (e.g., precipitation, snowmelt and snow accumulation, soil moisture, evapotranspiration and runoff generation are interconnected. The educational toolbox includes a MATLAB Graphical User Interface (GUI and an ensemble simulation scheme that can be used for teaching uncertainty analysis, parameter estimation, ensemble simulation and model sensitivity. HBV-Ensemble was administered in a class for both in-class instruction and a final project, and students submitted their feedback about the toolbox. The results indicate that this educational software had a positive impact on students understanding and knowledge of uncertainty in hydrological modeling.
Ensemble inequivalence: Landau theory and the ABC model
Cohen, O.; Mukamel, D.
2012-12-01
It is well known that systems with long-range interactions may exhibit different phase diagrams when studied within two different ensembles. In many of the previously studied examples of ensemble inequivalence, the phase diagrams differ only when the transition in one of the ensembles is first order. By contrast, in a recent study of a generalized ABC model, the canonical and grand-canonical ensembles of the model were shown to differ even when they both exhibit a continuous transition. Here we show that the order of the transition where ensemble inequivalence may occur is related to the symmetry properties of the order parameter associated with the transition. This is done by analyzing the Landau expansion of a generic model with long-range interactions. The conclusions drawn from the generic analysis are demonstrated for the ABC model by explicit calculation of its Landau expansion.
Excitations and benchmark ensemble density functional theory for two electrons
Pribram-Jones, Aurora; Trail, John R; Burke, Kieron; Needs, Richard J; Ullrich, Carsten A
2014-01-01
A new method for extracting ensemble Kohn-Sham potentials from accurate excited state densities is applied to a variety of two electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange (SEHX), is derived. Exact conditions that are proven include the signs of the correlation energy components, the virial theorem for both exchange and correlation, and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble.
Excitations and benchmark ensemble density functional theory for two electrons
Pribram-Jones, Aurora; Burke, Kieron [Department of Chemistry, University of California-Irvine, Irvine, California 92697 (United States); Yang, Zeng-hui; Ullrich, Carsten A. [Department of Physics and Astronomy, University of Missouri, Columbia, Missouri 65211 (United States); Trail, John R.; Needs, Richard J. [Theory of Condensed Matter Group, Cavendish Laboratory, University of Cambridge, Cambridge CB3 0HE (United Kingdom)
2014-05-14
A new method for extracting ensemble Kohn-Sham potentials from accurate excited state densities is applied to a variety of two-electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange, is derived. Exact conditions that are proven include the signs of the correlation energy components and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble.
Adaptive calibration of (u,v)‐wind ensemble forecasts
Pinson, Pierre
2012-01-01
Ensemble forecasts of (u,v)‐wind are of crucial importance for a number of decision‐making problems related to e.g. air traffic control, ship routeing and energy management. The skill of these ensemble forecasts as generated by NWP‐based models can be maximised by correcting for their lack...... of sufficient reliability. The original framework introduced here allows for an adaptive bivariate calibration of these ensemble forecasts. The originality of this methodology lies in the fact that calibrated ensembles still consist of a set of (space–time) trajectories, after translation and dilation...... on the adaptive calibration of ECMWF ensemble forecasts of (u,v)‐wind at 10 m above ground level over Europe over a three‐year period between December 2006 and December 2009. Substantial improvements in (bivariate) reliability and in various deterministic/probabilistic scores are observed. Finally, the maps...
Induced Ginibre ensemble of random matrices and quantum operations
Fischmann, J; Khoruzhenko, B A; Sommers, H -J; Zyczkowski, K
2011-01-01
A generalisation of the Ginibre ensemble of non-Hermitian random square matrices is introduced. The corresponding probability measure is induced by the ensemble of rectangular Gaussian matrices via a quadratisation procedure. We derive the joint probability density of eigenvalues for such induced Ginibre ensemble and study various spectral correlation functions for complex and real matrices, and analyse universal behaviour in the limit of large dimensions. In this limit the eigenvalues of the induced Ginibre ensemble cover uniformly a ring in the complex plane. The real induced Ginibre ensemble is shown to be useful to describe statistical properties of evolution operators associated with random quantum operations, for which the dimensions of the input state and the output state do differ.
Discrete post-processing of total cloud cover ensemble forecasts
Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian
2017-04-01
This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review 144, 2565-2577.
Visualizing projected Climate Changes - the CMIP5 Multi-Model Ensemble
Böttinger, Michael; Eyring, Veronika; Lauer, Axel; Meier-Fleischer, Karin
2017-04-01
Large ensembles add an additional dimension to climate model simulations. Internal variability of the climate system can be assessed for example by multiple climate model simulations with small variations in the initial conditions or by analyzing the spread in large ensembles made by multiple climate models under common protocols. This spread is often used as a measure of uncertainty in climate projections. In the context of the fifth phase of the WCRP's Coupled Model Intercomparison Project (CMIP5), more than 40 different coupled climate models were employed to carry out a coordinated set of experiments. Time series of the development of integral quantities such as the global mean temperature change for all models visualize the spread in the multi-model ensemble. A similar approach can be applied to 2D-visualizations of projected climate changes such as latitude-longitude maps showing the multi-model mean of the ensemble by adding a graphical representation of the uncertainty information. This has been demonstrated for example with static figures in chapter 12 of the last IPCC report (AR5) using different so-called stippling and hatching techniques. In this work, we focus on animated visualizations of multi-model ensemble climate projections carried out within CMIP5 as a way of communicating climate change results to the scientific community as well as to the public. We take a closer look at measures of robustness or uncertainty used in recent publications suitable for animated visualizations. Specifically, we use the ESMValTool [1] to process and prepare the CMIP5 multi-model data in combination with standard visualization tools such as NCL and the commercial 3D visualization software Avizo to create the animations. We compare different visualization techniques such as height fields or shading with transparency for creating animated visualization of ensemble mean changes in temperature and precipitation including corresponding robustness measures. [1] Eyring, V
Halu, Arda; Bianconi, Ginestra
2013-01-01
Spatial networks range from the brain networks, to transportation networks and infrastructures. Recently interacting and multiplex networks are attracting great attention because their dynamics and robustness cannot be understood without treating at the same time several networks. Here we present maximal entropy ensembles of spatial multiplex and spatial interacting networks that can be used in order to model spatial multilayer network structures and to build null models of real datasets. We show that spatial multiplex naturally develop a significant overlap of the links, a noticeable property of many multiplexes that can affect significantly the dynamics taking place on them. Additionally, we characterize ensembles of spatial interacting networks and we analyse the structure of interacting airport and railway networks in India, showing the effect of space in determining the link probability.
Future changes in the West African Monsoon: A COSMO-CLM and RCA4 multimodel ensemble study
Anders, Ivonne; Gbobaniyi, Emiola
2014-05-01
In this multi-model multi-ensemble study, we intercompare results from two regional climate simulation ensembles to see how well they reproduce the known main features of the West African Monsoon (WAM). Each ensemble was created under the ongoing CORDEX-Africa activities by using the regional climate models (RCA4 and COSMO-CLM) to downscale four coupled atmosphere ocean general circulation models (AOGCMs), namely, CNRM-CM5, HadGEM2-ES, EC-EARTH, and MPI-ESM-LR. Spatial resolution of the driving AOGCMs varies from about 1° to 3° while all regional simulations are at the same 0.44° resolution. Future climate projections from the RCP8.5 scenario are analyzed and inter-compared for both ensembles in order to assess deviations and uncertainties. The main focus in our analysis is on the projected WAM rainy season statistics. We look at projected changes in onset and cessation, total precipitation and temperature toward the end of the century (2071-2100) for different time scales spanning seasonal climatologies, annual cycles and interannual variability, and a number of spatial scales covering the Sahel, the Gulf of Guinea and the entire West Africa. Differences in the ensemble projections are linked to the parameterizations employed in both regional models and the influence of this is discussed.
Kumar, Devashish
2016-01-01
Climate models are thought to solve boundary value problems unlike numerical weather prediction, which is an initial value problem. However, climate internal variability (CIV) is thought to be relatively important at near-term (0-30 year) prediction horizons, especially at higher resolutions. The recent availability of significant numbers of multi-model (MME) and multi-initial condition (MICE) ensembles allows for the first time a direct sensitivity analysis of CIV versus model response variability (MRV). Understanding the relative agreement and variability of MME and MICE ensembles for multiple regions, resolutions, and projection horizons is critical for focusing model improvements, diagnostics, and prognosis, as well as impacts, adaptation, and vulnerability studies. Here we find that CIV (MICE agreement) is lower (higher) than MRV (MME agreement) across all spatial resolutions and projection time horizons for both temperature and precipitation. However, CIV dominates MRV over higher latitudes generally an...
Ensemble Bayesian forecasting system Part I: Theory and algorithms
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of
Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter
Luo, Xiaodong
2011-12-01
A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
Tseng, Yu-heng; Lin, Yen-heng; Lo, Min-hui; Yang, Shu-chih
2016-11-01
The actual dynamics and physical mechanisms affecting the Sahel precipitation pattern and amplitude in the climate models remain under debate due to the inconsistent drying and rainfall variability/pattern among them. We diagnose the boreal summer rainfall pattern in the Sahel and its possible causes using short-range ensemble hindcasts based on NCAR community atmospheric model with the local ensemble transform Kalman filter (CAM-LETKF) data assimilation. The CAM-LETKF assimilation was conducted using 64 ensemble members with an assimilation cycle of 6-h. By comparing the superior and inferior groups within these 64 ensembles, we confirmed the influence of the Atlantic in the West Sahel rainfall (a robust feature in the ensembles) and a severe model bias resulting from erroneously modeled locations and magnitudes of low-level Sahara heat low (SHL) and African easterly jet (AEJ). This bias is highly related to atmospheric jet dynamics as shown in recent studies and local wave instability triggered mainly by the boundary-layer temperature gradient and amplified by land-atmosphere interactions. In particular, our results demonstrated that more accurate divergence and convergence fields resulting from improved SHL and AEJ in the superior groups enabled more accurate rainbelt patterns to be discerned, thus improving the ensemble mean model hindcast prediction by more than 25 % in precipitation and 16 % in temperature. We concluded that the use of low-resolution climate models to project future rainfall in the Sahel requires caution because the model hindcasts may quickly diverge even the same boundary conditions and forcings are applied. The model bias may easily grow up within a few months in the short-range CAM-LETKF hindcast, let along the free model centennial simulations. Unconstrained future climate model projections for the Sahel must more effectively capture the short-term key boundary-layer dynamics in the boreal summer to be credible regardless model dynamics
Tseng, Yu-heng; Lin, Yen-heng; Lo, Min-hui; Yang, Shu-chih
2016-01-01
The actual dynamics and physical mechanisms affecting the Sahel precipitation pattern and amplitude in the climate models remain under debate due to the inconsistent drying and rainfall variability/pattern among them. We diagnose the boreal summer rainfall pattern in the Sahel and its possible causes using short-range ensemble hindcasts based on NCAR community atmospheric model with the local ensemble transform Kalman filter (CAM-LETKF) data assimilation. The CAM-LETKF assimilation was conducted using 64 ensemble members with an assimilation cycle of 6-h. By comparing the superior and inferior groups within these 64 ensembles, we confirmed the influence of the Atlantic in the West Sahel rainfall (a robust feature in the ensembles) and a severe model bias resulting from erroneously modeled locations and magnitudes of low-level Sahara heat low (SHL) and African easterly jet (AEJ). This bias is highly related to atmospheric jet dynamics as shown in recent studies and local wave instability triggered mainly by the boundary-layer temperature gradient and amplified by land-atmosphere interactions. In particular, our results demonstrated that more accurate divergence and convergence fields resulting from improved SHL and AEJ in the superior groups enabled more accurate rainbelt patterns to be discerned, thus improving the ensemble mean model hindcast prediction by more than 25 % in precipitation and 16 % in temperature. We concluded that the use of low-resolution climate models to project future rainfall in the Sahel requires caution because the model hindcasts may quickly diverge even the same boundary conditions and forcings are applied. The model bias may easily grow up within a few months in the short-range CAM-LETKF hindcast, let along the free model centennial simulations. Unconstrained future climate model projections for the Sahel must more effectively capture the short-term key boundary-layer dynamics in the boreal summer to be credible regardless model dynamics
Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.
2009-01-01
This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in
Temperature-controlled molecular depolarization gates in nuclear magnetic resonance
Schroder, Leif; Schroder, Leif; Chavez, Lana; Meldrum, Tyler; Smith, Monica; Lowery, Thomas J.; E. Wemmer, David; Pines, Alexander
2008-02-27
Down the drain: Cryptophane cages in combination with selective radiofrequency spin labeling can be used as molecular 'transpletor' units for transferring depletion of spin polarization from a hyperpolarized 'source' spin ensemble to a 'drain' ensemble. The flow of nuclei through the gate is adjustable by the ambient temperature, thereby enabling controlled consumption of hyperpolarization.
Ensemble Kalman filtering without the intrinsic need for inflation
M. Bocquet
2011-10-01
Full Text Available The main intrinsic source of error in the ensemble Kalman filter (EnKF is sampling error. External sources of error, such as model error or deviations from Gaussianity, depend on the dynamical properties of the model. Sampling errors can lead to instability of the filter which, as a consequence, often requires inflation and localization. The goal of this article is to derive an ensemble Kalman filter which is less sensitive to sampling errors. A prior probability density function conditional on the forecast ensemble is derived using Bayesian principles. Even though this prior is built upon the assumption that the ensemble is Gaussian-distributed, it is different from the Gaussian probability density function defined by the empirical mean and the empirical error covariance matrix of the ensemble, which is implicitly used in traditional EnKFs. This new prior generates a new class of ensemble Kalman filters, called finite-size ensemble Kalman filter (EnKF-N. One deterministic variant, the finite-size ensemble transform Kalman filter (ETKF-N, is derived. It is tested on the Lorenz '63 and Lorenz '95 models. In this context, ETKF-N is shown to be stable without inflation for ensemble size greater than the model unstable subspace dimension, at the same numerical cost as the ensemble transform Kalman filter (ETKF. One variant of ETKF-N seems to systematically outperform the ETKF with optimally tuned inflation. However it is shown that ETKF-N does not account for all sampling errors, and necessitates localization like any EnKF, whenever the ensemble size is too small. In order to explore the need for inflation in this small ensemble size regime, a local version of the new class of filters is defined (LETKF-N and tested on the Lorenz '95 toy model. Whatever the size of the ensemble, the filter is stable. Its performance without inflation is slightly inferior to that of LETKF with optimally tuned inflation for small interval between updates, and
Atomic Gases at Negative Kinetic Temperature
Mosk, A.P.
2005-01-01
We show that thermalization of the motion of atoms at negative temperature is possible in an optical lattice, for conditions that are feasible in current experiments. We present a method for reversibly inverting the temperature of a trapped gas. Moreover, a negative-temperature ensemble can be coole
A modified iterative ensemble Kalman filter data assimilation method
Xu, Baoxiong; Bai, Yulong; Wang, Yizhao; Li, Zhe; Ma, Boyang
2017-08-01
High nonlinearity is a typical characteristic associated with data assimilation systems. Additionally, iterative ensemble based methods have attracted a large amount of research attention, which has been focused on dealing with nonlinearity problems. To solve the local convergence problem of the iterative ensemble Kalman filter, a modified iterative ensemble Kalman filter algorithm was put forward, which was based on a global convergence strategy from the perspective of a Gauss-Newton iteration. Through self-adaption, the step factor was adjusted to enable every iteration to approach expected values during the process of the data assimilation. A sensitivity experiment was carried out in a low dimensional Lorenz-63 chaotic system, as well as a Lorenz-96 model. The new method was tested via ensemble size, observation variance, and inflation factor changes, along with other aspects. Meanwhile, comparative research was conducted with both a traditional ensemble Kalman filter and an iterative ensemble Kalman filter. The results showed that the modified iterative ensemble Kalman filter algorithm was a data assimilation method that was able to effectively estimate a strongly nonlinear system state.
Progressive freezing of interacting spins in isolated finite magnetic ensembles
Bhattacharya, Kakoli; Dupuis, Veronique; Le-Roy, Damien; Deb, Pritam
2017-02-01
Self-organization of magnetic nanoparticles into secondary nanostructures provides an innovative way for designing functional nanomaterials with novel properties, different from the constituent primary nanoparticles as well as their bulk counterparts. Collective magnetic properties of such complex closed packing of magnetic nanoparticles makes them more appealing than the individual magnetic nanoparticles in many technological applications. This work reports the collective magnetic behaviour of magnetic ensembles comprising of single domain Fe3O4 nanoparticles. The present work reveals that the ensemble formation is based on the re-orientation and attachment of the nanoparticles in an iso-oriented fashion at the mesoscale regime. Comprehensive dc magnetic measurements show the prevalence of strong interparticle interactions in the ensembles. Due to the close range organization of primary Fe3O4 nanoparticles in the ensemble, the spins of the individual nanoparticles interact through dipolar interactions as realized from remnant magnetization measurements. Signature of super spin glass like behaviour in the ensembles is observed in the memory studies carried out in field cooled conditions. Progressive freezing of spins in the ensembles is corroborated from the Vogel-Fulcher fit of the susceptibility data. Dynamic scaling of relaxation reasserted slow spin dynamics substantiating cluster spin glass like behaviour in the ensembles.
Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.
2012-04-01
In the context of a national energy company (EDF : Electricité de France), hydro-meteorological forecasts are necessary to ensure safety and security of installations, meet environmental standards and improve water ressources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecasts uncertainties and improve human expertise of hydrological forecasts, which is essential to synthesize available informations, coming from different meteorological and hydrological models and human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and is being used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large human expertise is solicited. The aim of this paper is to compare 2 hydrological ensemble post-processing methods developed at EDF in order improve ensemble forecasts reliability (similar to Monatanari &Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called empirical approach) is based on a statistical modelisation of empirical error of perfect forecasts, by streamflow sub-samples of quantile class and lead-time. The second method (called dynamical approach) is based on streamflow sub-samples of quantile class and streamflow variation, and lead-time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of hydrological ensemble, allowing a good improvement of reliability, skill and sharpness of ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach which is not able to take into account hydrological
Soil texture reclassification by an ensemble model
Cisty, Milan; Hlavcova, Kamila
2015-04-01
a prerequisite for solving some subsequent task, this bias is propagated to the subsequent modelling or other work. Therefore, for the sake of achieving more general and precise outputs while solving such tasks, the authors of the present paper are proposing a hybrid approach, which has the potential for obtaining improved results. Although the authors continue recommending the use of the mentioned parametric PSD models in the proposed methodology, the final prediction is made by an ensemble machine learning algorithm based on regression trees, the so-called Random Forest algorithm, which is built on top of the outputs of such models, which serves as an ensemble members. An improvement in precision was proved, and it is documented in the paper that the ensemble model worked better than any of its constituents. References Nemes, A., Wosten, J.H.M., Lilly, A., Voshaar, J.H.O.: Evaluation of different procedures to interpolate particle-size distributions to achieve compatibility within soil databases. Geoderma 90, 187- 202 (1999) Hwang, S.: Effect of texture on the performance of soil particle-size distribution models. Geoderma 123, 363-371 (2004) Botula, Y.D., Cornelis, W.M., Baert, G., Mafuka, P., Van Ranst, E.: Particle size distribution models for soils of the humid tropics. J Soils Sediments. 13, 686-698 (2013)
Arctic Sea Ice Simulation in the PlioMIP Ensemble
Howell, Fergus W.; Haywood, Alan M.; Otto-Bliesner, Bette L.; Bragg, Fran; Chan, Wing-Le; Chandler, Mark A.; Contoux, Camille; Kamae, Youichi; Abe-Ouchi, Ayako; Rosenbloom, Nan A.; Stepanek, Christian; Zhang, Zhongshi
2016-01-01
Eight general circulation models have simulated the mid-Pliocene warm period (mid-Pliocene, 3.264 to 3.025 Ma) as part of the Pliocene Modelling Intercomparison Project (PlioMIP). Here, we analyse and compare their simulation of Arctic sea ice for both the pre-industrial period and the mid-Pliocene. Mid-Pliocene sea ice thickness and extent is reduced, and the model spread of extent is more than twice the pre-industrial spread in some summer months. Half of the PlioMIP models simulate ice-free conditions in the mid-Pliocene. This spread amongst the ensemble is in line with the uncertainties amongst proxy reconstructions for mid-Pliocene sea ice extent. Correlations between mid-Pliocene Arctic temperatures and sea ice extents are almost twice as strong as the equivalent correlations for the pre-industrial simulations. The need for more comprehensive sea ice proxy data is highlighted, in order to better compare model performances.
Ensemble Deep Learning for Biomedical Time Series Classification
Lin-peng Jin
2016-01-01
Full Text Available Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
Ensemble Deep Learning for Biomedical Time Series Classification.
Jin, Lin-Peng; Dong, Jun
2016-01-01
Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
Ensemble Deep Learning for Biomedical Time Series Classification
2016-01-01
Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique
SHI Yan; HUANG Cong-ming
2006-01-01
An improved particle swarm optimization (PSO) algorithm based on ensemble technique is presented. The algorithm combines some previous best positions (pbest) of the particles to get an ensemble position (Epbest), which is used to replace the global best position (gbest). It is compared with the standard PSO algorithm invented by Kennedy and Eberhart and some improved PSO algorithms based on three different benchmark functions. The simulation results show that the improved PSO based on ensemble technique can get better solutions than the standard PSO and some other improved algorithms under all test cases.
Deterministic entanglement of Rydberg ensembles by engineered dissipation
Dasari, Durga; Mølmer, Klaus
2014-01-01
We propose a scheme that employs dissipation to deterministically generate entanglement in an ensemble of strongly interacting Rydberg atoms. With a combination of microwave driving between different Rydberg levels and a resonant laser coupling to a short lived atomic state, the ensemble can...... be driven towards a dark steady state that entangles all atoms. The long-range resonant dipole-dipole interaction between different Rydberg states extends the entanglement beyond the van der Walls interaction range with perspectives for entangling large and distant ensembles....
Large margin classifier-based ensemble tracking
Wang, Yuru; Liu, Qiaoyuan; Yin, Minghao; Wang, ShengSheng
2016-07-01
In recent years, many studies consider visual tracking as a two-class classification problem. The key problem is to construct a classifier with sufficient accuracy in distinguishing the target from its background and sufficient generalize ability in handling new frames. However, the variable tracking conditions challenges the existing methods. The difficulty mainly comes from the confused boundary between the foreground and background. This paper handles this difficulty by generalizing the classifier's learning step. By introducing the distribution data of samples, the classifier learns more essential characteristics in discriminating the two classes. Specifically, the samples are represented in a multiscale visual model. For features with different scales, several large margin distribution machine (LDMs) with adaptive kernels are combined in a Baysian way as a strong classifier. Where, in order to improve the accuracy and generalization ability, not only the margin distance but also the sample distribution is optimized in the learning step. Comprehensive experiments are performed on several challenging video sequences, through parameter analysis and field comparison, the proposed LDM combined ensemble tracker is demonstrated to perform with sufficient accuracy and generalize ability in handling various typical tracking difficulties.
Model error estimation in ensemble data assimilation
S. Gillijns
2007-01-01
Full Text Available A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble based filtering. Our results extend the work of Dee and Todling (2000 on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996 model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
Variety of synchronous regimes in neuronal ensembles
Komarov, M. A.; Osipov, G. V.; Suykens, J. A. K.
2008-09-01
We consider a Hodgkin-Huxley-type model of oscillatory activity in neurons of the snail Helix pomatia. This model has a distinctive feature: It demonstrates multistability in oscillatory and silent modes that is typical for the thalamocortical neurons. A single neuron cell can demonstrate a variety of oscillatory activity: Regular and chaotic spiking and bursting behavior. We study collective phenomena in small and large arrays of nonidentical cells coupled by models of electrical and chemical synapses. Two single elements coupled by electrical coupling show different types of synchronous behavior, in particular in-phase and antiphase synchronous regimes. In an ensemble of three inhibitory synaptically coupled elements, the phenomenon of sequential synchronous dynamics is observed. We study the synchronization phenomena in the chain of nonidentical neurons at different oscillatory behavior coupled with electrical and chemical synapses. Various regimes of phase synchronization are observed: (i) Synchronous regular and chaotic spiking; (ii) synchronous regular and chaotic bursting; and (iii) synchronous regular and chaotic bursting with different numbers of spikes inside the bursts. We detect and study the effect of collective synchronous burst generation due to the cluster formation and the oscillatory death.
General approaches in ensemble quantum computing
V Vimalan; N Chandrakumar
2008-01-01
We have developed methodology for NMR quantum computing focusing on enhancing the efficiency of initialization, of logic gate implementation and of readout. Our general strategy involves the application of rotating frame pulse sequences to prepare pseudopure states and to perform logic operations. We demonstrate experimentally our methodology for both homonuclear and heteronuclear spin ensembles. On model two-spin systems, the initialization time of one of our sequences is three-fourths (in the heteronuclear case) or one-fourth (in the homonuclear case), of the typical pulsed free precession sequences, attaining the same initialization efficiency. We have implemented the logical SWAP operation in homonuclear AMX spin systems using selective isotropic mixing, reducing the duration taken to a third compared to the standard re-focused INEPT-type sequence. We introduce the 1D version for readout of the rotating frame SWAP operation, in an attempt to reduce readout time. We further demonstrate the Hadamard mode of 1D SWAP, which offers 2N-fold reduction in experiment time for a system with -working bits, attaining the same sensitivity as the standard 1D version.
Ensemble LUT classification for degraded document enhancement
Obafemi-Ajayi, Tayo; Agam, Gady; Frieder, Ophir
2008-01-01
The fast evolution of scanning and computing technologies have led to the creation of large collections of scanned paper documents. Examples of such collections include historical collections, legal depositories, medical archives, and business archives. Moreover, in many situations such as legal litigation and security investigations scanned collections are being used to facilitate systematic exploration of the data. It is almost always the case that scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to estimate local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system we have labeled a subset of the Frieder diaries collection.1 This labeled subset was then used to train an ensemble classifier. The component classifiers are based on lookup tables (LUT) in conjunction with the approximated nearest neighbor algorithm. The resulting algorithm is highly effcient. Experimental evaluation results are provided using the Frieder diaries collection.1
Group Theory for Embedded Random Matrix Ensembles
Kota, V K B
2014-01-01
Embedded random matrix ensembles are generic models for describing statistical properties of finite isolated quantum many-particle systems. For the simplest spinless fermion (or boson) systems with say $m$ fermions (or bosons) in $N$ single particle states and interacting with say $k$-body interactions, we have EGUE($k$) [embedded GUE of $k$-body interactions) with GUE embedding and the embedding algebra is $U(N)$. In this paper, using EGUE($k$) representation for a Hamiltonian that is $k$-body and an independent EGUE($t$) representation for a transition operator that is $t$-body and employing the embedding $U(N)$ algebra, finite-$N$ formulas for moments up to order four are derived, for the first time, for the transition strength densities (transition strengths multiplied by the density of states at the initial and final energies). In the asymptotic limit, these formulas reduce to those derived for the EGOE version and establish that in general bivariate transition strength densities take bivariate Gaussian ...
Emergent order in ensembles of active spinners
van Zuiden, Benjamin C.; Paulose, Jayson; Irvine, William T. M.; Bartolo, Denis; Vitelli, Vincenzo
Interacting self-propelled particles is proxy to model many living systems from cytoskeletal motors to bird flocks, while also providing a framework to investigate fundamental questions in non equilibrium statistical mechanics. A surge of recent studies have shown that self-propulsion significantly modifies the phase behavior of particles interacting via potential interactions. A prototypical example is the so-called Motility Induced Phase Separation occurring in ensembles of self-propelled hard spheres. In stark contrast, our understanding of active spinning, as opposed to self-propulsion, remains very scarce. Here, we study a system of self-spinning dimers, interacting via soft repulsive forces. Upon varying the density and activity, we observe a range of emergent phases characterized by different degrees of spatiotemporal order in the position and orientation of the dimers. Changes in bulk properties, including crystallization, melting, and freezing, are reflected in the collective motion of the particles. We rationalize our numerical findings theoretically and demonstrate some of these concepts in a active granular experiment.
Ensemble Kalman filtering with residual nudging
Luo, Xiaodong; 10.3402/tellusa.v64i0.17130
2012-01-01
Covariance inflation and localization are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/o...
Orchestrating Distributed Resource Ensembles for Petascale Science
Baldin, Ilya; Mandal, Anirban; Ruth, Paul; Yufeng, Xin
2014-04-24
Distributed, data-intensive computational science applications of interest to DOE scientific com- munities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi- layered networks across multiple domains. It is highly desirable that mechanisms are designed that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive ap- plications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allo- cations (elasticity). This was enabled through development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.
Quantum metrology with cold atomic ensembles
Mitchell Morgan W.
2013-08-01
Full Text Available Quantum metrology uses quantum features such as entanglement and squeezing to improve the sensitivity of quantum-limited measurements. Long established as a valuable technique in optical measurements such as gravitational-wave detection, quantum metrology is increasingly being applied to atomic instruments such as matter-wave interferometers, atomic clocks, and atomic magnetometers. Several of these new applications involve dual optical/atomic quantum systems, presenting both new challenges and new opportunities. Here we describe an optical magnetometry system that achieves both shot-noise-limited and projection-noise-limited performance, allowing study of optical magnetometry in a fully-quantum regime [1]. By near-resonant Faraday rotation probing, we demonstrate measurement-based spin squeezing in a magnetically-sensitive atomic ensemble [2-4]. The versatility of this system allows us also to design metrologically-relevant optical nonlinearities, and to perform quantum-noise-limited measurements with interacting photons. As a first interaction-based measurement [5], we implement a non-linear metrology scheme proposed by Boixo et al. with the surprising feature of precision scaling better than the 1/N “Heisenberg limit” [6].
Evaluation of seasonal ensemble forecasts in Norway
Tore Sinnes, Svein; Engeland, Kolbjørn; Langsholt, Elin; Roar Sælthun, Nils
2017-04-01
Throughout the winter and spring season, seasonal forecasts are used by the Norwegian Water Resources and Energy Directorate (NVE) in order to assess the probability for sever floods or for low seasonal runoff volumes. The latter is especially important for hydropower production. The seasonal forecasts are generated by a set of 145 lumped, elevation distributed HBV models distributed all over Norway. The observed weather is used to establish the initial snow cover, soil moisture and groundwater levels in the HBV model. Subsequently, scenarios are created by using time series of observed weather the previous 50 years, creating a total of 50 ensembles. The predictability of this seasonal forecasting system depends therefore on the importance of the initial conditions, and in Norway the seasonal snow cover is especially important. The aim of this study is to evaluate the performance of the seasonal forecasts of flood peaks and seasonal runoff volumes and especially to evaluate of the predictability depends on (i) catchment climatology and (ii) issue dates and lead times. For achieving these aims, evaluation criterions assessing reliability and sharpness were used. The results shows that the predictability is the highest for catchments where the spring runoff is dominated by snow melt. The predictability is the highest for the shortest lead times (up to 1 months ahead).The predictive performance is higher for runoff volumes than for the flood peaks.
G. Thirel
2010-08-01
Full Text Available The use of ensemble streamflow forecasts is developing in the international flood forecasting services. Ensemble streamflow forecast systems can provide more accurate forecasts and useful information about the uncertainty of the forecasts, thus improving the assessment of risks. Nevertheless, these systems, like all hydrological forecasts, suffer from errors on initialization or on meteorological data, which lead to hydrological prediction errors. This article, which is the second part of a 2-part article, concerns the impacts of initial states, improved by a streamflow assimilation system, on an ensemble streamflow prediction system over France. An assimilation system was implemented to improve the streamflow analysis of the SAFRAN-ISBA-MODCOU (SIM hydro-meteorological suite, which initializes the ensemble streamflow forecasts at Météo-France. This assimilation system, using the Best Linear Unbiased Estimator (BLUE and modifying the initial soil moisture states, showed an improvement of the streamflow analysis with low soil moisture increments. The final states of this suite were used to initialize the ensemble streamflow forecasts of Météo-France, which are based on the SIM model and use the European Centre for Medium-range Weather Forecasts (ECMWF 10-day Ensemble Prediction System (EPS. Two different configurations of the assimilation system were used in this study: the first with the classical SIM model and the second using improved soil physics in ISBA. The effects of the assimilation system on the ensemble streamflow forecasts were assessed for these two configurations, and a comparison was made with the original (i.e. without data assimilation and without the improved physics ensemble streamflow forecasts. It is shown that the assimilation system improved most of the statistical scores usually computed for the validation of ensemble predictions (RMSE, Brier Skill Score and its decomposition, Ranked Probability Skill Score, False Alarm
Multimodel SuperEnsemble technique for quantitative precipitation forecasts in Piemonte region
D. Cane
2010-02-01
Full Text Available The Multimodel SuperEnsemble technique is a powerful post-processing method for the estimation of weather forecast parameters reducing direct model output errors. It has been applied to real time NWP, TRMM-SSM/I based multi-analysis, Seasonal Climate Forecasts and Hurricane Forecasts. The novelty of this approach lies in the methodology, which differs from ensemble analysis techniques used elsewhere.
Several model outputs are put together with adequate weights to obtain a combined estimation of meteorological parameters. Weights are calculated by least-square minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully on the continuous parameters like temperature, humidity, wind speed and mean sea level pressure, the Multimodel SuperEnsemble gives good results also when applied on the precipitation, a parameter quite difficult to handle with standard post-processing methods. Here we present our methodology for the Multimodel precipitation forecasts, involving a new accurate statistical method for bias correction and a wide spectrum of results over Piemonte very dense non-GTS weather station network.
Ensemble-based Regional Climate Prediction: Political Impacts
Miguel, E.; Dykema, J.; Satyanath, S.; Anderson, J. G.
2008-12-01
Accurate forecasts of regional climate, including temperature and precipitation, have significant implications for human activities, not just economically but socially. Sub Saharan Africa is a region that has displayed an exceptional propensity for devastating civil wars. Recent research in political economy has revealed a strong statistical relationship between year to year fluctuations in precipitation and civil conflict in this region in the 1980s and 1990s. To investigate how climate change may modify the regional risk of civil conflict in the future requires a probabilistic regional forecast that explicitly accounts for the community's uncertainty in the evolution of rainfall under anthropogenic forcing. We approach the regional climate prediction aspect of this question through the application of a recently demonstrated method called generalized scalar prediction (Leroy et al. 2009), which predicts arbitrary scalar quantities of the climate system. This prediction method can predict change in any variable or linear combination of variables of the climate system averaged over a wide range spatial scales, from regional to hemispheric to global. Generalized scalar prediction utilizes an ensemble of model predictions to represent the community's uncertainty range in climate modeling in combination with a timeseries of any type of observational data that exhibits sensitivity to the scalar of interest. It is not necessary to prioritize models in deriving with the final prediction. We present the results of the application of generalized scalar prediction for regional forecasts of temperature and precipitation and Sub Saharan Africa. We utilize the climate predictions along with the established statistical relationship between year-to-year rainfall variability in Sub Saharan Africa to investigate the potential impact of climate change on civil conflict within that region.
Ensemble data assimilation in the Whole Atmosphere Community Climate Model
Pedatella, N. M.; Raeder, K.; Anderson, J. L.; Liu, H.-L.
2014-08-01
We present results pertaining to the assimilation of real lower, middle, and upper atmosphere observations in the Whole Atmosphere Community Climate Model (WACCM) using the Data Assimilation Research Testbed (DART) ensemble adjustment Kalman filter. The ability to assimilate lower atmosphere observations of aircraft and radiosonde temperature and winds, satellite drift winds, and Constellation Observing System for Meteorology, Ionosphere, and Climate refractivity along with middle/upper atmosphere temperature observations from SABER and Aura MLS is demonstrated. The WACCM+DART data assimilation system is shown to be able to reproduce the salient features, and variability, of the troposphere present in the National Centers for Environmental Prediction/National Center for Atmospheric Research Re-Analysis. In the mesosphere, the fit of WACCM+DART to observations is found to be slightly worse when only lower atmosphere observations are assimilated compared to a control experiment that is reflective of the model climatological variability. This differs from previous results which found that assimilation of lower atmosphere observations improves the fit to mesospheric observations. This discrepancy is attributed to the fact that due to the gravity wave drag parameterizations, the model climatology differs significantly from the observations in the mesosphere, and this is not corrected by the assimilation of lower atmosphere observations. The fit of WACCM+DART to mesospheric observations is, however, significantly improved compared to the control experiment when middle/upper atmosphere observations are assimilated. We find that assimilating SABER observations reduces the root-mean-square error and bias of WACCM+DART relative to the independent Aura MLS observations by ˜50%, demonstrating that assimilation of middle/upper atmosphere observations is essential for accurate specification of the mesosphere and lower thermosphere region in WACCM+DART. Last, we demonstrate that
Infinite ensemble of support vector machines for prediction of ...
user
Many researchers have demonstrated the use of artificial neural networks (ANNs) to ..... Following section discusses the effect of infinite ensemble approach ..... major problem with artificial intelligence-based modeling approaches is their ...
An educational model for ensemble streamflow simulation and uncertainty analysis
A. AghaKouchak
2012-06-01
Full Text Available This paper presents a hands-on modeling toolbox, HBV-Ensemble, designed as a complement to theoretical hydrology lectures, to teach hydrological processes and their uncertainties. The HBV-Ensemble can be used for in-class lab practices and homework assignments, and assessment of students' understanding of hydrological processes. Using this model, students can gain more insights into how hydrological processes (e.g., precipitation, snowmelt and snow accumulation, soil moisture, evapotranspiration and runoff generation are interconnected. The model includes a MATLAB Graphical User Interface (GUI and an ensemble simulation scheme that can be used for not only hydrological processes, but also for teaching uncertainty analysis, parameter estimation, ensemble simulation and model sensitivity.
Relation between native ensembles and experimental structures of proteins
Best, R. B.; Lindorff-Larsen, Kresten; DePristo, M. A.
2006-01-01
Data Bank ensembles; moreover, we show that the effects of uncertainties in structure determination are insufficient to explain the results. These results highlight the importance of accounting for native-state protein dynamics in making comparisons with ensemble-averaged experimental data and suggest......Different experimental structures of the same protein or of proteins with high sequence similarity contain many small variations. Here we construct ensembles of "high-sequence similarity Protein Data Bank" (HSP) structures and consider the extent to which such ensembles represent the structural...... heterogeneity of the native state in solution. We find that different NMR measurements probing structure and dynamics of given proteins in solution, including order parameters, scalar couplings, and residual dipolar couplings, are remarkably well reproduced by their respective high-sequence similarity Protein...
Spectroscopic properties of inhomogeneously broadened spin ensembles in a cavity
Kurucz, Zoltan; Wesenberg, Janus; Mølmer, Klaus
2011-01-01
The enhanced collective coupling to weak quantum fields may turn atomic or spin ensembles into an important component in quantum information processing architectures. Inhomogeneous broadening can, however, significantly reduce the coupling and the lifetime of the collective excitation...
Trace formula for an ensemble of bumpy billiards
Pavloff, N
1995-01-01
We study the semiclassical quantization of an ensemble of billiards with a small random shape deformation. We derive a trace formula averaged over shape disorder. The results are illustrated by the study of supershells in rough metal clusters.
Time and ensemble averaging in time series analysis
Latka, Miroslaw; Jernajczyk, Wojciech; West, Bruce J
2010-01-01
In many applications expectation values are calculated by partitioning a single experimental time series into an ensemble of data segments of equal length. Such single trajectory ensemble (STE) is a counterpart to a multiple trajectory ensemble (MTE) used whenever independent measurements or realizations of a stochastic process are available. The equivalence of STE and MTE for stationary systems was postulated by Wang and Uhlenbeck in their classic paper on Brownian motion (Rev. Mod. Phys. 17, 323 (1945)) but surprisingly has not yet been proved. Using the stationary and ergodic paradigm of statistical physics -- the Ornstein-Uhlenbeck (OU) Langevin equation, we revisit Wang and Uhlenbeck's postulate. In particular, we find that the variance of the solution of this equation is different for these two ensembles. While the variance calculated using the MTE quantifies the spreading of independent trajectories originating from the same initial point, the variance for STE measures the spreading of two correlated r...
Phase-selective entrainment of nonlinear oscillator ensembles
Zlotnik, Anatoly; Nagao, Raphael; Kiss, István Z.; Li-Shin, Jr.
2016-03-01
The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.
Ensembles on configuration space classical, quantum, and beyond
Hall, Michael J W
2016-01-01
This book describes a promising approach to problems in the foundations of quantum mechanics, including the measurement problem. The dynamics of ensembles on configuration space is shown here to be a valuable tool for unifying the formalisms of classical and quantum mechanics, for deriving and extending the latter in various ways, and for addressing the quantum measurement problem. A description of physical systems by means of ensembles on configuration space can be introduced at a very fundamental level: the basic building blocks are a configuration space, probabilities, and Hamiltonian equations of motion for the probabilities. The formalism can describe both classical and quantum systems, and their thermodynamics, with the main difference being the choice of ensemble Hamiltonian. Furthermore, there is a natural way of introducing ensemble Hamiltonians that describe the evolution of hybrid systems; i.e., interacting systems that have distinct classical and quantum sectors, allowing for consistent descriptio...
Ensemble vs. time averages in financial time series analysis
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding intervals approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averages approaches will yield new insight into the study of financial markets’ dynamics.
Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.
Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel
2017-06-01
Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.
Puibasset, Joël
2005-04-01
The effect of confinement on phase behavior of simple fluids is still an area of intensive research. In between experiment and theory, molecular simulation is a powerful tool to study the effect of confinement in realistic porous materials, containing some disorder. Previous simulation works aiming at establishing the phase diagram of a confined Lennard-Jones-type fluid, concentrated on simple pore geometries (slits or cylinders). The development of the Gibbs ensemble Monte Carlo technique by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], greatly favored the study of such simple geometries for two reasons. First, the technique is very efficient to calculate the phase diagram, since each run (at a given temperature) converges directly to an equilibrium between a gaslike and a liquidlike phase. Second, due to volume exchange procedure between the two phases, at least one invariant direction of space is required for applicability of this method, which is the case for slits or cylinders. Generally, the introduction of some disorder in such simple pores breaks the initial invariance in one of the space directions and prevents to work in the Gibbs ensemble. The simulation techniques for such disordered systems are numerous (grand canonical Monte Carlo, molecular dynamics, histogram reweighting, N-P-T+test method, Gibbs-Duhem integration procedure, etc.). However, the Gibbs ensemble technique, which gives directly the coexistence between phases, was never generalized to such systems. In this work, we focus on two weakly disordered pores for which a modified Gibbs ensemble Monte Carlo technique can be applied. One of the pores is geometrically undulated, whereas the second is cylindrical but presents a chemical variation which gives rise to a modulation of the wall potential. In the first case almost no change in the phase diagram is observed, whereas in the second strong modifications are reported.
National Weather Service (NWS) Implementation of the Hydrologic Ensemble Forecast Service
Hartman, R. K.; Fresch, M. A.; Wells, E.
2015-12-01
Operational hydrologic forecasters as well as the communities that they serve have long recognized the value of including uncertainty in hydrologic projections. While single value (deterministic) forecasts are easy to understand and link to specific mitigation actions, the potential for using modern risk management strategies is very limited. This is particularly evident at lead times beyond a few days when forecast skill may be low but the value (and costs) of mitigation actions may be quite high. Based on nearly ten years of research and development, the NWS's National Water Center (NWC, formerly the Office of Hydrologic Development) implemented and evaluated the Hydrologic Ensemble Forecast Service (HEFS, see Demargne et al. 2014 Brown et al., 2013, Brown et al., 2014a/b/c). The HEFS provides hydrologic forecasts that reflect the total uncertainty, including that contributed by the meteorological forcing and the hydrologic modeling. The HEFS leverages the skill in weather and climate forecasts to produce ensemble forecasts of precipitation, temperature and streamflow at forecast lead times ranging from one hour to one year. The resulting ensembles represent a rich dataset from which a wide variety of risk-based decision support information can be derived. The NWS River Forecast Centers (RFCs) are starting to incorporate the Hydrologic Ensemble Forecast Service (HEFS,) into their routine operations. In 2012, five (of thirteen) RFCs began running and testing HEFS in an experimental mode. In 2015, HEFS was deployed (including training and software support) to the eight remaining RFCs. Currently, all RFCs are running the HEFS every day in real-time for an increasing number of forecast locations. Eventually, forecasts from the HEFS will be integrated into the warning/hazard services at the NWS Weather Forecast Offices (WFOs). This contribution describes the HEFS framework, the development and deployment strategy, and the operational plans for HEFS going forward.
An automated approach to network features of protein structure ensembles.
Bhattacharyya, Moitrayee; Bhat, Chanda R; Vishveshwara, Saraswathi
2013-10-01
Network theory applied to protein structures provides insights into numerous problems of biological relevance. The explosion in structural data available from PDB and simulations establishes a need to introduce a standalone-efficient program that assembles network concepts/parameters under one hood in an automated manner. Herein, we discuss the development/application of an exhaustive, user-friendly, standalone program package named PSN-Ensemble, which can handle structural ensembles generated through molecular dynamics (MD) simulation/NMR studies or from multiple X-ray structures. The novelty in network construction lies in the explicit consideration of side-chain interactions among amino acids. The program evaluates network parameters dealing with topological organization and long-range allosteric communication. The introduction of a flexible weighing scheme in terms of residue pairwise cross-correlation/interaction energy in PSN-Ensemble brings in dynamical/chemical knowledge into the network representation. Also, the results are mapped on a graphical display of the structure, allowing an easy access of network analysis to a general biological community. The potential of PSN-Ensemble toward examining structural ensemble is exemplified using MD trajectories of an ubiquitin-conjugating enzyme (UbcH5b). Furthermore, insights derived from network parameters evaluated using PSN-Ensemble for single-static structures of active/inactive states of β2-adrenergic receptor and the ternary tRNA complexes of tyrosyl tRNA synthetases (from organisms across kingdoms) are discussed. PSN-Ensemble is freely available from http://vishgraph.mbu.iisc.ernet.in/PSN-Ensemble/psn_index.html.
Clustering Categorical Data:A Cluster Ensemble Approach
He Zengyou(何增友); Xu Xiaofei; Deng Shengchun
2003-01-01
Clustering categorical data, an integral part of data mining,has attracted much attention recently. In this paper, the authors formally define the categorical data clustering problem as an optimization problem from the viewpoint of cluster ensemble, and apply cluster ensemble approach for clustering categorical data. Experimental results on real datasets show that better clustering accuracy can be obtained by comparing with existing categorical data clustering algorithms.
Ensemble control of linear systems with parameter uncertainties
Kou, Kit Ian; Liu, Yang; Zhang, Dandan; Tu, Yanshuai
2016-07-01
In this paper, we study the optimal control problem for a class of four-dimensional linear systems based on quaternionic and Fourier analysis. When the control is unconstrained, the optimal ensemble controller for this linear ensemble control systems is given in terms of prolate spheroidal wave functions. For the constrained convex optimisation problem of such systems, the quadratic programming is presented to obtain the optimal control laws. Simulations are given to verity the effectiveness of the proposed theory.
Pycobra: A Python Toolbox for Ensemble Learning and Visualisation
Guedj, Benjamin; Srinivasa Desikan, Bhargav
2017-01-01
We introduce \\texttt{pycobra}, a Python library devoted to ensemble learning (regression and classification) and visualisation. Its main assets are the implementation of several ensemble learning algorithms, a flexible and generic interface to compare and blend any existing machine learning algorithm available in Python libraries (as long as a \\texttt{predict} method is given), and visualisation tools such as Voronoi tessellations. \\texttt{pycobra} is fully \\texttt{scikit-learn} compatible an...
Ensembles of probability estimation trees for customer churn prediction
2010-01-01
Customer churn prediction is one of the most, important elements tents of a company's Customer Relationship Management, (CRM) strategy In tins study, two strategies are investigated to increase the lift. performance of ensemble classification models, i.e (1) using probability estimation trees (PETs) instead of standard decision trees as base classifiers; and (n) implementing alternative fusion rules based on lift weights lot the combination of ensemble member's outputs Experiments ale conduct...
Evolutionary Ensemble for In Silico Prediction of Ames Test Mutagenicity
Chen, Huanhuan; Yao, Xin
Driven by new regulations and animal welfare, the need to develop in silico models has increased recently as alternative approaches to safety assessment of chemicals without animal testing. This paper describes a novel machine learning ensemble approach to building an in silico model for the prediction of the Ames test mutagenicity, one of a battery of the most commonly used experimental in vitro and in vivo genotoxicity tests for safety evaluation of chemicals. Evolutionary random neural ensemble with negative correlation learning (ERNE) [1] was developed based on neural networks and evolutionary algorithms. ERNE combines the method of bootstrap sampling on training data with the method of random subspace feature selection to ensure diversity in creating individuals within an initial ensemble. Furthermore, while evolving individuals within the ensemble, it makes use of the negative correlation learning, enabling individual NNs to be trained as accurate as possible while still manage to maintain them as diverse as possible. Therefore, the resulting individuals in the final ensemble are capable of cooperating collectively to achieve better generalization of prediction. The empirical experiment suggest that ERNE is an effective ensemble approach for predicting the Ames test mutagenicity of chemicals.
SVM and SVM Ensembles in Breast Cancer Prediction
Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong
2017-01-01
Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807
SVM and SVM Ensembles in Breast Cancer Prediction.
Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong
2017-01-01
Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
Selecting, weeding, and weighting biased climate model ensembles
Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.
2012-12-01
In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues for formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble CAM3.1/slab ocean climate models with different parameter settings to think through the issues that are involved with predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover what fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of your log-likelihood.
Genetic Programming Based Ensemble System for Microarray Data Classification
Kun-Hong Liu
2015-01-01
Full Text Available Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP based new ensemble system (named GPES, which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.
Knowledge based cluster ensemble for cancer discovery from biomolecular data.
Yu, Zhiwen; Wongb, Hau-San; You, Jane; Yang, Qinmin; Liao, Hongying
2011-06-01
The adoption of microarray techniques in biological and medical research provides a new way for cancer diagnosis and treatment. In order to perform successful diagnosis and treatment of cancer, discovering and classifying cancer types correctly is essential. Class discovery is one of the most important tasks in cancer classification using biomolecular data. Most of the existing works adopt single clustering algorithms to perform class discovery from biomolecular data. However, single clustering algorithms have limitations, which include a lack of robustness, stability, and accuracy. In this paper, we propose a new cluster ensemble approach called knowledge based cluster ensemble (KCE) which incorporates the prior knowledge of the data sets into the cluster ensemble framework. Specifically, KCE represents the prior knowledge of a data set in the form of pairwise constraints. Then, the spectral clustering algorithm (SC) is adopted to generate a set of clustering solutions. Next, KCE transforms pairwise constraints into confidence factors for these clustering solutions. After that, a consensus matrix is constructed by considering all the clustering solutions and their corresponding confidence factors. The final clustering result is obtained by partitioning the consensus matrix. Comparison with single clustering algorithms and conventional cluster ensemble approaches, knowledge based cluster ensemble approaches are more robust, stable and accurate. The experiments on cancer data sets show that: 1) KCE works well on these data sets; 2) KCE not only outperforms most of the state-of-the-art single clustering algorithms, but also outperforms most of the state-of-the-art cluster ensemble approaches.
Hybrid Intrusion Detection Using Ensemble of Classification Methods
M.Govindarajan
2014-01-01
Full Text Available One of the major developments in machine learning in the past decade is the ensemble method, which finds highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed for homogeneous ensemble classifiers using bagging and heterogeneous ensemble classifiers using arcing classifier and their performances are analyzed in terms of accuracy. A Classifier ensemble is designed using Radial Basis Function (RBF and Support Vector Machine (SVM as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by the means of real and benchmark data sets of intrusion detection. The main originality of the proposed approach is based on three main parts: preprocessing phase, classification phase and combining phase. A wide range of comparative experiments are conducted for real and benchmark data sets of intrusion detection. The accuracy of base classifiers is compared with homogeneous and heterogeneous models for data mining problem. The proposed ensemble methods provide significant improvement of accuracy compared to individual classifiers and also heterogeneous models exhibit better results than homogeneous models for real and benchmark data sets of intrusion detection.
Three-model ensemble wind prediction in southern Italy
Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo
2016-03-01
Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
Ensembles of a small number of conformations with relative populations
Vammi, Vijay, E-mail: vsvammi@iastate.edu; Song, Guang, E-mail: gsong@iastate.edu [Iowa State University, Bioinformatics and Computational Biology Program, Department of Computer Science (United States)
2015-12-15
In our previous work, we proposed a new way to represent protein native states, using ensembles of a small number of conformations with relative Populations, or ESP in short. Using Ubiquitin as an example, we showed that using a small number of conformations could greatly reduce the potential of overfitting and assigning relative populations to protein ensembles could significantly improve their quality. To demonstrate that ESP indeed is an excellent alternative to represent protein native states, in this work we compare the quality of two ESP ensembles of Ubiquitin with several well-known regular ensembles or average structure representations. Extensive amount of significant experimental data are employed to achieve a thorough assessment. Our results demonstrate that ESP ensembles, though much smaller in size comparing to regular ensembles, perform equally or even better sometimes in all four different types of experimental data used in the assessment, namely, the residual dipolar couplings, residual chemical shift anisotropy, hydrogen exchange rates, and solution scattering profiles. This work further underlines the significance of having relative populations in describing the native states.
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.
Kelly, David; Majda, Andrew J; Tong, Xin T
2015-08-25
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.
Flood Forecasting Based on TIGGE Precipitation Ensemble Forecast
Jinyin Ye
2016-01-01
Full Text Available TIGGE (THORPEX International Grand Global Ensemble was a major part of the THORPEX (Observing System Research and Predictability Experiment. It integrates ensemble precipitation products from all the major forecast centers in the world and provides systematic evaluation on the multimodel ensemble prediction system. Development of meteorologic-hydrologic coupled flood forecasting model and early warning model based on the TIGGE precipitation ensemble forecast can provide flood probability forecast, extend the lead time of the flood forecast, and gain more time for decision-makers to make the right decision. In this study, precipitation ensemble forecast products from ECMWF, NCEP, and CMA are used to drive distributed hydrologic model TOPX. We focus on Yi River catchment and aim to build a flood forecast and early warning system. The results show that the meteorologic-hydrologic coupled model can satisfactorily predict the flow-process of four flood events. The predicted occurrence time of peak discharges is close to the observations. However, the magnitude of the peak discharges is significantly different due to various performances of the ensemble prediction systems. The coupled forecasting model can accurately predict occurrence of the peak time and the corresponding risk probability of peak discharge based on the probability distribution of peak time and flood warning, which can provide users a strong theoretical foundation and valuable information as a promising new approach.
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence
Kelly, David; Majda, Andrew J.; Tong, Xin T.
2015-01-01
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms
Rapakoulia, Trisevgeni
2014-04-26
Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs to neutral and disease associated. However, existing computational approaches fail to select relevant features by choosing them arbitrarily without sufficient documentation. Moreover, they are limited to the problem ofmissing values, imbalance between the learning datasets and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a twostep algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close to optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting issues, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing values problem without loss of information. Confidence scores support all the predictions and the model becomes tunable by modifying the classification thresholds. An extensive study was performed for collecting the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance in the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features such as the solvent accessibility feature, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.
E. Crestani
2013-04-01
Full Text Available Estimating the spatial variability of hydraulic conductivity K in natural aquifers is important for predicting the transport of dissolved compounds. Especially in the nonreactive case, the plume evolution is mainly controlled by the heterogeneity of K. At the local scale, the spatial distribution of K can be inferred by combining the Lagrangian formulation of the transport with a Kalman-filter-based technique and assimilating a sequence of time-lapse concentration C measurements, which, for example, can be evaluated on site through the application of a geophysical method. The objective of this work is to compare the ensemble Kalman filter (EnKF and the ensemble smoother (ES capabilities to retrieve the hydraulic conductivity spatial distribution in a groundwater flow and transport modeling framework. The application refers to a two-dimensional synthetic aquifer in which a tracer test is simulated. Moreover, since Kalman-filter-based methods are optimal only if each of the involved variables fit to a Gaussian probability density function (pdf and since this condition may not be met by some of the flow and transport state variables, issues related to the non-Gaussianity of the variables are analyzed and different transformation of the pdfs are considered in order to evaluate their influence on the performance of the methods. The results show that the EnKF reproduces with good accuracy the hydraulic conductivity field, outperforming the ES regardless of the pdf of the concentrations.
On evaluation of ensemble precipitation forecasts with observation-based ensembles
S. Jaun
2007-04-01
Full Text Available Spatial interpolation of precipitation data is uncertain. How important is this uncertainty and how can it be considered in evaluation of high-resolution probabilistic precipitation forecasts? These questions are discussed by experimental evaluation of the COSMO consortium's limited-area ensemble prediction system COSMO-LEPS. The applied performance measure is the often used Brier skill score (BSS. The observational references in the evaluation are (a analyzed rain gauge data by ordinary Kriging and (b ensembles of interpolated rain gauge data by stochastic simulation. This permits the consideration of either a deterministic reference (the event is observed or not with 100% certainty or a probabilistic reference that makes allowance for uncertainties in spatial averaging. The evaluation experiments show that the evaluation uncertainties are substantial even for the large area (41 300 km2 of Switzerland with a mean rain gauge distance as good as 7 km: the one- to three-day precipitation forecasts have skill decreasing with forecast lead time but the one- and two-day forecast performances differ not significantly.
Vanuytrecht, E.; Raes, D.; Willems, P.; Semenov, M.
2012-04-01
Global Circulation Models (GCMs) are sophisticated tools to study the future evolution of the climate. Yet, the coarse scale of GCMs of hundreds of kilometers raises questions about the suitability for agricultural impact assessments. These assessments are often made at field level and require consideration of interactions at sub-GCM grid scale (e.g., elevation-dependent climatic changes). Regional climate models (RCMs) were developed to provide climate projections at a spatial scale of 25-50 km for limited regions, e.g. Europe (Giorgi and Mearns, 1991). Climate projections from GCMs or RCMs are available as multi-model ensembles. These ensembles are based on large data sets of simulations produced by modelling groups worldwide, who performed a set of coordinated climate experiments in which climate models were run for a common set of experiments and various emissions scenarios (Knutti et al., 2010). The use of multi-model ensembles in climate change studies is an important step in quantifying uncertainty in impact predictions, which will underpin more informed decisions for adaptation and mitigation to changing climate (Semenov and Stratonovitch, 2010). The objective of our study was to evaluate the effect of the spatial scale of climate projections on climate change impacts for cereals in Belgium. Climate scenarios were based on two multi-model ensembles, one comprising 15 GCMs of the Coupled Model Intercomparison Project phase 3 (CMIP3; Meehl et al., 2007) with spatial resolution of 200-300 km, the other comprising 9 RCMs of the EU-ENSEMBLES project (van der Linden and Mitchell, 2009) with spatial resolution of 25 km. To be useful for agricultural impact assessments, the projections of GCMs and RCMs were downscaled to the field level. Long series (240 cropping seasons) of local-scale climate scenarios were generated by the LARS-WG weather generator (Semenov et al., 2010) via statistical inference. Crop growth and development were simulated with the Aqua
G. Thirel
2010-04-01
Full Text Available The use of ensemble streamflow forecasts is developing in the international flood forecasting services. Such systems can provide more accurate forecasts and useful information about the uncertainty of the forecasts, thus improving the assessment of risks. Nevertheless, these systems, like all hydrological forecasts, suffer from errors on initialization or on meteorological data, which lead to hydrological prediction errors. This article, which is the second part of a 2-part article, concerns the impacts of initial states, improved by a streamflow assimilation system, on an ensemble streamflow prediction system over France. An assimilation system was implemented to improve the streamflow analysis of the SAFRAN-ISBA-MODCOU (SIM hydro-meteorological suite, which initializes the ensemble streamflow forecasts at Météo-France. This assimilation system, using the Best Linear Unbiased Estimator (BLUE and modifying the initial soil moisture states, showed an improvement of the streamflow analysis with low soil moisture increments. The final states of this suite were used to initialize the ensemble streamflow forecasts of Météo-France, which are based on the SIM model and use the European Centre for Medium-range Weather Forecasts (ECMWF 10-day Ensemble Prediction System (EPS. Two different configurations of the assimilation system were used in this study: the first with the classical SIM model and the second using improved soil physics in ISBA. The effects of the assimilation system on the ensemble streamflow forecasts were assessed for these two configurations, and a comparison was made with the original (i.e. without data assimilation and without the improved physics ensemble streamflow forecasts. It is shown that the assimilation system improved most of the statistical scores usually computed for the validation of ensemble predictions (RMSE, Brier Skill Score and its decomposition, Ranked Probability Skill Score, False Alarm Rate, etc., especially
Nogawa, Tomoaki
2011-12-05
The evaporation-condensation transition of the Potts model on a square lattice is numerically investigated by the Wang-Landau sampling method. An intrinsically system-size-dependent discrete transition between supersaturation state and phase-separation state is observed in the microcanonical ensemble by changing constrained internal energy. We calculate the microcanonical temperature, as a derivative of microcanonical entropy, and condensation ratio, and perform a finite-size scaling of them to indicate the clear tendency of numerical data to converge to the infinite-size limit predicted by phenomenological theory for the isotherm lattice gas model. © 2011 American Physical Society.
Observation of the fcc-to-hcp transition in ensembles of argon nanoclusters.
Krainyukova, N V; Boltnev, R E; Bernard, E P; Khmelenko, V V; Lee, D M; Kiryukhin, V
2012-12-14
Macroscopic ensembles of weakly interacting argon nanoclusters are studied using x-ray diffraction in low vacuum. As the clusters grow by fusion with increasing temperature, their structure transforms from essentially face-centered cubic (fcc) to hexagonal close packed as the cluster size approaches ~10(5) atoms. The transformation involves intermediate orthorhombic phases. These data confirm extant theoretical predictions. They also indicate that growth kinetics and spatial constraints might play an important role in the formation of the fcc structure of bulk rare-gas solids, which still remains puzzling.
Consecutive Charging of a Molecule-on-Insulator Ensemble Using Single Electron Tunnelling Methods.
Rahe, Philipp; Steele, Ryan P; Williams, Clayton C
2016-02-10
We present the local charge state modification at room temperature of small insulator-supported molecular ensembles formed by 1,1'-ferrocenedicarboxylic acid on calcite. Single electron tunnelling between the conducting tip of a noncontact atomic force microscope (NC-AFM) and the molecular islands is observed. By joining NC-AFM with Kelvin probe force microscopy, successive charge build-up in the sample is observed from consecutive experiments. Charge transfer within the islands and structural relaxation of the adsorbate/surface system is suggested by the experimental data.
Nogawa, Tomoaki; Ito, Nobuyasu; Watanabe, Hiroshi
2011-12-01
The evaporation-condensation transition of the Potts model on a square lattice is numerically investigated by the Wang-Landau sampling method. An intrinsically system-size-dependent discrete transition between supersaturation state and phase-separation state is observed in the microcanonical ensemble by changing constrained internal energy. We calculate the microcanonical temperature, as a derivative of microcanonical entropy, and condensation ratio, and perform a finite-size scaling of them to indicate the clear tendency of numerical data to converge to the infinite-size limit predicted by phenomenological theory for the isotherm lattice gas model.
Walcott, Sam
2013-03-01
Interactions between the proteins actin and myosin drive muscle contraction. Properties of a single myosin interacting with an actin filament are largely known, but a trillion myosins work together in muscle. We are interested in how single-molecule properties relate to ensemble function. Myosin's reaction rates depend on force, so ensemble models keep track of both molecular state and force on each molecule. These models make subtle predictions, e.g. that myosin, when part of an ensemble, moves actin faster than when isolated. This acceleration arises because forces between molecules speed reaction kinetics. Experiments support this prediction and allow parameter estimates. A model based on this analysis describes experiments from single molecule to ensemble. In vivo, actin is regulated by proteins that, when present, cause the binding of one myosin to speed the binding of its neighbors; binding becomes cooperative. Although such interactions preclude the mean field approximation, a set of linear ODEs describes these ensembles under simplified experimental conditions. In these experiments cooperativity is strong, with the binding of one molecule affecting ten neighbors on either side. We progress toward a description of myosin ensembles under physiological conditions.
Olsen, Seth
2015-01-01
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed ("microcanonical") SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with "more diabatic than adiabatic" states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse "temperature," unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space valence
Probability-weighted ensembles of U.S. county-level climate projections for climate risk analysis
Rasmussen, D J; Kopp, Robert E
2015-01-01
Quantitative assessment of climate change risk requires a method for constructing probabilistic time series of changes in physical climate parameters. Here, we develop two such methods, Surrogate/Model Mixed Ensemble (SMME) and Monte Carlo Pattern/Residual (MCPR), and apply them to construct joint probability density functions (PDFs) of temperature and precipitation change over the 21st century for every county in the United States. Both methods produce $likely$ (67% probability) temperature and precipitation projections consistent with the Intergovernmental Panel on Climate Change's interpretation of an equal-weighted Coupled Model Intercomparison Project 5 (CMIP5) ensemble, but also provide full PDFs that include tail estimates. For example, both methods indicate that, under representative concentration pathway (RCP) 8.5, there is a 5% chance that the contiguous United States could warm by at least 8$^\\circ$C. Variance decomposition of SMME and MCPR projections indicate that background variability dominates...
Addor, Nans; Clark, Martyn P.; Mizukami, Naoki
2017-04-01
Climate change impacts on hydrological processes are typically assessed using small ensembles of hydrological models. That is, a handful of hydrological models are typically driven by a larger number of climate models. Such a setup has several limitations. Because the number of hydrological models is small, only a small proportion of the model space is sampled, likely leading to an underestimation of the uncertainties in the projections. Further, sampling is arbitrary: although hydrological models should be selected to provide a representative sample of existing models (in terms of complexity and governing hypotheses), they are instead usually selected based on legacy reasons. Furthermore, running several hydrological models currently constitutes a practical challenge because each model must be setup and calibrated individually. Finally, and probably most importantly, the differences between the projected impacts cannot be directly related to differences between hydrological models, because the models are different in almost every possible aspect. We are hence in a situation in which different hydrological models deliver different projections, but for reasons that are mostly unclear, and in which the uncertainty in the projections is probably underestimated. To overcome these limitations, we are experimenting with the flexible modeling framework FUSE (Framework for Understanding Model Errors). FUSE enables to construct conceptual models piece by piece (in a "pick and mix" approach), so it can be used to generate a large number of models that mimic existing models and/or models that differ from other models in single targeted respect (e.g. how baseflow is generated). FUSE hence allows for controlled modeling experiments, and for a more systematic and exhaustive sampling of the model space. Here we explore climate change impacts over the contiguous USA on a 12km grid using two groups of three models: the first group involves the commonly used models VIC, PRMS and HEC
Quantifying Monte Carlo uncertainty in ensemble Kalman filter
Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar
2009-01-15
This report is presenting results obtained during Kristian Thulin PhD study, and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while being a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated fram the updated ensemble, but because of the low rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with smaller ensemble size to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members ( m x n) constant. The optimal combination of n and m is found through minimizing the integrated mean square error (MSE) for the CDFs and we choose to define an EnKF run with 10.000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should be applicable also to larger, more realistic models. (author). 12 refs., figs.,tabs
Bayesian Processor of Ensemble for Precipitation Forecasting: A Development Plan
Toth, Z.; Krzysztofowicz, R.
2006-05-01
The Bayesian Processor of Ensemble (BPE) is a new, theoretically-based technique for probabilistic forecasting of weather variates. It is a generalization of the Bayesian Processor of Output (BPO) developed by Krzysztofowicz and Maranzano for processing single values of multiple predictors into a posterior distribution function of a predictand. The BPE processes an ensemble of a predictand generated by multiple integrations of a numerical weather prediction (NWP) model, and optimally fuses the ensemble with climatic data in order to quantify uncertainty about the predictand. As is well known, Bayes theorem provides the optimal theoretical framework for fusing information from different sources and for obtaining the posterior distribution function of a predictand. Using a family of such distribution functions, a given raw ensemble can be mapped into a posterior ensemble, which is well calibrated, has maximum informativeness, and preserves the spatio-temporal and cross-variate dependence structure of the NWP output fields. The challenge is to develop and test the BPE suitable for operational forecasting. This talk will present the basic design components of the BPE, along with a discussion of the climatic and training data to be used in its potential application at the National Centers for Environmental Prediction (NCEP). The technique will be tested first on quasi-normally distributed variates and next on precipitation variates. For reasons of economy, the BPE will be applied on the relatively coarse resolution grid corresponding to the ensemble output, and then the posterior ensemble will be downscaled to finer grids such as that of the National Digital Forecast Database (NDFD).
Hwang, S.; Chang, S. J.; Graham, W. D.
2014-12-01
The ultimate goal of this study is to assess future water vulnerability over Florida, based on the change in precipitation and evapotranspiration estimated using the most advanced Global Climate Model (GCM) ensemble. We evaluated the skills of CMIP5 (Climate Model Inter-comparison project, phase 5) climate models in reproducing retrospective climatology over the state of Florida for the key climate variables important from the hydrological and agricultural perspectives (i.e., precipitation (Precp), maximum and minimum temperature (Tmax and Tmin), and wind speed (Ws)). The biases of raw CMIP5 were estimated using two different grid-based observational datasets as references. Based on the accuracy of various predictors such as mean climatology, temporal variability, extreme frequency, etc., the GCMs were ranked for each of the different reference datasets, climate variables, and predictors. The variation of the ranks was examined and rank-based GCM weights were assigned. The weights were then used to develop future ensembles (for 4 different RCP gas-emission scenarios) for the annual cycle of monthly mean and variance of precipitation and reference evapotranspiration (ETo). Finally the differences between the retrospective and future ensembles were investigated to assess future climate change impacts on water vulnerability using simple indices (e.g., ETo/Precp., drought index, and Standardized Precp. index). The uncertainties of the assessment were quantified by the spread range of ensembles and a reliability factor for the GCMs estimated using a measure of model biases and convergence criterion.
Zhang, Xianliang; Xiong, Zhe; Zhang, Xuezhen; Shi, Ying; Liu, Jiyuan; Shao, Quanqin; Yan, Xiaodong
2017-07-01
Human activities have caused substantial land use/cover change (LUCC) in China, especially in northeast China, the Loess Plateau and southern China. Three high-resolution regional climate models were used to simulate the impacts of LUCC on climate through one control experiment and three land use change experiments from 1980 to 2000. The results showed that multi-regional climate model ensemble simulations (the arithmetic ensemble mean (AEM) and Bayesian model averaging (BMA)) provide more accurate results than a single model in over 70% grid cells of study regions. Uncertainty was reduced when using the two ensemble methods. The results of the AEM and BMA ensembles showed that the temperatures decreased by 0.2-0.4 °C in northeast China, the Yangtze river valley and the north of the Loess Plateau, and by 0.6-1.0 °C in the south of the Loess Plateau in spring, autumn and winter. The AEM precipitations changed by - 40-40 mm in in spring and winter, and by - 100-100 mm in summer and autumn, while the BMA precipitations changed by - 20-20 mm in spring, autumn and winter, and by - 50-50 mm in summer. The seasonal precipitation decreased in northeast China and the Yangtze river valley, and increased in the Loess Plateau in most grid cells of study regions. Winter and spring precipitation decreased more in the Yangtze river valley and the Loess Plateau than in northeast China.
Keppenne, Christian; Vernieres, Guillaume; Rienecker, Michele; Jacob, Jossy; Kovach, Robin
2011-01-01
Satellite altimetry measurements have provided global, evenly distributed observations of the ocean surface since 1993. However, the difficulties introduced by the presence of model biases and the requirement that data assimilation systems extrapolate the sea surface height (SSH) information to the subsurface in order to estimate the temperature, salinity and currents make it difficult to optimally exploit these measurements. This talk investigates the potential of the altimetry data assimilation once the biases are accounted for with an ad hoc bias estimation scheme. Either steady-state or state-dependent multivariate background-error covariances from an ensemble of model integrations are used to address the problem of extrapolating the information to the sub-surface. The GMAO ocean data assimilation system applied to an ensemble of coupled model instances using the GEOS-5 AGCM coupled to MOM4 is used in the investigation. To model the background error covariances, the system relies on a hybrid ensemble approach in which a small number of dynamically evolved model trajectories is augmented on the one hand with past instances of the state vector along each trajectory and, on the other, with a steady state ensemble of error estimates from a time series of short-term model forecasts. A state-dependent adaptive error-covariance localization and inflation algorithm controls how the SSH information is extrapolated to the sub-surface. A two-step predictor corrector approach is used to assimilate future information. Independent (not-assimilated) temperature and salinity observations from Argo floats are used to validate the assimilation. A two-step projection method in which the system first calculates a SSH increment and then projects this increment vertically onto the temperature, salt and current fields is found to be most effective in reconstructing the sub-surface information. The performance of the system in reconstructing the sub-surface fields is particularly
E. Crestani
2012-11-01
Full Text Available The significance of estimating the spatial variability of the hydraulic conductivity K in natural aquifers is relevant to the possibility of defining the space and time evolution of a non-reactive plume, since the transport of a solute is mainly controlled by the heterogeneity of K. At the local scale, the spatial distribution of K can be inferred by combining the Lagrangian formulation of the transport with a Kalman filter-based technique and assimilating a sequence of time-lapse concentration C measurements, which, for example, can be evaluated on-site through the application of a geophysical method. The objective of this work is to compare the ensemble Kalman filter (EnKF and the ensemble smoother (ES capabilities to retrieve the hydraulic conductivity spatial distribution in a groundwater flow and transport modeling framework. The application refers to a two-dimensional synthetic aquifer in which a tracer test is simulated. Moreover, since Kalman filter-based methods are optimal only if each of the involved variables fit to a Gaussian probability density function (pdf and since this condition may not be met by some of the flow and transport state variables, issues related to the non-Gaussianity of the variables are analyzed and different transformation of the pdfs are considered in order to evaluate their influence on the performance of the methods. The results show that the EnKF reproduces with good accuracy the hydraulic conductivity field, outperforming the ES regardless of the pdf of the concentrations.
Seamless Hourly Rainfall Ensemble Forecasts for 0 - 10 days
Cooper, Shaun; Seed, Alan
2014-05-01
The Australian Bureau of Meteorology uses a number of Numerical Weather Prediction (NWP) models to generate deterministic rainfall forecasts over a range of lead-times, each with a different resolution in space and time and with different forecast domains. High resolution regional NWP models are used to generate forecasts for the first three days, and are typically more accurate than lower resolution Global NWP models that produce forecasts for longer lead times. Consequently, there is a requirement for a seamless forecast system that is able to blend the various NWP forecasts into a single forecast with a uniform resolution over the entire forecast period. NWP rainfall forecasts contain errors at scales that are significant for even large river basins, and ensemble hydrological prediction systems require ensembles of the order of 100 members, which is well beyond the size that can be generated by NWP ensemble systems. The idea, therefore, is to blend the NWP models in such a way that recognises the skill of the NWP at a particular scale and lead time and to use a stochastic model of forecast errors to perturb the blended deterministic forecast to generate a large ensemble. NWP uncertainties are scale and forecast lead time dependent, especially at long forecast lead times, and are characteristic to each model. By blending the models scale by scale it is possible to recognise the increased skill of the models at larger spatial scales and shorter lead times. The stochastic model is applied at each scale, adding increasingly more variability at smaller spatial scales, while preserving the space-time structure of rain. This process allows an ensemble to be generated by blending deterministic forecasts. Two NWP models from the Bureau, ACCESS-G (Global) (~40 km by ~40 km, 3 hourly out to 10 days) and ACCESS-R (Regional) (~12 km by ~12 km, 1 hourly out to 3 days), are downscaled and blended with the stochastic model to produce an ensemble of hourly forecasts out to 10
Hierarchical Bayes Ensemble Kalman Filter for geophysical data assimilation
Tsyrulnikov, Michael; Rakitko, Alexander
2016-04-01
In the Ensemble Kalman Filter (EnKF), the forecast error covariance matrix B is estimated from a sample (ensemble), which inevitably implies a degree of uncertainty. This uncertainty is especially large in high dimensions, where the affordable ensemble size is orders of magnitude less than the dimensionality of the system. Common remedies include ad-hoc devices like variance inflation and covariance localization. The goal of this study is to optimize the account for the inherent uncertainty of the B matrix in EnKF. Following the idea by Myrseth and Omre (2010), we explicitly admit that the B matrix is unknown and random and estimate it along with the state (x) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components P and Q of the B matrix into the extended control vector (x,P,Q). Similarly, we break the traditional forecast ensemble into the predictability-error related ensemble and model-error related ensemble. The reason for the separation of model errors from predictability errors is the fundamental difference between the two sources of error. Model error are external (i.e. do not depend on the filter's performance) whereas predictability errors are internal to a filter (i.e. are determined by the filter's behavior). At the analysis step, we specify Inverse Wishart based priors for the random matrices P and Q and conditionally Gaussian prior for the state x. Then, we update the prior distribution of (x,P,Q) using both observation and ensemble data, so that ensemble members are used as generalized observations and ordinary observations are allowed to influence the covariances. We show that for linear dynamics and linear observation operators, conditional Gaussianity of the state is preserved in the course of filtering. At the forecast
Application of evolutionary computation on ensemble forecast of quantitative precipitation
Dufek, Amanda S.; Augusto, Douglas A.; Dias, Pedro L. S.; Barbosa, Helio J. C.
2017-09-01
An evolutionary computation algorithm known as genetic programming (GP) has been explored as an alternative tool for improving the ensemble forecast of 24-h accumulated precipitation. Three GP versions and six ensembles' languages were applied to several real-world datasets over southern, southeastern and central Brazil during the rainy period from October to February of 2008-2013. According to the results, the GP algorithms performed better than two traditional statistical techniques, with errors 27-57% lower than simple ensemble mean and the MASTER super model ensemble system. In addition, the results revealed that GP algorithms outperformed the best individual forecasts, reaching an improvement of 34-42%. On the other hand, the GP algorithms had a similar performance with respect to each other and to the Bayesian model averaging, but the former are far more versatile techniques. Although the results for the six ensembles' languages are almost indistinguishable, our most complex linear language turned out to be the best overall proposal. Moreover, some meteorological attributes, including the weather patterns over Brazil, seem to play an important role in the prediction of daily rainfall amount.
An adaptive additive inflation scheme for Ensemble Kalman Filters
Sommer, Matthias; Janjic, Tijana
2016-04-01
Data assimilation for atmospheric dynamics requires an accurate estimate for the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components, firstly the uncertainty which originates in the the initial condition of that forecast itself and secondly the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad-hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent. The additive schemes rely on samples for the model error e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However since these methods work in ensemble space (i.e. act directly on the ensemble perturbations) the sampling error is fixed and can be expected to affect the skill substiantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
Ensemble Forecasting of Volcanic Emissions in Hawai’i
Andre Kristofer Pattantyus
2015-03-01
Full Text Available Deterministic model forecasts do not convey to the end users the forecast uncertainty the models possess as a result of physics parameterizations, simplifications in model representation of physical processes, and errors in initial conditions. This lack of understanding leads to a level of uncertainty in the forecasted value when only a single deterministic model forecast is available. Increasing computational power and parallel software architecture allows multiple simulations to be carried out simultaneously that yield useful measures of model uncertainty that can be derived from ensemble model results. The Hybrid Single Particle Lagrangian Integration Trajectory and Dispersion model has the ability to generate ensemble forecasts. A meteorological ensemble was formed to create probabilistic forecast products and an ensemble mean forecast for volcanic emissions from the Kilauea volcano that impacts the state of Hawai’i. The probabilistic forecast products show uncertainty in pollutant concentrations that are especially useful for decision-making regarding public health. Initial comparison of the ensemble mean forecasts with observations and a single model forecast show improvements in event timing for both sulfur dioxide and sulfate aerosol forecasts.
Ensemble Forecasting of Major Solar Flares -- First Results
Pulkkinen, A. A.; Guerra, J. A.; Uritsky, V. M.
2015-12-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte-Carlo-type algorithm that applies a decision threshold PthP_{th} to the combined probabilities and maximizing the Heidke Skill Score (HSS). Using the data for 13 recent solar active regions between years 2012 - 2014, we found that linear combination methods can improve the overall probabilistic prediction and improve the categorical prediction for certain values of decision thresholds. Combination weights vary with the applied threshold and none of the tested individual forecasting models seem to provide more accurate predictions than the others for all values of PthP_{th}. According to the maximum values of HSS, a performance-based weights calculated by averaging over the sample, performed similarly to a equally weighted model. The values PthP_{th} for which the ensemble forecast performs the best are 25 % for M-class flares and 15 % for X-class flares. When the human-adjusted probabilities from NOAA are excluded from the ensemble, the ensemble performance in terms of the Heidke score, is reduced.
Regularized negative correlation learning for neural network ensembles.
Chen, Huanhuan; Yao, Xin
2009-12-01
Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term to the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when lambda = 1) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains the reason why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter lambda in NCL by cross validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, and each sub-objective is implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and provide an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. The experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level is nontrivial in the data set.
Universal critical wrapping probabilities in the canonical ensemble
Hao Hu
2015-09-01
Full Text Available Universal dimensionless quantities, such as Binder ratios and wrapping probabilities, play an important role in the study of critical phenomena. We study the finite-size scaling behavior of the wrapping probability for the Potts model in the random-cluster representation, under the constraint that the total number of occupied bonds is fixed, so that the canonical ensemble applies. We derive that, in the limit L→∞, the critical values of the wrapping probability are different from those of the unconstrained model, i.e. the model in the grand-canonical ensemble, but still universal, for systems with 2yt−d>0 where yt=1/ν is the thermal renormalization exponent and d is the spatial dimension. Similar modifications apply to other dimensionless quantities, such as Binder ratios. For systems with 2yt−d≤0, these quantities share same critical universal values in the two ensembles. It is also derived that new finite-size corrections are induced. These findings apply more generally to systems in the canonical ensemble, e.g. the dilute Potts model with a fixed total number of vacancies. Finally, we formulate an efficient cluster-type algorithm for the canonical ensemble, and confirm these predictions by extensive simulations.
Probabilistic Determination of Native State Ensembles of Proteins.
Olsson, Simon; Vögeli, Beat Rolf; Cavalli, Andrea; Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten; Hamelryck, Thomas
2014-08-12
The motions of biological macromolecules are tightly coupled to their functions. However, while the study of fast motions has become increasingly feasible in recent years, the study of slower, biologically important motions remains difficult. Here, we present a method to construct native state ensembles of proteins by the combination of physical force fields and experimental data through modern statistical methodology. As an example, we use NMR residual dipolar couplings to determine a native state ensemble of the extensively studied third immunoglobulin binding domain of protein G (GB3). The ensemble accurately describes both local and nonlocal backbone fluctuations as judged by its reproduction of complementary experimental data. While it is difficult to assess precise time-scales of the observed motions, our results suggest that it is possible to construct realistic conformational ensembles of biomolecules very efficiently. The approach may allow for a dramatic reduction in the computational as well as experimental resources needed to obtain accurate conformational ensembles of biological macromolecules in a statistically sound manner.
Olsen, Seth, E-mail: seth.olsen@uq.edu.au [School of Mathematics and Physics, The University of Queensland, Brisbane QLD 4072 (Australia)
2015-01-28
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space
Trajectory study of dissociation reactions. The single-ensemble method. II
Kutz, H. Douglas; Burns, George
1981-04-01
The single uniform ensemble method was previously employed in 3D classical trajectory calculations [H. D. Kutz and G. Burns, J. Chem. Phys. 72, 3652 (1980)]. Presently it is applied to the Br2+Ar system to study nonequilbrium effects in diatom dissociation over a wide temperature range. It was found that, for a given large set of trajectories, observables, such as reaction cross sections or rate constants, are indepedent within four significant figures of the initial distribution function. This indicates a high degree of reliability of the single uniform ensemble method, once the choice of a set of trajectories is made. In order to study dissociation from the low lying energy states, the uniform velocity selection method in trajectory calculations was used. It was found that dissociation from these states contributes but little to the overall dissociation reaction. The latter finding is consistent with the attractive nature of the potential energy surface used, and constitutes an argument against those current theories of diatom dissociation reaction which explains experimental data by postulating a high probability of dissociation from low lying energy states of diatoms. It was found that the contribution from the low lying states to dissociation can be estimated with good accuracy using information theory expressions. Temperature dependence of nonequilibrium effects was investigated between 1 500 and 6 000 °K. In this range the nonequilibrium correction factor varies between 0.2 and 0.5. Angular momentum dependence of such observables as reaction rate constant and reaction cross section was investigated.
The effects of land surface process perturbations in a global ensemble forecast system
Deng, Guo; Zhu, Yuejian; Gong, Jiandong; Chen, Dehui; Wobus, Richard; Zhang, Zhe
2016-10-01
Atmospheric variability is driven not only by internal dynamics, but also by external forcing, such as soil states, SST, snow, sea-ice cover, and so on. To investigate the forecast uncertainties and effects of land surface processes on numerical weather prediction, we added modules to perturb soil moisture and soil temperature into NCEP's Global Ensemble Forecast System (GEFS), and compared the results of a set of experiments involving different configurations of land surface and atmospheric perturbation. It was found that uncertainties in different soil layers varied due to the multiple timescales of interactions between land surface and atmospheric processes. Perturbations of the soil moisture and soil temperature at the land surface changed sensible and latent heat flux obviously, as compared to the less or indirect land surface perturbation experiment from the day-to-day forecasts. Soil state perturbations led to greater variation in surface heat fluxes that transferred to the upper troposphere, thus reflecting interactions and the response to atmospheric external forcing. Various verification scores were calculated in this study. The results indicated that taking the uncertainties of land surface processes into account in GEFS could contribute a slight improvement in forecast skill in terms of resolution and reliability, a noticeable reduction in forecast error, as well as an increase in ensemble spread in an under-dispersive system. This paper provides a preliminary evaluation of the effects of land surface processes on predictability. Further research using more complex and suitable methods is needed to fully explore our understanding in this area.
Sailesh Ranjitkar
2014-08-01
Full Text Available The tree rhododendrons include the most widely distributed Himalayan Rhododendron species belonging to the subsection Arborea. Distributions of two members of this sub-species were modelled using bioclimatic data for current conditions (1950–2000. A subset of the least correlated bioclimatic variables was used for ecological niche modelling (ENM. We used an ENM ensemble method in the BiodiversityR R-package to map the suitable climatic space for tree rhododendrons based on 217 point location records. Ensemble bioclimatic models for tree rhododendrons had high predictive power with bioclimatic variables, which also separated the climatic spaces for the two species. Tree rhododendrons were found occurring in a wide range of climate and the distributional limits were associated with isothermality, temperature ranges, temperature of the wettest quarter, and precipitation of the warmest quarter of the year. The most suitable climatic space for tree rhododendrons was predicted to be in western Yunnan, China, with suitability declining towards the west and east. Its occurrence in a wide range of climatic settings with highly dissected habitats speaks to the adaptive capacity of the species, which might open up future options for their conservation planning in regions where they are listed as threatened.
Simulating European heatwaves with WRF - a multi-physics ensemble approach
Stegehuis, Annemiek; Vautard, Robert; Ciais, Philippe; Teuling, Ryan
2014-05-01
There is a need to simulate mega heatwaves as impacts are large and they are expected to become more frequent in the future. Current climate models are calibrated on the current climate without such impacting events. Studies with model ensembles have been done, but less with physics ensembles. Here we investigate what physics are suitable to simulate the heatwaves of 2003 (Europe) and 2010 (Russia) with WRF, a regional climate model. We run the model over 200 times with different combinations of physics. We find that only few combinations can simulate the observed temperatures during the heatwaves, but also during a normal summer. Monthly precipitation is mostly overestimated, while the observations of monthly global European radiation lay on average in the middle of the model simulations. Most of the variation between simulations is due to the convection scheme. We rank all runs based on observed temperature, precipitation and radiation. The 5 best performing runs are also tested for other regions and variables. In our opinion these physic combinations can best be used to perform further heatwave analysis when using WRF.
Large unbalanced credit scoring using Lasso-logistic regression ensemble.
Hong Wang
Full Text Available Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.
Superradiance with an ensemble of superconducting flux qubits
Lambert, Neill; Matsuzaki, Yuichiro; Kakuyanagi, Kosuke; Ishida, Natsuko; Saito, Shiro; Nori, Franco
2016-12-01
Superconducting flux qubits are a promising candidate for realizing quantum information processing and quantum simulations. Such devices behave like artificial atoms, with the advantage that one can easily tune the "atoms" internal properties. Here, by harnessing this flexibility, we propose a technique to minimize the inhomogeneous broadening of a large ensemble of flux qubits by tuning only the external flux. In addition, as an example of many-body physics in such an ensemble, we show how to observe superradiance, and its quadratic scaling with ensemble size, using a tailored microwave control pulse that takes advantage of the inhomogeneous broadening itself to excite only a subensemble of the qubits. Our scheme opens up an approach to using superconducting circuits to explore the properties of quantum many-body systems.
Circular β ensembles,CMV representation,characteristic polynomials
2009-01-01
In this note we first briefly review some recent progress in the study of the circular β ensemble on the unit circle,where β > 0 is a model parameter.In the special cases β = 1,2 and 4,this ensemble describes the joint probability density of eigenvalues of random orthogonal,unitary and sympletic matrices,respectively.For general β,Killip and Nenciu discovered a five-diagonal sparse matrix model,the CMV representation.This representation is new even in the case β = 2;and it has become a powerful tool for studying the circular β ensemble.We then give an elegant derivation for the moment identities of characteristic polynomials via the link with orthogonal polynomials on the unit circle.
Large unbalanced credit scoring using Lasso-logistic regression ensemble.
Wang, Hong; Xu, Qingsong; Zhou, Lifeng
2015-01-01
Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.
An ensemble perspective on multi-layer networks
Wider, Nicolas; Scholtes, Ingo; Schweitzer, Frank
2015-01-01
We study properties of multi-layered, interconnected networks from an ensemble perspective, i.e. we analyze ensembles of multi-layer networks that share similar aggregate characteristics. Using a diffusive process that evolves on a multi-layer network, we analyze how the speed of diffusion depends on the aggregate characteristics of both intra- and inter-layer connectivity. Through a block-matrix model representing the distinct layers, we construct transition matrices of random walkers on multi-layer networks, and estimate expected properties of multi-layer networks using a mean-field approach. In addition, we quantify and explore conditions on the link topology that allow to estimate the ensemble average by only considering aggregate statistics of the layers. Our approach can be used when only partial information is available, like it is usually the case for real-world multi-layer complex systems.
The interplay between cooperativity and diversity in model threshold ensembles.
Cervera, Javier; Manzanares, José A; Mafe, Salvador
2014-10-06
The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
A Random Forest-based ensemble method for activity recognition.
Feng, Zengtao; Mo, Lingfei; Li, Meng
2015-01-01
This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition, using random forest. We designed an ensemble learning algorithm, which integrates several independent Random Forest classifiers based on different sensor feature sets to build a more stable, more accurate and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from the PAMAP (Physical Activity Monitoring for Aging People), which is a standard, publicly available database, was utilized to train and test. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while the training is faster than others. The ensemble classifier system based on the RF (Random Forest) algorithm can achieve high recognition accuracy and fast calculation.
Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics
Scheuerer, Michael
2013-01-01
Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
Toward a General-Purpose Heterogeneous Ensemble for Pattern Classification
Loris Nanni
2015-01-01
Full Text Available We perform an extensive study of the performance of different classification approaches on twenty-five datasets (fourteen image datasets and eleven UCI data mining datasets. The aim is to find General-Purpose (GP heterogeneous ensembles (requiring little to no parameter tuning that perform competitively across multiple datasets. The state-of-the-art classifiers examined in this study include the support vector machine, Gaussian process classifiers, random subspace of adaboost, random subspace of rotation boosting, and deep learning classifiers. We demonstrate that a heterogeneous ensemble based on the simple fusion by sum rule of different classifiers performs consistently well across all twenty-five datasets. The most important result of our investigation is demonstrating that some very recent approaches, including the heterogeneous ensemble we propose in this paper, are capable of outperforming an SVM classifier (implemented with LibSVM, even when both kernel selection and SVM parameters are carefully tuned for each dataset.
Toward a General-Purpose Heterogeneous Ensemble for Pattern Classification.
Nanni, Loris; Brahnam, Sheryl; Ghidoni, Stefano; Lumini, Alessandra
2015-01-01
We perform an extensive study of the performance of different classification approaches on twenty-five datasets (fourteen image datasets and eleven UCI data mining datasets). The aim is to find General-Purpose (GP) heterogeneous ensembles (requiring little to no parameter tuning) that perform competitively across multiple datasets. The state-of-the-art classifiers examined in this study include the support vector machine, Gaussian process classifiers, random subspace of adaboost, random subspace of rotation boosting, and deep learning classifiers. We demonstrate that a heterogeneous ensemble based on the simple fusion by sum rule of different classifiers performs consistently well across all twenty-five datasets. The most important result of our investigation is demonstrating that some very recent approaches, including the heterogeneous ensemble we propose in this paper, are capable of outperforming an SVM classifier (implemented with LibSVM), even when both kernel selection and SVM parameters are carefully tuned for each dataset.
Quantum teleportation between remote atomic-ensemble quantum memories
Bao, Xiao-Hui; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei
2012-01-01
Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel", quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of 100 million rubidium atoms and connected by a 150-meter optical fiber. The spinwave state of one atomic ensemble is mapped to a propagating photon, and subjected to Bell-state measurements with another single photon that is entangled with the spinwave state of the other ensemble. Two-photon detection events herald the succe...
Properties of the Affine Invariant Ensemble Sampler in high dimensions
Huijser, David; Brewer, Brendon J
2015-01-01
We present theoretical and practical properties of the affine-invariant ensemble sampler Markov chain Monte Carlo method. In high dimensions the affine-invariant ensemble sampler shows unusual and undesirable properties. We demonstrate this with an $n$-dimensional correlated Gaussian toy problem with a known mean and covariance structure, and analyse the burn-in period. The burn-in period seems to be short, however upon closer inspection we discover the mean and the variance of the target distribution do not match the expected, known values. This problem becomes greater as $n$ increases. We therefore conclude that the affine-invariant ensemble sampler should be used with caution in high dimensional problems. We also present some theoretical results explaining this behaviour.
Support vector machine ensemble using rough sets theory
无
2006-01-01
A support vector machine (SVM) ensemble classifier is proposed. Performance of SVM trained in an input space consisting of all the information from many sources is not always good. The strategy that the original input space is partitioned into several input subspaces usually works for improving the performance. Different from conventional partition methods, the partition method used in this paper, rough sets theory based attribute reduction, allows the input subspaces partially overlapped. These input subspaces can offer complementary information about hidden data patterns. In every subspace, an SVM sub-classifier is learned. With the information fusion techniques, those SVM sub-classifiers with better performance are selected and combined to construct an SVM ensemble. The proposed method is applied to decisionmaking of medical diagnosis. Comparison of performance between our method and several other popular ensemble methods is done. Experimental results demonstrate that our proposed approach can make full use of the information contained in data and improve the decision-making performance.
Purification of an unpolarized spin ensemble into entangled singlet pairs
Greiner, Johannes N; Wrachtrup, Jörg
2016-01-01
Dynamical polarization of nuclear spin ensembles is of central importance for magnetic resonance studies, precision sensing and for applications in quantum information theory. Here we propose a scheme to generate long-lived singlet pairs in an unpolarized nuclear spin ensemble which is dipolar coupled to the electron spins of a Nitrogen Vacancy center in diamond. The quantum mechanical back-action induced by frequent spin-selective readout of the NV centers allows the nuclear spins to pair up into maximally entangled singlet pairs. Counterintuitively, the robustness of the pair formation to dephasing noise improves with increasing size of the spin ensemble. We also show how the paired nuclear spin state allows for enhanced sensing capabilities of NV centers in diamond.
Evaluation of LDA Ensembles Classifiers for Brain Computer Interface
Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo
2011-12-01
The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of the BCI, to decode the user intentions it is necessary to get better the feature extraction and classification techniques. In this article the performance of a three linear discriminant analysis (LDA) classifiers ensemble is studied. The system based on ensemble can theoretically achieved better classification results than the individual counterpart, regarding individual classifier generation algorithm and the procedures for combine their outputs. Classic algorithms based on ensembles such as bagging and boosting are discussed here. For the application on BCI, it was concluded that the generated results using ER and AUC as performance index do not give enough information to establish which configuration is better.
Ensemble-based Probabilistic Forecasting at Horns Rev
Pinson, Pierre; Madsen, Henrik
2009-01-01
of probabilistic forecasts, the resolution of which may be maximized by using meteorological ensemble predictions as input. The paper concentrates on the test case of the Horns Rev wind form over a period of approximately 1 year, in order to describe, apply and discuss a complete ensemble-based probabilistic...... the benefit of yielding predictive distributions that are of increased reliability (in a probabilistic sense) in comparison with the raw ensemble forecasts, at the some time taking advantage of their high resolution. Copyright (C) 2008 John Wiley & Sons, Ltd....... are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which ore recursively estimated in order to maximize the overall skill of obtained predictive distributions. Such a methodology has...