WorldWideScience

Sample records for analysis combining ensemble

  1. Combining 2-m temperature nowcasting and short range ensemble forecasting

    Directory of Open Access Journals (Sweden)

    A. Kann

    2011-12-01

Full Text Available During recent years, numerical ensemble prediction systems have become an important tool for estimating the uncertainties of dynamical and physical processes as represented in numerical weather models. The latest generation of limited area ensemble prediction systems (LAM-EPSs) allows for probabilistic forecasts at high resolution in both space and time. However, these systems still suffer from systematic deficiencies. Especially for nowcasting (0–6 h) applications the ensemble spread is smaller than the actual forecast error. This paper tries to generate probabilistic short range 2-m temperature forecasts by combining a state-of-the-art nowcasting method and a limited area ensemble system, and compares the results with statistical methods. The Integrated Nowcasting Through Comprehensive Analysis (INCA) system, which has been in operation at the Central Institute for Meteorology and Geodynamics (ZAMG) since 2006 (Haiden et al., 2011), provides short range deterministic forecasts at high temporal (15 min–60 min) and spatial (1 km) resolution. An INCA Ensemble (INCA-EPS) of 2-m temperature forecasts is constructed by applying a dynamical approach, a statistical approach, and a combined dynamic-statistical method. The dynamical method takes uncertainty information (i.e. ensemble variance) from the operational limited area ensemble system ALADIN-LAEF (Aire Limitée Adaptation Dynamique Développement InterNational Limited Area Ensemble Forecasting), which runs operationally at ZAMG (Wang et al., 2011). The purely statistical method assumes a well-calibrated spread-skill relation and applies ensemble spread according to the skill of the INCA forecast of the most recent past. The combined dynamic-statistical approach adapts the ensemble variance gained from ALADIN-LAEF with non-homogeneous Gaussian regression (NGR), which yields a statistical correction of the first and second moment (mean bias and dispersion) for Gaussian distributed continuous
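
The NGR step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data are synthetic, and the four NGR coefficients (a, b for the mean; c, d for the variance) are fitted by maximum likelihood with SciPy rather than by minimum CRPS as is often done operationally.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
truth = rng.normal(10.0, 3.0, n)                 # verifying 2-m temperatures (toy)
m = truth + 1.5 + rng.normal(0.0, 1.0, n)        # biased ensemble mean
s = np.full(n, 0.5)                              # underdispersive ensemble spread

def nll(params):
    """Negative log-likelihood of truth ~ N(a + b*m, c + d*s^2)."""
    a, b, c, d = params
    var = c + d * s**2
    if np.any(var <= 0):
        return np.inf
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (truth - (a + b * m))**2 / var)

res = minimize(nll, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
a, b, c, d = res.x
mu = a + b * m                                   # bias-corrected first moment
var = c + d * s**2                               # recalibrated second moment
```

After fitting, the corrected mean removes most of the built-in bias and the predictive variance is inflated toward the actual error variance.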

  2. Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions

    Science.gov (United States)

    Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.

    2009-01-01

This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of the models is shown to contribute useful information to the ensembles it is part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows), generally yield little improvement over the weighted mean ensemble. However, a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in
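
The two best-performing combination schemes named above (a trimmed mean of the central members, and a weighted mean with weights from calibration error) can be sketched in a few lines; the data below are synthetic stand-ins for the ten catchment models, not the study's output.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_days = 10, 365
obs = 5.0 + rng.normal(0.0, 1.0, n_days)              # observed streamflow (toy)
noise = np.linspace(0.5, 3.0, n_models)               # per-model error levels
preds = obs + rng.normal(0.0, 1.0, (n_models, n_days)) * noise[:, None]

# Trimmed mean: average the central four predictions each day
trimmed = np.sort(preds, axis=0)[3:7].mean(axis=0)

# Weighted mean: weights from inverse calibration MSE, so better
# performing models receive relatively large weights
mse = ((preds - obs)**2).mean(axis=1)
weights = (1.0 / mse) / (1.0 / mse).sum()
weighted = weights @ preds

def rmse(p):
    return float(np.sqrt(((p - obs)**2).mean()))
```

Both combinations beat an unweighted mean here because they downweight (or discard) the noisiest members each day.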

  3. A modified algorithm of the combined ensemble empirical mode decomposition and independent component analysis for the removal of cardiac artifacts from neuromuscular electrical signals

    International Nuclear Information System (INIS)

Neuronal and muscular electrical signals contain useful information about the neuromuscular system, with which researchers have been investigating the relationship between various neurological disorders and the neuromuscular system. However, neuromuscular signals can be critically contaminated by cardiac electrical activity (CEA) such as the electrocardiogram (ECG), which confounds data analysis. The purpose of our study is to provide a method for removing cardiac electrical artifacts from the recorded neuromuscular signals. We propose a new method for cardiac artifact removal which modifies the algorithm combining ensemble empirical mode decomposition (EEMD) and independent component analysis (ICA). We compare our approach with a cubic smoothing spline method and the previous combined EEMD and ICA for various signal-to-noise ratio measures in simulated noisy physiological signals using a surface electromyogram (sEMG). Finally, we apply the proposed method to two real-life data sets: an sEMG with ECG artifacts, and ambulatory dog cardiac autonomic nervous signals measured from the ganglia near the heart, which are also contaminated with CEA. Our method can not only extract and remove artifacts, but can also preserve the spectral content of the neuromuscular signals. (paper)

  4. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong

    2010-09-19

The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It differs from the KF mainly at the prediction step, where ensembles of the system state, rather than its mean and covariance matrix, are propagated forward. In doing so, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient to implement. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling's interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
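
The symmetric-ensemble idea can be illustrated with the unscented transform mentioned above. The sketch below uses the simplest equal-weight sigma-point set (an assumption of this sketch, not the paper's particular construction) to generate 2n members that reproduce a prescribed mean and covariance exactly:

```python
import numpy as np

def sigma_ensemble(mean, cov):
    """Symmetric sigma-point ensemble: 2n equally weighted members whose
    sample mean and covariance match the inputs exactly."""
    n = mean.size
    L = np.linalg.cholesky(n * cov)                 # columns set the spread
    pts = [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    return np.array(pts)

mean = np.array([1.0, -2.0, 0.5])
cov = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 0.5]])
ens = sigma_ensemble(mean, cov)
emp_mean = ens.mean(axis=0)
emp_cov = (ens - emp_mean).T @ (ens - emp_mean) / ens.shape[0]
```

Because the members come in ± pairs, odd sample moments vanish, which is exactly the suppression of spurious modes that the symmetry is meant to provide.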

  5. Space Applications for Ensemble Detection and Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Ensemble Detection is both a measurement technique and analysis tool. Like a prism that separates light into spectral bands, an ensemble detector mixes a signal...

  6. Gradient Flow Analysis on MILC HISQ Ensembles

    CERN Document Server

    Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Komijani, J; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R

    2014-01-01

We report on a preliminary scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales $\sqrt{t_0}/a$ and $w_0/a$ are computed using Symanzik flow and the cloverleaf definition of $\langle E \rangle$ on each ensemble. Then both scales and the meson masses $aM_\pi$ and $aM_K$ are adjusted for mistunings in the charm mass. Using a combination of continuum chiral perturbation theory and a Taylor series ansatz in the lattice spacing, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. Our preliminary results are $\sqrt{t_0} = 0.1422(7)$ fm and $w_0 = 0.1732(10)$ fm. We also find the continuum mass-dependence of $w_0$.

  7. Impact of hybrid GSI analysis using ETR ensembles

    Indian Academy of Sciences (India)

    V S Prasad; C J Johny

    2016-04-01

Performance of a hybrid assimilation system combining 3D Var based NGFS (NCMRWF Global Forecast System) with ETR (Ensemble Transform with Rescaling) based Global Ensemble Forecast (GEFS) of resolution T-190L28 is investigated. The experiment is conducted for a period of one week in June 2013 and forecast skills over different spatial domains are compared with respect to the mean analysis state. Rainfall forecast is verified over the Indian region against combined observations of IMD and NCMRWF. Hybrid assimilation produced marginal improvements in overall forecast skill in comparison with 3D Var. The hybrid experiment made significant improvement in wind forecasts in all the regions on verification against the mean analysis. The verification of forecasts with radiosonde observations also shows improvement in wind forecasts with the hybrid assimilation. On verification against observations, the hybrid experiment shows more improvement in temperature and wind forecasts at upper levels. Both hybrid and operational 3D Var failed in prediction of the extreme rainfall event over Uttarakhand on 17 June, 2013.

  8. Ensemble Methods

    Science.gov (United States)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies showed that both in classification and regression problems, ensembles improve on single learning machines, and moreover large experimental studies compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is motivated also by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and the experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2 there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 depicts the main taxonomies on ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy, originally proposed in Ref. [201].
Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and
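
The core intuition for why combining learners helps can be shown with a deliberately constructed toy example (illustrative only, not from the chapter): three weak classifiers that each err on a different third of the samples, so their majority vote is correct everywhere.

```python
import numpy as np

# Three weak classifiers, each wrong on a disjoint third of the samples;
# at every sample, two of the three are correct, so the vote never fails.
y = np.array([0, 1] * 30)                     # 60 true binary labels
preds = np.tile(y, (3, 1))
preds[0, :20] = 1 - preds[0, :20]             # model 0 errs on samples 0..19
preds[1, 20:40] = 1 - preds[1, 20:40]         # model 1 errs on 20..39
preds[2, 40:] = 1 - preds[2, 40:]             # model 2 errs on 40..59

vote = (preds.sum(axis=0) >= 2).astype(int)   # majority vote
acc = [float((p == y).mean()) for p in preds] # each model: 2/3
acc_vote = float((vote == y).mean())          # ensemble: 1.0
```

Real base learners have correlated errors, so the gain is smaller in practice; the example only shows the mechanism (error decorrelation) that the theoretical interpretations above formalize.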

  10. An educational model for ensemble streamflow simulation and uncertainty analysis

    Directory of Open Access Journals (Sweden)

    A. AghaKouchak

    2013-02-01

Full Text Available This paper presents the hands-on modeling toolbox, HBV-Ensemble, designed as a complement to theoretical hydrology lectures, to teach hydrological processes and their uncertainties. The HBV-Ensemble can be used for in-class lab practices and homework assignments, and assessment of students' understanding of hydrological processes. Using this modeling toolbox, students can gain more insights into how hydrological processes (e.g., precipitation, snowmelt and snow accumulation, soil moisture, evapotranspiration and runoff generation) are interconnected. The educational toolbox includes a MATLAB Graphical User Interface (GUI) and an ensemble simulation scheme that can be used for teaching uncertainty analysis, parameter estimation, ensemble simulation and model sensitivity. HBV-Ensemble was administered in a class for both in-class instruction and a final project, and students submitted their feedback about the toolbox. The results indicate that this educational software had a positive impact on students' understanding and knowledge of uncertainty in hydrological modeling.
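
In the spirit of HBV-Ensemble (though not its actual code), an ensemble simulation over uncertain parameters of a toy one-bucket model shows how predictive uncertainty bands arise; the model structure, forcing, and parameter ranges here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
precip = rng.gamma(0.4, 6.0, 120)              # daily precipitation (mm), toy
pet = np.full(120, 2.0)                        # potential ET (mm/day), toy

def bucket(precip, pet, k, smax):
    """Toy single-store model: ET limited by storage, runoff q = k*S."""
    s, q = 0.0, []
    for p, e in zip(precip, pet):
        s = max(s + p - min(e, s), 0.0)        # add rain, remove actual ET
        s = min(s, smax)                       # cap storage at capacity
        q.append(k * s)                        # linear-reservoir runoff
        s -= k * s
    return np.array(q)

# Ensemble over uncertain parameters -> predictive spread in the hydrograph
ks = rng.uniform(0.05, 0.3, 50)
smaxs = rng.uniform(50.0, 200.0, 50)
ens = np.array([bucket(precip, pet, k, sm) for k, sm in zip(ks, smaxs)])
lo, hi = np.percentile(ens, [5, 95], axis=0)   # 90% uncertainty band
```

Plotting `lo` and `hi` around the ensemble median is the kind of exercise such a toolbox uses to make parameter uncertainty visible to students.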

  11. Combining multi-objective optimization and bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Wohling, Thomas [NON LANL

    2008-01-01

Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful for generating forecast ensembles of soil hydraulic models.
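
The BMA post-processing step can be sketched with an EM loop that estimates model weights and a Gaussian kernel variance. This is a simplification of the full method (synthetic data, and a single variance shared across models are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
truth = rng.normal(0.0, 1.0, n)
# Three competing model forecasts of differing quality
f = np.stack([truth + rng.normal(0, s, n) for s in (0.3, 1.0, 3.0)])

# EM for BMA: weights w_k and a common Gaussian kernel variance
w = np.full(3, 1 / 3)
var = 1.0
for _ in range(100):
    dens = np.exp(-(truth - f)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    z = w[:, None] * dens
    z /= z.sum(axis=0, keepdims=True)          # responsibilities (E-step)
    w = z.mean(axis=1)                         # weight update (M-step)
    var = float((z * (truth - f)**2).sum() / n)

bma_mean = w @ f                               # BMA predictive mean
```

The weights concentrate on the best-calibrated model, and the BMA predictive density (a weight-mixed Gaussian) carries the between-model uncertainty the abstract emphasizes.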

  12. A multi-model ensemble method that combines imperfect models through learning

    OpenAIRE

Berge, L. A.; Selten, F. M.; Wiegerinck, W.; Duane, G. S.

    2010-01-01

    In the current multi-model ensemble approach climate model simulations are combined a posteriori. In the method of this study the models in the ensemble exchange information during simulations and learn from historical observations to combine their strengths into a best representation of the observed climate. The method is developed and tested in the context of small chaotic dynamical systems, like the Lorenz 63 system. Imperfect models are created by perturbing the standard parameter ...

  13. Development of the Ensemble Navy Aerosol Analysis Prediction System (ENAAPS) and its application of the Data Assimilation Research Testbed (DART) in support of aerosol forecasting

    OpenAIRE

    J. I. Rubin; Reid, J. S.; Hansen, J A; Anderson, J. L.; Collins, N.; Hoar, T. J.; Hogan, T; Lynch, P.; McLay, J; Reynolds, C. A.; W. R. Sessions; D. L. Westphal; Zhang, J.

    2015-01-01

    An ensemble-based forecast and data assimilation system has been developed for use in Navy aerosol forecasting. The system makes use of an ensemble of the Navy Aerosol Analysis Prediction System (ENAAPS) at 1° × 1°, combined with an Ensemble Adjustment Kalman Filter from NCAR's Data Assimilation Research Testbed (DART). The base ENAAPS-DART system discussed in this work utilizes the Navy Operational Global Analysis Prediction System (NOGAPS) meteorological ensemble to ...

  14. Ensemble vs. time averages in financial time series analysis

    Science.gov (United States)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-average approaches will yield new insight into the study of financial markets' dynamics.
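
The central claim (an ensemble average over trading days recovers time-of-day dependent diffusion, while averaging within a single day cannot) can be checked on synthetic data with a known intraday volatility profile; all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, n_t = 4000, 100
t = np.arange(n_t)
sigma = 1.0 + 0.8 * np.cos(2 * np.pi * t / n_t)   # true intraday volatility
# Each trading day shares the same time-of-day diffusion profile
x = rng.normal(0.0, 1.0, (n_days, n_t)) * sigma

# Ensemble average across days: one estimate per time of day
sigma_ens = x.std(axis=0)

# Time average within a single day: one number, all times of day mixed
sigma_time = x[0].std()
```

`sigma_ens` tracks the true profile point by point, while `sigma_time` collapses the periodic structure into a single stationary-looking value.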

  15. Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions

    CERN Document Server

    Seni, Giovanni

    2010-01-01

    This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e

  16. An educational model for ensemble streamflow simulation and uncertainty analysis

    Directory of Open Access Journals (Sweden)

    A. AghaKouchak

    2012-06-01

Full Text Available This paper presents a hands-on modeling toolbox, HBV-Ensemble, designed as a complement to theoretical hydrology lectures, to teach hydrological processes and their uncertainties. The HBV-Ensemble can be used for in-class lab practices and homework assignments, and assessment of students' understanding of hydrological processes. Using this model, students can gain more insights into how hydrological processes (e.g., precipitation, snowmelt and snow accumulation, soil moisture, evapotranspiration and runoff generation) are interconnected. The model includes a MATLAB Graphical User Interface (GUI) and an ensemble simulation scheme that can be used not only for simulating hydrological processes, but also for teaching uncertainty analysis, parameter estimation, ensemble simulation and model sensitivity.

  17. Maximization of seasonal forecasts performance combining Grand Multi-Model Ensembles

    Science.gov (United States)

    Alessandri, Andrea; De Felice, Matteo; Catalano, Franco; Lee, Doo Young; Yoo, Jin Ho; Lee, June-Yi; Wang, Bin

    2014-05-01

Multi-Model Ensembles (MMEs) are powerful tools in dynamical climate prediction as they account for the overconfidence and the uncertainties related to single-model errors. Previous works suggested that the potential benefit that can be expected by using a MME amplifies with the increase of the independence of the contributing Seasonal Prediction Systems. In this work we combine the two Multi Model Ensemble (MME) Seasonal Prediction Systems (SPSs) independently developed by the European (ENSEMBLES) and by the Asian-Pacific (CliPAS/APCC) communities. To this aim, all the possible multi-model combinations obtained by putting together the 5 models from ENSEMBLES and the 11 models from CliPAS/APCC have been evaluated. The grand ENSEMBLES-CliPAS/APCC Multi-Model enhances significantly the skill compared to previous estimates from the contributing MMEs. The combinations of SPSs maximizing the skill that is currently attainable for specific predictands/phenomena are evaluated. Our results show that, in general, the better combinations of SPSs are obtained by mixing ENSEMBLES and CliPAS/APCC models and that only a limited number of SPSs is required to obtain the maximum performance. The number and selection of models that perform better is usually different depending on the region/phenomenon under consideration. As an example, for the tropical Pacific the maximum performance is obtained with the combination of only 5-to-6 SPSs from the grand ENSEMBLES-CliPAS/APCC MME. With particular focus on the tropical Pacific, the relationship between performance and bias of the grand-MME combinations is evaluated. The skill of the grand-MME combinations over the Euro-Mediterranean and East-Asia regions is further evaluated as a function of the capability of the selected contributing SPSs to forecast anomalies of the Polar/Siberian highs during winter and of the Asian summer monsoon precipitation during summer. Our results indicate that combining SPSs from independent MME sources is a good
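
The subset-selection idea can be sketched by brute force on a synthetic grand ensemble: with a handful of models of varying quality, the best multi-model mean typically uses only a few good members rather than all of them. The model count and error levels below are invented for illustration, not the 16 SPSs of the study.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n = 200
signal = rng.normal(0.0, 1.0, n)               # "observed" seasonal anomaly
noise = [0.5, 0.7, 0.9, 2.5, 3.0]              # per-model error levels
models = [signal + rng.normal(0, s, n) for s in noise]

# Evaluate every possible multi-model combination by correlation skill
best = (None, -1.0)
for k in range(1, 6):
    for idx in combinations(range(5), k):
        mme = np.mean([models[i] for i in idx], axis=0)
        r = float(np.corrcoef(mme, signal)[0, 1])
        if r > best[1]:
            best = (idx, r)
```

Here the best-scoring combination mixes several low-noise members and leaves out the noisiest model, mirroring the finding that maximum performance needs only a limited, well-chosen subset.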

  18. Combining large model ensembles with extreme value statistics to improve attribution statements of rare events

    Directory of Open Access Journals (Sweden)

    Sebastian Sippel

    2015-09-01

    In conclusion, our study shows that EVT and empirical estimates based on numerical simulations can indeed be used to productively inform each other, for instance to derive appropriate EVT parameters for short observational time series. Further, the combination of ensemble simulations with EVT allows us to significantly reduce the number of simulations needed for statements about the tails.
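
A minimal sketch of coupling EVT to ensemble output: take the block maximum of each simulated member and fit a GEV distribution with SciPy to estimate a return level. The Gaussian toy data stand in for the climate-model ensemble and are an assumption of this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Large ensemble of simulated 90-day seasons (toy temperature anomalies)
ens = rng.normal(20.0, 5.0, (5000, 90))
block_max = ens.max(axis=1)                    # one seasonal maximum per member

# Fit a GEV to the ensemble block maxima
shape, loc, scale = stats.genextreme.fit(block_max)
# 100-season return level from the fitted tail
rl_100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc, scale)
```

With the tail summarized by three GEV parameters, statements about rare events no longer require enough members to sample the event empirically, which is the simulation-saving point made above.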

  19. ANALYSIS OF SST IMAGES BY WEIGHTED ENSEMBLE TRANSFORM KALMAN FILTER

    OpenAIRE

    Sai, Gorthi; Beyou, Sébastien; Memin, Etienne

    2011-01-01

This paper presents a novel, efficient scheme for the analysis of Sea Surface Temperature (SST) ocean images. We consider the estimation of the velocity fields and vorticity values from a sequence of oceanic images. The contribution of this paper lies in proposing a novel, robust and simple approach based on Weighted Ensemble Transform Kalman filter (WETKF) data assimilation technique for the analysis of real SST images, that may contain coast regions or large areas o...

  20. Monthly water balance modeling: Probabilistic, possibilistic and hybrid methods for model combination and ensemble simulation

    Science.gov (United States)

    Nasseri, M.; Zahraie, B.; Ajami, N. K.; Solomatine, D. P.

    2014-04-01

Multi-model (ensemble, or committee) techniques have been shown to be an effective way to improve hydrological prediction performance and provide uncertainty information. This paper presents two novel multi-model ensemble techniques, one probabilistic, Modified Bootstrap Ensemble Model (MBEM), and one possibilistic, FUzzy C-means Ensemble based on data Pattern (FUCEP). The paper also explores utilization of the Ordinary Kriging (OK) method as a multi-model combination scheme for hydrological simulation/prediction. These techniques are compared against Bayesian Model Averaging (BMA) and Weighted Average (WA) methods to demonstrate their effectiveness. The techniques are applied to three monthly water balance models used to generate streamflow simulations for two mountainous basins in southwestern Iran. For both basins, the results demonstrate that MBEM and FUCEP generate more skillful and reliable probabilistic predictions, outperforming all the other techniques. We also found that OK did not demonstrate any improvement in skill over the WA scheme as a simple combination method for either basin.

  1. Combining meteorological ensemble prediction, data assimilation and hydrological multimodel to reduce and untangle sources of uncertainty

    Science.gov (United States)

    Thiboult, Antoine; Anctil, François; Boucher, Marie-Amélie

    2015-04-01

Hydrological ensemble prediction systems offer the possibility to dynamically assess forecast uncertainty. An ensemble may be issued wherever the uncertainty is situated along the meteorological chain. We commonly identify three main sources of uncertainty: meteorological forcing, hydrological initial conditions, and structural and parameter uncertainty. To address these uncertainties, different techniques have been developed. Meteorological ensemble prediction systems have gained popularity among researchers and operational forecasters because they account for forcing uncertainty. Many data assimilation techniques have been applied to hydrology to reinitialize model states in order to issue more accurate and sharper predictive density functions. Finally, multimodel simulation avoids the pitfall of searching for a single best parameter set and model structure. The knowledge about these individual techniques is getting extensive and many individual applications can be found in the literature. Even though they have proved to improve upon traditional forecasting, they frequently fail to issue fully reliable hydrological forecasts, as not all sources of uncertainty are tackled. Therefore, an improvement can be obtained by combining them, as this provides a more comprehensive handling of errors. Moreover, using these techniques separately or in combination not only allows issuing more reliable forecasts but also identifying explicitly the amount of total uncertainty that each technique accounts for. Ultimately, these sources of error can be characterized in terms of magnitude and lead time influence. As these techniques are frequently used alone, they are usually tuned to perform individually. To reach optimal performance, they should be set jointly. Among them, the data assimilation technique offers large flexibility in its setting and therefore requires a proper setting considering the other ensemble techniques used.
This question is also raised for the hydrological model selection
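
Of the three techniques discussed, data assimilation is the most algorithmic; as one concrete, standard example (not this paper's specific setup), a stochastic ensemble Kalman filter analysis step with perturbed observations looks like this:

```python
import numpy as np

rng = np.random.default_rng(7)
n_ens, n_state = 50, 3
truth = np.array([1.0, 2.0, 3.0])
ens = truth + rng.normal(0.0, 1.0, (n_ens, n_state))   # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])                        # observe first state
r = 0.1                                                # obs error variance
y = truth[0] + rng.normal(0.0, np.sqrt(r))             # noisy observation

# Stochastic EnKF analysis step (perturbed observations)
X = ens - ens.mean(axis=0)
P = X.T @ X / (n_ens - 1)                              # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r)           # Kalman gain (3x1)
y_pert = y + rng.normal(0.0, np.sqrt(r), n_ens)        # one obs per member
analysis = ens + (y_pert[:, None] - ens @ H.T) * K.T   # updated ensemble
```

The analysis ensemble is sharper in the observed component, which is the "more accurate and sharper predictive density" role the abstract assigns to data assimilation.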

  2. A one-way coupled atmospheric-hydrological modeling system with combination of high-resolution and ensemble precipitation forecasting

    Science.gov (United States)

    Wu, Zhiyong; Wu, Juan; Lu, Guihua

    2015-11-01

Coupled hydrological and atmospheric modeling is an effective tool for providing advanced flood forecasting. However, the uncertainties in precipitation forecasts are still considerable. To address uncertainties, a one-way coupled atmospheric-hydrological modeling system, with a combination of high-resolution and ensemble precipitation forecasting, has been developed. It consists of three high-resolution single models and four sets of ensemble forecasts from the THORPEX Interactive Grand Global Ensemble (TIGGE) database. The former provides higher forecasting accuracy, while the latter provides the range of forecasts. The combined precipitation forecasting was then implemented to drive the Chinese National Flood Forecasting System in the 2007 and 2008 Huai River flood hindcast analysis. The encouraging results demonstrated that the system can clearly give a set of forecasting hydrographs for a flood event and has a promising relative stability in discharge peaks and timing for warning purposes. It not only gives a deterministic prediction, but also generates probability forecasts. Even though the signal was not persistent until four days before the peak discharge was observed in the 2007 flood event, the visualization based on threshold exceedance provided clear and concise essential warning information at an early stage. Forecasters could better prepare for the possibility of a flood at an early stage, and then issue an actual warning if the signal strengthened. This process may provide decision support for civil protection authorities. In future studies, different weather forecasts will be assigned various weight coefficients to represent the covariance of predictors and the extremes of distributions.
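
The threshold-exceedance visualization described above reduces, at its core, to computing the fraction of ensemble hydrographs above a warning discharge at each lead time; the numbers below are illustrative, not Huai River values.

```python
import numpy as np

rng = np.random.default_rng(8)
# Ensemble of forecast hydrographs: members x lead-time hours (toy values)
hydrographs = 800.0 + rng.gamma(2.0, 150.0, (20, 72))
threshold = 1200.0                              # warning discharge (m^3/s)

# Probability of exceedance at each lead time
p_exceed = (hydrographs > threshold).mean(axis=0)
warn = p_exceed > 0.5                           # simple warning criterion
```

Plotted against lead time, `p_exceed` is the early, probabilistic warning signal; the deterministic high-resolution runs would be overlaid on the same axes.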

  3. Collaborative Ensemble Learning: Combining Collaborative and Content-Based Information Filtering via Hierarchical Bayes

    OpenAIRE

    Yu, Kai; Schwaighofer, Anton; Tresp, Volker; Ma, Wei-Ying; Zhang, Hongjiang

    2012-01-01

    Collaborative filtering (CF) and content-based filtering (CBF) have been widely used in information filtering applications. Both approaches have their strengths and weaknesses, which is why researchers have developed hybrid systems. This paper proposes a novel approach to unify CF and CBF in a probabilistic framework, named collaborative ensemble learning. It uses probabilistic SVMs to model each user's profile (as CBF does). At the prediction phase, it combines a society of users' profiles, rep...

  4. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts with the smallest NRMSE in all parameters. The NRMSE of the ensemble model's solar irradiance forecasts was 28.10% lower than that of the best of the three NWP models. Further, the sensitivity analysis indicated that errors in the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
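    The NRMSE used above can be sketched as the RMSE of the power forecast normalized by a reference value; normalizing by the plant's rated capacity is one common convention and is assumed here (the sample numbers are illustrative, not from the study):

```python
import numpy as np

def nrmse(forecast, observed, capacity):
    """Root mean squared error normalized by plant capacity, in percent
    (one common convention; other studies normalize by the observed mean)."""
    err = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
    return np.sqrt(np.mean(err ** 2)) / capacity * 100.0

# Hypothetical day-ahead power forecasts vs. observations for a 51-kW plant (kW)
fc = [40.0, 35.0, 20.0, 5.0]
ob = [38.0, 30.0, 24.0, 3.0]
print(round(nrmse(fc, ob, capacity=51.0), 2))  # -> 6.86
```

    The choice of normalization matters when comparing NRMSE values across plants of different sizes, which is why the reference should always be stated.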

  5. Comprehensive Study on Lexicon-based Ensemble Classification Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Łukasz Augustyniak

    2015-12-01

    Full Text Available We propose a novel method for estimating sentiment orientation that outperforms supervised learning approaches in time and memory complexity and is not statistically significantly different from them in accuracy. Our method rests on a novel approach to generating unigram, bigram and trigram lexicons. The proposed method, called frequentiment, is based on calculating the frequency of features (words) in the document and averaging their impact on the sentiment score, as opposed to documents that do not contain these features. Afterwards, we use ensemble classification to improve the overall accuracy of the method. Importantly, the frequentiment-based lexicons with sentiment threshold selection outperform other popular lexicons and some supervised learners, while being 3–5 times faster than the supervised approach. We compare 37 methods (lexicons, ensembles with the lexicons' predictions as input, and supervised learners) applied to 10 Amazon review data sets and provide the first statistical comparison of sentiment annotation methods that includes ensemble approaches. It is one of the most comprehensive comparisons of domain sentiment analysis in the literature.
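    The abstract does not reproduce the exact frequentiment formula, but the general idea of a frequency-derived unigram lexicon with a sentiment threshold can be sketched as follows (the toy reviews and labels are made up):

```python
def build_unigram_lexicon(docs, labels):
    """Score each word by the mean label of documents containing it minus the
    mean label of documents that do not. This is a rough sketch of a
    frequency-based lexicon; the published frequentiment formula may differ."""
    vocab = {w for d in docs for w in d.split()}
    lexicon = {}
    for w in vocab:
        with_w = [y for d, y in zip(docs, labels) if w in d.split()]
        without = [y for d, y in zip(docs, labels) if w not in d.split()]
        base = sum(without) / len(without) if without else 0.0
        lexicon[w] = sum(with_w) / len(with_w) - base
    return lexicon

def score(doc, lexicon, threshold=0.0):
    """Average the lexicon scores of a document's words; unknown words count 0."""
    vals = [lexicon.get(w, 0.0) for w in doc.split()]
    s = sum(vals) / len(vals) if vals else 0.0
    return 1 if s > threshold else -1

docs = ["great phone", "terrible battery", "great battery", "terrible phone"]
labels = [1, -1, 1, -1]  # +1 positive, -1 negative
lex = build_unigram_lexicon(docs, labels)
print(score("great screen", lex))  # -> 1
```

    In the paper's setup, several such lexicons (unigram, bigram, trigram) would then feed an ensemble classifier rather than being used alone.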

  6. Ovis: A framework for visual analysis of ocean forecast ensembles

    KAUST Repository

    Hollt, Thomas

    2014-08-01

    We present a novel integrated visualization system that enables interactive visual analysis of ensemble simulations of the sea surface height that is used in ocean forecasting. The position of eddies can be derived directly from the sea surface height, and our visualization approach enables their interactive exploration and analysis. The behavior of eddies is important in different application settings, of which we present two in this paper. First, we show an application for interactive planning of placement as well as operation of off-shore structures using real-world ensemble simulation data of the Gulf of Mexico. Off-shore structures, such as those used for oil exploration, are vulnerable to hazards caused by eddies, and the oil and gas industry relies on ocean forecasts for efficient operations. We enable analysis of the spatial domain, as well as the temporal evolution, for planning the placement and operation of structures. Eddies are also important for marine life. They transport water over large distances, and with it heat and other physical properties as well as biological organisms. In the second application we present the usefulness of our tool, which could be used for planning the paths of autonomous underwater vehicles, so-called gliders, for marine scientists to study simulation data of the largely unexplored Red Sea. © 1995-2012 IEEE.

  7. Effective Visualization of Temporal Ensembles.

    Science.gov (United States)

    Hao, Lihua; Healey, Christopher G; Bass, Steffen A

    2016-01-01

    An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph-based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment-based ensemble analysis, which captures important shape-transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step-based ensemble analysis, which aligns ensemble members in time by combining similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying matter transition from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions. PMID:26529728

  8. Computer-aided detection (CAD) of breast masses in mammography: combined detection and ensemble classification

    International Nuclear Information System (INIS)

    We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only 4.6% in sensitivity (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better (sensitivities of 70.7% and 80.0% at 1.5 and 3.5 FPs per image, respectively) than mammography CAD algorithms previously reported in the literature. (paper)

  9. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.
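    The either/or combination of the two SVM tracks amounts to taking the union of their positive predictions; how Profisi Ensemble weights its stages internally is not specified in the abstract, so this sketch shows only the combination step:

```python
def either_or_combine(pred_full, pred_reduced):
    """Combine two binary prediction tracks over the same candidate positions:
    call a transcription start site wherever either classifier fires.
    A paraphrase of the 'either/or' union, not the published implementation."""
    return [1 if (a or b) else 0 for a, b in zip(pred_full, pred_reduced)]

# Hypothetical calls from the 'full' and 'reduced' SVMs at four loci
print(either_or_combine([1, 0, 0, 1], [0, 0, 1, 1]))  # -> [1, 0, 1, 1]
```

    A union of this kind raises sensitivity at the cost of specificity, which is why each track must already be well calibrated on its own.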

  10. Combining linear interpolation with extrapolation methods in range-separated ensemble density-functional theory

    CERN Document Server

    Senjean, Bruno; Alam, Md Mehboob; Knecht, Stefan; Fromager, Emmanuel

    2015-01-01

    The combination of a recently proposed linear interpolation method (LIM) [Senjean et al., Phys. Rev. A 92, 012518 (2015)], which enables the calculation of weight-independent excitation energies in range-separated ensemble density-functional approximations, with the extrapolation scheme of Savin [J. Chem. Phys. 140, 18A509 (2014)] is presented in this work. It is shown that LIM excitation energies vary quadratically with the inverse of the range-separation parameter mu when the latter is large. As a result, the extrapolation scheme, which is usually applied to long-range interacting energies, can be adapted straightforwardly to LIM. This extrapolated LIM (ELIM) has been tested on a small test set consisting of He, Be, H2 and HeH+. Relatively accurate results have been obtained for the first singlet excitation energies with the typical value mu = 0.4. The improvement of LIM after extrapolation is remarkable, in particular for the doubly-excited 2^1Sigma_g^+ state in the stretched H2 molecule. Three-state ensemble ...

  11. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    Science.gov (United States)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has attracted growing attention from academia and industry alike. Owing to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology that combines the two methods and applies it to gold price analysis. It comprises three steps: first, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones re-grouped, to reconstruct a new set of data called Virtual Intrinsic Mode Functions (VIMFs). Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis has been conducted to verify the findings. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
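    The ICA stage of the pipeline can be illustrated with a minimal symmetric FastICA implementation; the mixed signals below are synthetic stand-ins for the VIMFs, not actual gold-price data, and production work would use a library implementation such as scikit-learn's FastICA:

```python
import numpy as np

rng = np.random.default_rng(0)

def whiten(X):
    """Center and whiten the rows of X (signals x samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    return E @ np.diag(d ** -0.5) @ E.T @ Xc

def fastica(X, n_iter=200):
    """Minimal symmetric FastICA with a tanh nonlinearity: fixed-point
    update followed by symmetric decorrelation W <- (W W^T)^(-1/2) W."""
    Z = whiten(X)
    n = Z.shape[0]
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
        d, E = np.linalg.eigh(W_new @ W_new.T)
        W = E @ np.diag(d ** -0.5) @ E.T @ W_new
    return W @ Z  # estimated independent components

# Two hypothetical "virtual IMFs" built as mixtures of an oscillatory source
# and a square-wave-like source, standing in for EEMD output.
t = np.linspace(0, 8 * np.pi, 2000)
s1, s2 = np.sign(np.sin(t)), np.sin(2.3 * t)
X = np.array([[0.7, 0.3], [0.4, 0.6]]) @ np.vstack([s1, s2])
ICs = fastica(X)
```

    Each recovered component should match one of the underlying sources up to sign and scale, which is the ambiguity inherent to ICA.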

  12. Time and ensemble averaging in time series analysis

    CERN Document Server

    Latka, Miroslaw; Jernajczyk, Wojciech; West, Bruce J

    2010-01-01

    In many applications expectation values are calculated by partitioning a single experimental time series into an ensemble of data segments of equal length. Such a single-trajectory ensemble (STE) is the counterpart of a multiple-trajectory ensemble (MTE), used whenever independent measurements or realizations of a stochastic process are available. The equivalence of the STE and MTE for stationary systems was postulated by Wang and Uhlenbeck in their classic paper on Brownian motion (Rev. Mod. Phys. 17, 323 (1945)) but, surprisingly, has not yet been proved. Using the stationary and ergodic paradigm of statistical physics, the Ornstein-Uhlenbeck (OU) Langevin equation, we revisit Wang and Uhlenbeck's postulate. In particular, we find that the variance of the solution of this equation differs between the two ensembles. While the variance calculated using the MTE quantifies the spreading of independent trajectories originating from the same initial point, the variance for the STE measures the spreading of two correlated r...
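    The distinction can be probed numerically. The sketch below simulates the OU process with Euler-Maruyama and compares the endpoint variance of an MTE (independent trajectories from one initial point) with that of an STE (one long stationary trajectory cut into segments); the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def ou_path(x0, n_steps, dt=0.01, gamma=1.0, D=1.0):
    """Euler-Maruyama simulation of dx = -gamma*x dt + sqrt(2D) dW."""
    x = np.empty(n_steps)
    x[0] = x0
    kicks = rng.standard_normal(n_steps - 1) * np.sqrt(2.0 * D * dt)
    for i in range(1, n_steps):
        x[i] = x[i - 1] * (1.0 - gamma * dt) + kicks[i - 1]
    return x

seg = 50  # segment length: 0.5 time units
# MTE: independent trajectories, all started from the same point x0 = 2
mte_ends = np.array([ou_path(2.0, seg)[-1] for _ in range(2000)])
# STE: one long stationary trajectory partitioned into equal segments
long_run = ou_path(0.0, 2000 * seg)[1000:]  # drop an initial transient
ste_ends = long_run[: (len(long_run) // seg) * seg].reshape(-1, seg)[:, -1]
# Theory: the MTE endpoint variance at t = 0.5 is D*(1 - exp(-2*gamma*t)),
# about 0.63 here, while the STE variance stays near the stationary value
# D = 1 -- the two ensembles are not interchangeable.
print(mte_ends.var(), ste_ends.var())
```

    The gap closes as the segment length grows past the relaxation time 1/gamma, which is consistent with the equivalence holding only in a suitable limit.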

  13. Development of the Ensemble Navy Aerosol Analysis Prediction System (ENAAPS) and its application of the Data Assimilation Research Testbed (DART) in support of aerosol forecasting

    Science.gov (United States)

    Rubin, Juli I.; Reid, Jeffrey S.; Hansen, James A.; Anderson, Jeffrey L.; Collins, Nancy; Hoar, Timothy J.; Hogan, Timothy; Lynch, Peng; McLay, Justin; Reynolds, Carolyn A.; Sessions, Walter R.; Westphal, Douglas L.; Zhang, Jianglong

    2016-03-01

    An ensemble-based forecast and data assimilation system has been developed for use in Navy aerosol forecasting. The system makes use of an ensemble of the Navy Aerosol Analysis Prediction System (ENAAPS) at 1 × 1°, combined with an ensemble adjustment Kalman filter from NCAR's Data Assimilation Research Testbed (DART). The base ENAAPS-DART system discussed in this work utilizes the Navy Operational Global Analysis Prediction System (NOGAPS) meteorological ensemble to drive offline NAAPS simulations coupled with the DART ensemble Kalman filter architecture to assimilate bias-corrected MODIS aerosol optical thickness (AOT) retrievals. This work outlines the optimization of the 20-member ensemble system, including consideration of meteorology and source-perturbed ensemble members as well as covariance inflation. Additional tests with 80 meteorological and source members were also performed. An important finding of this work is that an adaptive covariance inflation method, which has not been previously tested for aerosol applications, was found to perform better than a temporally and spatially constant covariance inflation. Problems were identified with the constant inflation in regions with limited observational coverage. The second major finding of this work is that combined meteorology and aerosol source ensembles are superior to either in isolation and that both are necessary to produce a robust system with sufficient spread in the ensemble members as well as realistic correlation fields for spreading observational information. The inclusion of aerosol source ensembles improves correlation fields for large aerosol source regions, such as smoke and dust in Africa, by statistically separating freshly emitted from transported aerosol species. However, the source ensembles have limited efficacy during long-range transport. 
Conversely, the meteorological ensemble generates sufficient spread at the synoptic scale to enable observational impact through the ensemble data
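    The analysis step with multiplicative covariance inflation can be sketched for a scalar state; this toy perturbed-observation EnKF merely stands in for the DART machinery (DART's adaptive scheme estimates the inflation factor online rather than fixing it, and the AOT numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, obs, obs_err_var, inflation=1.0):
    """Perturbed-observation EnKF analysis step for a scalar state with
    multiplicative covariance inflation applied to the prior spread."""
    mean = ensemble.mean()
    ens = mean + inflation * (ensemble - mean)  # inflate spread about the mean
    P = ens.var(ddof=1)                         # inflated prior variance
    K = P / (P + obs_err_var)                   # Kalman gain (H = identity)
    perturbed = obs + rng.standard_normal(ens.size) * np.sqrt(obs_err_var)
    return ens + K * (perturbed - ens)

# Hypothetical 20-member prior AOT ensemble, updated with one observation
prior = rng.normal(0.3, 0.05, size=20)
post = enkf_update(prior, obs=0.45, obs_err_var=0.05 ** 2, inflation=1.5)
```

    Inflation counteracts the tendency of small ensembles to become overconfident; choosing the factor adaptively, as in the study above, avoids over-dispersing well-observed regions while still widening the spread where observations are sparse.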

  14. Climate Prediction Center(CPC)Ensemble Canonical Correlation Analysis Forecast of Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) temperature forecast is a 90-day (seasonal) outlook of US surface temperature anomalies. The ECCA uses Canonical...

  15. Climate Prediction Center (CPC)Ensemble Canonical Correlation Analysis 90-Day Seasonal Forecast of Precipitation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) precipitation forecast is a 90-day (seasonal) outlook of US surface precipitation anomalies. The ECCA uses...

  16. Nanoelectrode ensemble based on multiwalled carbon nanotubes for electrochemical analysis

    OpenAIRE

    Музика, Катерина Миколаївна; Білаш, Олена Михайлівна

    2012-01-01

    A technique for developing nanoelectrode ensembles based on multiwalled carbon nanotubes has been demonstrated. The obtained NEE has a higher Faradaic-to-capacitive current ratio than conventional electrodes of the same area, indicating a lower detection limit for redox-active compounds.

  17. Modelling irrigated maize with a combination of coupled-model simulation and ensemble forecasting, in the west of China

    Science.gov (United States)

    Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.

    2011-04-01

    The hydrologic model HYDRUS-1D and the crop growth model WOFOST were coupled to manage water resources in agriculture efficiently and to improve the prediction of crop production, through accurate estimation of actual transpiration with the root water uptake method and of the soil moisture profile computed with the Richards equation during crop growth. The results of the coupled model are validated against experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement was achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under the maize crop. However, for regions without detailed observations, the results of the numerical simulation could be unreliable for policy and decision making owing to the uncertainty of model boundary conditions and parameters. We therefore developed a method combining model simulation and ensemble forecasting to analyse and predict the probability of crop production. In our studies, the uncertainty analysis was used to reveal the risk of crop-production losses as irrigation decreases. A global sensitivity analysis was used to test the coupled model and to quantify the impact of the uncertainty in coupled-model parameters and environmental scenarios on crop production. This method could be used for estimation in regions with little or no data availability.

  18. Modelling irrigated maize with a combination of coupled-model simulation and ensemble forecasting, in the west of China

    Directory of Open Access Journals (Sweden)

    Y. Li

    2011-04-01

    Full Text Available The hydrologic model HYDRUS-1D and the crop growth model WOFOST were coupled to manage water resources in agriculture efficiently and to improve the prediction of crop production, through accurate estimation of actual transpiration with the root water uptake method and of the soil moisture profile computed with the Richards equation during crop growth. The results of the coupled model are validated against experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement was achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under the maize crop. However, for regions without detailed observations, the results of the numerical simulation could be unreliable for policy and decision making owing to the uncertainty of model boundary conditions and parameters. We therefore developed a method combining model simulation and ensemble forecasting to analyse and predict the probability of crop production. In our studies, the uncertainty analysis was used to reveal the risk of crop-production losses as irrigation decreases. A global sensitivity analysis was used to test the coupled model and to quantify the impact of the uncertainty in coupled-model parameters and environmental scenarios on crop production. This method could be used for estimation in regions with little or no data availability.

  19. An Ensemble Learning Based Framework for Traditional Chinese Medicine Data Analysis with ICD-10 Labels

    OpenAIRE

    Gang Zhang; Yonghui Huang; Ling Zhong; Shanxing Ou; Yi Zhang; Ziping Li

    2015-01-01

    Objective. This study aims to establish a model to analyze the clinical experience of TCM veteran doctors. We propose an ensemble learning based framework to analyze clinical records with ICD-10 label information for effective diagnosis and acupoint recommendation. Methods. We propose an ensemble learning framework for the analysis task. A set of base learners composed of decision trees (DT) and support vector machines (SVM) is trained by bootstrapping the training dataset. The base learners are...

  20. An Ensemble 4D Seismic History Matching Framework with Sparse Representation Based on Wavelet Multiresolution Analysis

    OpenAIRE

    Luo, Xiaodong; Bhakta, Tuhin; Jakobsen, Morten; Nævdal, Geir

    2016-01-01

    In this work we propose an ensemble 4D seismic history matching framework for reservoir characterization. Compared to similar existing frameworks in the reservoir engineering community, the proposed one contains some relatively new ingredients: the type of seismic data chosen, wavelet multiresolution analysis of the chosen seismic data and the related estimation of data noise, and the use of recently developed iterative ensemble history matching algorithms. Typical seismic data used f...

  1. Applications of Bayesian Procrustes shape analysis to ensemble radar reflectivity nowcast verification

    Science.gov (United States)

    Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang

    2016-07-01

    This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and presents the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the smoothing that results from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object-based verification tool for precipitation forecasts. This extends a previously presented Procrustes shape analysis based verification approach into a full Bayesian format, designed to handle the verification of precipitation forecasts by matching objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.
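    The classical geometric core of the approach, ordinary and generalized Procrustes analysis of landmark shapes, can be sketched as follows; the full Bayesian treatment in the paper additionally models shape uncertainty, and the outlines below are hypothetical:

```python
import numpy as np

def procrustes_align(A, B):
    """Optimally translate, rotate (possibly reflect) and scale landmark set A
    (k x 2) onto B via ordinary orthogonal Procrustes analysis."""
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    U, s, Vt = np.linalg.svd(A0.T @ B0)
    Q = U @ Vt                          # optimal orthogonal transform
    scale = s.sum() / (A0 ** 2).sum()   # optimal isotropic scale
    return scale * A0 @ Q + B.mean(axis=0)

def procrustes_mean(shapes, n_iter=10):
    """Generalized Procrustes mean of an ensemble of landmark shapes,
    e.g. precipitation-object outlines from ensemble forecast fields."""
    mean = shapes[0]
    for _ in range(n_iter):
        mean = np.mean([procrustes_align(S, mean) for S in shapes], axis=0)
    return mean

# Hypothetical "ensemble": rotated, shifted, rescaled copies of one outline
outline = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
shapes = []
for theta, shift in [(0.0, 0.0), (0.6, 3.0), (1.1, -2.0)]:
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    shapes.append(1.3 * outline @ R.T + shift)
mean_shape = procrustes_mean(shapes)
```

    Because the members are aligned before averaging, the mean keeps the shared morphology instead of smearing it out, which is the property the paper exploits for ensemble means of precipitation objects.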

  2. Analysis of ensemble quality of initialzed hindcasts in the global coupled climate model MPI-ESM

    Science.gov (United States)

    Brune, Sebastian; Düsterhus, Andre; Baehr, Johanna

    2016-04-01

    Global coupled climate models have been used to generate long-term projections of potential climate changes for the next century. On much shorter timescales, numerical weather prediction systems forecast the atmospheric state for the next days. The first approach depends largely on the boundary conditions, i.e., the applied external forcings, while the second depends largely on the initial conditions, i.e., the observed atmospheric state. For medium-range climate predictions, on interannual to decadal time scales, both initial and boundary conditions are thought to influence the climate state, because the ocean is expected to have a much larger deterministic timescale than the atmosphere. The respective climate model needs to resemble the observed climate state and its tendency at the start of the prediction. This is realized by incorporating observations into both the oceanic and atmospheric components of the climate model, leading to an initialized simulation. Here, we analyze the quality of an initialized ensemble generated with the global coupled Max Planck Institute for Meteorology Earth System Model (MPI-ESM). For every year in the period 1960 to 2014, we initialize an ensemble run of 10 years' length. This hindcast ensemble is conducted within the MiKlip framework for interannual to decadal climate prediction. In this context, the initialization of the oceanic component of the model ensemble is thought to impact the model state within the first years of prediction; however, it remains poorly known for how much longer this impact can be detected. In our analysis we focus on North Atlantic ocean variability and assess the evolution in time of both the probability density function (PDF) and the spread-error ratio of the ensemble. Firstly, by comparing these characteristics of the initialized ensemble with an uninitialized ensemble we aim to (1) measure the difference in the initialized and uninitialized ensemble, (2) assess the evolution of this

  3. The analysis of ensembles of moderately saturated interstellar lines

    Science.gov (United States)

    Jenkins, E. B.

    1986-01-01

    It is shown that the combined equivalent widths for a large population of Gaussian-like interstellar line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity. Two parametric distribution functions for the line populations are considered: a bivariate Gaussian for tau(0) and b, and a power-law distribution for tau(0) combined with a Gaussian dispersion for b. First, COGs for populations having an extremely large number of nonoverlapping components are derived, and the implications are shown by focusing on the doublet-ratio analysis for a pair of lines whose f-values differ by a factor of two. The consequences of having, instead of an almost infinite number of lines, a relatively small collection of components added together for each member of a doublet are examined. The theory of how the equivalent widths grow for populations of overlapping Gaussian profiles is developed. Examples of the composite COG analysis applied to existing collections of high-resolution interstellar line data are presented.
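    The saturation behavior underlying the doublet-ratio analysis can be reproduced numerically: for a single Gaussian optical-depth profile, doubling tau(0) (i.e., doubling the f-value) doubles the equivalent width only while the line is optically thin. A sketch with arbitrary parameter values:

```python
import numpy as np

def equivalent_width(tau0, b, v_max=200.0, n=4001):
    """Equivalent width (in velocity units) of a line whose optical depth
    follows a Gaussian profile tau(v) = tau0 * exp(-(v/b)^2), integrated
    numerically over velocity."""
    v = np.linspace(-v_max, v_max, n)
    absorbed = 1.0 - np.exp(-tau0 * np.exp(-(v / b) ** 2))
    return np.sum(absorbed) * (v[1] - v[0])

# Doublet ratio: the stronger member of the pair has twice the tau0.
weak_ratio = equivalent_width(0.2, 10.0) / equivalent_width(0.1, 10.0)
saturated_ratio = equivalent_width(10.0, 10.0) / equivalent_width(5.0, 10.0)
print(weak_ratio, saturated_ratio)  # near 2 when thin, well below 2 when saturated
```

    A population of such components, each with its own tau(0) and b, yields the composite curves of growth analyzed in the paper.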

  4. Analysis of ensembles of moderately saturated interstellar lines

    International Nuclear Information System (INIS)

    It is shown that the combined equivalent widths for a large population of Gaussian-like interstellar line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity. Two parametric distribution functions for the line populations are considered: a bivariate Gaussian for tau(0) and b, and a power-law distribution for tau(0) combined with a Gaussian dispersion for b. First, COGs for populations having an extremely large number of nonoverlapping components are derived, and the implications are shown by focusing on the doublet-ratio analysis for a pair of lines whose f-values differ by a factor of two. The consequences of having, instead of an almost infinite number of lines, a relatively small collection of components added together for each member of a doublet are examined. The theory of how the equivalent widths grow for populations of overlapping Gaussian profiles is developed. Examples of the composite COG analysis applied to existing collections of high-resolution interstellar line data are presented. 39 references

  5. A MITgcm/DART ensemble analysis and prediction system with application to the Gulf of Mexico

    KAUST Repository

    Hoteit, Ibrahim

    2013-09-01

    This paper describes the development of an advanced ensemble Kalman filter (EnKF)-based ocean data assimilation system for prediction of the evolution of the loop current in the Gulf of Mexico (GoM). The system integrates the Data Assimilation Research Testbed (DART) assimilation package with the Massachusetts Institute of Technology ocean general circulation model (MITgcm). The MITgcm/DART system supports the assimilation of a wide range of ocean observations and uses an ensemble approach to solve the nonlinear assimilation problems. The GoM prediction system was implemented with an eddy-resolving 1/10th degree configuration of the MITgcm. Assimilation experiments were performed over a 6-month period between May and October during a strong loop current event in 1999. The model was sequentially constrained with weekly satellite sea surface temperature and altimetry data. Experimental results suggest that the ensemble-based assimilation system shows a high predictive skill in the GoM, with estimated ensemble spread mainly concentrated around the front of the loop current. Further analysis of the system estimates demonstrates that the ensemble assimilation accurately reproduces the observed features without imposing any negative impact on the dynamical balance of the system. Results from sensitivity experiments with respect to the ensemble filter parameters are also presented and discussed. © 2013 Elsevier B.V.

  6. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    Science.gov (United States)

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method. PMID:25571123
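    The abstract does not detail the classifier architecture, so the sketch below only illustrates the generic idea: PCA feature extraction feeding an ensemble of simple classifiers trained on bootstrap resamples and combined by majority vote, run on synthetic data standing in for P300 epochs:

```python
import numpy as np

rng = np.random.default_rng(3)

def pca_fit(X, k):
    """Return the mean and top-k principal axes of X (trials x features)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def nearest_mean_predict(Z, class_means):
    """Assign each projected trial to the nearest class mean."""
    dists = [np.linalg.norm(Z - m, axis=1) for m in class_means]
    return np.argmin(dists, axis=0)

# Toy stand-in for P300 epochs: class 1 carries a small added deflection.
n, f = 200, 64
X0 = rng.standard_normal((n, f))
X1 = rng.standard_normal((n, f))
X1[:, 20:30] += 1.0
X, y = np.vstack([X0, X1]), np.r_[np.zeros(n, int), np.ones(n, int)]

# Ensemble: one PCA + nearest-mean model per bootstrap resample, majority vote.
votes = []
for _ in range(11):
    idx = rng.integers(0, len(y), len(y))
    mu, V = pca_fit(X[idx], k=5)
    means = [((X[idx] - mu) @ V.T)[y[idx] == c].mean(axis=0) for c in (0, 1)]
    votes.append(nearest_mean_predict((X - mu) @ V.T, means))
pred = (np.mean(votes, axis=0) > 0.5).astype(int)
acc = (pred == y).mean()
```

    Projecting onto a few principal components both denoises the epochs and keeps the per-member classifiers cheap, which matters for the mobile-device setting the paper targets.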

  7. Analysis of the interface variability in NMR structure ensembles of protein-protein complexes.

    Science.gov (United States)

    Calvanese, Luisa; D'Auria, Gabriella; Vangone, Anna; Falcigno, Lucia; Oliva, Romina

    2016-06-01

NMR structures consist of ensembles of conformers, all satisfying the experimental restraints, which exhibit a certain degree of structural variability. We analyzed here the interface in NMR ensembles of protein-protein heterodimeric complexes and found it to span a wide range of conservation levels. The exhibited levels of conservation do not simply correlate with the size of the systems/interfaces, and are most probably the result of an interplay between different factors, including the quality of the experimental data and the intrinsic flexibility of the complex. In any case, this information should not be missed when NMR structures of protein-protein complexes are analyzed, especially considering that, as we also show here, the first NMR conformer is usually not the one that best reflects the overall interface. To quantify the interface conservation and to analyze it, we used an approach originally conceived for the analysis and ranking of ensembles of docking models, which has now been extended to deal directly with NMR ensembles. We propose this approach, based on the conservation of the inter-residue contacts at the interface, both for the analysis of the interface in whole ensembles of NMR complexes and for the possible selection of a single conformer as the best representative of the overall interface. In order to make the analyses automatic and fast, we made the protocol available as a web tool at: https://www.molnac.unisa.it/BioTools/consrank/consrank-nmr.html. PMID:26968364

  8. Statistical mechanical analysis of a hierarchical random code ensemble in signal processing

    Energy Technology Data Exchange (ETDEWEB)

    Obuchi, Tomoyuki [Department of Earth and Space Science, Faculty of Science, Osaka University, Toyonaka 560-0043 (Japan); Takahashi, Kazutaka [Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551 (Japan); Takeda, Koujin, E-mail: takeda@sp.dis.titech.ac.jp [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)

    2011-02-25

We study a random code ensemble with a hierarchical structure, which is closely related to the generalized random energy model with discrete energy values. Based on this correspondence, we analyze the hierarchical random code ensemble using the replica method in two situations: lossy data compression and channel coding. In both situations, the exponents of the large deviation analysis characterizing the performance of the ensemble, namely the distortion rate of lossy data compression and the error exponent of channel coding in Gallager's formalism, are accessible through a generating function of the generalized random energy model. We argue that the transitions of those exponents observed in the preceding work can be interpreted as phase transitions with respect to the replica number. We also show that replica symmetry breaking plays an essential role in these transitions.

  9. Uncertainty analysis in building ensemble of RCMs, on water cycle in South East of Spain

    Science.gov (United States)

    García Galiano, Sandra; Olmos Giménez, Patricia; Giraldo Osorio, Juan Diego

    2014-05-01

the influence of seasonal and annual variation of the corresponding variables, and is built at each site. A sensitivity analysis of the ensemble building method for the meteorological variables is addressed to justify the most robust and parsimonious methodology. Finally, the impacts on runoff and its trend, from historical data and climate projections with the selected method of RCM ensemble building, were assessed. Significant decreases in runoff under plausible scenarios for 2050 were identified, with consequent negative impacts on the regional economy.

  10. Ensemble-trained source apportionment of fine particulate matter and method uncertainty analysis

    Science.gov (United States)

    Balachandran, Sivaraman; Pachon, Jorge E.; Hu, Yongtao; Lee, Dongho; Mulholland, James A.; Russell, Armistead G.

    2012-12-01

    An ensemble-based approach is applied to better estimate source impacts on fine particulate matter (PM2.5) and quantify uncertainties in various source apportionment (SA) methods. The approach combines source impacts from applications of four individual SA methods: three receptor-based models and one chemical transport model (CTM). Receptor models used are the chemical mass balance methods CMB-LGO (Chemical Mass Balance-Lipschitz global optimizer) and CMB-MM (molecular markers) as well as a factor analytic method, Positive Matrix Factorization (PMF). The CTM used is the Community Multiscale Air Quality (CMAQ) model. New source impact estimates and uncertainties in these estimates are calculated in a two-step process. First, an ensemble average is calculated for each source category using results from applying the four individual SA methods. The root mean square error (RMSE) between each method with respect to the average is calculated for each source category; the RMSE is then taken to be the updated uncertainty for each individual SA method. Second, these new uncertainties are used to re-estimate ensemble source impacts and uncertainties. The approach is applied to data from daily PM2.5 measurements at the Atlanta, GA, Jefferson Street (JST) site in July 2001 and January 2002. The procedure provides updated uncertainties for the individual SA methods that are calculated in a consistent way across methods. Overall, the ensemble has lower relative uncertainties as compared to the individual SA methods. Calculated CMB-LGO uncertainties tend to decrease from initial estimates, while PMF and CMB-MM uncertainties increase. Estimated CMAQ source impact uncertainties are comparable to other SA methods for gasoline vehicles and SOC but are larger than other methods for other sources. 
In addition to providing improved estimates of source impact uncertainties, the ensemble estimates do not have unrealistic extremes as compared to individual SA methods and avoid zero impact
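The two-step procedure lends itself to a compact sketch. The version below is one plausible reading of the abstract, with made-up daily impact values: an ensemble average is computed, each method's RMSE against that average becomes its updated uncertainty, and the ensemble is then re-estimated with inverse-variance weights:

```python
import numpy as np

def ensemble_source_impacts(impacts, n_iter=2):
    """Two-step ensemble averaging with RMSE-based uncertainty weights.

    impacts : (n_methods, n_days) source-impact estimates from the
              individual SA methods for one source category.
    Returns (weighted ensemble mean per day, RMSE per method).
    """
    impacts = np.asarray(impacts, dtype=float)
    weights = np.ones(len(impacts))            # start unweighted
    for _ in range(n_iter):
        # Step 1: (weighted) ensemble average across methods
        avg = np.average(impacts, axis=0, weights=weights)
        # RMSE of each method w.r.t. the average = updated uncertainty
        rmse = np.sqrt(((impacts - avg) ** 2).mean(axis=1))
        # Step 2: re-estimate the ensemble with inverse-variance weights
        weights = 1.0 / np.maximum(rmse, 1e-12) ** 2
    return np.average(impacts, axis=0, weights=weights), rmse

# Hypothetical daily impacts (ug/m3) from four SA methods
methods = np.array([
    [2.1, 2.3, 1.9, 2.0],   # CMB-LGO
    [2.0, 2.2, 2.1, 1.9],   # CMB-MM
    [2.2, 2.4, 2.0, 2.1],   # PMF
    [3.5, 0.8, 3.9, 0.2],   # CMAQ (deliberately noisier in this toy)
])
ens, rmse = ensemble_source_impacts(methods)
```

In this toy case the noisier method receives the largest RMSE and is down-weighted, so the re-estimated ensemble stays close to the three agreeing methods.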

  11. Pathway analysis in attention deficit hyperactivity disorder: An ensemble approach.

    Science.gov (United States)

    Mooney, Michael A; McWeeney, Shannon K; Faraone, Stephen V; Hinney, Anke; Hebebrand, Johannes; Nigg, Joel T; Wilmot, Beth

    2016-09-01

    Despite a wealth of evidence for the role of genetics in attention deficit hyperactivity disorder (ADHD), specific and definitive genetic mechanisms have not been identified. Pathway analyses, a subset of gene-set analyses, extend the knowledge gained from genome-wide association studies (GWAS) by providing functional context for genetic associations. However, there are numerous methods for association testing of gene sets and no real consensus regarding the best approach. The present study applied six pathway analysis methods to identify pathways associated with ADHD in two GWAS datasets from the Psychiatric Genomics Consortium. Methods that utilize genotypes to model pathway-level effects identified more replicable pathway associations than methods using summary statistics. In addition, pathways implicated by more than one method were significantly more likely to replicate. A number of brain-relevant pathways, such as RhoA signaling, glycosaminoglycan biosynthesis, fibroblast growth factor receptor activity, and pathways containing potassium channel genes, were nominally significant by multiple methods in both datasets. These results support previous hypotheses about the role of regulation of neurotransmitter release, neurite outgrowth and axon guidance in contributing to the ADHD phenotype and suggest the value of cross-method convergence in evaluating pathway analysis results. © 2016 Wiley Periodicals, Inc. PMID:27004716

  12. Seeking for the rational basis of the median model: the optimal combination of multi-model ensemble results

    Directory of Open Access Journals (Sweden)

    A. Riccio

    2007-04-01

    Full Text Available In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides.

We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b).

This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.

  13. An Ensemble 4D Seismic History Matching Framework with Sparse Representation Based on Wavelet Multiresolution Analysis

    CERN Document Server

    Luo, Xiaodong; Jakobsen, Morten; Nævdal, Geir

    2016-01-01

In this work we propose an ensemble 4D seismic history matching framework for reservoir characterization. Compared to similar existing frameworks in the reservoir engineering community, the proposed one introduces some relatively new ingredients: the choice of seismic data type, wavelet multiresolution analysis for the chosen seismic data and the related data noise estimation, and the use of recently developed iterative ensemble history matching algorithms. Typical seismic data used for history matching, such as acoustic impedance, are inverted quantities, and extra uncertainties may arise during the inversion processes. In the proposed framework we avoid such intermediate inversion processes. In addition, we also adopt wavelet-based sparse representation to reduce data size. Concretely, we use intercept and gradient attributes derived from amplitude versus angle (AVA) data, apply multilevel discrete wavelet transforms (DWT) to attribute data, and estimate noise level of resulting wavelet coeffici...

  14. Morphing ensemble Kalman filters

    OpenAIRE

    Beezley, Jonathan D.; Mandel, Jan

    2008-01-01

    A new type of ensemble filter is proposed, which combines an ensemble Kalman filter (EnKF) with the ideas of morphing and registration from image processing. This results in filters suitable for non-linear problems whose solutions exhibit moving coherent features, such as thin interfaces in wildfire modelling. The ensemble members are represented as the composition of one common state with a spatial transformation, called registration mapping, plus a residual. A fully automatic registration m...

  15. Morphing Ensemble Kalman Filters

    OpenAIRE

    Beezley, Jonathan D.; Mandel, Jan

    2007-01-01

    A new type of ensemble filter is proposed, which combines an ensemble Kalman filter (EnKF) with the ideas of morphing and registration from image processing. This results in filters suitable for nonlinear problems whose solutions exhibit moving coherent features, such as thin interfaces in wildfire modeling. The ensemble members are represented as the composition of one common state with a spatial transformation, called registration mapping, plus a residual. A fully automatic registration met...

  16. Ensembl 2012.

    Science.gov (United States)

    Flicek, Paul; Amode, M Ridwan; Barrell, Daniel; Beal, Kathryn; Brent, Simon; Carvalho-Silva, Denise; Clapham, Peter; Coates, Guy; Fairley, Susan; Fitzgerald, Stephen; Gil, Laurent; Gordon, Leo; Hendrix, Maurice; Hourlier, Thibaut; Johnson, Nathan; Kähäri, Andreas K; Keefe, Damian; Keenan, Stephen; Kinsella, Rhoda; Komorowska, Monika; Koscielny, Gautier; Kulesha, Eugene; Larsson, Pontus; Longden, Ian; McLaren, William; Muffato, Matthieu; Overduin, Bert; Pignatelli, Miguel; Pritchard, Bethan; Riat, Harpreet Singh; Ritchie, Graham R S; Ruffier, Magali; Schuster, Michael; Sobral, Daniel; Tang, Y Amy; Taylor, Kieron; Trevanion, Stephen; Vandrovcova, Jana; White, Simon; Wilson, Mark; Wilder, Steven P; Aken, Bronwen L; Birney, Ewan; Cunningham, Fiona; Dunham, Ian; Durbin, Richard; Fernández-Suarez, Xosé M; Harrow, Jennifer; Herrero, Javier; Hubbard, Tim J P; Parker, Anne; Proctor, Glenn; Spudich, Giulietta; Vogel, Jan; Yates, Andy; Zadissa, Amonida; Searle, Stephen M J

    2012-01-01

    The Ensembl project (http://www.ensembl.org) provides genome resources for chordate genomes with a particular focus on human genome data as well as data for key model organisms such as mouse, rat and zebrafish. Five additional species were added in the last year including gibbon (Nomascus leucogenys) and Tasmanian devil (Sarcophilus harrisii) bringing the total number of supported species to 61 as of Ensembl release 64 (September 2011). Of these, 55 species appear on the main Ensembl website and six species are provided on the Ensembl preview site (Pre!Ensembl; http://pre.ensembl.org) with preliminary support. The past year has also seen improvements across the project. PMID:22086963

  17. Mesoscale ensemble sensitivity analysis for predictability studies and observing network design in complex terrain

    Science.gov (United States)

    Hacker, Joshua

    2013-04-01

Ensemble sensitivity analysis (ESA) is emerging as a viable alternative to adjoint sensitivity. Several open issues face ESA for forecasts dominated by mesoscale phenomena, including (1) sampling error arising from finite-sized ensembles, which causes over-estimated sensitivities, and (2) violation of linearity assumptions for strongly nonlinear flows. In an effort to use ESA for predictability studies and observing network design in complex terrain, we present results from experiments designed to address these open issues. Sampling error in ESA arises in two places. First, when hypothetical observations are introduced to test the sensitivity estimates for linearity; here the same localization that was used in the filter itself can simply be applied. Second, and more critically, localization should be considered within the sensitivity calculations. Sensitivity to hypothetical observations, estimated without re-running the ensemble, includes regression of a sample of a final-time (forecast) metric onto a sample of initial states. Derivation to include localization results in two localization coefficients (or factors) applied in separate regression steps. Because the forecast metric is usually a sum, and can also include a sum over a spatial region and multiple physical variables, a spatial localization function is difficult to specify. We present results from experiments to empirically estimate localization factors for ESA to test hypothetical observations for mesoscale data assimilation in complex terrain. Localization factors are first derived for an ensemble filter following the empirical localization methodology. Sensitivities for a fog event over Salt Lake City, and a Colorado downslope wind event, are tested for linearity by approximating assimilation of perfect observations at points of maximum sensitivity, both with and without localization. Observation sensitivity is then estimated, with and without localization, and tested for linearity. The validity of the
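The regression at the heart of ESA, the sensitivity of a scalar forecast metric to each initial-state variable, can be sketched as below; the toy system, the metric, and the optional localization vector are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def ensemble_sensitivity(X0, J, loc=None):
    """Regress a scalar forecast metric J onto initial-state samples.

    X0  : (n, m) initial-state ensemble (n variables, m members)
    J   : (m,) forecast metric per member
    loc : optional (n,) localization factors damping the regression
    """
    dX = X0 - X0.mean(axis=1, keepdims=True)
    dJ = J - J.mean()
    cov = dX @ dJ / (len(J) - 1)             # cov(x_i, J)
    sens = cov / dX.var(axis=1, ddof=1)      # dJ/dx_i, per variable
    return sens if loc is None else loc * sens

# Toy system: the metric depends only on the first two state variables
m = 50
X0 = rng.normal(size=(10, m))
J = 2.0 * X0[0] - 1.0 * X0[1] + 0.1 * rng.normal(size=m)
sens = ensemble_sensitivity(X0, J)
```

With only 50 members the eight unrelated variables still receive small nonzero sensitivities: this is exactly the finite-ensemble sampling error the abstract says localization must damp.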

  18. Hybrid Data Assimilation without Ensemble Filtering

    Science.gov (United States)

    Todling, Ricardo; Akkraoui, Amal El

    2014-01-01

The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations to compensate for sampling and modeling errors, respectively, and to maintain the small-member ensemble solution close to the variational solution; we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is run at higher resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. Comparisons of this so-called filter-free hybrid procedure with an EnKF-based hybrid procedure and a control non-hybrid, traditional scheme show both hybrid strategies to provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.
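A minimal sketch of the filter-free idea, under strong simplifying assumptions (naive subsampling standing in for the resolution conversion, Gaussian perturbations standing in for additive inflation), might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

def filter_free_ensemble(x_var_hires, coarse_idx, m, add_std):
    """Generate ensemble members without an EnKF: convert the
    high-resolution variational analysis to ensemble resolution and
    apply additive inflation (random perturbations of spread add_std)."""
    x_coarse = x_var_hires[coarse_idx]       # crude "resolution change"
    members = x_coarse[:, None] + add_std * rng.normal(
        size=(len(x_coarse), m))
    # re-center so the ensemble mean equals the (coarsened) analysis
    members += x_coarse[:, None] - members.mean(axis=1, keepdims=True)
    return members

x_hires = rng.normal(size=200)               # stand-in variational analysis
ens = filter_free_ensemble(x_hires, np.arange(0, 200, 2), m=32, add_std=0.5)
```

The re-centering step mirrors the abstract's observation that centering the members about the variational analysis matters more than expected; a real system would interpolate rather than subsample, and would use structured (flow-dependent) perturbations.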

  19. Ensembl 2012

    OpenAIRE

    Flicek, Paul; Amode, M. Ridwan; Barrell, Daniel; Beal, Kathryn; Brent, Simon; Carvalho-Silva, Denise; Clapham, Peter; Coates, Guy; Fairley, Susan; Fitzgerald, Stephen; Gil, Laurent; Gordon, Leo; Hendrix, Maurice; Hourlier, Thibaut; Johnson, Nathan

    2011-01-01

    The Ensembl project (http://www.ensembl.org) provides genome resources for chordate genomes with a particular focus on human genome data as well as data for key model organisms such as mouse, rat and zebrafish. Five additional species were added in the last year including gibbon (Nomascus leucogenys) and Tasmanian devil (Sarcophilus harrisii) bringing the total number of supported species to 61 as of Ensembl release 64 (September 2011). Of these, 55 species appear on the main Ensembl website ...

  20. A glacial systems model configured for large ensemble analysis of Antarctic deglaciation

    Directory of Open Access Journals (Sweden)

    R. Briggs

    2013-04-01

Full Text Available This article describes the Memorial University of Newfoundland/Penn State University (MUN/PSU) glacial systems model (GSM) that has been developed specifically for large-ensemble data-constrained analysis of past Antarctic Ice Sheet evolution. Our approach emphasizes the introduction of a large set of model parameters to explicitly account for the uncertainties inherent in the modelling of such a complex system. At the core of the GSM is a 3-D thermo-mechanically coupled ice sheet model that solves both the shallow ice and shallow shelf approximations. This enables the different stress regimes of ice sheet, ice shelves, and ice streams to be represented. The grounding line is modelled through an analytical sub-grid flux parametrization. To this dynamical core the following have been added: a heavily parametrized basal drag component; a visco-elastic isostatic adjustment solver; a diverse set of climate forcings (to remove any reliance on any single method); tidewater and ice shelf calving functionality; and a new physically-motivated empirically-derived sub-shelf melt (SSM) component. To assess the accuracy of the latter, we compare predicted SSM values against a compilation of published observations. Within parametric and observational uncertainties, computed SSM for the present day ice sheet is in accord with observations for all but the Filchner ice shelf. The GSM has 31 ensemble parameters that are varied to account (in part) for the uncertainty in the ice-physics, the climate forcing, and the ice-ocean interaction. We document the parameters and parametric sensitivity of the model to motivate the choice of ensemble parameters in a quest to approximately bound reality (within the limits of 31 parameters).

  1. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

    Science.gov (United States)

    Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

    2009-01-01

An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and the 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was evident. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method, based on a trimmed mean, resulted in a single, somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.

  2. Ensemble Data Mining Methods

    Data.gov (United States)

    National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...

  3. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets

    Science.gov (United States)

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-01-01

In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid and then pieces together the temporal–spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD through decomposing principal components instead of original grid-wise time series to speed up computation of MEEMD. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. PMID:26953173
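The compression step can be illustrated with plain SVD-based EOF/PC truncation; the synthetic "climate field" below is an assumption, and the EEMD stage itself (which in fast MEEMD would run on the few retained PCs rather than on thousands of grid-point series) is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "climate field": 500 time steps x 2000 grid points whose
# variability lives on a handful of spatial patterns plus weak noise.
t = np.arange(500)
patterns = rng.normal(size=(5, 2000))
pcs_true = np.stack([np.sin(2 * np.pi * t / p) for p in (23, 61, 97, 151, 211)])
field = pcs_true.T @ patterns + 0.01 * rng.normal(size=(500, 2000))

# EOF/PC compression: keep only the leading k pairs; any further analysis
# then operates on k PC time series instead of 2000 grid series.
k = 5
mu = field.mean(axis=0)
U, s, Vt = np.linalg.svd(field - mu, full_matrices=False)
pcs, eofs = U[:, :k] * s[:k], Vt[:k]          # compressed representation
recon = mu + pcs @ eofs

compression = field.size / (pcs.size + eofs.size + mu.size)
rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
```

Because the field is spatio-temporally coherent, five PC/EOF pairs reproduce it almost exactly while storing far fewer numbers, which is the mechanism behind the quoted one-to-two-orders compression rate.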

  4. Data-worth analysis through probabilistic collocation-based Ensemble Kalman Filter

    Science.gov (United States)

    Dai, Cheng; Xue, Liang; Zhang, Dongxiao; Guadagnini, Alberto

    2016-09-01

We propose a new and computationally efficient data-worth analysis and quantification framework keyed to the characterization of target state variables in groundwater systems. We focus on dynamically evolving plumes of dissolved chemicals migrating in randomly heterogeneous aquifers. An accurate prediction of the detailed features of solute plumes requires collecting a substantial amount of data. At the same time, constraints dictated by the availability of financial resources and ease of access to the aquifer system suggest the importance of assessing the expected value of data before these are actually collected. Data-worth analysis is targeted to the quantification of the impact of new potential measurements on the expected reduction of predictive uncertainty based on a given process model. Integration of the Ensemble Kalman Filter method within a data-worth analysis framework enables us to assess data worth sequentially, which is a key desirable feature for monitoring scheme design in a contaminant transport scenario. However, it is remarkably challenging because of the (typically) high computational cost involved, considering that repeated solutions of the inverse problem are required. As a computationally efficient scheme, we embed in the data-worth analysis framework a modified version of the Probabilistic Collocation Method-based Ensemble Kalman Filter proposed by Zeng et al. (2011) so that we take advantage of the ability to assimilate data sequentially in time through a surrogate model constructed via the polynomial chaos expansion. We illustrate our approach on a set of synthetic scenarios involving solute migrating in a two-dimensional random permeability field. Our results demonstrate the computational efficiency of our approach and its ability to quantify the impact of the design of the monitoring network on the reduction of uncertainty associated with the characterization of a migrating contaminant plume.

  5. Combined assimilation of soil moisture and streamflow data by an ensemble Kalman filter in a coupled model of surface-subsurface flow.

    Science.gov (United States)

    Camporese, M.; Paniconi, C.; Putti, M.; Salandin, P.

    2007-12-01

Hydrologic models can benefit greatly from data assimilation algorithms, which update the modeled system state by incorporating experimental measurements of various quantities into the model solution as soon as the data become available. In this context, data assimilation seems well suited to coupled surface-subsurface models, which, by considering the watershed as the ensemble of surface and subsurface domains, allow a more accurate description of the hydrological processes at the catchment scale, where soil moisture largely influences the partitioning of rain between runoff and infiltration and thus controls the flow at the outlet. The need for a better determination of the variables of interest (streamflow at the outlet section, water table, soil water content, etc.) has led to many efforts focused on the development of coupled numerical models, together with field and laboratory observations. Nevertheless, uncertainty in the schematic description of physical processes and inaccuracies in source data collection induce errors in the model predictions. The ensemble Kalman filter (EnKF) extends the classic Kalman filter to nonlinear problems by means of a Monte Carlo approach. A sequential assimilation procedure based on the EnKF is developed and integrated in a process-based numerical model, which couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. A detailed analysis of the data assimilation algorithm's behavior within the coupled model has been carried out on a synthetic 1D test case in order to verify the correct implementation and derive a series of fundamental parameters, such as the minimum ensemble size that can ensure a sufficient accuracy in the statistical estimates. The assimilation frequency, as well as the effects

  6. Extracting the Neural Representation of Tone Onsets for Separate Voices of Ensemble Music Using Multivariate EEG Analysis

    DEFF Research Database (Denmark)

    Sturm, Irene; Treder, Matthias S.; Miklody, Daniel;

    2015-01-01

When listening to ensemble music even non-musicians can follow single instruments effortlessly. Electrophysiological indices of neural sensory encoding of separate streams have been described using oddball paradigms, which utilize brain reactions to sound events that deviate from a repeating standard pattern. Obviously, these paradigms put constraints on the compositional complexity of the musical stimulus. Here, we apply a regression-based method of multivariate EEG analysis in order to reveal the neural encoding of separate voices of naturalistic ensemble music that is based on cortical responses to tone onsets, such as N1/P2 ERP components. Music clips (resembling minimalistic electro-pop) were presented to 11 subjects, either in an ensemble version (drums, bass, keyboard) or in the corresponding three solo versions. For each instrument we train a spatio-temporal regression filter that...

  7. Ensemble clustering in deterministic ensemble Kalman filters

    Directory of Open Access Journals (Sweden)

    Javier Amezcua

    2012-07-01

Full Text Available Ensemble clustering (EC) can arise in data assimilation with ensemble square root filters (EnSRFs) using non-linear models: an M-member ensemble splits into a single outlier and a cluster of M–1 members. The stochastic Ensemble Kalman Filter does not present this problem. Modifications to the EnSRFs by a periodic resampling of the ensemble through random rotations have been proposed to address it. We introduce a metric to quantify the presence of EC and present evidence to dispel the notion that EC leads to filter failure. Starting from a univariate model, we show that EC is not a permanent but a transient phenomenon; it occurs intermittently in non-linear models. We perform a series of data assimilation experiments using a standard EnSRF and an EnSRF modified by resampling through random rotations. The modified EnSRF alleviates the issues associated with EC at the cost of the traceability of individual ensemble trajectories, and cannot use some of the algorithms that enhance the performance of the standard EnSRF. In the non-linear regimes of low-dimensional models, the analysis root mean square error of the standard EnSRF slowly grows with ensemble size if the size is larger than the dimension of the model state. However, we do not observe this problem in a more complex model that uses an ensemble size much smaller than the dimension of the model state, along with inflation and localisation. Overall, we find that transient EC does not handicap the performance of the standard EnSRF.

  8. NYYD Ensemble

    Index Scriptorium Estoniae

    2002-01-01

On the NYYD Ensemble duo Traksmann - Lukk and their performance of E.-S. Tüür's work "Symbiosis", which is also recorded on the recently released NYYD Ensemble CD. Concerts on 2 March in the small hall of the Rakvere Theatre and on 3 March at the Rotermann Salt Storage, with a programme of Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho, and Buckinx

  9. Ensemble habitat mapping of invasive plant species

    Science.gov (United States)

    Stohlgren, T.J.; Ma, P.; Kumar, S.; Rocca, M.; Morisette, J.T.; Jarnevich, C.S.; Benson, N.

    2010-01-01

Ensemble species distribution models combine the strengths of several species environmental matching models, while minimizing the weakness of any one model. Ensemble models may be particularly useful in risk analysis of recently arrived, harmful invasive species because species may not yet have spread to all suitable habitats, leaving species-environment relationships difficult to determine. We tested five individual models (logistic regression, boosted regression trees, random forest, multivariate adaptive regression splines (MARS), and maximum entropy model or Maxent) and ensemble modeling for selected nonnative plant species in Yellowstone and Grand Teton National Parks, Wyoming; Sequoia and Kings Canyon National Parks, California, and areas of interior Alaska. The models are based on field data provided by the park staffs, combined with topographic, climatic, and vegetation predictors derived from satellite data. For the four invasive plant species tested, ensemble models were the only models that ranked in the top three models for both field validation and test data. Ensemble models may be more robust than individual species-environment matching models for risk analysis. © 2010 Society for Risk Analysis.

  10. Bayesian ensemble refinement by replica simulations and reweighting

    Science.gov (United States)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
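
The maximum-entropy reweighting at the heart of the EROS-type formulation above can be sketched for a single scalar observable; the observable values, target average, and confidence parameter are invented for illustration, and the scalar Lagrange multiplier is fit by plain gradient steps rather than by any scheme used in the paper:

```python
import numpy as np

def reweight(obs, target, theta=100.0, n_iter=2000, lr=0.01):
    """Maximum-entropy-style ensemble reweighting (illustrative sketch).

    obs    : (M,) value of the observable for each ensemble member
    target : experimental ensemble average to be matched
    theta  : confidence parameter scaling the data-fit term
    Returns weights w_i proportional to exp(-lambda * obs_i), the functional
    form of the optimal maximum-entropy distribution, with lambda fit by
    gradient descent on the squared mismatch of the weighted average.
    """
    lam = 0.0
    for _ in range(n_iter):
        w = np.exp(-lam * obs)
        w /= w.sum()
        avg = np.dot(w, obs)
        var = np.dot(w, (obs - avg) ** 2)     # d(avg)/d(lambda) = -var
        grad = theta * (avg - target) * (-var)
        lam -= lr * grad
    w = np.exp(-lam * obs)
    return w / w.sum()

obs = np.array([1.0, 2.0, 3.0, 4.0])          # hypothetical per-member observable
w = reweight(obs, target=2.0)
```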

  11. Removal of artifacts in knee joint vibroarthrographic signals using ensemble empirical mode decomposition and detrended fluctuation analysis

    International Nuclear Information System (INIS)

    High-resolution knee joint vibroarthrographic (VAG) signals can help physicians accurately evaluate the pathological condition of a degenerative knee joint, in order to prevent unnecessary exploratory surgery. Artifact cancellation is vital to preserve the quality of VAG signals prior to further computer-aided analysis. This paper describes a novel method that effectively utilizes ensemble empirical mode decomposition (EEMD) and detrended fluctuation analysis (DFA) algorithms for the removal of baseline wander and white noise in VAG signal processing. The EEMD method first successively decomposes the raw VAG signal into a set of intrinsic mode functions (IMFs) with fast and low oscillations, until the monotonic baseline wander remains in the last residue. Then, the DFA algorithm is applied to compute the fractal scaling index parameter of each IMF, in order to distinguish the anti-correlated components from the long-range correlated ones; the identified IMFs are then used to reconstruct the artifact-reduced VAG signals. Our experimental results showed that the combination of EEMD and DFA algorithms was able to provide averaged signal-to-noise ratio (SNR) values of 20.52 dB (standard deviation: 1.14 dB) and 20.87 dB (standard deviation: 1.89 dB) for 45 normal signals in healthy subjects and 20 pathological signals in symptomatic patients, respectively. The combination of EEMD and DFA algorithms can ameliorate the quality of VAG signals with great SNR improvements over the raw signal, and the results were also superior to those achieved by wavelet matching pursuit decomposition and a time-delay neural filter. (paper)
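
The DFA step described above can be sketched as follows; the window scales, signal lengths, and test signals are illustrative assumptions, not the paper's settings. A scaling exponent below 0.5 indicates an anti-correlated (noise-like) component; an exponent above 0.5 indicates long-range correlation (signal-like):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: return the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        ms = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))            # RMS fluctuation at scale s
    # Slope of log F(s) versus log s is the scaling exponent.
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
white = rng.normal(size=4096)                     # uncorrelated: alpha ~ 0.5
walk = np.cumsum(white)                           # strongly correlated: alpha ~ 1.5
```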

  12. MVL spatiotemporal analysis for model intercomparison in EPS: application to the DEMETER multi-model ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, J.; Cofino, A.S. [University of Cantabria, Department of Applied Mathematics and Computing Sciences, Santander (Spain); Primo, C. [European Centre for Medium-Range Weather Forecasts, Reading (United Kingdom); Gutierrez, J.M.; Rodriguez, M.A. [Instituto de Fisica de Cantabria, CSIC-UC, Santander (Spain)

    2009-08-15

    In a recent paper, Gutierrez et al. (Nonlinear Process Geophys 15(1):109-114, 2008) introduced a new characterization of spatiotemporal error growth - the so-called mean-variance logarithmic (MVL) diagram - and applied it to study ensemble prediction systems (EPS); in particular, they analyzed single-model ensembles obtained by perturbing the initial conditions. In the present work, the MVL diagram is applied to multi-model ensembles, also analyzing the effect of differences in model formulation. To this aim, the MVL diagram is systematically applied to the multi-model ensemble produced in the EU-funded DEMETER project. It is shown that the shared building blocks (atmospheric and ocean components) impose similar dynamics among different models and, thus, contribute to poorly sampling the model formulation uncertainty. This dynamical similarity should be taken into account, at least as a pre-screening process, before applying any objective weighting method. (orig.)

  13. Combination Clustering Analysis Method and its Application

    OpenAIRE

    Bang-Chun Wen; Li-Yuan Dong; Qin-Liang Li; Yang Liu

    2013-01-01

    The traditional clustering analysis method cannot automatically determine the optimal number of clusters. In this study, we provide a new method, the combination clustering analysis method, to solve this problem. By analyzing 25 kinds of automobile data samples with the combination clustering analysis method, the correctness of the analysis result was verified. It showed that the combination clustering analysis method could objectively determine the number of clustering firs...
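
One common way to let the data determine the number of clusters, in the spirit of the abstract above, is to score candidate cluster counts with an internal criterion such as the mean silhouette width; the tiny k-means implementation and the toy data below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Tiny k-means with deterministic farthest-first initialization."""
    C = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in C], axis=0)
        C.append(X[np.argmax(d)])                 # farthest point becomes a center
    C = np.array(C)
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(axis=0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab

def silhouette(X, lab):
    """Mean silhouette width: an internal criterion for choosing k."""
    D = np.linalg.norm(X[:, None] - X[None], axis=-1)
    n, s = len(X), []
    for i in range(n):
        own = (lab == lab[i]) & (np.arange(n) != i)
        a = D[i, own].mean() if own.any() else 0.0        # mean intra-cluster distance
        b = min(D[i, lab == j].mean()                     # nearest other cluster
                for j in set(lab.tolist()) if j != lab[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Three well-separated hypothetical "automobile feature" clusters.
rng = np.random.default_rng(7)
X = np.vstack([c + rng.normal(0, 1.0, size=(30, 2))
               for c in ([0, 0], [10, 0], [0, 10])])
best_k = max(range(2, 6), key=lambda k: silhouette(X, kmeans(X, k)))
```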

  14. Sea surface temperature predictions using a multi-ocean analysis ensemble scheme

    Science.gov (United States)

    Zhang, Ying; Zhu, Jieshun; Li, Zhongxian; Chen, Haishan; Zeng, Gang

    2016-04-01

    This study examined global sea surface temperature (SST) predictions by a so-called multiple-ocean analysis ensemble (MAE) initialization method applied in the National Centers for Environmental Prediction (NCEP) Climate Forecast System Version 2 (CFSv2). Different from most operational climate prediction practices, which are initialized by a specific ocean analysis system, the MAE method is based on multiple ocean analyses. In this paper, the MAE method was first justified by analyzing the ocean temperature variability in four ocean analyses, all of which are or were applied for operational climate predictions either at the European Centre for Medium-Range Weather Forecasts or at NCEP. It was found that these systems exhibit substantial uncertainties in estimating the ocean states, especially at the deep layers. Further, a set of MAE hindcasts was conducted based on the four ocean analyses with CFSv2, starting from each April during 1982-2007. The MAE hindcasts were verified against a subset of hindcasts from the NCEP CFS Reanalysis and Reforecast (CFSRR) Project. Comparisons suggested that MAE gives better SST predictions than CFSRR over most regions where ocean dynamics plays a vital role in SST evolution, such as the El Niño and Atlantic Niño regions. Furthermore, significant improvements were also found in summer precipitation predictions over the equatorial eastern Pacific and Atlantic oceans, for which the local SST prediction improvements should be responsible. The prediction improvements by MAE imply a problem for most current climate predictions, which are based on a specific ocean analysis system: their predictions will drift towards states biased by errors inherent in their ocean initialization system, and thus have large prediction errors. In contrast, MAE arguably has an advantage by sampling such structural uncertainties, and could efficiently cancel these errors out in its predictions.

  15. Analysis of the PDFs of temperature from a multi-physics ensemble of climate change projections over the Iberian Peninsula

    Science.gov (United States)

    Jerez, Sonia; Montavez, Juan P.; Gomez-Navarro, Juan J.; Jimenez-Guerrero, Pedro; Lorente, Raquel; Garcia-Valero, Juan A.; Jimenez, Pedro A.; Gonzalez-Rouco, Jose F.; Zorita, Eduardo

    2010-05-01

    Regional climate change projections are affected by several sources of uncertainty. Some of them come from General Circulation Models and scenarios; others come from the downscaling process. In the case of dynamical downscaling, mainly using Regional Climate Models (RCMs), the sources of uncertainty may involve nesting strategies, related to the domain position and resolution, soil characterization, internal variability, methods of solving the equations, and the configuration of model physics. Therefore, a probabilistic approach seems recommendable when projecting regional climate change. This problem is usually faced by performing an ensemble of simulations. The aim of this study is to evaluate the range of uncertainty in regional climate projections associated with changing the physical configuration of an RCM (MM5), as well as the model's skill in reproducing the observed climate. This study is performed over the Iberian Peninsula and focuses on the reproduction of the Probability Density Functions (PDFs) of daily mean temperature. The experiments consist of a multi-physics ensemble of high-resolution climate simulations (30 km over the target region) for the periods 1970-1999 (present) and 2070-2099 (future). Two sets of simulations for the present have been performed using ERA40 (MM5-ERA40) and ECHAM5-3CM run1 (MM5-E5-PR) as boundary conditions. The future experiments are driven by ECHAM5-A2-run1 (MM5-E5-A2). The ensemble has a total of eight members, as the result of combining the schemes for PBL (MRF and ETA), cumulus (Grell and Kain-Fritsch) and microphysics (Simple-Ice and Mixed-phase). In a previous work this multi-physics ensemble was analyzed focusing on the seasonal mean values of both temperature and precipitation. The main results indicate that the physics configurations that better reproduce the observed climate project the most dramatic changes for the future (i.e., the largest temperature increase and precipitation decrease). Among the

  16. Statistical analysis of time-resolved emission from ensembles of semiconductor quantum dots: Interpretation of exponential decay models

    OpenAIRE

    van Driel, A. F.; Nikolaev, I. S.; Vergeer, P.; Lodahl, Peter; Vanmaelkelbergh, D.; Vos, W.L.

    2007-01-01

    We present a statistical analysis of time-resolved spontaneous emission decay curves from ensembles of emitters, such as semiconductor quantum dots, with the aim of interpreting ubiquitous non-single-exponential decay. Contrary to what is widely assumed, the density of excited emitters and the intensity in an emission decay curve are not proportional; rather, the density is a time-integral of the intensity. The integral relation is crucial to correctly interpret non-single-exponential decay. We deri...

  17. MAVENs: Motion analysis and visualization of elastic networks and structural ensembles

    Directory of Open Access Journals (Sweden)

    Zimmermann Michael T

    2011-06-01

    Full Text Available Abstract Background The ability to generate, visualize, and analyze motions of biomolecules has made a significant impact upon modern biology. Molecular Dynamics has gained substantial use, but remains computationally demanding and difficult to set up for many biologists. Elastic network models (ENMs) are an alternative and have been shown to generate the dominant equilibrium motions of biomolecules quickly and efficiently. These dominant motions have been shown to be functionally relevant and also to indicate the likely direction of conformational changes. Most structures have a small number of dominant motions. Comparing computed motions to the structure's conformational ensemble derived from a collection of static structures or frames from an MD trajectory is an important way to understand functional motions as well as evaluate the models. Modes of motion computed from ENMs can be visualized to gain functional and mechanistic understanding and to compute useful quantities such as average positional fluctuations, internal distance changes, collectiveness of motions, and directional correlations within the structure. Results Our new software, MAVEN, aims to bring ENMs and their analysis to a broader audience by integrating methods for their generation and analysis into a user-friendly environment that automates many of the steps. Models can be constructed from raw PDB files or density maps, using all available atomic coordinates or by employing various coarse-graining procedures. Visualization can be performed either with our software or exported to molecular viewers. Mixed-resolution models allow one to study atomic effects on the system while retaining much of the computational speed of the coarse-grained ENMs. Analysis options are available to further aid the user in understanding the computed motions and their importance for its function. 
Conclusion MAVEN has been developed to simplify ENM generation, allow for diverse models to be used, and
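
As a rough illustration of the ENM machinery that MAVEN automates, the sketch below builds a Gaussian network model connectivity (Kirchhoff) matrix from a toy chain of coordinates and extracts its normal modes; the cutoff distance and coordinates are arbitrary assumptions, not MAVEN's defaults:

```python
import numpy as np

def gnm_modes(coords, cutoff=7.0):
    """Gaussian network model sketch: build the Kirchhoff matrix from
    pairwise contacts within a cutoff and return its eigendecomposition.
    The slowest modes (smallest nonzero eigenvalues) approximate the
    dominant equilibrium motions of the structure."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = -(d < cutoff).astype(float)            # -1 for each contact pair
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))        # diagonal = contact degree
    vals, vecs = np.linalg.eigh(K)             # ascending eigenvalues
    return vals, vecs

# Toy "structure": a straight chain of 20 pseudo-residues, 3.8 A apart.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(20)])
vals, vecs = gnm_modes(coords)
```

The lowest eigenvalue is zero (the trivial rigid-body mode); the next eigenvectors are the slow, collective modes one would visualize.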

  18. An Introduction to Ensemble Methods for Data Analysis (Revised July, 2004)

    OpenAIRE

    Berk, Richard

    2004-01-01

    This paper provides an introduction to ensemble statistical procedures as a special case of algorithmic methods. The discussion begins with classification and regression trees (CART) as a didactic device to introduce many of the key issues. Following the material on CART is a consideration of cross-validation, bagging, random forests and boosting. Major points are illustrated with analyses of real data.

  19. Ensemble learning incorporating uncertain registration.

    Science.gov (United States)

    Simpson, Ivor J A; Woolrich, Mark W; Andersson, Jesper L R; Groves, Adrian R; Schnabel, Julia A

    2013-04-01

    This paper proposes a novel approach for improving the accuracy of statistical prediction methods in spatially normalized analysis. This is achieved by incorporating registration uncertainty into an ensemble learning scheme. A probabilistic registration method is used to estimate a distribution of probable mappings between subject and atlas space. This allows the estimation of the distribution of spatially normalized feature data, e.g., grey matter probability maps. From this distribution, samples are drawn for use as training examples. This allows the creation of multiple predictors, which are subsequently combined using an ensemble learning approach. Furthermore, extra testing samples can be generated to measure the uncertainty of prediction. This is applied to separating subjects with Alzheimer's disease from normal controls using a linear support vector machine on a region of interest in magnetic resonance images of the brain. We show that our proposed method leads to an improvement in discrimination using voxel-based morphometry and deformation tensor-based morphometry over bootstrap aggregating, a common ensemble learning framework. The proposed approach also generates more reasonable soft-classification predictions than bootstrap aggregating. We expect that this approach could be applied to other statistical prediction tasks where registration is important. PMID:23288332

  20. Algorithms for the Analysis of Ensemble Neural Spiking Activity Using Simultaneous-Event Multivariate Point-Process Models

    Directory of Open Access Journals (Sweden)

    Simona Temereanca

    2014-02-01

    Full Text Available Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the one millisecond time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a
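
The key encoding trick of the discrete-time SEMPP model can be sketched as follows: each time bin's joint outcome for a neuron pair is one of four multinomial categories, so simultaneous spikes become a representable category rather than an impossibility. The bin probabilities and the covariate-free "fit" below are illustrative assumptions, not the paper's data or model:

```python
import numpy as np

# In each 1 ms bin, the joint outcome for a pair of neurons is one of four
# categories; (1, 1) encodes a simultaneous spike event.
OUTCOMES = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 3}

rng = np.random.default_rng(3)
p = np.array([0.90, 0.04, 0.04, 0.02])        # hypothetical bin probabilities
bins = rng.choice(4, size=20000, p=p)          # simulated outcome sequence

# An mGLM models the log-odds of each outcome against the "no spike"
# reference category; with no covariates, the maximum-likelihood estimate
# is simply the empirical log-odds.
counts = np.bincount(bins, minlength=4)
log_odds = np.log(counts[1:] / counts[0])
```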

  1. A Classifier Ensemble of Binary Classifier Ensembles

    Directory of Open Access Journals (Sweden)

    Sajad Parvin

    2011-09-01

    Full Text Available This paper proposes an innovative combinational algorithm to improve performance in multiclass classification domains. Since more accurate classifiers yield better classification performance, researchers have long sought to improve classifier accuracy. However, selecting the single most accurate classifier is not always the best way to obtain the best classification quality. An alternative is to use many inaccurate or weak classifiers, each specialized for a sub-space of the problem space, and to take their consensus vote as the final classification. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning. It deals especially with multiclass problems, whose aim is to learn the boundaries of each class from many other classes. Based on the structure of multiclass problems, classifiers are divided into two categories: pairwise classifiers and multiclass classifiers. The aim of a pairwise classifier is to separate one class from another. Because pairwise classifiers are trained only to discriminate between two classes, their decision boundaries are simpler and more effective than those of multiclass classifiers. The main idea behind the proposed method is to focus classifiers on the erroneous regions of the problem space and to use the pairwise classification concept instead of the multiclass classification concept. Although using pairwise classification instead of multiclass classification is not new, we propose a new pairwise classifier ensemble with a much lower order. In this paper, the most confused classes are first determined, and then several ensembles of classifiers are created; the classifiers of each ensemble work jointly using majority weighted votes. 
The results of these ensembles
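
A minimal sketch of the pairwise (one-vs-one) ensemble idea discussed above, using a deliberately weak nearest-centroid base classifier and unweighted majority voting on toy data; none of this reproduces the paper's actual heuristic or its weighting scheme:

```python
import numpy as np
from itertools import combinations

def fit_centroid(X, y):
    """A deliberately weak pairwise classifier: nearest class centroid."""
    cents = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return lambda Z: np.array(
        [min(cents, key=lambda c: np.linalg.norm(z - cents[c])) for z in Z])

def one_vs_one_ensemble(X, y, Z):
    """Train one pairwise classifier per class pair; combine by majority vote.
    Assumes class labels are 0..k-1 so they can index the vote matrix."""
    classes = np.unique(y)
    votes = np.zeros((len(Z), len(classes)), dtype=int)
    for a, b in combinations(classes, 2):
        mask = (y == a) | (y == b)
        pred = fit_centroid(X[mask], y[mask])(Z)
        for i, c in enumerate(pred):
            votes[i, c] += 1
    return votes.argmax(axis=1)

# Toy 3-class data: well-separated Gaussian blobs.
rng = np.random.default_rng(4)
centers = np.array([[0, 0], [4, 0], [0, 4]])
X = np.vstack([c + rng.normal(0, 0.5, size=(30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
pred = one_vs_one_ensemble(X, y, X)
```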

  2. Measuring ensemble interdependence in a string quartet through analysis of multidimensional performance data.

    Science.gov (United States)

    Papiotis, Panos; Marchini, Marco; Perez-Carrillo, Alfonso; Maestre, Esteban

    2014-01-01

    In a musical ensemble such as a string quartet, the musicians interact and influence each other's actions in several aspects of the performance simultaneously in order to achieve a common aesthetic goal. In this article, we present and evaluate a computational approach for measuring the degree to which these interactions exist in a given performance. We recorded a number of string quartet exercises under two experimental conditions (solo and ensemble), acquiring both audio and bowing motion data. Numerical features in the form of time series were extracted from the data as performance descriptors representative of four distinct dimensions of the performance: Intonation, Dynamics, Timbre, and Tempo. Four different interdependence estimation methods (two linear and two nonlinear) were applied to the extracted features in order to assess the overall level of interdependence between the four musicians. The obtained results suggest that it is possible to correctly discriminate between the two experimental conditions by quantifying interdependence between the musicians in each of the studied performance dimensions; the nonlinear methods appear to perform best for most of the numerical features tested. Moreover, by using the solo recordings as a reference to which the ensemble recordings are contrasted, it is feasible to compare the amount of interdependence that is established between the musicians in a given performance dimension across all exercises, and relate the results to the underlying goal of the exercise. We discuss our findings in the context of ensemble performance research, the current limitations of our approach, and the ways in which it can be expanded and consolidated. PMID:25228894
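
A toy sketch of one nonlinear interdependence estimate of the kind referred to above, histogram-based mutual information between two performers' descriptor time series; the synthetic "tempo curves", the coupling, and the bin count are assumptions for illustration, not the study's methods or data:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information (in nats): a simple nonlinear
    interdependence estimate between two performance descriptors."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                              # skip empty cells in the sum
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 2000)
leader = np.sin(t) + 0.1 * rng.normal(size=t.size)      # one player's descriptor
follower = np.sin(t) + 0.1 * rng.normal(size=t.size)    # coupled ("ensemble"-like)
independent = rng.normal(size=t.size)                   # uncoupled ("solo"-like)
```

Coupled descriptors yield a markedly higher estimate than independent ones, which is the discrimination the study exploits.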

  3. Measuring ensemble interdependence in a string quartet through analysis of multidimensional performance data

    Directory of Open Access Journals (Sweden)

    Panos ePapiotis

    2014-09-01

    Full Text Available In a musical ensemble such as a string quartet, the musicians interact and influence each other’s actions in several aspects of the performance simultaneously in order to achieve a common aesthetic goal. In this article, we present and evaluate a computational approach for measuring the degree to which these interactions exist in a given performance. We recorded a number of string quartet exercises under two experimental conditions (solo and ensemble), acquiring both audio and bowing motion data. Numerical features in the form of time series were extracted from the data as performance descriptors representative of four distinct dimensions of the performance: Intonation, Dynamics, Timbre and Tempo. Four different interdependence estimation methods (two linear and two nonlinear) were applied to the extracted features in order to assess the overall level of interdependence between the four musicians. The obtained results suggest that it is possible to correctly discriminate between the two experimental conditions by quantifying interdependence between the musicians in each of the studied performance dimensions; the nonlinear methods appear to perform best for most of the numerical features tested. Moreover, by using the solo recordings as a reference to which the ensemble recordings are contrasted, it is feasible to compare the amount of interdependence that is established between the musicians in a given performance dimension across all exercises, and relate the results to the underlying goal of the exercise. We discuss our findings in the context of ensemble performance research, the current limitations of our approach, and the ways in which it can be expanded and consolidated.

  4. Convergence analysis of combinations of different methods

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Y. [Clarkson Univ., Potsdam, NY (United States)

    1994-12-31

    This paper provides a convergence analysis for combinations of different numerical methods for solving systems of differential equations. The author proves that combinations of two convergent linear multistep methods or Runge-Kutta methods produce a new convergent method of which the order is equal to the smaller order of the two original methods.
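
The result can be illustrated numerically: alternating a first-order method (Euler) with a fourth-order method (classical RK4) on y' = y yields a combination whose global error shrinks only at first order, i.e. it roughly halves when the step size halves. The test problem and step counts are arbitrary choices for illustration:

```python
import numpy as np

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def combined_solve(f, y0, t1, n):
    """Alternate Euler (order 1) and RK4 (order 4) steps over [0, t1].
    The combination converges with the smaller of the two orders."""
    h, y = t1 / n, y0
    for i in range(n):
        step = euler_step if i % 2 == 0 else rk4_step
        y = step(f, i * h, y, h)
    return y

f = lambda t, y: y                    # y' = y, exact solution e^t at t = 1
err = [abs(combined_solve(f, 1.0, 1.0, n) - np.e) for n in (100, 200, 400)]
```

The successive error ratios are close to 2, the signature of first-order convergence, even though half the steps are fourth-order accurate.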

  5. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for more accurate prediction of flood flows than conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still criticism that an ANN's point prediction lacks reliability, since the uncertainty of the predictions is not quantified, and this limits its use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
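
Constructing a prediction interval and its two evaluation metrics (coverage and average width) from an ensemble of model outputs can be sketched as follows; the synthetic flows, ensemble spread, and percentile levels are invented for illustration and are unrelated to the paper's calibrated ANN ensemble:

```python
import numpy as np

rng = np.random.default_rng(6)
observed = 50 + 10 * rng.random(100)          # hypothetical flows (m3/s)

# Hypothetical ensemble of 40 model predictions per time step, scattered
# around the observation to mimic a calibrated parameter ensemble.
ensemble = observed[None, :] + rng.normal(0, 6, size=(40, 100))

# 95% prediction interval from the ensemble percentiles at each time step.
lower = np.percentile(ensemble, 2.5, axis=0)
upper = np.percentile(ensemble, 97.5, axis=0)

width = float(np.mean(upper - lower))         # average interval width
coverage = float(np.mean((observed >= lower) & (observed <= upper)))
```

The two objectives traded off in the abstract correspond directly to maximizing `coverage` while minimizing `width`.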

  6. Effective screening strategy using ensembled pharmacophore models combined with cascade docking: application to p53-MDM2 interaction inhibitors.

    Science.gov (United States)

    Xue, Xin; Wei, Jin-Lian; Xu, Li-Li; Xi, Mei-Yang; Xu, Xiao-Li; Liu, Fang; Guo, Xiao-Ke; Wang, Lei; Zhang, Xiao-Jin; Zhang, Ming-Ye; Lu, Meng-Chen; Sun, Hao-Peng; You, Qi-Dong

    2013-10-28

    Protein-protein interactions (PPIs) play a crucial role in cellular function and form the backbone of almost all biochemical processes. In recent years, protein-protein interaction inhibitors (PPIIs) have represented a treasure trove of potential new drug targets. Unfortunately, there are still few successful PPII drugs on the market. Structure-based pharmacophore (SBP) modeling combined with docking has been demonstrated as a useful Virtual Screening (VS) strategy in drug development projects. However, the combination of target complexity and poor binding affinity prediction has thwarted the application of this strategy in the discovery of PPIIs. Here we report an effective VS strategy on the p53-MDM2 PPI. First, we built a SBP model based on p53-MDM2 cocrystal structures. The model was then simplified by using a receptor-ligand complex-based pharmacophore model considering the critical binding features between MDM2 and its small-molecule inhibitors. Cascade docking was subsequently applied to improve the hit rate. Based on this strategy, we performed VS on the NCI and SPECS databases and successfully discovered 6 novel compounds among 15 hits, the best being compound 1 (NSC 5359) with K(i) = 180 ± 50 nM. These compounds can serve as lead compounds for further optimization. PMID:24050442

  7. A Flexible Approach for the Statistical Visualization of Ensemble Data

    Energy Technology Data Exchange (ETDEWEB)

    Potter, K. [Univ. of Utah, Salt Lake City, UT (United States). SCI Institute; Wilson, A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bremer, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, V. [Univ. of Utah, Salt Lake City, UT (United States). SCI Institute; Johnson, C. [Univ. of Utah, Salt Lake City, UT (United States). SCI Institute

    2009-09-29

    Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.

  8. Statistical analysis of time-resolved emission from ensembles of semiconductor quantum dots: Interpretation of exponential decay models

    DEFF Research Database (Denmark)

    Van Driel, A.F.; Nikolaev, I.S.; Vergeer, P.; Lodahl, Peter; Vanmaelkelbergh, D.; Vos, W.L.

    2007-01-01

    We present a statistical analysis of time-resolved spontaneous emission decay curves from ensembles of emitters, such as semiconductor quantum dots, with the aim of interpreting ubiquitous non-single-exponential decay. Contrary to what is widely assumed, the density of excited emitters and the intensity in an emission decay curve are not proportional, but the density is a time integral of the intensity. The integral relation is crucial to correctly interpret non-single-exponential decay. We derive the proper normalization for both a discrete and a continuous distribution of rates, where every... single number, but is also distributed. We derive a practical description of non-single-exponential emission decay curves in terms of a single distribution of decay rates; the resulting distribution is identified as the distribution of total decay rates weighted with the radiative rates. We apply our...

  9. Spin–Orbit Alignment of Exoplanet Systems: Ensemble Analysis Using Asteroseismology

    DEFF Research Database (Denmark)

    Campante, T. L.; Lund, M. N.; Kuszlewicz, James S.;

    2016-01-01

    The angle ψ between a planet’s orbital axis and the spin axis of its parent star is an important diagnostic of planet formation, migration, and tidal evolution. We seek empirical constraints on ψ by measuring the stellar inclination i_s via asteroseismology for an ensemble of 25 solar-type hosts observed with NASA’s Kepler satellite. Our results for i_s are consistent with alignment at the 2σ level for all stars in the sample, meaning that the system surrounding the red-giant star Kepler-56 remains as the only unambiguous misaligned multiple-planet system detected to date. The availability of a measurement of the projected spin–orbit angle λ for two of the systems allows us to estimate ψ. We find that the orbit of the hot Jupiter HAT-P-7b is likely to be retrograde (ψ = 116.4 (+30.2, −14.7) deg), whereas that of Kepler-25c

  10. Spin-orbit alignment of exoplanet systems: ensemble analysis using asteroseismology

    CERN Document Server

    Campante, T L; Kuszlewicz, J S; Davies, G R; Chaplin, W J; Albrecht, S; Winn, J N; Bedding, T R; Benomar, O; Bossini, D; Handberg, R; Santos, A R G; Van Eylen, V; Basu, S; Christensen-Dalsgaard, J; Elsworth, Y P; Hekker, S; Hirano, T; Huber, D; Karoff, C; Kjeldsen, H; Lundkvist, M S; North, T S H; Aguirre, V Silva; Stello, D; White, T R

    2016-01-01

    The angle $\\psi$ between a planet's orbital axis and the spin axis of its parent star is an important diagnostic of planet formation, migration, and tidal evolution. We seek empirical constraints on $\\psi$ by measuring the stellar inclination $i_{\\rm s}$ via asteroseismology for an ensemble of 25 solar-type hosts observed with NASA's Kepler satellite. Our results for $i_{\\rm s}$ are consistent with alignment at the 2-$\\sigma$ level for all stars in the sample, meaning that the system surrounding the red-giant star Kepler-56 remains the only unambiguous misaligned multiple-planet system detected to date. The availability of a measurement of the projected spin-orbit angle $\\lambda$ for two of the systems allows us to estimate $\\psi$. We find that the orbit of the hot Jupiter HAT-P-7b is likely to be retrograde ($\\psi=116.4^{+30.2}_{-14.7}\\:{\\rm deg}$), whereas that of Kepler-25c seems to be well aligned with the stellar spin axis ($\\psi=12.6^{+6.7}_{-11.0}\\:{\\rm deg}$). While the latter result is in apparent…

  11. Hierarchical Bayes Ensemble Kalman Filtering

    CERN Document Server

    Tsyrulnikov, Michael

    2015-01-01

    Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix ${\\bf B}$. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using an ad-hoc regularization, we adopt the idea by Myrseth and Omre (2010) and explicitly admit that the ${\\bf B}$ matrix is unknown and random and estimate it along with the state (${\\bf x}$) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components ${\\bf P}$ and ${\\bf Q}$ of the ${\\bf B}$ matrix into the extended control vector $({\\bf x},{\\bf P},{\\bf Q})$. Similarly, we break the traditional backgrou...
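
    For context, the standard (non-hierarchical) stochastic EnKF analysis step that this scheme builds on can be sketched as follows; all dimensions and values are illustrative, and the sample covariance ${\bf B}$ estimated from the small ensemble is exactly the noisy quantity that localization or a hierarchical-Bayes treatment addresses:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 10, 4, 50                 # state dim, obs dim, ensemble size

H = np.zeros((m, n))
H[np.arange(m), np.arange(m) * (n // m)] = 1.0   # observe variables 0, 2, 4, 6
R = 0.1 * np.eye(m)                              # observation error covariance

truth = rng.standard_normal(n)
ens = truth + 2.0 + rng.standard_normal((N, n))  # biased forecast ensemble (rows = members)
y = H @ truth + rng.multivariate_normal(np.zeros(m), R)

# Sample background covariance B from the ensemble perturbations.
X = ens - ens.mean(axis=0)
B = X.T @ X / (N - 1)

# Kalman gain and stochastic update with perturbed observations per member.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
prior_misfit = np.linalg.norm(y - H @ ens.mean(axis=0))
for i in range(N):
    y_pert = y + rng.multivariate_normal(np.zeros(m), R)
    ens[i] = ens[i] + K @ (y_pert - H @ ens[i])
post_misfit = np.linalg.norm(y - H @ ens.mean(axis=0))
print(prior_misfit, post_misfit)    # the analysis mean moves toward the observations
```

With only 50 members, off-diagonal entries of `B` are dominated by sampling noise, motivating the regularization strategies the abstract discusses.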

  12. Ensemble algorithms in reinforcement learning

    NARCIS (Netherlands)

    Wiering, Marco A; van Hasselt, Hado

    2008-01-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining five different RL algorithms.

  13. Ensemble algorithms in reinforcement learning.

    Science.gov (United States)

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
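
    Two of the combination rules named above, majority voting and Boltzmann multiplication, can be sketched on a single state with hypothetical action-preference values (the numbers below are illustrative, not from the paper):

```python
import numpy as np

# Action-preference values p_j(s, a) of five hypothetical RL algorithms (rows)
# for four actions (columns) in one state.
prefs = np.array([
    [1.0, 0.2, 0.1, 0.5],
    [0.8, 0.9, 0.0, 0.3],
    [0.9, 0.1, 0.2, 0.4],
    [0.2, 1.0, 0.1, 0.1],
    [0.7, 0.3, 0.6, 0.2],
])
tau = 0.5                                   # Boltzmann temperature

# Per-algorithm Boltzmann (softmax) policies.
z = np.exp(prefs / tau)
policies = z / z.sum(axis=1, keepdims=True)

# Majority voting (MV): each algorithm votes for its greedy action.
votes = np.bincount(prefs.argmax(axis=1), minlength=prefs.shape[1])
mv_action = votes.argmax()

# Boltzmann multiplication (BM): multiply the policies and renormalize.
bm = policies.prod(axis=0)
bm /= bm.sum()
bm_action = bm.argmax()

print(mv_action, bm_action, np.round(bm, 3))
```

With these numbers both rules agree on the first action; in general MV uses only the greedy choices, while BM also exploits how confident each algorithm is.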

  14. Analysis of ocean internal waves imaged by multichannel reflection seismics, using ensemble empirical mode decomposition

    International Nuclear Information System (INIS)

    Research on ocean internal waves using seismic oceanography is a frontier issue both for marine geophysicists and physical oceanographers. Images of the ocean water layer obtained by conventional processing of multichannel seismic reflection data can show the overall patterns of internal waves. However, in order to extract more information from the seismic data, new tools need to be developed. Here, we use the ensemble empirical mode decomposition (EEMD) method to decompose vertical displacement data from seismic sections and apply this method to a seismic section from the northeastern South China Sea, where clear internal waves are observed. Compared with the conventional empirical mode decomposition method, EEMD has greatly reduced the scale mixing problems induced in the decomposition results. The results obtained show that the internal waves in this area are composed of different characteristic wavelengths at different depths. The depth range of 200–1050 m contains internal waves with a wavelength of 1.25 km that are very well coupled in the vertical direction. The internal waves with a wavelength of 3 km, in the depth range of 200–600 m, are also well coupled, but in an oblique direction; this suggests that the propagation speed of internal waves of this scale changes with depth in this area. Finally, the internal waves with a wavelength of 6.5 km, observed in the depth range of 200–800 m, are separated into two parts with a phase difference of about 90°, by a clear interface at a depth of 650 m; this allows us to infer an oblique propagation of wave energy of this scale. (paper)

  15. Comparative Analysis of Upper Ocean Heat Content Variability from Ensemble Operational Ocean Analyses

    Science.gov (United States)

    Xue, Yan; Balmaseda, Magdalena A.; Boyer, Tim; Ferry, Nicolas; Good, Simon; Ishikawa, Ichiro; Rienecker, Michele; Rosati, Tony; Yin, Yonghong; Kumar, Arun

    2012-01-01

    Upper ocean heat content (HC) is one of the key indicators of climate variability on many time scales, extending from seasonal to interannual to long-term climate trends. For example, HC in the tropical Pacific provides information on thermocline anomalies that is critical for the long-lead forecast skill of ENSO. Since HC variability is also associated with SST variability, a better understanding and monitoring of HC variability can help us understand and forecast SST variability associated with ENSO and other modes such as the Indian Ocean Dipole (IOD), Pacific Decadal Oscillation (PDO), Tropical Atlantic Variability (TAV) and Atlantic Multidecadal Oscillation (AMO). An accurate ocean initialization of HC anomalies in coupled climate models could also contribute to skill in decadal climate prediction. Errors, and/or uncertainties, in the estimation of HC variability can be affected by many factors, including uncertainties in surface forcings, ocean model biases, and deficiencies in data assimilation schemes. Changes in observing systems can also leave an imprint on the estimated variability. The availability of multiple operational ocean analyses (ORA) that are routinely produced by operational and research centers around the world provides an opportunity to assess uncertainties in HC analyses, to help identify gaps in observing systems as they impact the quality of ORAs and therefore climate model forecasts. A comparison of ORAs also gives an opportunity to identify deficiencies in data assimilation schemes, and can be used as a basis for development of real-time multi-model ensemble HC monitoring products. The OceanObs09 Conference called for an intercomparison of ORAs and use of ORAs for global ocean monitoring. As a follow-up, we intercompared HC variations from ten ORAs -- two objective analyses based on in-situ data only and eight model analyses based on ocean data assimilation systems. The mean, annual cycle, interannual variability and long-term trend of HC have…

  16. Energy Analysis in Combined Reforming of Propane

    Directory of Open Access Journals (Sweden)

    K. Moon

    2013-01-01

    Full Text Available Combined (steam and CO2) reforming is one of the methods to produce syngas for different applications. An energy requirement analysis of steam reforming to dry reforming, with intermediate steps of steam reduction and equivalent CO2 addition to the feed fuel for syngas generation, has been done to identify conditions for optimum process operation. Thermodynamic equilibrium data for combined reforming were generated for the temperature range 400–1000°C at 1 bar pressure and combined oxidant (CO2 + H2O) stream to propane (fuel) ratios of 3, 6, and 9, by employing the Gibbs free energy minimization algorithm of HSC Chemistry software 5.1. Total energy requirements, including preheating and reaction enthalpy, were calculated using the equilibrium product composition. Carbon and methane formation was significantly lower in combined reforming than in pure dry reforming, while the energy requirements were lower than in pure steam reforming. Temperatures of minimum energy requirement, optimal for process operation, were identified from the combined reforming data.

  17. Fault Early Diagnosis of Rolling Element Bearings Combining Wavelet Filtering and Degree of Cyclostationarity Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHOU Fu-chang; CHEN Jin; HE Jun; BI Guo; LI Fu-cai; ZHANG Gui-cai

    2005-01-01

    The vibration signals of a rolling element bearing are produced by a combination of periodic and random processes due to the machine's rotation cycle and interaction with the real world. The combination of such components can give rise to signals which have periodically time-varying ensemble statistics and are best considered cyclostationary. When an early fault occurs, the background noise is very heavy and it is difficult to disclose the latent periodic components using cyclostationary analysis alone. In this paper the degree of cyclostationarity is combined with wavelet filtering for detection of rolling element bearing early faults. The parameters of the wavelet filter are optimized using the proposed entropy minimization rule. This method is shown to be effective in detecting rolling element bearing early faults when cyclostationary analysis by itself fails.
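
    A minimal sketch of the degree-of-cyclostationarity idea (not the authors' implementation): the cyclic autocorrelation at cyclic frequency alpha is compared against the ordinary (alpha = 0) autocorrelation, so an amplitude-modulated noise signal scores high at its modulation frequency while stationary noise scores near zero:

```python
import numpy as np

rng = np.random.default_rng(1)

def cyclic_autocorr(x, alpha, taus):
    """Discrete cyclic autocorrelation R_x^alpha(tau) over one-sided lags."""
    T = len(x) - max(taus)
    t = np.arange(T)
    e = np.exp(-2j * np.pi * alpha * t)
    return np.array([np.mean(x[t] * x[t + tau] * e) for tau in taus])

def dcs(x, alpha, taus):
    """Degree of cyclostationarity at cyclic frequency alpha."""
    num = np.sum(np.abs(cyclic_autocorr(x, alpha, taus)) ** 2)
    den = np.sum(np.abs(cyclic_autocorr(x, 0.0, taus)) ** 2)
    return num / den

Ns = 20000
noise = rng.standard_normal(Ns)                 # stationary reference signal
f0 = 0.05                                       # modulation frequency (cycles/sample)
# Amplitude modulation (a crude stand-in for a periodic bearing fault signature)
# makes the signal cyclostationary at alpha = f0.
am = (1.0 + 0.8 * np.cos(2 * np.pi * f0 * np.arange(Ns))) * noise
taus = range(0, 32)

dcs_am = dcs(am, f0, taus)          # clearly nonzero
dcs_noise = dcs(noise, f0, taus)    # near zero
print(dcs_am, dcs_noise)
```

In the paper's setting the signal would first pass through the optimized wavelet filter; this sketch only illustrates the detection statistic itself.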

  18. A Selective Fuzzy Clustering Ensemble Algorithm

    OpenAIRE

    Kai Li; Peng Li

    2013-01-01

    To improve the performance of clustering ensemble methods, a selective fuzzy clustering ensemble algorithm is proposed. It mainly includes selection of clustering ensemble members and combination of clustering results. In the member selection process, a measure is defined to select the better clustering members. Some selected clustering members are then viewed as a hyper-graph in order to select the more influential hyper-edges (or features) and to weight the selected features. For proce…

  19. CINR difference analysis of optimal combining versus maximal ratio combining

    OpenAIRE

    Burke, J. P.; Zeidler, J R; Rao, B D

    2005-01-01

    The statistical gain differences between two common spatial combining algorithms, optimum combining (OC) and maximal ratio combining (MRC), are analyzed using a gain ratio method. Using the receive carrier-to-interference plus noise ratio (CINR), the gain ratio CINROC/CINRMRC is evaluated in a flat Rayleigh fading communications system with multiple interferers. Exact analytical solutions are derived for the probability density function (PDF) and the average gain ratio with one interferer. Whe…

  20. Analysis of fractals with combined partition

    Science.gov (United States)

    Dedovich, T. G.; Tokarev, M. V.

    2016-03-01

    The space-time properties in the general theory of relativity, as well as the discreteness and non-Archimedean property of space in the quantum theory of gravitation, are discussed. It is emphasized that the properties of bodies in non-Archimedean spaces coincide with the properties of the field of P-adic numbers and fractals. It is suggested that parton showers, used for describing interactions between particles and nuclei at high energies, have a fractal structure. A mechanism of fractal formation with combined partition is considered. The modified SePaC method is offered for the analysis of such fractals. The BC, PaC, and SePaC methods for determining a fractal dimension and other fractal characteristics (the number of levels and the value of the base forming the fractal) are considered. It is found that the SePaC method has advantages for the analysis of fractals with combined partition.

  1. A mollified Ensemble Kalman filter

    CERN Document Server

    Bergemann, Kay

    2010-01-01

    It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high frequency adjustment processes in the model dynamics. Various methods have been devised to continuously spread out the analysis increments over a fixed time interval centered about analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.

  2. Combination of different types of ensembles for the adaptive simulation of probabilistic flood forecasts: hindcasts for the Mulde 2002 extreme event

    Directory of Open Access Journals (Sweden)

    J. Dietrich

    2008-03-01

    Full Text Available Flood forecasts are essential to issue reliable flood warnings and to initiate flood control measures on time. The accuracy and the lead time of the predictions for head waters primarily depend on the meteorological forecasts. Ensemble forecasts are a means of framing the uncertainty of the potential future development of the hydro-meteorological situation.

    This contribution presents a flood management strategy based on probabilistic hydrological forecasts driven by operational meteorological ensemble prediction systems. The meteorological ensemble forecasts are transformed into discharge ensemble forecasts by a rainfall-runoff model. Exceedance probabilities for critical discharge values and probabilistic maps of inundation areas can be computed and presented to decision makers. These results can support decision makers in issuing flood alerts. The flood management system integrates ensemble forecasts with different spatial resolution and different lead times. The hydrological models are controlled in an adaptive way, mainly depending on the lead time of the forecast, the expected magnitude of the flood event and the availability of measured data.

    The aforementioned flood forecast techniques have been applied to a case study. The Mulde River Basin (South-Eastern Germany, Czech Republic) has often been affected by severe flood events, including local flash floods. Hindcasts for the large-scale extreme flood in August 2002 have been computed using meteorological predictions from both the COSMO-LEPS ensemble prediction system and the deterministic COSMO-DE local model. The temporal evolution of (a) the meteorological forecast uncertainty and (b) the probability of exceeding flood alert levels is discussed. Results from the hindcast simulations demonstrate that the systems would have predicted a high probability of an extreme flood event, had they already been operational in 2002. COSMO-LEPS showed a reasonably good…

  3. Combined XRF and PIXE analysis of flour

    International Nuclear Information System (INIS)

    Combined X-Ray Fluorescence (XRF) and Proton Induced X-ray Emission (PIXE) techniques were used for the determination of trace and minor elements in two different samples of flour purchased at the local market. The significance of some of the elements found in the samples is discussed from the viewpoint of nutrition. It is also shown that XRF can be a useful complementary technique for PIXE analysis of flour

  4. The Fukushima-137Cs deposition case study: properties of the multi-model ensemble

    International Nuclear Information System (INIS)

    In this paper we analyse the properties of an eighteen-member ensemble generated by the combination of five atmospheric dispersion modelling systems and six meteorological data sets. The models have been applied to the total deposition of 137Cs following the nuclear accident at the Fukushima power plant in March 2011. The analysis is carried out with the aim of determining whether the ensemble is reliable and sufficiently diverse, and whether its accuracy and precision can be improved. Although ensemble practice is becoming more and more popular in many geophysical applications, good-practice guidelines are missing as to how models should be combined for the ensembles to offer an improvement over single model realisations. We show that the models in the ensemble share large portions of bias and variance, and use several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble mean, with the advantage of being poorly correlated, allowing computational resources to be saved and noise to be reduced (thus improving accuracy). We further propose and discuss two methods for selecting subsets of skilful and diverse members, and show that, for the present analysis, their mean outscores the full ensemble mean in terms of both accuracy (error) and precision (variance)
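
    The core effect, that a small subset of poorly correlated members can beat the full ensemble mean when several members share their errors, can be shown with a synthetic sketch (the member construction below is hypothetical, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
truth = np.zeros(n)

# Hypothetical ensemble: members 0-2 share essentially the same error
# (strongly correlated models), members 3-4 have independent errors.
shared = rng.standard_normal(n)
members = np.stack([
    truth + shared + 0.05 * rng.standard_normal(n),
    truth + shared + 0.05 * rng.standard_normal(n),
    truth + shared + 0.05 * rng.standard_normal(n),
    truth + rng.standard_normal(n),
    truth + rng.standard_normal(n),
])

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

# Averaging all five members triple-counts the shared error; the diverse
# subset {0, 3, 4} averages three nearly independent errors instead.
full_mean_rmse = rmse(members.mean(axis=0))
subset_mean_rmse = rmse(members[[0, 3, 4]].mean(axis=0))
print(full_mean_rmse, subset_mean_rmse)   # the diverse subset wins
```

This is the statistical rationale for the member-selection methods the abstract proposes: error correlation, not member count, governs how much averaging helps.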

  5. Fast Fourier Transform Ensemble Kalman Filter with Application to a Coupled Atmosphere-Wildland Fire Model

    CERN Document Server

    Mandel, Jan; Kondratenko, Volodymyr Y

    2010-01-01

    We propose a new type of the Ensemble Kalman Filter (EnKF), which uses the Fast Fourier Transform (FFT) for covariance estimation from a very small ensemble with automatic tapering, and for a fast computation of the analysis ensemble by convolution, avoiding the need to solve a sparse system with the tapered matrix. The FFT EnKF is combined with the morphing EnKF to enable the correction of position errors, in addition to amplitude errors, and demonstrated on WRF-Fire, the Weather Research Forecasting (WRF) model coupled with a fire spread model implemented by the level set method.

  6. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah

    2013-06-01

    Full Text Available The free radical gene classification of cancer diseases is challenging job in biomedical data engineering. The improving of classification of gene selection of cancer diseases various classifier are used, but the classification of classifier are not validate. So ensemble classifier is used for cancer gene classification using neural network classifier with random forest tree. The random forest tree is ensembling technique of classifier in this technique the number of classifier ensemble of their leaf node of class of classifier. In this paper we combined neural network with random forest ensemble classifier for classification of cancer gene selection for diagnose analysis of cancer diseases. The proposed method is different from most of the methods of ensemble classifier, which follow an input output paradigm of neural network, where the members of the ensemble are selected from a set of neural network classifier. the number of classifiers is determined during the rising procedure of the forest. Furthermore, the proposed method produces an ensemble not only correct, but also assorted, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of our proposed method we used UCI cancer diseases data set for classification. Our experimental result shows that better result in compression of random forest tree classification.

  7. Generation of scenarios from calibrated ensemble forecasts with a dynamic ensemble copula coupling approach

    CERN Document Server

    Bouallegue, Zied Ben; Theis, Susanne E; Pinson, Pierre

    2015-01-01

    Probabilistic forecasts in the form of an ensemble of scenarios are required for complex decision-making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (…
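
    The basic ECC reordering step (the starting point for the dynamic extension described above) can be sketched in a few lines: calibrated samples at each lead time are re-assigned to members according to the rank order of the raw ensemble, so the univariate marginals stay calibrated while the temporal rank structure of the raw forecast is restored. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
M, L = 5, 4                        # members, lead times

# Raw ensemble forecast (members x lead times): carries the spatio-temporal
# rank structure we want to preserve.
raw = rng.standard_normal((M, L))

# Hypothetical calibrated samples per lead time (e.g. drawn from a
# post-processed predictive distribution); calibration is univariate, so the
# multivariate structure of these samples is arbitrary.
calibrated = 1.0 + 0.5 * rng.standard_normal((M, L))

# Ensemble copula coupling: at each lead time, sort the calibrated samples
# and re-assign them following the ranks of the raw ensemble members.
ecc = np.empty_like(calibrated)
for l in range(L):
    ranks = raw[:, l].argsort().argsort()         # rank of each raw member
    ecc[:, l] = np.sort(calibrated[:, l])[ranks]  # same rank in calibrated set

print(ecc)
```

By construction, each column of `ecc` contains exactly the calibrated values (unchanged marginals), while each member's rank trajectory across lead times matches its raw counterpart.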

  8. Measuring social interaction in music ensembles.

    Science.gov (United States)

    Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano

    2016-05-01

    Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances. PMID:27069054

  9. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    International Nuclear Information System (INIS)

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI in decoding user intentions, it is necessary to improve the feature extraction and classification techniques. In this article the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. A system based on an ensemble can theoretically achieve better classification results than its individual counterparts, depending on the individual classifier generation algorithm and the procedure for combining their outputs. Classic ensemble-based algorithms such as bagging and boosting are discussed here. For the application to BCI, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.
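
    A bagged ensemble of LDA classifiers of the kind discussed above can be sketched with numpy alone; the two-class LDA here (class means plus pooled within-class scatter) and the toy Gaussian data are illustrative stand-ins for the BCI features, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def lda_fit(X, y):
    """Two-class LDA: returns a weight vector and decision threshold."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)  # within-class scatter (up to scaling)
    w = np.linalg.solve(Sw, m1 - m0)                # Fisher discriminant direction
    b = w @ (m0 + m1) / 2                           # threshold at the midpoint
    return w, b

def lda_predict(model, X):
    w, b = model
    return (X @ w > b).astype(int)

# Toy two-class data (stand-in for extracted BCI features).
n = 300
X = np.vstack([rng.normal(0.0, 1.0, (n, 4)), rng.normal(1.2, 1.0, (n, 4))])
y = np.concatenate([np.zeros(n, int), np.ones(n, int)])

# Bagging: train each LDA on a bootstrap resample, combine by majority vote.
models = []
for _ in range(3):
    idx = rng.integers(0, len(X), len(X))
    models.append(lda_fit(X[idx], y[idx]))

votes = np.stack([lda_predict(m, X) for m in models])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)  # 2-of-3 majority
acc = (ensemble_pred == y).mean()
print(acc)
```

With three voters, the majority rule requires agreement of at least two classifiers; boosting would instead reweight the training samples between rounds.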

  10. Thermodynamic Analysis of Combined Cycle Power Plant

    Directory of Open Access Journals (Sweden)

    A.K.Tiwari,

    2010-04-01

    Full Text Available The Air Bottoming Cycle (ABC) can replace the heat recovery steam generator and the steam turbine of a conventional combined cycle plant. The exhaust energy of the topping gas turbine of the existing combined cycle is sent to a gas-air heat exchanger, which heats the air in the secondary gas turbine cycle. In the 1980s the ABC was proposed as an alternative to the conventional steam bottoming cycle. While reducing hardware installation costs, it could achieve a thermal efficiency of 80%. A complete thermodynamic analysis of the system has been performed using a specially designed programme, enabling the variation of the main independent variables. The results show that the gain in net work output as well as efficiency of the combined cycle is 35% to 68%.
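
    The leverage of a bottoming cycle can be illustrated with a hedged arithmetic sketch (the efficiencies and recovery fraction below are assumed round numbers, not values from the paper): the bottoming cycle converts a fraction of the topping-cycle exhaust heat into extra work, so

```python
# Combined efficiency: eta_cc = eta_top + (1 - eta_top) * f_rec * eta_bot
eta_top = 0.35   # topping gas turbine efficiency (assumed)
eta_bot = 0.30   # air bottoming cycle efficiency (assumed)
f_rec = 0.80     # fraction of exhaust heat transferred in the gas-air exchanger (assumed)

eta_cc = eta_top + (1 - eta_top) * f_rec * eta_bot
gain = (eta_cc - eta_top) / eta_top   # relative gain in net work output
print(round(eta_cc, 3), round(gain, 3))   # → 0.506 0.446
```

Even with these modest assumed numbers the relative work gain (~45%) falls inside the 35% to 68% range quoted in the abstract.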

  11. Partial volume correction of the microPET blood input function using ensemble learning independent component analysis

    International Nuclear Information System (INIS)

    Medical images usually suffer from a partial volume effect (PVE), which may degrade the accuracy of any quantitative information extracted from the images. Our aim was to recreate accurate radioactivity concentration and time-activity curves (TACs) by microPET R4 quantification using ensemble learning independent component analysis (EL-ICA). We designed a digital cardiac phantom for this simulation and in order to evaluate the ability of EL-ICA to correct the PVE, the simulated images were convoluted using a Gaussian function (FWHM = 1-4 mm). The robustness of the proposed method towards noise was investigated by adding statistical noise (SNR = 2-16). During further evaluation, another set of cardiac phantoms were generated from the reconstructed images, and Poisson noise at different levels was added to the sinogram. In real experiments, four rat microPET images and a number of arterial blood samples were obtained; these were used to estimate the metabolic rate of FDG (MRFDG). Input functions estimated using the FastICA method were used for comparison. The results showed that EL-ICA could correct PVE in both the simulated and real cases. After correcting for the PVE, the errors for MRFDG, when estimated by the EL-ICA method, were smaller than those when TACs were directly derived from the PET images and when the FastICA approach was used.

  12. Analysis of the efficiency and potential collapse of the ensemble Kalman filter for marginal and joint posteriors

    CERN Document Server

    Morzfeld, Matthias

    2015-01-01

    In data assimilation one updates the state of a numerical model with information from sparse and noisy observations of the model's state. A popular approach to data assimilation in geophysical applications is the ensemble Kalman filter (EnKF). An alternative approach is particle filtering and, recently, much theoretical work has been done to understand the abilities and limitations of particle filters. Here we extend this work to EnKF. First we explain that EnKF and particle filters solve different problems: the EnKF approximates a specific marginal of the joint posterior of particle filters. We then perform a linear analysis of the EnKF as a sequential sampling algorithm for the joint posterior (i.e. as a particle filter), and show that the EnKF collapses on this problem in the exact same way and under similar conditions as particle filters. However, it is critical to realize that the collapse of the EnKF on the joint posterior does not imply its collapse on the marginal posterior. This raises the question, ...

  13. PHARMACOECONOMIC ANALYSIS OF ANTIHYPERTENSIVE DRUG COMBINATIONS USE

    Directory of Open Access Journals (Sweden)

    E. I. Tarlovskaya

    2015-09-01

    Full Text Available Aim. To perform a pharmacoeconomic analysis of two drug combinations of an ACE inhibitor (enalapril) and a diuretic. Material and methods. Patients with arterial hypertension degree 2 and diabetes mellitus type 2 without ischemic heart disease (n=56) were included in the study. Blood pressure (BP) dynamics and the cost/effectiveness ratio were evaluated. Results. In group A (fixed combination of original enalapril/hydrochlorothiazide) 61% of patients achieved the target BP level with the initial dose, and the remaining 39% with a double dose. In group B (non-fixed combination of generic enalapril/indapamide) 60% of patients achieved the target BP with the initial dose, 33% with a double dose of the ACE inhibitor, and 7% with additional amlodipine administration. In patients of group A systolic BP (SBP) reduction was 45.82±1.23 mm Hg by the 12th week vs. 40.0±0.81 mm Hg in patients of group B; diastolic BP (DBP) reduction was 22.47±1.05 mm Hg and 18.76±0.70 mm Hg, respectively, by the 12th week of treatment. In the first month of treatment the cost of target BP achievement was 298.62 rubles per patient in group A, and 299.50 rubles in group B; by the 12th week of treatment, 629.45 and 631.22 rubles, respectively. Costs of SBP and DBP reduction by 1 mm Hg during 12 weeks of therapy were 13 and 27 rubles per patient, respectively, in group A, and 16 and 34 rubles per patient, respectively, in group B. Conclusion. The original fixed combination (enalapril+hydrochlorothiazide) proved to be more clinically effective and more cost effective in the treatment of hypertensive patients in comparison with the non-fixed combination of generic drugs (enalapril+indapamide).
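
    The reported cost/effectiveness ratios can be recomputed directly from the figures in the abstract (12-week cost divided by mm Hg of BP reduction); small differences from the quoted rounded values may come from intermediate rounding in the source:

```python
# 12-week cost (rubles/patient) and BP reductions (mm Hg) from the abstract.
cost_a, sbp_a, dbp_a = 629.45, 45.82, 22.47   # group A (fixed combination)
cost_b, sbp_b, dbp_b = 631.22, 40.00, 18.76   # group B (non-fixed combination)

for name, cost, sbp, dbp in [("A", cost_a, sbp_a, dbp_a),
                             ("B", cost_b, sbp_b, dbp_b)]:
    # Cost per 1 mm Hg of SBP and DBP reduction over 12 weeks.
    print(name, round(cost / sbp, 1), round(cost / dbp, 1))
```

Group A comes out cheaper per mm Hg on both measures, consistent with the abstract's conclusion that the fixed combination is the more cost-effective option.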

  14. Supervised Ensemble Classification of Kepler Variable Stars

    CERN Document Server

    Bass, Gideon

    2016-01-01

    Variable star analysis and classification is an important task in the understanding of stellar features and processes. While historically classifications have been done manually by highly skilled experts, the recent and rapid expansion in the quantity and quality of data has demanded new techniques, most notably automatic classification through supervised machine learning. We present an expansion of existing work in the field by analyzing variable stars in the {\\em Kepler} field using an ensemble approach, combining multiple characterization and classification techniques to produce improved classification rates. Classifications for each of the roughly 150,000 stars observed by {\\em Kepler} are produced, separating the stars into one of 14 variable star classes.

  15. A Gaussian mixture ensemble transform filter

    OpenAIRE

    Reich, Sebastian

    2011-01-01

    We generalize the popular ensemble Kalman filter to an ensemble transform filter where the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions, ...

  16. The Ensembl Variant Effect Predictor.

    Science.gov (United States)

    McLaren, William; Gil, Laurent; Hunt, Sarah E; Riat, Harpreet Singh; Ritchie, Graham R S; Thormann, Anja; Flicek, Paul; Cunningham, Fiona

    2016-01-01

    The Ensembl Variant Effect Predictor is a powerful toolset for the analysis, annotation, and prioritization of genomic variants in coding and non-coding regions. It provides access to an extensive collection of genomic annotation, with a variety of interfaces to suit different requirements, and simple options for configuring and extending analysis. It is open source, free to use, and supports full reproducibility of results. The Ensembl Variant Effect Predictor can simplify and accelerate variant interpretation in a wide range of study designs. PMID:27268795

  17. Ensembles lexicaux

    DEFF Research Database (Denmark)

    Laursen, Bo

    1998-01-01

    In this article the author proposes a solution to the classical problem in European lexical semantics of delimiting lexical fields, a problem that most field-oriented semanticists involved in practical lexico-semantic analysis have found themselves confronted with. What are the criteria for saying…

  18. Using Analysis State to Construct a Forecast Error Covariance Matrix in Ensemble Kalman Filter Assimilation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiaogu; WU Guocan; ZHANG Shupeng; LIANG Xiao; DAI Yongjiu; LI Yong

    2013-01-01

    Correctly estimating the forecast error covariance matrix is a key step in any data assimilation scheme. If it is not correctly estimated, the assimilated states could be far from the true states. A popular method to address this problem is error covariance matrix inflation, that is, multiplying the forecast error covariance matrix by an appropriate factor. In this paper, analysis states are used to construct the forecast error covariance matrix, and an adaptive estimation procedure associated with the error covariance matrix inflation technique is developed. The proposed assimilation scheme was tested on the Lorenz-96 model and a 2D shallow water equation model, both of which are associated with spatially correlated observational systems. The experiments showed that by introducing the proposed structure of the forecast error covariance matrix and applying its adaptive estimation procedure, the assimilation results were further improved.
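The multiplicative-inflation idea can be illustrated with a simple moment-based estimate of the inflation factor from innovation statistics. This is a simplified sketch, not the paper's adaptive procedure; the moment relation used and the toy dimensions are assumptions.

```python
import numpy as np

def estimate_inflation(d, HPHt, R):
    """Estimate a multiplicative inflation factor lam from the
    innovation d = y - H x_f, using the moment relation
    E[d d^T] ~ lam * H P_f H^T + R (taking traces)."""
    num = d @ d - np.trace(R)
    den = np.trace(HPHt)
    return max(num / den, 1.0)   # never deflate below 1

# toy check: the model's forecast covariance is too small by a factor of 2
rng = np.random.default_rng(1)
n = 200
P_true = 2.0 * np.eye(n)         # true forecast error covariance
P_f = np.eye(n)                  # model's (underestimated) covariance
R = 0.5 * np.eye(n)
H = np.eye(n)
d = rng.multivariate_normal(np.zeros(n), H @ P_true @ H.T + R)
lam = estimate_inflation(d, H @ P_f @ H.T, R)
# lam should come out near 2: inflate P_f by lam before the analysis step
```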

  19. Analysis and modeling of ensemble recordings from respiratory pre-motor neurons indicate changes in functional network architecture after acute hypoxia

    Directory of Open Access Journals (Sweden)

    Roberto F Galán

    2010-09-01

    Full Text Available We have combined neurophysiologic recording, statistical analysis, and computational modeling to investigate the dynamics of the respiratory network in the brainstem. Using a multielectrode array, we recorded ensembles of respiratory neurons in perfused in situ rat preparations that produce spontaneous breathing patterns, focusing on inspiratory pre-motor neurons. We compared firing rates and neuronal synchronization among these neurons before and after a brief hypoxic stimulus. We observed a significant decrease in the number of spikes after stimulation, in part due to a transient slowing of the respiratory pattern. However, the median interspike interval did not change, suggesting that the firing threshold of the neurons was not affected but rather the synaptic input was. A bootstrap analysis of synchrony between spike trains revealed that, both before and after brief hypoxia, up to 45% (but typically less than 5%) of coincident spikes across neuronal pairs were not explained by chance. Most likely, this synchrony resulted from common synaptic input to the pre-motor population, an example of stochastic synchronization. After brief hypoxia most pairs were less synchronized, although some were more, suggesting that the respiratory network was “rewired” transiently after the stimulus. To investigate this hypothesis, we created a simple computational model with feed-forward divergent connections along the inspiratory pathway. Assuming that (1) the number of divergent projections was not the same for all presynaptic cells, but rather spanned a wide range, and (2) the stimulus increased inhibition at the top of the network, this model reproduced the reduction in firing rate and bootstrap-corrected synchrony subsequent to hypoxic stimulation observed in our experimental data.

  20. Multinomial logistic regression ensembles.

    Science.gov (United States)

    Lee, Kyewon; Ahn, Hongshik; Moon, Hojin; Kodell, Ralph L; Chen, James J

    2013-05-01

    This article proposes a method for multiclass classification problems using ensembles of multinomial logistic regression models. A multinomial logit model is used as a base classifier in ensembles from random partitions of predictors. The multinomial logit model can be applied to each mutually exclusive subset of the feature space without variable selection. By combining multiple models the proposed method can handle a huge database without a constraint needed for analyzing high-dimensional data, and the random partition can improve the prediction accuracy by reducing the correlation among base classifiers. The proposed method is implemented using R, and the performance including overall prediction accuracy, sensitivity, and specificity for each category is evaluated on two real data sets and simulation data sets. To investigate the quality of prediction in terms of sensitivity and specificity, the area under the receiver operating characteristic (ROC) curve (AUC) is also examined. The performance of the proposed model is compared to a single multinomial logit model and it shows a substantial improvement in overall prediction accuracy. The proposed method is also compared with other classification methods such as the random forest, support vector machines, and random multinomial logit model. PMID:23611203
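The core idea, fitting one multinomial logit per random partition block of the predictors and averaging the predicted class probabilities, can be sketched as follows. This is a minimal illustration assuming a plain gradient-descent softmax regression as the base learner and a split into two blocks; the paper's R implementation and tuning details are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(Z):
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)

def fit_softmax(X, y, n_classes, lr=0.1, epochs=500):
    """Plain multinomial logit (softmax regression) fitted by gradient descent."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(y)
    return W

def ensemble_predict(X, y, Xtest, n_classes, n_partitions=5):
    """Random-partition ensemble: split the predictors into disjoint
    blocks, fit one multinomial logit per block, and average the
    predicted class probabilities over all models."""
    d = X.shape[1]
    proba = np.zeros((len(Xtest), n_classes))
    for _ in range(n_partitions):
        for idx in np.array_split(rng.permutation(d), 2):
            W = fit_softmax(X[:, idx], y, n_classes)
            proba += softmax(Xtest[:, idx] @ W)
    return proba.argmax(axis=1)

# toy 3-class data: Gaussian clusters around random class centers
n, d, k = 300, 6, 3
centers = 3.0 * rng.normal(size=(k, d))
y = rng.integers(0, k, size=n)
X = centers[y] + rng.normal(size=(n, d))
acc = (ensemble_predict(X, y, X, k) == y).mean()
```

Because each base model sees a mutually exclusive feature subset, no variable selection is needed and the base classifiers are weakly correlated, which is the source of the ensemble's accuracy gain.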

  1. Evaluation of an ensemble-based incremental variational data assimilation

    OpenAIRE

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2014-01-01

    In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). In the same way as variational assimilation schemes, it is formulated as the minimization of an objective function, but similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance and works in an off-line smoothing...

  2. Geophysical inversion with a neighbourhood algorithm-II. Appraising the ensemble

    Science.gov (United States)

    Sambridge, Malcolm

    1999-09-01

    Monte Carlo direct search methods, such as genetic algorithms, simulated annealing, etc., are often used to explore a finite-dimensional parameter space. They require the solving of the forward problem many times, that is, making predictions of observables from an earth model. The resulting ensemble of earth models represents all `information' collected in the search process. Search techniques have been the subject of much study in geophysics; less attention is given to the appraisal of the ensemble. Often inferences are based on only a small subset of the ensemble, and sometimes a single member. This paper presents a new approach to the appraisal problem. To our knowledge this is the first time the general case has been addressed, that is, how to infer information from a complete ensemble, previously generated by any search method. The essence of the new approach is to use the information in the available ensemble to guide a resampling of the parameter space. This requires no further solving of the forward problem, but from the new `resampled' ensemble we are able to obtain measures of resolution and trade-off in the model parameters, or any combinations of them. The new ensemble inference algorithm is illustrated on a highly non-linear wave-form inversion problem. It is shown how the computation time and memory requirements scale with the dimension of the parameter space and size of the ensemble. The method is highly parallel, and may easily be distributed across several computers. Since little is assumed about the initial ensemble of earth models, the technique is applicable to a wide variety of situations. For example, it may be applied to perform `error analysis' using the ensemble generated by a genetic algorithm, or any other direct search method.

  3. Analysis of extreme climatic features over South America from CLARIS-LPB ensemble of regional climate models for future conditions

    Science.gov (United States)

    Sanchez, E.; Zaninelli, P.; Carril, A.; Menendez, C.; Dominguez, M.

    2012-04-01

    An ensemble of seven regional climate models (RCMs) included in the European CLARIS-LPB project (A Europe-South America Network for Climate Change Assessment and Impact Studies in La Plata Basin) is used to study how some features related to climatic extremes are projected to change by the end of the 21st century. These RCMs are forced by different IPCC-AR4 global climate models (IPSL, ECHAM5 and HadCM3), covering three different 30-year periods: present (1960-1990), near future (2010-2040) and distant future (2070-2100), at 50 km horizontal resolution. These regional climate models had previously been forced with ERA-Interim reanalysis, in a procedure consistent with the CORDEX (COordinated Regional climate Downscaling EXperiment) initiative for the South America domain. The analysis shows good agreement between the models and the available observational databases in describing the main features of the mean climate of the continent. Here we focus our analysis on some topics of interest related to extreme events, such as diagnostics of dry-spell length, the structure of the frequency distribution functions over several subregions defined by more or less homogeneous climatic conditions (four sub-basins of the La Plata Basin, the southern part of the Amazon basin, Northeast Brazil, and the South Atlantic Convergence Zone (SACZ)), the structure of the annual cycle and its relation to the length of the seasons, and the frequency of anomalously hot or cold events. One shortcoming that must be considered is the lack of observational databases with sufficient temporal and spatial resolution to validate model outputs. At the same time, one challenging issue of this study is the regional modelling of a continent with a huge variety of climates, from desert to mountain conditions, and from tropical to subtropical regimes. Another basic objective of this preliminary work is to obtain a measure of the spread among

  4. Nonlinear stability and ergodicity of ensemble based Kalman filters

    Science.gov (United States)

    Tong, Xin T.; Majda, Andrew J.; Kelly, David

    2016-02-01

    The ensemble Kalman filter (EnKF) and ensemble square root filter (ESRF) are data assimilation methods used to combine high dimensional, nonlinear dynamical models with observed data. Despite their widespread usage in climate science and oil reservoir simulation, very little is known about the long-time behavior of these methods and why they are effective when applied with modest ensemble sizes in large dimensional turbulent dynamical systems. By following the basic principles of energy dissipation and controllability of filters, this paper establishes a simple, systematic and rigorous framework for the nonlinear analysis of EnKF and ESRF with arbitrary ensemble size, focusing on the dynamical properties of boundedness and geometric ergodicity. The time uniform boundedness guarantees that the filter estimate will not diverge to machine infinity in finite time, a potential threat for EnKF and ESRF known as catastrophic filter divergence. Geometric ergodicity ensures in addition that the filter has a unique invariant measure and that initialization errors will dissipate exponentially in time. We establish these results by introducing a natural notion of observable energy dissipation. The time uniform bound is achieved through a simple Lyapunov function argument; this result applies to systems with complete observations and strong kinetic energy dissipation, but also to concrete examples with incomplete observations. With the Lyapunov function argument established, geometric ergodicity is obtained by verifying the controllability of the filter processes; in particular, such analysis for ESRF relies on a careful multivariate perturbation analysis of the covariance eigen-structure.

  5. A Selective Fuzzy Clustering Ensemble Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Li

    2013-12-01

    Full Text Available To improve the performance of clustering ensemble methods, a selective fuzzy clustering ensemble algorithm is proposed. It mainly includes selection of clustering ensemble members and combination of clustering results. In the member selection process, a measure is defined to select the better clustering members. Some selected clustering members are then viewed as a hyper-graph in order to select the more influential hyper-edges (or features) and to weight the selected features. To process hyper-edges with fuzzy membership, the CSPA and MCLA consensus functions are generalized. In the experiments, several UCI data sets are chosen to test the presented algorithm's performance. The experimental results show that the proposed ensemble method obtains better clustering ensemble results.

  6. Generation of scenarios from calibrated ensemble forecasts with a dynamic ensemble copula coupling approach

    DEFF Research Database (Denmark)

    Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.;

    2015-01-01

    is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast...... from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error....... The new approach which preserves the dynamical development of the ensemble members is called dynamic ensemble copula coupling (d-ECC). The ensemble-based empirical copulas, ECC and d-ECC, are applied to wind forecasts from the high resolution ensemble system COSMO-DE-EPS run operationally at the German...
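The basic ECC step, reordering independently calibrated samples so that they inherit the rank (copula) structure of the raw ensemble, can be sketched as follows. The margin/member values are toy numbers, and d-ECC's use of past error statistics is not shown.

```python
import numpy as np

def ecc_reorder(raw, calibrated):
    """Ensemble copula coupling: impose the rank structure of the
    raw ensemble on independently calibrated samples. Both arrays
    are (n_margins, n_members); a margin is e.g. one
    lead-time/location pair."""
    out = np.empty_like(calibrated)
    for i in range(raw.shape[0]):
        ranks = raw[i].argsort().argsort()       # rank of each raw member
        out[i] = np.sort(calibrated[i])[ranks]   # calibrated value of same rank
    return out

raw = np.array([[3.0, 1.0, 2.0],
                [6.0, 4.0, 5.0]])
cal = np.array([[10.0, 30.0, 20.0],
                [40.0, 60.0, 50.0]])
print(ecc_reorder(raw, cal))
# → [[30. 10. 20.]
#    [60. 40. 50.]]
```

After reordering, member 1 is the largest value in both margins, just as in the raw ensemble, so the multivariate dependence is restored at no extra computational cost.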

  7. 基于多特征集成分类器的脱机满文识别方法%Off-line Manchu character recognition based on multi-classifier ensemble with combination features

    Institute of Scientific and Technical Information of China (English)

    魏巍; 郭晨

    2012-01-01

    To improve the off-line Manchu handwritten character recognition rate, a recognition method based on a multi-classifier ensemble of back-propagation neural networks with combined features is presented. First, the scanned handwritten Manchu image is preprocessed and segmented into Manchu character units. Second, the projection features, chain-code features, and endpoint and cross-point features of each character unit are extracted and classified, both individually and in combination. Finally, the recognition results are post-processed with a hidden Markov model, further improving recognition accuracy. The experimental results show that the recognition rate of the multi-classifier ensemble is higher than that of any single feature, and that the more feature types are included in the ensemble, the better the recognition performance.

  8. Real-time Reservoir Operation Based on a Combination of Long-term and Short-term Optimization and Hydrological Ensemble Forecasts

    Science.gov (United States)

    Meier, P.; Tilmant, A.; Boucher, M.; Anctil, F.

    2012-12-01

    In a reservoir system, benefits are usually increased if the system is operated in a coordinated manner. However, despite the ever increasing computational power available to users, the optimization of a large system of reservoirs and hydropower stations remains a challenge, especially if uncertainties are included. When applying optimization methods such as stochastic dynamic programming (SDP), the size of the problem quickly becomes too large to be solved. This situation is known as the curse of dimensionality, which limits the applicability of SDP to systems involving only two to three reservoirs. The fact that by design most reservoirs serve multiple purposes adds another difficulty when the operation is to be optimized. A method which is able to address the optimization of multi-purpose reservoirs even in large systems is stochastic dual dynamic programming (SDDP). This approximate dynamic programming technique represents the future benefit function with a number of hyperplanes. The SDDP model developed in this study maximizes the expected net benefits associated with the operation of a reservoir system on a mid-term horizon (several years, monthly time step). SDDP provides, at each time step, estimates of the marginal value of the water stored in each reservoir. Reservoir operators, however, are interested in day-to-day decisions. To provide an operational optimization framework tailored for short-term decision support, the SDDP optimization can be coupled with a short-term nonlinear programming optimization using hydrological ensemble forecasts. The short-term objective therefore combines the total electricity production within the forecast horizon with the total value of water stored in all the reservoirs. Maximizing this objective ensures that a short-term decision does not contradict the strategic planning. This optimization framework is implemented for the Gatineau river basin, a sub-basin of the Ottawa river north of the city of Ottawa. The Gatineau river

  9. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both accuracy and diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This reflects LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results clearly reveal that LEA is better than other ensemble and non-ensemble methods. PMID:25751882

  10. Four-dimensional Localization and the Iterative Ensemble Kalman Smoother

    Science.gov (United States)

    Bocquet, M.

    2015-12-01

    The iterative ensemble Kalman smoother (IEnKS) is a data assimilation method meant for efficiently tracking the state of nonlinear geophysical models. It combines an ensemble of model states to estimate the errors similarly to the ensemble square root Kalman filter, with a 4D-variational analysis performed within the ensemble space. As such it belongs to the class of ensemble variational methods. The recently introduced 4DEnVar or the 4D-LETKF can be seen as particular cases of the scheme. The IEnKS was shown to outperform 4D-Var, the ensemble Kalman filter (EnKF) and smoother, with low-order models in all investigated dynamical regimes. Like any ensemble method, it could require the use of localization of the analysis when the state space dimension is high. However, localization for the IEnKS is not as straightforward as for the EnKF. Indeed, localization needs to be defined across time, and it needs to be as consistent as possible with the dynamical flow within the data assimilation variational window. We show that a Liouville equation governs the time evolution of the localization operator, which is linked to the evolution of the error correlations. It is argued that its time integration strongly depends on the forecast dynamics. Using either covariance localization or domain localization, we propose and test several localization strategies meant to address the issue: (i) a constant and uniform localization, (ii) the propagation through the window of a restricted set of dominant modes of the error covariance matrix, (iii) the approximate propagation of the localization operator using model covariant local domains. These schemes are illustrated on the one-dimensional Lorenz 40-variable model.

  11. Probability-weighted ensembles of U.S. county-level climate projections for climate risk analysis

    CERN Document Server

    Rasmussen, D J; Kopp, Robert E

    2015-01-01

    Quantitative assessment of climate change risk requires a method for constructing probabilistic time series of changes in physical climate parameters. Here, we develop two such methods, Surrogate/Model Mixed Ensemble (SMME) and Monte Carlo Pattern/Residual (MCPR), and apply them to construct joint probability density functions (PDFs) of temperature and precipitation change over the 21st century for every county in the United States. Both methods produce likely (67% probability) temperature and precipitation projections consistent with the Intergovernmental Panel on Climate Change's interpretation of an equal-weighted Coupled Model Intercomparison Project 5 (CMIP5) ensemble, but also provide full PDFs that include tail estimates. For example, both methods indicate that, under representative concentration pathway (RCP) 8.5, there is a 5% chance that the contiguous United States could warm by at least 8°C. Variance decomposition of SMME and MCPR projections indicates that background variability dominates...

  12. A statistical analysis of three ensembles of crop model responses to temperature and CO2 concentration

    DEFF Research Database (Denmark)

    Makowski, D; Asseng, S; Ewert, F.; Bassu, S; Durand, J.L.; Li, T; Martre, P; Adam, M.; Aggarwal, P K; Angulo, C; Baron, C; Basso, B; Bertuzzi, P; Biernath, C; Boogaard, H; Boote, K J; Bouman, B; Bregaglio, S; Brisson, N; Buis, S; Cammarano, D; Challinor, A J; Confalonieri, R; Conijn, J G; Corbeels, M; Deryng, D; De Sanctis, G; Doltra, J; Fumoto, T; Gaydon, D; Gayler, S; Goldberg, R; Grant, R F; Grassini, P; Hatfield, J L; Hasegawa, T; Heng, L; Hoek, S; Hooker, J; Hunt, L A; Ingwersen, J; Izaurralde, R C; Jongschaap, R E E; Jones, J W; Kemanian, R A; Kersebaum, K C; Kim, S.-H.; Lizaso, J; Marcaida Ill, M; Müller, C; Nakagawa, H; Naresh Kumar, S; Nendel, C; O'Leary, G J; Olesen, Jørgen Eivind; Oriol, P; Osborne, T M; Palosuo, T; Pravia, M V; Priesack, E; Ripoche, D; Rosenzweig, C; Ruane, A C; Ruget, F; Sau, F; Semenov, M A; Shcherbak, I; Singh, B; Singh, U; Soo, H K; Steduto, P; Stöckle, C; Stratonovitch, P; Streck, T; Supit, I; Tang, L.; Tao, F; Teixeira, E I; Thorburn, P; Timlin, D; Travasso, M; Rötter, R P; Waha, K; Wallach, D; White, J W; Wilkens, P; Williams, J R; Wolf, J.; Yin, X; Yoshida, H; Zhang, Z; Zhu, Y

    Ensembles of process-based crop models are increasingly used to simulate crop growth for scenarios of temperature and/or precipitation changes corresponding to different projections of atmospheric CO2 concentrations. This approach generates large datasets with thousands of simulated crop yield data...... the simulation protocols. Here we demonstrate that statistical models based on random-coefficient regressions are able to emulate ensembles of process-based crop models. An important advantage of the proposed statistical models is that they can interpolate between temperature levels and between CO2...... concentration levels, and can thus be used to calculate temperature and [CO2] thresholds leading to yield loss or yield gain, without re-running the original complex crop models. Our approach is illustrated with three yield datasets simulated by 19 maize models, 26 wheat models, and 13 rice models. Several......

  13. Online Learning with Ensembles

    OpenAIRE

    Urbanczik, R

    1999-01-01

    Supervised online learning with an ensemble of students randomized by the choice of initial conditions is analyzed. For the case of the perceptron learning rule, asymptotically the same improvement in the generalization error of the ensemble compared to the performance of a single student is found as in Gibbs learning. For more optimized learning rules, however, using an ensemble yields no improvement. This is explained by showing that for any learning rule $f$ a transform $\\tilde{f}$ exists,...

  14. Rydberg ensemble based CNOTN gates using STIRAP

    Science.gov (United States)

    Gujarati, Tanvi; Duan, Luming

    2016-05-01

    Schemes for implementation of CNOT gates in atomic ensembles are important for realization of quantum computing. We present here a theoretical scheme of a CNOTN gate with an ensemble of three-level atoms in the lambda configuration and a single two-level control atom. We work in the regime of Rydberg blockade for the ensemble atoms due to excitation of the Rydberg control atom. It is shown that using STIRAP, atoms from one ground state of the ensemble can be adiabatically transferred to the other ground state, depending on the state of the control atom. A thorough analysis of adiabatic conditions for this scheme and the influence of the radiative decay is provided. We show that the CNOTN process is immune to the decay rate of the excited level in ensemble atoms. This work is supported by the ARL, the IARPA LogiQ program, and the AFOSR MURI program.

  15. Reliability analysis of rc containment structures under combined loads

    International Nuclear Information System (INIS)

    This paper discusses a reliability analysis method and load combination design criteria for reinforced concrete containment structures under combined loads. The probability based reliability analysis method is briefly described. For load combination design criteria, derivations of the load factors for accidental pressure due to a design basis accident and safe shutdown earthquake (SSE) for three target limit state probabilities are presented

  16. Ensemble methods for noise in classification problems

    OpenAIRE

    Verbaeten, Sofie; Van Assche, Anneleen

    2003-01-01

    Ensemble methods combine a set of classifiers to construct a new classifier that is (often) more accurate than any of its component classifiers. In this paper, we use ensemble methods to identify noisy training examples. More precisely, we consider the problem of mislabeled training examples in classification tasks, and address this problem by pre-processing the training set, i.e. by identifying and removing outliers from the training set. We study a number of filter techniques that are based...
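One common filter of this family, the majority-vote filter, can be sketched as follows. This is a simplified illustration assuming a nearest-centroid base learner and bootstrap out-of-bag voting; the specific filter techniques studied in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_predict(Xtr, ytr, X):
    """Minimal base learner: assign the label of the closest class centroid."""
    classes = np.unique(ytr)
    centroids = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    dists = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

def majority_filter(X, y, n_models=25):
    """Flag a training example as label noise when the majority of
    base learners (each trained on a bootstrap resample, voting only
    on its out-of-bag examples) disagrees with its recorded label."""
    n = len(y)
    wrong = np.zeros(n)
    votes = np.zeros(n)
    for _ in range(n_models):
        boot = rng.integers(0, n, size=n)
        oob = np.setdiff1d(np.arange(n), boot)   # out-of-bag indices
        pred = nearest_centroid_predict(X[boot], y[boot], X[oob])
        wrong[oob] += pred != y[oob]
        votes[oob] += 1
    return wrong > votes / 2

# two well-separated classes, then mislabel five class-0 points
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[:5] = 1                            # injected label noise
flagged = majority_filter(X, y)
```

The flagged examples would then be removed (or relabeled) before training the final classifier on the cleaned set.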

  17. Enhanced ensemble-based 4DVar scheme for data assimilation

    OpenAIRE

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2015-01-01

    Ensemble-based optimal control schemes combine the components of ensemble Kalman filters and variational data assimilation (4DVar). They are trendy because they are easier to implement than 4DVar. In this paper, we evaluate a modified version of an ensemble-based optimal control strategy for image data assimilation. This modified method is assessed with a Shallow Water model combined with synthetic data and original incomplete experimental depth sensor observations. ...

  18. Control Flow Analysis for SF Combinator Calculus

    OpenAIRE

    Lester, Martin

    2015-01-01

    Programs that transform other programs often require access to the internal structure of the program to be transformed. This is at odds with the usual extensional view of functional programming, as embodied by the lambda calculus and SK combinator calculus. The recently-developed SF combinator calculus offers an alternative, intensional model of computation that may serve as a foundation for developing principled languages in which to express intensional computation, including program transfo...

  19. Multilevel ensemble Kalman filtering

    OpenAIRE

    Hoel, Håkon; Law, Kody J. H.; Tempone, Raul

    2015-01-01

    This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF), thereby yielding a multilevel ensemble Kalman filter (MLEnKF) which has provably superior asymptotic cost to a given accuracy level. The theoretical results are illustrated numerically.
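The underlying MLMC idea, a telescoping sum of coupled coarse/fine corrections, can be sketched for a plain mean estimate. The SDE, discretization levels, and sample counts below are illustrative assumptions; the multilevel EnKF embeds this strategy inside the filter's sampling step rather than in a plain mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm(T, n_steps, dW):
    """Euler scheme for dX = mu X dt + sigma X dW with X0 = 1,
    applied to a batch of Brownian increment paths dW."""
    mu, sigma = 0.05, 0.2
    dt = T / n_steps
    X = np.ones(dW.shape[0])
    for k in range(n_steps):
        X = X + mu * X * dt + sigma * X * dW[:, k]
    return X

def mlmc_mean(T=1.0, L=4, M0=4, samples=(4000, 2000, 1000, 500, 250)):
    """Telescoping MLMC estimator of E[X_T]: a coarse-level mean plus
    coupled fine/coarse corrections sharing Brownian increments."""
    est = 0.0
    for l in range(L + 1):
        n_f = M0 * 2 ** l
        dW = rng.normal(0.0, np.sqrt(T / n_f), size=(samples[l], n_f))
        fine = euler_gbm(T, n_f, dW)
        if l == 0:
            est += fine.mean()
        else:
            # coarse path driven by the pairwise-summed fine increments
            dWc = dW.reshape(samples[l], n_f // 2, 2).sum(axis=2)
            est += (fine - euler_gbm(T, n_f // 2, dWc)).mean()
    return est

est = mlmc_mean()   # E[X_T] = exp(mu * T) ~ 1.0513 for mu = 0.05
```

Because the fine and coarse paths share the same Brownian increments, the correction terms have small variance and need far fewer samples than the coarse level, which is the source of the cost savings.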

  20. Towards Intelligent Ensembles

    Czech Academy of Sciences Publication Activity Database

    Bureš, Tomáš; Krijt, F.; Plášil, F.; Hnětynka, P.; Jiráček, Z.

    New York,: ACM, 2015, Article No. 17. ISBN 978-1-4503-3393-1. [ECSAW '15. European Conference on Software Architecture Workshops. Dubrovnik (HR), 07.09.2015-08.09.2015] Institutional support: RVO:67985807 Keywords : distributed coordination * architectural adaptation * ensemble-based component systems * component model * emergent architecture * component ensembles * autonomic systems Subject RIV: JC - Computer Hardware ; Software

  1. Malignancy and Abnormality Detection of Mammograms using Classifier Ensembling

    Directory of Open Access Journals (Sweden)

    Nawazish Naveed

    2011-07-01

    Full Text Available Breast cancer detection and diagnosis is a critical and complex procedure that demands a high degree of accuracy. In computer-aided diagnostic systems, breast cancer detection is a two-stage procedure: first, malignant and benign mammograms are classified, while in the second stage the type of abnormality is detected. In this paper, we have developed a novel architecture to enhance the classification of malignant and benign mammograms using multi-classification of malignant mammograms into six abnormality classes. DWT (discrete wavelet transform) features are extracted from preprocessed images and passed through different classifiers. To improve accuracy, the results generated by the various classifiers are ensembled. A genetic algorithm is used to find optimal weights, rather than assigning weights to the classifiers' results on the basis of heuristics. The mammograms declared malignant by the ensemble classifiers are divided into six classes. The ensemble classifiers are further used for multi-classification using the one-against-all technique. The output of all ensemble classifiers is combined by the product, median and mean rules. It has been observed that the accuracy of classification of abnormalities is more than 97% in the case of the mean rule. The Mammographic Image Analysis Society dataset is used for experimentation.

  2. The architectural ensemble of the historic center in Fortaleza-CE-Brazil: statistical analysis of use and meaning

    Directory of Open Access Journals (Sweden)

    Antonio Gilberto Abreu de Souza

    2012-11-01

    Full Text Available Numerous UNESCO documents, charters and recommendations discuss the importance of the community in the process of revitalization, protection and preservation of architectural ensembles, especially those located in urban areas. The conservation of a particular area becomes successful when the structural, social, economic and cultural factors are identified and discussed and the solutions applied. In that sense, this article is the result of a research project whose object, the Historic Center of Fortaleza-CE-Brazil, was evaluated through a questionnaire administered to residents, workers and users of services in the area, aimed at diagnosing the historic, artistic and architectural value of this representative 19th-century ensemble in the region.

  3. Finite-size scaling analysis of localization transition for scalar waves in a 3D ensemble of resonant point scatterers

    CERN Document Server

    Skipetrov, S E

    2016-01-01

    We use the random Green's matrix model to study the scaling properties of the localization transition for scalar waves in a three-dimensional (3D) ensemble of resonant point scatterers. We show that the probability density $p(g)$ of normalized decay rates of quasi-modes $g$ is very broad at the transition and in the localized regime and that it does not obey a single-parameter scaling law. The latter holds, however, for the small-$g$ part of $p(g)$ which we exploit to estimate the critical exponent $\

  4. Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique

    Directory of Open Access Journals (Sweden)

    Jan D. Keller

    2008-12-01

    Full Text Available The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three-month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of ensemble spread of free-atmosphere parameters (geopotential and temperature) and ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3-month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
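
    The breeding cycle described above can be sketched with a toy one-dimensional chaotic map standing in for the NWP model; the map, step count and perturbation size below are illustrative assumptions, not the GME-EFS configuration.

    ```python
    import numpy as np

    def model_step(x, r=3.9):
        # Toy stand-in for one forecast step of the model
        # (logistic map; the real GME dynamics are far richer).
        return r * x * (1.0 - x)

    def breed(x0, steps=50, size=1e-3, seed=0):
        """Breeding: run control and perturbed forecasts side by side,
        periodically rescaling their difference to a fixed amplitude so
        that it aligns with the fastest-growing error directions."""
        rng = np.random.default_rng(seed)
        control = np.array(x0, dtype=float)
        perturbed = control + size * rng.standard_normal(control.shape)
        for _ in range(steps):
            control = model_step(control)
            perturbed = model_step(perturbed)
            diff = perturbed - control
            # Rescale the grown difference back to the initial amplitude,
            # keeping only its direction: this is the bred vector.
            diff *= size / np.linalg.norm(diff)
            perturbed = control + diff
        return diff  # bred perturbation to add to the analysis

    bred = breed(np.full(8, 0.2))
    print(np.linalg.norm(bred))  # stays at the prescribed size (1e-3)
    ```

    In a real system the rescaled perturbations (and their negatives) are added to the analysis state to initialize the ensemble members.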

  5. Unified analysis of ensemble and single-complex optical spectral data from light-harvesting complex-2 chromoproteins for gaining deeper insight into bacterial photosynthesis

    Science.gov (United States)

    Pajusalu, Mihkel; Kunz, Ralf; Rätsep, Margus; Timpmann, Kõu; Köhler, Jürgen; Freiberg, Arvi

    2015-11-01

    Bacterial light-harvesting pigment-protein complexes are very efficient at converting photons into excitons and transferring them to reaction centers, where the energy is stored in a chemical form. Optical properties of the complexes are known to change significantly in time and also vary from one complex to another; therefore, a detailed understanding of the variations on the level of single complexes and how they accumulate into effects that can be seen on the macroscopic scale is required. While experimental and theoretical methods exist to study the spectral properties of light-harvesting complexes on both individual complex and bulk ensemble levels, they have been developed largely independently of each other. To fill this gap, we simultaneously analyze experimental low-temperature single-complex and bulk ensemble optical spectra of the light-harvesting complex-2 (LH2) chromoproteins from the photosynthetic bacterium Rhodopseudomonas acidophila in order to find a unique theoretical model consistent with both experimental situations. The model, which satisfies most of the observations, combines strong exciton-phonon coupling with significant disorder, characteristic of the proteins. We establish a detailed disorder model that, in addition to containing a C2-symmetrical modulation of the site energies, distinguishes between static intercomplex and slow conformational intracomplex disorders. The model evaluations also verify that, despite best efforts, the single-LH2-complex measurements performed so far may be biased toward complexes with higher Huang-Rhys factors.

  6. The Local Ensemble Transform Kalman Filter (LETKF) with a Global NWP Model on the Cubed Sphere

    Science.gov (United States)

    Shin, Seoleun; Kang, Ji-Sun; Jo, Youngsoon

    2016-04-01

    We develop an ensemble data assimilation system using the four-dimensional local ensemble transform Kalman filter (LETKF) for a global hydrostatic numerical weather prediction (NWP) model formulated on the cubed sphere. Forecast-analysis cycles run stably and thus provide newly updated initial states for the model to produce ensemble forecasts every 6 h. The performance of the LETKF implemented in the global NWP model is verified using ECMWF reanalysis data and conventional observations. Global mean values of bias and root mean square difference are significantly reduced by the data assimilation. Moreover, forecast and analysis statistics converge well as the forecast-analysis cycles are repeated. These results suggest that the combined system of the LETKF and the global NWP model formulated on the cubed sphere shows promising performance for operational use.

  7. The Local Ensemble Transform Kalman Filter (LETKF) with a Global NWP Model on the Cubed Sphere

    Science.gov (United States)

    Shin, Seoleun; Kang, Ji-Sun; Jo, Youngsoon

    2016-07-01

    We develop an ensemble data assimilation system using the four-dimensional local ensemble transform Kalman filter (LETKF) for a global hydrostatic numerical weather prediction (NWP) model formulated on the cubed sphere. Forecast-analysis cycles run stably and thus provide newly updated initial states for the model to produce ensemble forecasts every 6 h. The performance of the LETKF implemented in the global NWP model is verified using ECMWF reanalysis data and conventional observations. Global mean values of bias and root mean square difference are significantly reduced by the data assimilation. Moreover, forecast and analysis statistics converge well as the forecast-analysis cycles are repeated. These results suggest that the combined system of the LETKF and the global NWP model formulated on the cubed sphere shows promising performance for operational use.

  8. Spectral diagonal ensemble Kalman filters

    CERN Document Server

    Kasanický, Ivan; Vejmelka, Martin

    2015-01-01

    A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields, which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.

  9. Spectral diagonal ensemble Kalman filters

    Directory of Open Access Journals (Sweden)

    I. Kasanický

    2015-01-01

    Full Text Available A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
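
    The central idea, keeping only the diagonal of the ensemble sample covariance in a Fourier basis, can be sketched as follows; the field size and ensemble size are arbitrary choices for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_ens = 64, 10

    # Small ensemble of 1D periodic fields (stand-in for state members).
    ensemble = rng.standard_normal((n_ens, n))

    # Transform each member to the spectral (Fourier) basis.
    spectral = np.fft.fft(ensemble, axis=1) / np.sqrt(n)

    # Keep only the diagonal of the sample covariance in spectral space:
    # the variance of each Fourier coefficient across the ensemble.
    mean = spectral.mean(axis=0)
    diag_cov = np.mean(np.abs(spectral - mean) ** 2, axis=0)

    print(diag_cov.shape)  # n numbers instead of an n-by-n matrix
    ```

    Storing n spectral variances instead of an n-by-n sample covariance is what makes the analysis step cheap and, for spatially homogeneous fields, a better-conditioned covariance estimate from very small ensembles.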

  10. Analysis of combining ability in soybean cultivars

    OpenAIRE

    Dilermando Perecin; Antonio Orlando Di Mauro; Eduardo Antonio Gavioli

    2006-01-01

    Eight soybean cultivars (Doko, Bossier, Ocepar-4, BR-15, FT-Cometa, Savana, Paraná and Cristalina) were crossed in a diallel design. Plants of the F1 generation and their parents were evaluated under short-day conditions for the determination of the general (GCA) and specific (SCA) combining ability. The estimated GCA and SCA values were significant for the evaluated traits except for the “total cycle”. Highest GCA effects for the traits “days to flowering”, “plant height”, “insertion height”, “n...

  11. Setup Analysis: Combining SMED with Other Tools

    Directory of Open Access Journals (Sweden)

    Stadnicka Dorota

    2015-02-01

    Full Text Available The purpose of this paper is to propose a methodology for setup analysis that can be implemented mainly in small and medium enterprises which are not yet convinced to invest in setup improvement. The methodology was developed after research which identified the problem: companies still struggle with long setup times, yet many of them do nothing to decrease them. A long setup alone is not a sufficient reason for companies to undertake any actions towards setup time reduction. To encourage companies to implement SMED it is essential to analyze changeovers in order to discover problems. The proposed methodology can genuinely encourage the management to take a decision about SMED implementation, and this was verified in a production company. The setup analysis methodology is made up of seven steps. Four of them concern a setup analysis in a chosen area of a company, such as a work stand which is a bottleneck with many setups; here the goal is to convince the management to begin actions concerning setup improvement. The last three steps are related to a particular setup, and there the goal is to reduce the setup time and the risk of problems which can appear during the setup. In this paper, tools such as SMED, Pareto analysis, statistical analysis, FMEA and others were used.

  12. Controlling balance in an ensemble Kalman filter

    OpenAIRE

    G. A. Gottwald

    2014-01-01

    We present a method to control unbalanced fast dynamics in an ensemble Kalman filter by introducing a weak constraint on the imbalance in a spatially sparse observational network. We show that the balance constraint produces significantly more balanced analyses than ensemble Kalman filters without balance constraints and than filters implementing incremental analysis updates (IAU). Furthermore, our filter with the weak constraint on imbalance produces good rms error statisti...

  13. Development and testing of the GRAPES regional ensemble-3DVAR hybrid data assimilation system

    Science.gov (United States)

    Chen, Lianglü; Chen, Jing; Xue, Jishan; Xia, Yu

    2015-12-01

    Based on the GRAPES (Global/Regional Assimilation and Prediction System) regional ensemble prediction system and 3DVAR (three-dimensional variational) data assimilation system, which are implemented operationally at the Numerical Weather Prediction Center of the China Meteorological Administration, an ensemble-based 3DVAR (En-3DVAR) hybrid data assimilation system for GRAPES_Meso (the regional mesoscale numerical prediction system of GRAPES) was developed by using the extended control variable technique to implement a hybrid background error covariance that combines the climatological covariance and ensemble-estimated covariance. Considering the problems of the ensemble-based data assimilation part of the system, including the reduction in the degree of geostrophic balance between variables, and the non-smooth analysis increment and its obviously smaller size compared with the 3DVAR data assimilation, corresponding measures were taken to optimize and ameliorate the system. Accordingly, a single pressure observation ensemble-based data assimilation experiment was conducted to ensure that the ensemble-based data assimilation part of the system is correct and reasonable. A number of localization-scale sensitivity tests of the ensemble-based data assimilation were also conducted to determine the most appropriate localization scale. Then, a number of hybrid data assimilation experiments were carried out. The results showed that it was most appropriate to set the weight factor of the ensemble-estimated covariance in the experiments to be 0.8. Compared with the 3DVAR data assimilation, the geopotential height forecast of the hybrid data assimilation experiments improved very little, but the wind forecast improved slightly at each forecast time, especially over 300 hPa. Overall, the hybrid data assimilation demonstrates some advantages over the 3DVAR data assimilation.
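
    The hybrid background-error covariance implied by the extended control variable technique can be sketched as a weighted blend of a climatological and an ensemble-estimated covariance. In operational systems the weighting is applied inside the variational cost function rather than by forming explicit matrices; the sizes and the identity climatological covariance below are illustrative assumptions, while the 0.8 ensemble weight is the value the paper found most appropriate.

    ```python
    import numpy as np

    beta = 0.8  # weight of the ensemble-estimated covariance

    rng = np.random.default_rng(0)
    n, n_ens = 5, 20

    # Static climatological background-error covariance (illustrative).
    B_clim = np.eye(n)

    # Flow-dependent covariance estimated from ensemble perturbations.
    perts = rng.standard_normal((n_ens, n))
    perts -= perts.mean(axis=0)
    B_ens = perts.T @ perts / (n_ens - 1)

    # Hybrid background-error covariance used in the En-3DVAR analysis.
    B_hybrid = (1.0 - beta) * B_clim + beta * B_ens
    print(B_hybrid.shape)
    ```

    In practice the ensemble term is also localized (tapered with distance) before blending, which is what the paper's localization-scale sensitivity tests tune.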

  14. Analysis of combining ability in soybean cultivars

    Directory of Open Access Journals (Sweden)

    Dilermando Perecin

    2006-01-01

    Full Text Available Eight soybean cultivars (Doko, Bossier, Ocepar-4, BR-15, FT-Cometa, Savana, Paraná and Cristalina) were crossed in a diallel design. Plants of the F1 generation and their parents were evaluated under short-day conditions for the determination of the general (GCA) and specific (SCA) combining ability. The estimated GCA and SCA values were significant for the evaluated traits except for the “total cycle”. Highest GCA effects for the traits “days to flowering”, “plant height”, “insertion height”, “number of branches” and “total cycle” were estimated for the cultivars Doko, Cristalina and Savana. The variability observed in the trait “days to flowering” can, for the most part, be explained by additive effects.

  15. Ensembl variation resources

    Directory of Open Access Journals (Sweden)

    Marin-Garcia Pablo

    2010-05-01

    Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.

  16. Ensembles on Random Patches

    OpenAIRE

    Louppe, Gilles; Geurts, Pierre

    2012-01-01

    In this paper, we consider supervised learning under the assumption that the available memory is small compared to the dataset size. This general framework is relevant in the context of big data, distributed databases and embedded systems. We investigate a very simple, yet effective, ensemble framework that builds each individual model of the ensemble from a random patch of data obtained by drawing random subsets of both instances and features from the whole dataset. We carry out an extensive...

  17. A hyper-ensemble forecast of surface drift

    Science.gov (United States)

    Vandenbulcke, L.; Lenartz, F.; Poulain, P. M.; Rixen, M.; DART Consortium; MREA Consortium

    2009-04-01

    The prediction of the surface drift of water is an important task, with applications such as marine transport, pollutant dispersion, and search-and-rescue activities. However, it is also very challenging, because it depends on ocean models that usually do not represent wind-induced currents with complete accuracy, that do not include wave-driven currents, etc. The real surface drift, however, depends on all the physical phenomena present, which moreover interact in complex ways. Furthermore, although each of these factors can be forecasted by deterministic models, the latter all suffer from limitations, resulting in imperfect predictions. In the present study, we try to predict the drift of buoys launched during the DART06 (Dynamics of the Adriatic sea in Real-Time 2006) and MREA07 (Maritime Rapid Environmental Assessment 2007) sea trials using the so-called hyper-ensemble technique: different models are combined in order to minimize departure from independent observations during a training period, and the obtained combination is then used in forecasting mode. We review and try out different hyper-ensemble techniques, such as the simple ensemble mean, least-squares weighted linear combinations, and techniques based on data assimilation, which dynamically update the models' weights in the combination when new observations become available. We show that the latter methods alleviate the need to fix the training length a priori. When the forecast period is relatively short, the discussed methods lead to much smaller forecasting errors compared with individual models (at least 3 times smaller), with the dynamic methods leading to the best results. When many models are available, errors can be further reduced by removing collinearities between them through a principal component analysis. At the same time, this reduces the number of weights to be determined. In complex environments, the skill of individual models may vary over time periods smaller than the desired
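
    A least-squares weighted linear combination, one of the hyper-ensemble variants mentioned above, can be sketched as follows; the synthetic "observations" and model forecasts are fabricated for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_obs, n_models = 40, 3

    # Hypothetical drift observations over the training period, and
    # co-located forecasts from three imperfect models.
    truth = np.sin(np.linspace(0, 4, n_obs))
    forecasts = np.stack([truth + 0.3 * rng.standard_normal(n_obs)
                          for _ in range(n_models)], axis=1)

    # Least-squares weights minimizing departure from the observations.
    weights, *_ = np.linalg.lstsq(forecasts, truth, rcond=None)

    # Combined (hyper-ensemble) prediction, used later in forecast mode.
    combined = forecasts @ weights
    err_combined = np.sqrt(np.mean((combined - truth) ** 2))
    err_single = np.sqrt(np.mean((forecasts[:, 0] - truth) ** 2))
    print(err_combined <= err_single)  # combination beats one model here
    ```

    The data-assimilation variants in the paper go a step further and re-estimate these weights sequentially as new observations arrive, instead of fitting them once over a fixed training window.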

  18. Combination of structural reliability and interval analysis

    Institute of Scientific and Technical Information of China (English)

    Zhiping Qiu; Di Yang; Isaac Elishakoff

    2008-01-01

    In engineering applications, probabilistic reliability theory appears to be presently the most important method; however, in many cases precise probabilistic reliability theory cannot be considered an adequate and credible model of the real state of affairs. In this paper, we developed a hybrid of probabilistic and non-probabilistic reliability theory, which describes the structural uncertain parameters as interval variables when statistical data are insufficient. By using interval analysis, a new method for calculating the interval of the structural reliability as well as the reliability index is introduced in this paper, and the traditional probabilistic theory is incorporated with the interval analysis. Moreover, the new method preserves the useful part of the traditional probabilistic reliability theory but removes the restriction of its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.

  19. Combining risk analysis and security testing

    OpenAIRE

    Großmann, Jürgen; SCHNEIDER, Martin; Viehmann, Johannes; Wendland, Marc-Florian

    2014-01-01

    A systematic integration of risk analysis and security testing allows for optimizing the test process as well as the risk assessment itself. The result of the risk assessment, i.e. the identified vulnerabilities, threat scenarios and unwanted incidents, can be used to guide the test identification and may complement requirements engineering results with systematic information concerning the threats and vulnerabilities of a system and their probabilities and consequences. This information can ...

  20. Data assimilation with the weighted ensemble Kalman filter

    OpenAIRE

    Papadakis, Nicolas; Mémin, Etienne; Cuzol, Anne; Gengembre, Nicolas

    2010-01-01

    In this paper, two data assimilation methods based on sequential Monte Carlo sampling are studied and compared: the ensemble Kalman filter and the particle filter. Each of these techniques has its own advantages and drawbacks. In this work, we try to get the best of each method by combining them. The proposed algorithm, called the weighted ensemble Kalman filter, relies on the ensemble Kalman filter updates of samples in order to define a proposal distribution for the particle filte...

  1. The dynamics of exploitation in ensembles of source and sink

    OpenAIRE

    Friedrich, T.

    2012-01-01

    The ensemble is a new entity on a higher level of complexity composed of source and sink. When substrate is transferred from source to sink within the transfer space or the ensemble space non-linearity is observed. Saturating production functions of source and sink in combination with linear cost functions generate superadditivity and subadditivity in the productivity of the ensemble. In a reaction chain the source produces a product that will be used by the sink to produce a different pr...

  2. Enhanced Sampling in the Well-Tempered Ensemble

    OpenAIRE

    Bonomi, M.; Parrinello, M

    2009-01-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the ...

  3. Triticeae resources in Ensembl Plants.

    Science.gov (United States)

    Bolser, Dan M; Kerhornou, Arnaud; Walts, Brandon; Kersey, Paul

    2015-01-01

    Recent developments in DNA sequencing have enabled the large and complex genomes of many crop species to be determined for the first time, even those previously intractable due to their polyploid nature. Indeed, over the course of the last 2 years, the genome sequences of several commercially important cereals, notably barley and bread wheat, have become available, as well as those of related wild species. While still incomplete, comparison with other, more completely assembled species suggests that coverage of genic regions is likely to be high. Ensembl Plants (http://plants.ensembl.org) is an integrative resource organizing, analyzing and visualizing genome-scale information for important crop and model plants. Available data include reference genome sequence, variant loci, gene models and functional annotation. For variant loci, individual and population genotypes, linkage information and, where available, phenotypic information are shown. Comparative analyses are performed on DNA and protein sequence alignments. The resulting genome alignments and gene trees, representing the implied evolutionary history of the gene family, are made available for visualization and analysis. Driven by the case of bread wheat, specific extensions to the analysis pipelines and web interface have recently been developed to support polyploid genomes. Data in Ensembl Plants is accessible through a genome browser incorporating various specialist interfaces for different data types, and through a variety of additional methods for programmatic access and data mining. These interfaces are consistent with those offered through the Ensembl interface for the genomes of non-plant species, including those of plant pathogens, pests and pollinators, facilitating the study of the plant in its environment. PMID:25432969

  4. Conductor gestures influence evaluations of ensemble performance.

    Science.gov (United States)

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  5. A Localized Ensemble Kalman Smoother

    Science.gov (United States)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.

  6. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Meihong Wu

    2016-01-01

    Full Text Available Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched children participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values consistently decrease (significant at the p<0.01 level of the Mann-Whitney U test) in children of 3–14 years old, which indicates that stride irregularity is significantly ameliorated with body growth. The original and normalized ASI values also change significantly when comparing any two of the groups of young (aged 3–5 years), middle (aged 6–8 years), and elder (aged 10–14 years) children. Such results suggest that healthy children may better modulate their gait cadence rhythm with the development of their musculoskeletal and neurological systems. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children's gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077).
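
    A straightforward (unoptimized) implementation of sample entropy is sketched below, assuming the common convention of a tolerance expressed as a fraction of the series' standard deviation; the parameters m = 2 and r = 0.2 are typical defaults, not necessarily those used in the paper, and the two test series are synthetic.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Sample entropy of series x: template length m, tolerance r
        given as a fraction of the standard deviation of x."""
        x = np.asarray(x, dtype=float)
        tol = r * x.std()

        def count_matches(length):
            # All overlapping templates of the given length.
            templates = np.array([x[i:i + length]
                                  for i in range(len(x) - length)])
            count = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates - templates[i]), axis=1)
                count += np.sum(d <= tol) - 1  # exclude the self-match
            return count

        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b)

    # A perfectly regular stride series yields low SampEn (regularity),
    # while an irregular series yields a much higher value.
    regular = np.tile([1.0, 1.1], 50)
    noisy = np.random.default_rng(3).standard_normal(100)
    print(sample_entropy(regular) < sample_entropy(noisy))
    ```

    Lower SampEn therefore means a more predictable stride pattern, which is why the values fall as the children's gait matures.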

  7. A cost-minimization analysis of combination therapy in hypertension: fixed-dose vs extemporary combinations

    Directory of Open Access Journals (Sweden)

    Marco Bellone

    2013-12-01

    Full Text Available BACKGROUND: Cardiovascular disease management and prevention represent the leading cost driver in Italian healthcare expenditure. In order to reach the target blood pressure, a large majority of patients require the simultaneous administration of multiple antihypertensive agents. OBJECTIVE: To assess the economic impact of the use of fixed-dose combinations of antihypertensive agents, compared to extemporary combinations of the same principles. METHODS: A cost-minimization analysis was conducted to determine the pharmaceutical daily cost of five fixed-dose combinations (olmesartan 20 mg + amlodipine 5 mg, perindopril 5 mg + amlodipine 5 mg, enalapril 20 mg + lercanidipine 10 mg, felodipine 5 mg + ramipril 5 mg, and delapril 30 mg + manidipine 10 mg) compared with extemporary combinations of the same principles, from the perspective of the Italian NHS. Daily acquisition costs are estimated based on current Italian prices and tariffs. RESULTS: In three cases the use of the fixed-dose combination instead of the extemporary combination induces a lower daily cost. Fixed-combination treatment with delapril 30 mg + manidipine 10 mg induces greater cost savings for the National Health System (95.47 €/pts/year), as compared to free-drug combination therapy. CONCLUSIONS: Compared with free-drug combinations, fixed-dose combinations of antihypertensive agents are associated with lower daily National Health Service acquisition costs. http://dx.doi.org/10.7175/fe.v14i4.886

  8. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus, selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE) as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of their members. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
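
    The panel-of-experts idea can be sketched by averaging the scores of several similarity measures and comparing each against human judgments; all scores below are invented for illustration, and the measure names in the comments are only examples of the kinds of measures such an ensemble might combine.

    ```python
    import numpy as np

    # Hypothetical similarity scores for five pairs of geographic terms
    # from three different measures, all on a [0, 1] scale.
    measures = np.array([
        [0.9, 0.2, 0.6, 0.4, 0.8],   # e.g. a taxonomy-based measure
        [0.7, 0.1, 0.5, 0.6, 0.9],   # e.g. a co-occurrence measure
        [0.8, 0.3, 0.7, 0.3, 0.7],   # e.g. a definition-overlap measure
    ])
    human = np.array([0.85, 0.15, 0.6, 0.45, 0.8])  # human judgments

    # The ensemble acts as a panel of experts: average the scores.
    ensemble = measures.mean(axis=0)

    # Cognitive plausibility: correlation with human judgments.
    corrs = [np.corrcoef(m, human)[0, 1] for m in measures]
    corr_ensemble = np.corrcoef(ensemble, human)[0, 1]
    print(corr_ensemble > np.mean(corrs))  # ensemble beats its average part
    ```

    Averaging cancels the idiosyncratic errors of individual measures, which is why the ensemble reliably beats the average of its members even when it does not beat the single best one.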

  9. Exact analysis of Packet Reversed Packet Combining Scheme and Modified Packet Combining Scheme; and a combined scheme

    International Nuclear Information System (INIS)

    The packet combining (PC) scheme is a well-defined, simple error-correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput than other basic ARQ protocols, the PC scheme fails to correct errors when errors occur in the same bit locations of the copies. In a previous work, a scheme known as the Packet Reversed Packet Combining (PRPC) scheme, which corrects errors that occur at the same bit location of erroneous copies, was studied; however, PRPC does not handle a situation where a packet has more than one error bit. The Modified Packet Combining (MPC) scheme, which can correct double or higher bit errors, was studied elsewhere. Both the PRPC and MPC schemes were believed to offer higher throughput in previous studies; however, neither adequate investigation nor exact analysis was done to substantiate this claim. In this work, an exact analysis of both PRPC and MPC is carried out and the results reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error-correction capability at high bit error rate (BER) and larger packet size. (author)
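The core packet-combining move can be illustrated in a few lines: XOR-ing two received copies of the same packet exposes the bit positions where they disagree, which are the candidate error locations for the receiver to try flipping. This is only the combining step, not the full PRPC/MPC protocols; packet contents are invented.

```python
def candidate_error_bits(copy1, copy2):
    """Return bit positions (0 = LSB) where two received copies disagree."""
    diff = copy1 ^ copy2
    width = max(copy1.bit_length(), copy2.bit_length())
    return [i for i in range(width) if (diff >> i) & 1]

sent      = 0b10110110
received1 = 0b10110010   # bit 2 flipped in transit
received2 = 0b10100110   # bit 4 flipped in transit
print(candidate_error_bits(received1, received2))  # → [2, 4]
```

Note the failure mode the abstract describes: if both copies had the *same* bit flipped, the XOR would be zero there and plain PC could not locate the error, which is what PRPC's bit-reversed retransmission addresses.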

  10. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  11. Exergy Analysis of Combined Cycle Power Plant: NTPC Dadri, India

    OpenAIRE

    Tiwari, Arvind Kumar; M. M. Hasan; Islam, Mohd.

    2013-01-01

    The aim of the present paper is the exergy analysis of the combined Brayton/Rankine power cycle of NTPC Dadri, India. Theoretical exergy analysis is carried out for different components of the Dadri combined cycle power plant, which consists of a gas turbine unit, a heat recovery steam generator without extra fuel consumption, and a steam turbine unit. The results pinpoint that the largest exergy losses occurred in the gas turbine combustion chamber, reaching 35% of the total exergy losses, while the exergy losse...

  12. Exergy Analysis of Combined Cycle Power Plant: NTPC Dadri, India

    OpenAIRE

    Arvind Kumar Tiwari; M. M. Hasan; Mohd Islam,

    2012-01-01

    The aim of the present paper is the exergy analysis of the combined Brayton/Rankine power cycle of NTPC Dadri, India. Theoretical exergy analysis is carried out for different components of the Dadri combined cycle power plant, which consists of a gas turbine unit, a heat recovery steam generator without extra fuel consumption, and a steam turbine unit. The results pinpoint that the largest exergy losses occurred in the gas turbine combustion chamber, reaching 35% of the total exergy losses, while the exergy losse...

  13. A cost-minimization analysis of combination therapy in hypertension: fixed-dose vs extemporary combinations

    OpenAIRE

    Marco Bellone; Pierluigi Sbarra

    2013-01-01

    BACKGROUND: Cardiovascular disease management and prevention represent the leading cost driver in Italian healthcare expenditure. In order to reach target blood pressure, a large majority of patients require simultaneous administration of multiple antihypertensive agents. OBJECTIVE: To assess the economic impact of the use of fixed-dose combinations of antihypertensive agents, compared to extemporary combinations of the same active ingredients. METHODS: A cost-minimization analysis was conducted...

  14. Estimating combining ability in popcorn lines using multivariate analysis

    OpenAIRE

    Leandro Simôes Azeredo Gonçalves; Silverio de Paiva Freitas Júnior; Antônio Teixeira do Amaral Júnior; Carlos Alberto Scapim; Rosana Rodrigues; Caillet Dornelles Marinho; Eduardo Stefani Pagliosa

    2014-01-01

    Aiming to estimate the combining ability in tropical and temperate popcorn (Zea mays L. var. everta Sturt.) lines using multivariate analysis, ten popcorn lines were crossed in a complete diallel without reciprocals and the lines and hybrids were tested in two randomized complete block experiments with three replicates. Data were subjected to univariate and multivariate ANOVA, principal component analysis, and univariate and multivariate diallel analysis. For multivariate diallel analysis, va...

  15. Ensemble Forecasting of Major Solar Flares

    CERN Document Server

    Guerra, J A; Uritsky, V M

    2015-01-01

    We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm by applying a decision threshold $P_{th}$ to the combined probabilities and maximizing the Heidke Skill Score (HSS). Using the probabilities and events time series from 13 recent solar active regions (2012 - 2014), we found that a linear combination of probabilities can improve both probabilistic and categorical forecasts. Combination weights vary with the applied threshold and none of the tested individual forecasting models seem to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, a performance-based weights ...

  16. Imprinting and recalling cortical ensembles.

    Science.gov (United States)

    Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael

    2016-08-12

    Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. PMID:27516599

  17. Disease-associated mutations that alter the RNA structural ensemble.

    Directory of Open Access Journals (Sweden)

    Matthew Halvorsen

    2010-08-01

    Full Text Available Genome-wide association studies (GWAS often identify disease-associated mutations in intergenic and non-coding regions of the genome. Given the high percentage of the human genome that is transcribed, we postulate that for some observed associations the disease phenotype is caused by a structural rearrangement in a regulatory region of the RNA transcript. To identify such mutations, we have performed a genome-wide analysis of all known disease-associated Single Nucleotide Polymorphisms (SNPs from the Human Gene Mutation Database (HGMD that map to the untranslated regions (UTRs of a gene. Rather than using minimum free energy approaches (e.g. mFold), we use a partition function calculation that takes into consideration the ensemble of possible RNA conformations for a given sequence. We identified in the human genome disease-associated SNPs that significantly alter the global conformation of the UTR to which they map. For six disease states (Hyperferritinemia Cataract Syndrome, beta-Thalassemia, Cartilage-Hair Hypoplasia, Retinoblastoma, Chronic Obstructive Pulmonary Disease (COPD), and Hypertension), we identified multiple SNPs in UTRs that alter the mRNA structural ensemble of the associated genes. Using a Boltzmann sampling procedure for sub-optimal RNA structures, we are able to characterize and visualize the nature of the conformational changes induced by the disease-associated mutations in the structural ensemble. We observe in several cases (specifically the 5' UTRs of FTL and RB1) SNP-induced conformational changes analogous to those observed in bacterial regulatory Riboswitches when specific ligands bind. We propose that the UTR and SNP combinations we identify constitute a "RiboSNitch," that is, a regulatory RNA in which a specific SNP has a structural consequence that results in a disease phenotype. Our SNPfold algorithm can help identify RiboSNitches by leveraging GWAS data and an analysis of the mRNA structural ensemble.

  18. Embedded feature ranking for ensemble MLP classifiers

    OpenAIRE

    Windeatt, T; Duangsoithong, R; Smith, R

    2011-01-01

    A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.

  19. A multisite seasonal ensemble streamflow forecasting technique

    Science.gov (United States)

    Bracken, Cameron; Rajagopalan, Balaji; Prairie, James

    2010-03-01

    We present a technique for providing seasonal ensemble streamflow forecasts at several locations simultaneously on a river network. The framework is an integration of two recent approaches: the nonparametric multimodel ensemble forecast technique and the nonparametric space-time disaggregation technique. The four main components of the proposed framework are as follows: (1) an index gauge streamflow is constructed as the sum of flows at all the desired spatial locations; (2) potential predictors of the spring season (April-July) streamflow at this index gauge are identified from the large-scale ocean-atmosphere-land system, including snow water equivalent; (3) the multimodel ensemble forecast approach is used to generate the ensemble flow forecast at the index gauge; and (4) the ensembles are disaggregated using a nonparametric space-time disaggregation technique resulting in forecast ensembles at the desired locations and for all the months within the season. We demonstrate the utility of this technique in the skillful forecasting of spring seasonal streamflows at four locations in the Upper Colorado River Basin at different lead times. Where applicable, we compare the forecasts to the Colorado Basin River Forecast Center's Ensemble Streamflow Prediction (ESP) and the Natural Resources Conservation Service "coordinated" forecast, which is a combination of the ESP, Statistical Water Supply, a principal component regression technique, and modeler knowledge. We find that overall, the proposed method is equally skillful to existing operational models while tending to better predict wet years. The forecasts from this approach can be a valuable input for efficient planning and management of water resources in the basin.
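Steps (1) and (4) above can be sketched with a toy example: build the index gauge as the sum of site flows, then split an index-gauge forecast member back across sites. The paper uses a nonparametric space-time disaggregation; the simple proportional split below is only a stand-in to show the data flow, and the site names and numbers are invented.

```python
# hypothetical mean seasonal flows at three sites on the network
sites_hist = {"siteA": 40.0, "siteB": 35.0, "siteC": 25.0}
index_hist = sum(sites_hist.values())          # step (1): index gauge = sum of sites

def disaggregate(index_forecast, sites_hist, index_hist):
    """Step (4), simplified: split an index-gauge forecast proportionally."""
    return {site: index_forecast * flow / index_hist
            for site, flow in sites_hist.items()}

forecast_ensemble = [90.0, 110.0, 100.0]       # step (3): ensemble members at index gauge
member_sites = [disaggregate(m, sites_hist, index_hist) for m in forecast_ensemble]
print(member_sites[0]["siteA"])                # → 36.0
```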

  20. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  1. Ensemble approach for differentiation of malignant melanoma

    Science.gov (United States)

    Rastgoo, Mojdeh; Morel, Olivier; Marzani, Franck; Garcia, Rafael

    2015-04-01

    Melanoma is the deadliest type of skin cancer, yet it is the most treatable kind when diagnosed early. The early prognosis of melanoma is a challenging task for both clinicians and dermatologists. Given the importance of early diagnosis, and in order to assist dermatologists, we propose an automated framework based on ensemble learning methods and dermoscopy images to differentiate melanoma from dysplastic and benign lesions. The evaluation of our framework on a recent public dermoscopy benchmark (the PH2 dataset) indicates the potential of the proposed method. Our evaluation, using only global features, revealed that ensembles such as random forests perform better than a single learner. Using a random forest ensemble and a combination of color and texture features, our framework achieved the highest sensitivity of 94% and specificity of 92%.
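The ensemble-learning idea underlying such frameworks is that several base classifiers vote on each lesion. A minimal majority-vote combiner in pure Python (the three "classifiers" and their label lists are invented for illustration; the paper itself uses random forests over color/texture features):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-classifier label lists -> combined labels."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# three hypothetical base classifiers labelling four lesions
clf1 = ["melanoma", "benign", "benign", "melanoma"]
clf2 = ["melanoma", "benign", "melanoma", "benign"]
clf3 = ["benign",   "benign", "melanoma", "melanoma"]

print(majority_vote([clf1, clf2, clf3]))
# → ['melanoma', 'benign', 'melanoma', 'melanoma']
```

Diversity among the base classifiers is what makes the combined vote better than any single learner, echoing the abstract's finding.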

  2. On a Combined Analysis Framework for Multimodal Discourse Analysis

    Institute of Scientific and Technical Information of China (English)

    窦瑞芳

    2015-01-01

    When people communicate, they do not use language alone, i.e. a single mode of communication; they also simultaneously use body language, eye contact, pictures, etc., which is called multimodal communication. Multimodal communication is, in fact, the most natural way of communicating. Therefore, in order to make a complete discourse analysis, all the modes involved in an interaction or discourse should be taken into account, and a new analysis framework for Multimodal Discourse Analysis ought to be created to move this type of analysis forward. In this paper, the author makes a tentative move to shape a new analysis framework for Multimodal Discourse Analysis.

  3. On a Combined Analysis Framework for Multimodal Discourse Analysis

    Institute of Scientific and Technical Information of China (English)

    窦瑞芳

    2015-01-01

    When people communicate, they do not use language alone, i.e. a single mode of communication; they also simultaneously use body language, eye contact, pictures, etc., which is called multimodal communication. Multimodal communication is, in fact, the most natural way of communicating. Therefore, in order to make a complete discourse analysis, all the modes involved in an interaction or discourse should be taken into account, and a new analysis framework for Multimodal Discourse Analysis ought to be created to move this type of analysis forward. In this paper, the author makes a tentative move to shape a new analysis framework for Multimodal Discourse Analysis.

  4. Analysis of pattern forming instabilities in an ensemble of two-level atoms optically excited by counter-propagating fields

    CERN Document Server

    Firth, W J; Labeyrie, G; Camara, A; Gomes, P; Ackemann, T

    2016-01-01

    We explore various models for the pattern-forming instability in a laser-driven cloud of cold two-level atoms with a plane feedback mirror. The focus is on the combined treatment of nonlinear propagation in a diffractively thick medium and the boundary condition given by the feedback. The combined presence of purely transverse transmission gratings and reflection gratings on the wavelength scale is addressed. Different truncation levels of the Fourier expansion of the dielectric susceptibility in terms of these gratings are discussed and compared to the literature. A formalism to calculate the exact solution for the homogeneous state in the presence of absorption is presented. The relationship between the counterpropagating beam instability and the feedback instability is discussed. Feedback reduces the threshold by a factor of two under optimal conditions. Envelope curves which bound all possible threshold curves for varying mirror distances are calculated. The results compare well to experimental results regarding the obs...

  5. A crop model ensemble analysis of temperature and precipitation effects on wheat yield across a European transect using impact response surfaces"

    OpenAIRE

    Pirttioja, Nina; Carter, Timothy; Fronzek, Stefan; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; RuizRamos, Margarita; Trnka, Miroslav; Acutis, Marco; Asseng, Senthold; Baranowski, Piotr; Basso, Bruno; Bodin, Per; Buis, Samuel; Cammarano, Davide

    2015-01-01

    This study aims to explore the utility of the impact response surface (IRS) approach for investigating model ensemble crop yield responses under a large range of changes in climate. IRSs of spring and winter wheat (Triticum aestivum) yields were constructed from a 26-member ensemble of process-based crop simulation models for sites in Finland, Germany and Spain across a latitudinal transect in Europe. The sensitivity of modelled yield to systematic increments of changes in temperature (-2 to ...

  6. Minimalist ensemble algorithms for genome-wide protein localization prediction

    Directory of Open Access Journals (Sweden)

    Lin Jhih-Rong

    2012-07-01

    Full Text Available Abstract Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposes a novel method for the rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature-selection-based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that high-performance ensemble algorithms are usually composed of predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted-voting-based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual
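One intuition stated in the abstract is that good minimalist ensembles are composed of predictors that together cover most of the available features. That selection idea can be sketched as a greedy set cover; the predictor names and feature sets below are invented, and this is not the paper's contribution-score method, only an illustration of the coverage intuition.

```python
def greedy_cover(predictors, k):
    """Pick up to k predictors that together cover the most features."""
    chosen, covered = [], set()
    for _ in range(k):
        # pick the predictor adding the most not-yet-covered features
        best = max(predictors, key=lambda p: len(predictors[p] - covered))
        if not predictors[best] - covered:
            break  # nothing new to gain
        chosen.append(best)
        covered |= predictors[best]
    return chosen, covered

predictors = {
    "predA": {"signal_peptide", "aa_composition"},
    "predB": {"aa_composition", "gene_ontology"},
    "predC": {"homology", "signal_peptide"},
}
print(greedy_cover(predictors, 2)[0])  # → ['predA', 'predB']
```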

  7. Kingston Soundpainting Ensemble

    OpenAIRE

    Minors, Helen Julia

    2012-01-01

    This performance is designed to introduce teachers and school musicians to Soundpainting, a live multidisciplinary composing sign language. Led by Dr. Helen Julia Minors (soundpainter, trumpet, voice) at Kingston University, the Kingston Soundpainting Ensemble is represented by a varied set of performers using woodwind, brass, voice and percussion, spanning popular, classical and world styles. This performance consists of: Philip Warda (electronic instruments,...

  8. Assessing uncertainties in flood forecasts for decision making: prototype of an operational flood management system integrating ensemble predictions

    Directory of Open Access Journals (Sweden)

    J. Dietrich

    2009-08-01

    Full Text Available Ensemble forecasts aim at framing the uncertainties of the potential future development of the hydro-meteorological situation. A probabilistic evaluation can be used to communicate forecast uncertainty to decision makers. Here an operational system for ensemble-based flood forecasting is presented, which combines forecasts from the European COSMO-LEPS, SRNWP-PEPS and COSMO-DE prediction systems. A multi-model lagged-average super-ensemble is generated by recombining members from different runs of these meteorological forecast systems. A subset of the super-ensemble is selected based on a priori model weights, which are obtained from ensemble calibration. Flood forecasts are simulated by the conceptual rainfall-runoff model ArcEGMO. Parameter uncertainty of the model is represented by a parameter ensemble, which is generated a priori from a comprehensive uncertainty analysis during model calibration. The use of a computationally efficient hydrological model within a flood management system allows us to compute the hydro-meteorological model chain for all members of the sub-ensemble. The model chain is not re-computed before new ensemble forecasts are available, but the probabilistic assessment of the output is updated when new information from deterministic short-range forecasts or from assimilation of measured data becomes available. For hydraulic modelling, with the desired result of a probabilistic inundation map with high spatial resolution, a replacement model can help to overcome computational limitations. A prototype of the developed framework has been applied to a case study in the Mulde river basin. However, these techniques, in particular the probabilistic assessment and the derivation of decision rules, are still in their infancy. Further research is necessary and promising.
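The super-ensemble construction described above, pooling members from several forecast runs and then selecting a weighted subset, can be sketched as follows. The run names come from the abstract, but the member values, weights, and the simple weight cut-off are invented for illustration (the paper derives its weights from ensemble calibration).

```python
# hypothetical forecast members (e.g. areal precipitation, mm) per run
runs = {
    "COSMO-LEPS_00UTC": [12.1, 13.4, 11.8],
    "SRNWP-PEPS_00UTC": [12.9, 13.1],
    "COSMO-DE_06UTC":   [11.5, 12.2, 12.8],
}
# hypothetical a priori model weights from calibration
a_priori_weight = {"COSMO-LEPS_00UTC": 0.5, "SRNWP-PEPS_00UTC": 0.2, "COSMO-DE_06UTC": 0.3}

# pool all members into a super-ensemble, tagging each with its run's weight
super_ensemble = [(a_priori_weight[run], member)
                  for run, members in runs.items() for member in members]
# select the sub-ensemble: keep members whose run weight passes a cut-off
sub_ensemble = [member for weight, member in super_ensemble if weight >= 0.3]
print(len(super_ensemble), len(sub_ensemble))  # → 8 6
```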

  9. Optimizing matching and analysis combinations for estimating causal effects

    Science.gov (United States)

    Colson, K. Ellicott; Rudolph, Kara E.; Zimmerman, Scott C.; Goin, Dana E.; Stuart, Elizabeth A.; Laan, Mark Van Der; Ahern, Jennifer

    2016-03-01

    Matching methods are common in studies across many disciplines. However, there is limited evidence on how to optimally combine matching with subsequent analysis approaches to minimize bias and maximize efficiency for the quantity of interest. We conducted simulations to compare the performance of a wide variety of matching methods and analysis approaches in terms of bias, variance, and mean squared error (MSE). We then compared these approaches in an applied example of an employment training program. The results indicate that combining full matching with double robust analysis performed best in both the simulations and the applied example, particularly when combined with machine learning estimation methods. To reduce bias, current guidelines advise researchers to select the technique with the best post-matching covariate balance, but this work finds that such an approach does not always minimize MSE. These findings have important implications for future research utilizing matching. To minimize MSE, investigators should consider additional diagnostics, and use simulations tailored to the study of interest to identify the optimal matching and analysis combination.
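As a toy illustration of the matching step alone (the paper's best-performing combination was full matching with double robust analysis, which is considerably richer than this), here is 1:1 nearest-neighbour matching on a single score such as a propensity score. All scores are invented.

```python
def nearest_neighbor_match(treated, controls):
    """Match each treated unit to its closest control (with replacement)."""
    return {t: min(controls, key=lambda c: abs(c - t)) for t in treated}

# hypothetical propensity scores
treated_scores = [0.30, 0.55, 0.70]
control_scores = [0.25, 0.50, 0.65, 0.90]
print(nearest_neighbor_match(treated_scores, control_scores))
# → {0.3: 0.25, 0.55: 0.5, 0.7: 0.65}
```

The subsequent analysis stage would then estimate the treatment effect on the matched sample; the paper's point is precisely that this second choice matters as much as the matching itself.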

  10. Monitoring of Orientation in Molecular Ensembles by Polarization Sensitive Nonlinear Microscopy

    OpenAIRE

    Floc'h, Veronique Le; Brasselet, Sophie; Roch, Jean-Francois; Zyss, Joseph

    2003-01-01

    We present high resolution two-photon excitation microscopy studies combining two-photon fluorescence (TPF) and second harmonic generation (SHG) in order to probe orientational distributions of molecular ensembles at room temperature. A detailed polarization analysis of TPF and SHG signals is used in order to unravel the parameters of the molecular orientational statistical distribution, using a technique which can be extended and generalized to a broad variety of molecular arrangements. A po...

  11. Predictability of Regional Climate: A Bayesian Approach to Analysing a WRF Model Ensemble

    Science.gov (United States)

    Bruyere, C. L.; Mesquita, M. D. S.; Paimazumder, D.

    2013-12-01

    This study investigates aspects of climate predictability with a focus on climatic variables and different characteristics of extremes over nine North American climatic regions and two selected Atlantic sectors. An ensemble of state-of-the-art Weather Research and Forecasting Model (WRF) simulations is used for the analysis. The ensemble is comprised of a combination of various physics schemes, initial conditions, domain sizes, boundary conditions and breeding techniques. The main objectives of this research are: 1) to increase our understanding of the ability of WRF to capture regional climate information, both at the level of individual ensemble members and of the collective ensemble; 2) to investigate the role of different members and their synergy in reproducing regional climate; and 3) to estimate the associated uncertainty. In this study, we propose a Bayesian framework to study the predictability of extremes and associated uncertainties in order to provide a wealth of knowledge about WRF reliability and provide further clarity and understanding of the sensitivities and optimal combinations. The choice of the Bayesian model, as opposed to standard methods, is made because: a) this method has a mean squared error that is smaller than that of standard statistics, which makes it a more robust method; b) it allows for the use of small sample sizes, which are typical in high-resolution modeling; and c) it provides a probabilistic view of uncertainty, which is useful when making decisions concerning ensemble members.

  12. Assessment of the EDA (Ensemble of data assimilation) technique as a tool for estimating the uncertainty of a prediction and the impact of observations

    Science.gov (United States)

    Megner, L.; Körnich, H.; Isaksen, L.; Tan, D.; Horanyi, A.

    2012-12-01

    A prediction increases significantly in value with knowledge of how certain it is, that is, of the size of its error. In weather forecasting it is often difficult to determine this error, even after the time of validity of the prediction, since the precise true state of the atmosphere remains unknown. For Ensemble Kalman filter methods, the forecast spread of the ensemble can be used to estimate the uncertainty. However, most operational weather prediction systems today use the technique of variational data assimilation, which lacks a straightforward way to estimate the uncertainty. Lately, the variational data assimilation and ensemble prediction techniques have been combined in the so-called EDA (ensemble of data assimilations) technique, to improve the prediction that the variational analysis can provide and at the same time give an estimate of its uncertainty. The EDA technique consists of an ensemble of standard 4D-Var data assimilations in which the ensemble members have been randomly perturbed. The uncertainty can then be determined from the size of the ensemble spread, provided that there is a linear relationship between the magnitude of the perturbation and the resulting EDA spread. We show that such a linear relationship indeed exists and that the EDA technique can be scaled to provide a practical alternative to the traditional observing system experiment (OSE) technique, both for estimating the uncertainty of a prediction and as a tool for assessing the impact of observations.
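The linearity requirement above, that ensemble spread scale with perturbation magnitude, can be checked in miniature with synthetic numbers (this is a toy statistical check, not an actual assimilation system): doubling the perturbation scale should double the resulting spread.

```python
import random
import statistics

def eda_spread(perturbation_scale, n_members=500, seed=1):
    """Spread (sample std dev) of a synthetic randomly perturbed ensemble."""
    rng = random.Random(seed)
    members = [rng.gauss(0.0, perturbation_scale) for _ in range(n_members)]
    return statistics.stdev(members)

s1 = eda_spread(1.0)
s2 = eda_spread(2.0)
print(round(s2 / s1, 2))  # → 2.0: spread scales linearly with perturbation size
```

With a fixed seed the two ensembles use identical random draws, so the proportionality is exact here; the paper's contribution is showing the relationship holds (approximately) in a real EDA, where it is not guaranteed.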

  13. Temperature and precipitation effects on wheat yield across a European transect: a crop model ensemble analysis using impact response surfaces

    Czech Academy of Sciences Publication Activity Database

    Pirttioja, N. K.; Carter, T. R.; Fronzek, S.; Bindi, M.; Hoffmann, H. D.; Palosuo, T.; Ruiz-Ramos, M.; Tao, F.; Trnka, Miroslav; Acutis, M.; Asseng, S.; Baranowski, P.; Basso, B.; Bodin, P.; Buis, S.; Cammarano, D.; Deligios, P.; Destain, M. F.; Dumont, B.; Ewert, F.; Ferrise, R.; Francois, L.; Gaiser, T.; Hlavinka, Petr; Jacquemin, I.; Kersebaum, K. C.; Kollas, C.; Krzyszczak, J.; Lorite, I. J.; Minet, J.; Minquez, M. I.; Montesino, M.; Moriondo, M.; Müller, C.; Nendel, C.; Öztürk, I.; Perego, A.; Rodriguez, A.; Ruane, A. C.; Ruget, F.; Sanna, M.; Semenov, M. A.; Slawinski, C.; Stratonovitch, P.; Supit, I.; Waha, K.; Wang, E.; Wu, L.; Zhao, Z.; Rötter, R. P.

    2015-01-01

    Roč. 65, č. 31 (2015), s. 87-105. ISSN 0936-577X R&D Projects: GA MZe QJ1310123; GA MŠk(CZ) LD13030 Grant ostatní: German Federal Ministries of Education and Research , and Food and Agriculture(DE) 2812ERA115 Institutional support: RVO:67179843 Keywords : climate * crop model * impact response surface * IRS * sensitivity analysis * wheat * yield Subject RIV: EH - Ecology, Behaviour Impact factor: 2.496, year: 2014

  14. Analysis and projections of climate change impacts on flood risks in the Dniester river basin based on the ENSEMBLES RCM data

    Science.gov (United States)

    Krakovska, S.; Balabukh, V.; Palamarchuk, L.; Djukel, G.; Gnatiuk, N.

    2012-04-01

intensive rises of surface air temperature and of the average temperature of the troposphere (the thickness of the 1000–500 hPa layer) were found in the investigated region, which, together with an increase in the moisture content of the atmosphere, led to a rise of the free convection level, and convectively unstable layers of the atmosphere reached almost to 100 hPa. The latter resulted in an essential increase (almost twofold) of Convective Available Potential Energy (CAPE) and, accordingly, of updraft speeds. An ensemble of seven runs of Regional Climate Models (RCM) driven by four Atmosphere-Ocean General Circulation Models (AOGCM) from the ENSEMBLES database was applied in order to obtain projected values of air temperature and precipitation changes for the 2021-2050 period within the Dniester basin on a monthly basis. To make calculations more accurate, the Dniester basin was subdivided into 3 regions, each with 2 subregions, according to river geomorphology and topography. Verification of the RCMs on the control 1971-2000 period against E-Obs and station data allowed us to obtain optimum ensembles of RCMs for every subregion and climate characteristic. Note that just two regional climate models, REMO and RCA, both driven by ECHAM5, provided the best results both for all delineated regions and for the entire Dniester basin. Projections for the 2021-2050 period were calculated from the same optimum ensembles of RCMs as for the control one. A more or less uniform air temperature rise of 0.7-1.7 °C is expected in all subregions and months. But projections for precipitation change are more dispersed: within a few per cent for annual sums, but almost 20% less for the middle and lower Dniester in August and October (drought risk) and over 15% more for the high flow of the river in September and December (flood risk). Indices of extremes recommended by ECA&D were calculated from daily data of the REMO and RCA A1B runs for the control and projected periods.
The analysis of precipitation extremes (SDII, RX1day, RX5day, etc.) has

  15. Ensembles and their modules as objects of cartosemiotic inquiry

    Directory of Open Access Journals (Sweden)

    Hansgeorg Schlichtmann

    2010-01-01

    Full Text Available The structured set of signs in a map face -- here called the map-face aggregate or MFA -- and the associated marginal notes make up an ensemble of modules or components (a modular ensemble). Such ensembles are recognized where groups of entries are intuitively viewed as complex units, which includes the case that entries are consulted jointly and thus are involved in the same process of sign reception. Modular ensembles are amenable to semiotic study, just as written or pictorial stories are. Four kinds (one of them mentioned above) are discussed in detail, two involving single MFAs, the other two being assemblages of maps, such as atlases. In terms of their internal structure, two types are recognized: the combinate (or grouping), in which modules are directly linked by combinatorial relations (example above), and the cumulate (or collection of documents), in which modules are indirectly related through some conceptual commonality (example: a series of geological maps). The discussion then turns to basic points concerning modular ensembles (identification of a module, internal organization of an ensemble, and characteristics which establish an ensemble as a unit), and further to a few general semiotic concepts as they relate to the present research. Since this paper originated as a reaction to several of A. Wolodtschenko's recent publications, it concludes with comments on some of his arguments which pertain to modular ensembles.

  16. Ensemble forecasting of major solar flares: First results

    Science.gov (United States)

    Guerra, J. A.; Pulkkinen, A.; Uritsky, V. M.

    2015-10-01

    We present the results from the first ensemble prediction model for major solar flares (M and X classes). The primary aim of this investigation is to explore the construction of an ensemble for an initial prototyping of this new concept. Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm that applies a decision threshold Pth to the combined probabilities and maximizes the Heidke Skill Score (HSS). Using the data for 13 recent solar active regions between years 2012 and 2014, we found that linear combination methods can improve the overall probabilistic prediction and improve the categorical prediction for certain values of the decision threshold. Combination weights vary with the applied threshold, and none of the tested individual forecasting models seems to provide more accurate predictions than the others for all values of Pth. According to the maximum values of HSS, performance-based weights calculated by averaging over the sample performed similarly to an equally weighted model. The values of Pth for which the ensemble forecast performs best are 25% for M-class flares and 15% for X-class flares. When the human-adjusted probabilities from NOAA are excluded from the ensemble, the ensemble performance in terms of the Heidke score is reduced.
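The scheme described above can be sketched end to end: blend the four methods' probabilities with linear weights, dichotomize at a threshold Pth, and score with HSS over a grid of thresholds. All probabilities and event outcomes below are toy values, not the study's data, and the weights are fixed to the equally weighted case for brevity:

```python
def combine_probabilities(member_probs, weights):
    """Linearly combine per-method flare probabilities (weights sum to 1)."""
    return [sum(w * p for w, p in zip(weights, probs)) for probs in member_probs]

def heidke_skill_score(probs, events, p_th):
    """Dichotomize combined probabilities at p_th and score against outcomes."""
    a = b = c = d = 0  # hits, false alarms, misses, correct negatives
    for p, event in zip(probs, events):
        forecast = p >= p_th
        if forecast and event: a += 1
        elif forecast and not event: b += 1
        elif not forecast and event: c += 1
        else: d += 1
    denom = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / denom if denom else 0.0

# toy data: rows are events, columns are the four forecast methods
member_probs = [(0.6, 0.7, 0.5, 0.8), (0.2, 0.1, 0.3, 0.2), (0.5, 0.6, 0.4, 0.7)]
events = [True, False, True]
weights = (0.25, 0.25, 0.25, 0.25)  # equally weighted ensemble
combined = combine_probabilities(member_probs, weights)
best_th = max((t / 100 for t in range(5, 95, 5)),
              key=lambda t: heidke_skill_score(combined, events, t))
```

In the paper the weights themselves are also searched over with a Monte Carlo-type algorithm; here only the threshold grid is scanned.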

  17. Validation of multi-input ensemble simulation with a spatially distributed hydrological model in Rijnland, the Netherlands

    Science.gov (United States)

    Hartanto, Isnaeni; van Andel, Schalk Jan; Alexandridis, Thomas; Jonoski, Andreja; Solomatine, Dimitri

    2014-05-01

    There are many hydrological data sources that are available from in-situ measurements, remote sensing, and atmospheric modelling, and that can be used to improve water management and understanding of hydrological processes. Each source comes with its own strengths and weaknesses, whether it is in accuracy, availability, measurement frequency, coverage or spatial resolution. By using multiple combinations of available data sources as input to a hydrological model, an ensemble prediction can be generated. Multi-model ensemble methods originate from the use of different numerical weather prediction (and climate) models. Most research on multi-model hydro-meteorological ensemble prediction has been done on the basis of different hydrological models. With the increase of reliable hydro-meteorological data sources, a re-visit of the multi-model approach is warranted, focussing on multiple combinations of inputs and models. In this paper, multiple data sources are fed into a hydrological model, resulting in an ensemble of model outputs. The data sources used to generate ensemble members include 2 precipitation sources from in-situ stations and ground-based radar, 3 land use maps from local origin and from satellite estimates, and 2 evapotranspiration estimates from in-situ measured reference evaporation and from satellite estimates through surface energy balance analysis. The land use data were generated by spectral classification of SPOT satellite images, and the remotely sensed evapotranspiration by solving the surface energy equation using Terra MODIS satellite images. The spatially distributed hydrological modelling system SIMGRO is used. The model simulates hydrological processes of the Rijnland area in the Netherlands. The ensemble output is analysed by comparing model outputs with observed discharge. The results will be presented and serve to discuss the advantages and disadvantages of applying the multi-input ensemble approach for hydrological prediction.
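The ensemble construction itself is combinatorial: one member per combination of input sources. A sketch of such an enumeration, with source names that are hypothetical labels loosely mirroring the paper's 2 x 3 x 2 setup:

```python
from itertools import product

# Hypothetical input-source catalogue mirroring the paper's setup:
# 2 precipitation sources x 3 land use maps x 2 evapotranspiration estimates.
sources = {
    "precipitation": ["gauges", "radar"],
    "land_use": ["local_map", "spot_scene_a", "spot_scene_b"],
    "evapotranspiration": ["reference_insitu", "modis_seb"],
}

def ensemble_members(sources):
    """Every combination of one option per input type defines one member."""
    keys = sorted(sources)
    for combo in product(*(sources[k] for k in keys)):
        yield dict(zip(keys, combo))

members = list(ensemble_members(sources))  # 2 * 3 * 2 = 12 model runs
```

Each member dictionary would then configure one SIMGRO run, and the spread of the 12 outputs forms the multi-input ensemble.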

  18. Total probabilities of ensemble runoff forecasts

    Science.gov (United States)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters which differ in space and time, but still give a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill of lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but assuring that they have some spatial correlation, by adding a spatial penalty in the calibration process. This can in some cases have a slight negative
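The EMOS idea referenced above fits a Gaussian predictive distribution whose mean and variance are affine functions of the raw ensemble's mean and variance. A minimal sketch with made-up coefficients; in practice a, b, c, d are estimated per location, e.g. by minimizing the CRPS over a training period:

```python
import statistics
from math import sqrt

def emos_predictive(ensemble, a, b, c, d):
    """EMOS predictive Gaussian: N(a + b * ens_mean, c + d * ens_variance)."""
    m = statistics.fmean(ensemble)
    v = statistics.pvariance(ensemble)
    return a + b * m, sqrt(c + d * v)

# toy raw ensemble of runoff forecasts (m3/s), biased and under-dispersive
raw = [102.0, 104.0, 103.0, 105.0, 101.0]
mu, sigma = emos_predictive(raw, a=-10.0, b=1.0, c=4.0, d=2.0)
```

The bias correction (a, b) shifts the mean, while c inflates the spread even when the raw ensemble is nearly flat, which is exactly the dispersion-error fix the abstract calls for.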

  19. Selecting supplier combination based on fuzzy multicriteria analysis

    Science.gov (United States)

    Han, Zhi-Qiu; Luo, Xin-Xing; Chen, Xiao-Hong; Yang, Wu-E.

    2015-07-01

    Existing multicriteria analysis (MCA) methods are probably ineffective in selecting a supplier combination. Thus, an MCA-based fuzzy 0-1 programming method is introduced. The programming relates to a simple MCA matrix that is used to select a single supplier. By solving the programming, the most feasible combination of suppliers is selected. Importantly, this result differs from selecting suppliers one by one according to a single-supplier selection order, which is how existing MCA methods rank individual suppliers. An example highlights this difference and illustrates the proposed method.
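The paper's point, that the best combination need not be the top-k of a one-by-one ranking, can be reproduced with a toy exhaustive 0-1 search. The scores, weights and the regional-diversification penalty below are all invented for illustration (the actual method uses fuzzy 0-1 programming, not brute force):

```python
from itertools import combinations

scores = {          # hypothetical criterion scores (cost, quality, delivery)
    "S1": (0.9, 0.4, 0.6),
    "S2": (0.5, 0.8, 0.7),
    "S3": (0.6, 0.7, 0.3),
    "S4": (0.3, 0.9, 0.8),
}
weights = (0.5, 0.3, 0.2)
regions = {"S1": "north", "S2": "north", "S3": "south", "S4": "south"}

def single_score(name):
    """Weighted MCA score of one supplier, as in single-selection ranking."""
    return sum(w * s for w, s in zip(weights, scores[name]))

def combo_value(combo):
    """Combination objective: additive scores plus an interaction term
    (a hypothetical penalty when all suppliers sit in the same region)."""
    value = sum(single_score(name) for name in combo)
    if len({regions[name] for name in combo}) == 1:
        value -= 0.5
    return value

def best_combination(k):
    """Exhaustive 0-1 selection: the best k-supplier combination."""
    return max(combinations(sorted(scores), k), key=combo_value)

ranking = sorted(scores, key=single_score, reverse=True)
```

Here one-by-one ranking picks S1 and S2, but the combination search picks S1 and S4: the interaction term makes combination selection genuinely different from ranking.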

  20. Multilevel ensemble Kalman filtering

    KAUST Repository

    Hoel, Hakon

    2016-06-14

    This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
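The telescoping-sum idea behind the multilevel Monte Carlo step can be sketched on a toy scalar SDE (geometric Brownian motion, not the paper's filtering setting): each level refines the Euler-Maruyama time grid, and fine/coarse pairs share Brownian increments so that level differences have small variance.

```python
import random

def coupled_euler(rng, n_fine, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """One coupled (fine, coarse) Euler-Maruyama pair for dX = mu*X dt + sigma*X dW.
    The coarse path reuses the fine path's Brownian increments, summed in pairs."""
    dt = T / n_fine
    xf = xc = x0
    carry = 0.0
    for i in range(n_fine):
        dw = rng.gauss(0.0, dt ** 0.5)
        xf += mu * xf * dt + sigma * xf * dw
        carry += dw
        if i % 2 == 1:           # every two fine steps form one coarse step
            xc += mu * xc * (2.0 * dt) + sigma * xc * carry
            carry = 0.0
    return xf, xc

def mlmc_mean(samples_per_level, rng):
    """Telescoping estimator E[P_0] + sum_l E[P_l - P_{l-1}] of E[X_T]."""
    estimate = 0.0
    for level, n_samples in enumerate(samples_per_level):
        n_fine = 2 ** (level + 1)
        acc = 0.0
        for _ in range(n_samples):
            xf, xc = coupled_euler(rng, n_fine)
            acc += xf if level == 0 else xf - xc
        estimate += acc / n_samples
    return estimate

rng = random.Random(42)
estimate = mlmc_mean([4000, 1000, 250], rng)  # true E[X_1] = exp(0.05) ~ 1.051
```

Many cheap samples on coarse grids and few expensive ones on fine grids is the cost-versus-accuracy trade the paper proves advantageous inside the EnKF.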

  1. Image Combination Analysis in SPECAN Algorithm of Spaceborne SAR

    Institute of Scientific and Technical Information of China (English)

    臧铁飞; 李方慧; 龙腾

    2003-01-01

    An analysis of image combination in the SPECAN algorithm is presented in detail in the time-frequency domain, and a new image combination method is proposed. For four-look processing, the data of one sub-aperture in every three sub-apertures is processed in this combination method. Continual sub-aperture processing in the SPECAN algorithm is realized and the processing efficiency can be dramatically increased. A new parameter is also put forward to measure the processing efficiency of SAR image processing. Finally, raw RADARSAT data are used to test the method, and the result proves that this method is feasible for the SPECAN algorithm of spaceborne SAR and can improve processing efficiency. The SPECAN algorithm with this method can be used in quick-look imaging.

  2. Meta analysis a guide to calibrating and combining statistical evidence

    CERN Document Server

    Kulinskaya, Elena; Staudte, Robert G

    2008-01-01

    Meta Analysis: A Guide to Calibrating and Combining Statistical Evidence acts as a source of basic methods for scientists wanting to combine evidence from different experiments. The authors aim to promote a deeper understanding of the notion of statistical evidence. The book comprises two parts - The Handbook, and The Theory. The Handbook is a guide for combining and interpreting experimental evidence to solve standard statistical problems. This section allows someone with a rudimentary knowledge of general statistics to apply the methods. The Theory provides the motivation, theory and results of simulation experiments to justify the methodology. This is a coherent introduction to the statistical concepts required to understand the authors' thesis that evidence in a test statistic can often be calibrated when transformed to the right scale.
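The calibrate-then-combine idea, mapping each experiment's evidence onto a common normal scale before pooling, is classically illustrated by Stouffer's weighted-z method. This is a generic example in that spirit, not code from the book:

```python
from statistics import NormalDist

def stouffer_combined_p(p_values, weights=None):
    """Combine one-sided p-values on the z scale:
    Z = sum(w_i * z_i) / sqrt(sum(w_i^2)), then map back to a p-value."""
    nd = NormalDist()
    if weights is None:
        weights = [1.0] * len(p_values)
    zs = [nd.inv_cdf(1.0 - p) for p in p_values]  # calibrate to the z scale
    z = sum(w * zi for w, zi in zip(weights, zs)) / sum(w * w for w in weights) ** 0.5
    return 1.0 - nd.cdf(z)                        # combined evidence

combined = stouffer_combined_p([0.04, 0.10, 0.07])  # three experiments
```

Three individually modest p-values combine to much stronger evidence, which is the core payoff of working on a calibrated common scale.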

  3. Conductor gestures influence evaluations of ensemble performance

    Directory of Open Access Journals (Sweden)

    Steven eMorrison

    2014-07-01

    Full Text Available Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance, articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and nonmajors (N = 285) viewed sixteen 30-second performances and evaluated the quality of the ensemble’s articulation, dynamics, technique and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  4. Nest-site selection analysis of hooded crane (Grus monacha) in Northeastern China based on a multivariate ensemble model.

    Science.gov (United States)

    Jiao, Shengwu; Guo, Yumin; Huettmann, Falk; Lei, Guangchun

    2014-07-01

    Avian nest-site selection is an important research and management subject. The hooded crane (Grus monacha) is a vulnerable (VU) species according to the IUCN Red List. Here, we present the first long-term Chinese legacy nest data for this species (1993-2010) with publicly available metadata. Further, we provide the first study that reports findings on multivariate nest habitat preference using such long-term field data for this species. Our work was carried out in Northeastern China, where we found and measured 24 nests and 81 randomly selected control plots and their environmental parameters in a vast landscape. We used machine learning (stochastic boosted regression trees) to quantify nest selection. Our analysis further included varclust (from the R Hmisc package) and TreeNet to address statistical correlations and two-way interactions. We found that from an initial list of 14 measured field variables, water area (+), water depth (+) and shrub coverage (-) were the main explanatory variables that contributed to hooded crane nest-site selection. Agricultural sites played a smaller role in the selection of these nests. Our results are important for the conservation management of cranes all over East Asia and constitute a defensible and quantitative basis for predictive models. PMID:25001914

  5. Exergy Analysis of Combined Cycle Power Plant: NTPC Dadri, India

    Directory of Open Access Journals (Sweden)

    Arvind Kumar Tiwari

    2012-12-01

    Full Text Available The aim of the present paper is an exergy analysis of the combined Brayton/Rankine power cycle of NTPC Dadri, India. A theoretical exergy analysis is carried out for the different components of the Dadri combined cycle power plant, which consists of a gas turbine unit, a heat recovery steam generator without extra fuel consumption, and a steam turbine unit. The results pinpoint the gas turbine combustion chamber as the site of the greatest exergy losses, which reached 35% of the total, while the exergy losses in the other plant components are between 7% and 21% of the total at a turbine inlet temperature of 1400 °C and a pressure ratio of 10. This paper also considers the effect of the pressure ratio, turbine inlet temperature, and pressure drop in the combustion chamber and heat recovery steam generator on the exergy losses in the plant; clear effects on the exergy losses appear when the pressure ratio and turbine inlet temperature are changed.

  6. Ensemble Data Assimilation: Algorithms and Software

    OpenAIRE

    Nerger, Lars

    2014-01-01

    Ensemble data assimilation is nowadays applied to various problems to estimate a model state and model parameters by combining the model predictions with observational data. At the Alfred Wegener Institute, the assimilation focuses on ocean-sea ice models and coupled ocean-biogeochemical models. The high dimension of realistic models requires particularly efficient algorithms that are also usable on supercomputers. For the application of such filters, the Parallel Data Assimilation Framework ...

  7. Attenuation Analysis and Acoustic Pressure Levels for Combined Absorptive Mufflers

    Directory of Open Access Journals (Sweden)

    Ovidiu Vasile

    2011-09-01

    Full Text Available The paper describes the pressure-wave propagation in a muffler for an internal combustion engine for a geometry of two combined mufflers. The approach is generally applicable to analyzing the damping of propagation of harmonic pressure waves. The purpose of the paper is to show a finite element analysis of both inductive and resistive damping in pressure acoustics. The main output is the attenuation and acoustic pressure levels for the frequency range 50 Hz–3000 Hz.

  8. Performance analysis and modeling of energy from waste combined cycles

    International Nuclear Information System (INIS)

    Municipal solid waste (MSW) is produced in a substantial amount with minimal fluctuations throughout the year. The analysis of carbon neutrality of MSW on a life cycle basis shows that MSW is about 67% carbon-neutral, suggesting that only 33% of the CO2 emissions from incinerating MSW are of fossil origin. The waste constitutes a 'renewable biofuel' energy resource and energy from waste (EfW) can result in a net reduction in CO2 emissions. In this paper, we explore an approach to extracting energy from MSW efficiently - EfW/gas turbine hybrid combined cycles. This approach innovates by delivering better performance with respect to energy efficiency and CO2 mitigation. In the combined cycles, the topping cycle consists of a gas turbine, while the bottoming cycle is a steam cycle where the low quality fuel - waste is utilized. This paper assesses the viability of the hybrid combined cycles and analyses their thermodynamic advantages with the help of computer simulations. It was shown that the combined cycles could offer significantly higher energy conversion efficiency and a practical solution to handling MSW. Also, the potential for a net reduction in CO2 emissions resulting from the hybrid combined cycles was evaluated.
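The efficiency advantage of a topping/bottoming arrangement follows from a simple series-combination identity: the bottoming (steam/EfW) cycle converts part of the heat the gas turbine rejects. A back-of-the-envelope sketch with illustrative efficiencies, not the paper's simulation values:

```python
def combined_cycle_efficiency(eta_topping, eta_bottoming, recovery=1.0):
    """Series combination: the bottoming cycle converts a fraction `recovery`
    of the heat rejected by the topping cycle."""
    return eta_topping + (1.0 - eta_topping) * recovery * eta_bottoming

# illustrative: 35%-efficient gas turbine, 30%-efficient waste-fired steam
# bottoming cycle, 80% of the rejected heat actually recovered
eta = combined_cycle_efficiency(0.35, 0.30, recovery=0.8)
```

With recovery = 1 this reduces to the textbook bound eta_t + (1 - eta_t) * eta_b; the toy numbers above already beat either cycle alone, which is the thermodynamic case the paper makes for EfW/gas turbine hybrids.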

  9. Combined multi-criteria and cost-benefit analysis

    DEFF Research Database (Denmark)

    Moshøj, Claus Rehfeld

    1996-01-01

    The paper is an introduction to both theory and application of combined Cost-Benefit and Multi-Criteria Analysis. The first section is devoted to basic utility theory and its practical application in Cost-Benefit Analysis. Based on some of the problems encountered, arguments in favour of the application of utility-based Multi-Criteria Analysis methods as an extension and refinement of the traditional Cost-Benefit Analysis are provided. The theory presented in this paper is closely related to the methods used in the WARP software (Leleur & Jensen, 1989). The presentation is however wider in scope. The second section introduces the stated preference methodology used in WARP to create weight profiles for project pool sensitivity analysis. This section includes a simple example. The third section discusses how decision makers can get a priori aid to make their pair-wise comparisons based on project pool
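In its simplest additive-utility form, a combined CBA/MCA evaluation reduces to a weighted sum over scaled criteria, with the weight profile playing the role of WARP's stated-preference weights. The criteria, weights and projects below are invented for illustration:

```python
def combined_score(project, weights):
    """Weighted additive utility over a monetized (CBA) criterion and
    non-monetized (MCA) criteria, each pre-scaled to [0, 1]."""
    return sum(weights[c] * v for c, v in project.items())

# hypothetical criteria: scaled benefit-cost ratio, environment, accessibility
weights = {"bcr": 0.5, "environment": 0.3, "accessibility": 0.2}
projects = {
    "bypass": {"bcr": 0.8, "environment": 0.3, "accessibility": 0.6},
    "light_rail": {"bcr": 0.5, "environment": 0.9, "accessibility": 0.7},
}
ranking = sorted(projects, key=lambda p: combined_score(projects[p], weights),
                 reverse=True)
```

Varying the weight profile and re-ranking is exactly the project-pool sensitivity analysis the second section describes.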

  10. Estimating combining ability in popcorn lines using multivariate analysis

    Directory of Open Access Journals (Sweden)

    Leandro Simôes Azeredo Gonçalves

    2014-03-01

    Full Text Available Aiming to estimate the combining ability in tropical and temperate popcorn (Zea mays L. var. everta Sturt.) lines using multivariate analysis, ten popcorn lines were crossed in a complete diallel without reciprocals and the lines and hybrids were tested in two randomized complete block experiments with three replicates. Data were subjected to univariate and multivariate ANOVA, principal component analysis, and univariate and multivariate diallel analysis. For multivariate diallel analysis, variables were divided into group I (grain yield, mean weight of ears with grains, popping expansion, mean number of ears per plant, and final stand) and group II (days to silking, plant height, first ear height, and lodged or broken plants). The P2 line had positive values for agronomic traits related to yield and popping expansion for group I, whereas the P4 line had fewer days to silking and lodged or broken plants for group II. Regarding the hybrids, P2 x P7 exhibited favorable values for most of the analyzed variables and had potential for recommendation. The multivariate diallel analysis can be useful in popcorn genetic improvement programs, particularly when directed toward the best cross combinations, where the objective is to simultaneously obtain genetic gains in multiple traits.

  11. Meta-analysis for pathway enrichment analysis when combining multiple genomic studies

    OpenAIRE

    Shen, Kui; Tseng, George C.

    2010-01-01

    Motivation: Many pathway analysis (or gene set enrichment analysis) methods have been developed to identify enriched pathways under different biological states within a genomic study. As more and more microarray datasets accumulate, meta-analysis methods have also been developed to integrate information among multiple studies. Currently, most meta-analysis methods for combining genomic studies focus on biomarker detection and meta-analysis for pathway analysis has not been systematically purs...

  12. Combined cardiotocographic and ST event analysis: A review.

    Science.gov (United States)

    Amer-Wahlin, Isis; Kwee, Anneke

    2016-01-01

    ST-analysis of the fetal electrocardiogram (ECG) (STAN(®)) combined with cardiotocography (CTG) for intrapartum fetal monitoring has been developed following many years of animal research. Changes in the ST-segment of the fetal ECG correlated with fetal hypoxia occurring during labor. In 1993 the first randomized controlled trial (RCT), comparing CTG with CTG + ST-analysis, was published. STAN(®) was introduced for daily practice in 2000. To date, six RCTs have been performed, out of which five have been published. Furthermore, there are six published meta-analyses. The meta-analyses showed that CTG + ST-analysis reduced the risks of vaginal operative delivery by about 10% and fetal blood sampling by 40%. There are conflicting results regarding the effect on metabolic acidosis, largely because of controversies about which RCTs should be included in a meta-analysis, and because of differences in methodology, execution and quality of the meta-analyses. Several cohort studies have been published, some showing significant decrease of metabolic acidosis after the introduction of ST-analysis. In this review, we discuss not only the scientific evidence from the RCTs and meta-analyses, but also the limitations of these studies. In conclusion, ST-analysis is effective in reducing operative vaginal deliveries and fetal blood sampling but the effect on neonatal metabolic acidosis is still under debate. Further research is needed to determine the place of ST-analysis in the labor ward for daily practice. PMID:26206514

  13. Scalable Ensemble Learning and Computationally Efficient Variance Estimation

    OpenAIRE

    LeDell, Erin

    2015-01-01

    Ensemble machine learning methods are often used when the true prediction function is not easily approximated by a single algorithm. The Super Learner algorithm is an ensemble method that has been theoretically proven to represent an asymptotically optimal system for learning. The Super Learner, also known as stacking, combines multiple, typically diverse, base learning algorithms into a single, powerful prediction function through a secondary learning process called metalearning. Although...
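The metalearning step can be made concrete with a from-scratch sketch: two base learners, out-of-fold ("level-one") predictions, and a least-squares metalearner. This is a minimal illustration of stacking, not the Super Learner implementation itself (which uses a library of algorithms and typically constrains the weights to a simplex):

```python
def fit_mean(xs, ys):
    """Base learner 1: constant (mean) predictor."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Base learner 2: simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return lambda x: a + b * x

def cv_predictions(fit, xs, ys, k=5):
    """Out-of-fold ('level-one') predictions via k-fold cross-validation."""
    preds = [0.0] * len(xs)
    for fold in range(k):
        train = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i % k != fold]
        model = fit([x for x, _ in train], [y for _, y in train])
        for i, x in enumerate(xs):
            if i % k == fold:
                preds[i] = model(x)
    return preds

def stack(xs, ys, fits):
    """Metalearning: least-squares weights on the two CV prediction columns."""
    z = [cv_predictions(f, xs, ys) for f in fits]
    a11 = sum(v * v for v in z[0]); a22 = sum(v * v for v in z[1])
    a12 = sum(u * v for u, v in zip(z[0], z[1]))
    b1 = sum(u * y for u, y in zip(z[0], ys))
    b2 = sum(v * y for v, y in zip(z[1], ys))
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (b2 * a11 - b1 * a12) / det
    models = [f(xs, ys) for f in fits]  # refit base learners on all data
    return lambda x: w1 * models[0](x) + w2 * models[1](x)

xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]      # noiseless linear target
predict = stack(xs, ys, [fit_mean, fit_linear])
```

On this noiseless linear toy data the metalearner puts essentially all weight on the linear base learner, as it should.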

  14. Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning

    OpenAIRE

    Miyoshi, Seiji; Okada, Masato

    2006-01-01

    Conventional ensemble learning combines students in the space domain. In this paper, however, we combine students in the time domain and call it time-domain ensemble learning. We analyze, compare, and discuss the generalization performances regarding time-domain ensemble learning of both a linear model and a nonlinear model. Analyzing in the framework of online learning using a statistical mechanical method, we show the qualitatively different behaviors between the two models. In a linear mod...

  15. Low energy level spacing distribution in the atomic table ensemble

    International Nuclear Information System (INIS)

    We have analysed the nearest neighbour spacing distributions for the atomic table ensemble. The analysis carried out indicates that the random matrix theory arguments extend even to the ground state domain of atoms. (orig.)

  16. Representative Ensembles in Statistical Mechanics

    OpenAIRE

    V. I. YUKALOV

    2007-01-01

    The notion of representative statistical ensembles, correctly representing statistical systems, is strictly formulated. This notion allows for a proper description of statistical systems, avoiding inconsistencies in theory. As an illustration, a Bose-condensed system is considered. It is shown that a self-consistent treatment of the latter, using a representative ensemble, always yields a conserving and gapless theory.

  17. PSO-Ensemble Demo Application

    DEFF Research Database (Denmark)

    2004-01-01

    Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application use ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output is...

  18. Ensemble of Causal Trees

    International Nuclear Information System (INIS)

    We discuss the geometry of trees endowed with a causal structure using the conventional framework of equilibrium statistical mechanics. We show how this ensemble is related to popular growing network models. In particular we demonstrate that on a class of affine attachment kernels the two models are identical, but they can differ substantially for other choices of weights. We show that causal trees exhibit condensation even for asymptotically linear kernels. We derive general formulae describing the degree distribution, the ancestor-descendant correlation and the probability that a randomly chosen node lives at a given geodesic distance from the root. It is shown that the Hausdorff dimension dH of the causal networks is generically infinite. (author)

  19. Ensemble of Causal Trees

    Science.gov (United States)

    Bialas, Piotr

    2003-10-01

    We discuss the geometry of trees endowed with a causal structure using the conventional framework of equilibrium statistical mechanics. We show how this ensemble is related to popular growing network models. In particular we demonstrate that on a class of affine attachment kernels the two models are identical, but they can differ substantially for other choices of weights. We show that causal trees exhibit condensation even for asymptotically linear kernels. We derive general formulae describing the degree distribution, the ancestor-descendant correlation and the probability that a randomly chosen node lives at a given geodesic distance from the root. It is shown that the Hausdorff dimension dH of the causal networks is generically infinite.

  20. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in-situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while
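The variance decomposition described here, the first-order Sobol index (the share of output variance explained by one parameter alone), can be estimated with a pick-freeze Monte Carlo scheme. The three-parameter linear "model" below is a toy stand-in for the RRM, chosen so the exact answer (16/21 ≈ 0.76) is known:

```python
import random

def model(x1, x2, x3):
    """Toy stand-in for the routing model: output as a function of 3 parameters."""
    return 4.0 * x1 + 2.0 * x2 + x3

def first_order_sobol(n, rng):
    """Pick-freeze estimate of S1: freeze X1 across two runs, resample the rest.
    S1 = Cov(Y, Y') / Var(Y), where Y and Y' share only X1."""
    ya, yb = [], []
    for _ in range(n):
        x1 = rng.random()
        ya.append(model(x1, rng.random(), rng.random()))
        yb.append(model(x1, rng.random(), rng.random()))  # X2, X3 redrawn
    mean_a = sum(ya) / n
    mean_b = sum(yb) / n
    var = sum((y - mean_a) ** 2 for y in ya) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(ya, yb)) / n
    return cov / var

rng = random.Random(7)
s1 = first_order_sobol(20000, rng)  # analytic value: 16/21 ~ 0.762
```

Repeating the estimate over sliding time windows of model output gives exactly the time-dependent ordering of parameter contributions the study performs.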

  1. A Combined Metabolomic and Proteomic Analysis of Gestational Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Joanna Hajduk

    2015-12-01

    Full Text Available The aim of this pilot study was to apply a novel combined metabolomic and proteomic approach to the analysis of gestational diabetes mellitus. The investigation was performed with plasma samples derived from pregnant women with diagnosed gestational diabetes mellitus (n = 18) and a matched control group (n = 13). The mass spectrometry-based analyses made it possible to determine 42 free amino acids and low-molecular-weight peptide profiles. Different expressions of several peptides and altered amino acid profiles were observed in the analyzed groups. The combination of proteomic and metabolomic data yielded a model with a high discriminatory power, in which the amino acids ethanolamine, l-citrulline and l-asparagine, and peptide ions with m/z 1488.59, 4111.89 and 2913.15 had the highest contribution to the model. The sensitivity (94.44%) and specificity (84.62%), as well as the total group membership classification value (90.32%) calculated from the post hoc classification matrix of the joint model, were the highest when compared with a single analysis of either amino acid levels or peptide ion intensities. The obtained results indicated a high potential of integrating proteomic and metabolomic analyses regardless of the sample size. This promising approach, together with clinical evaluation of the subjects, can also be used in the study of other diseases.
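The reported percentages are mutually consistent with a 31-subject confusion matrix (17/18 cases and 11/13 controls correctly classified). A quick arithmetic check, with the split reconstructed from the percentages rather than taken from the paper:

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity and overall classification rate (1 = GDM)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

# 18 GDM subjects (17 classified correctly), 13 controls (11 correct)
y_true = [1] * 18 + [0] * 13
y_pred = [1] * 17 + [0] * 1 + [0] * 11 + [1] * 2
sensitivity, specificity, total_rate = classification_metrics(y_true, y_pred)
```

This reproduces 94.44%, 84.62% and 90.32% exactly, so the joint model misclassifies three of the 31 subjects.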

  2. The bivariate combined model for spatial data analysis.

    Science.gov (United States)

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Faes, Christel

    2016-08-15

    To describe the spatial distribution of diseases, a number of methods have been proposed to model relative risks within areas. Most models use Bayesian hierarchical methods, in which one models both spatially structured and unstructured extra-Poisson variance present in the data. For modelling a single disease, the conditional autoregressive (CAR) convolution model has been very popular. More recently, a combined model was proposed that 'combines' ideas from the CAR convolution model and the well-known Poisson-gamma model. The combined model was shown to be a good alternative to the CAR convolution model when there was a large amount of uncorrelated extra-variance in the data. Fewer solutions exist for modelling two diseases simultaneously or modelling a disease in two sub-populations simultaneously. Furthermore, existing models are typically based on the CAR convolution model. In this paper, a bivariate version of the combined model is proposed in which the unstructured heterogeneity term is split up into terms that are shared and terms that are specific to the disease or subpopulation, while spatial dependency is introduced via a univariate or multivariate Markov random field. The proposed method is illustrated by analysis of disease data in Georgia (USA) and Limburg (Belgium) and in a simulation study. We conclude that the bivariate combined model constitutes an interesting model when two diseases are possibly correlated. As the choice of the preferred model differs between data sets, we suggest to use the new and existing modelling approaches together and to choose the best model via goodness-of-fit statistics. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26928309

  3. Technical and financial analysis of combined cycle gas turbine

    Directory of Open Access Journals (Sweden)

    Khan Arshad Muhammad

    2013-01-01

    Full Text Available This paper presents technical and financial models which were developed in this study to predict the overall performance of a combined cycle gas turbine plant in line with the needs of independent power producers in the liberalized market of the power sector. Three similar-size combined cycle gas turbine power projects of up to 200 Megawatt of independent power producers in Pakistan were selected in order to develop and derive the basic assumptions for the inputs of the models, in view of the Government of Pakistan's prevailing two-component electricity purchasing tariff, that is, energy purchase price and capacity purchase price at the high-voltage grid station terminal of independent power producers. The levelized electricity purchasing tariff over the life of the plant on gaseous fuel at 60 percent plant load factor was 6.47 cents per kilowatt hour, with energy purchase and capacity purchase prices of 3.54 and 2.93 cents per kilowatt hour respectively. The outcomes of the technical models of the gas turbine, steam turbine and combined cycle gas turbine power were found to be in close agreement with the projects under consideration and provide an opportunity to evaluate technical and financial aspects of combined cycle power plants in a more simplified manner with relatively accurate results. At a 105 °C exit temperature of the heat recovery steam generator flue gases the net efficiency of the combined cycle gas turbine was 48.8 percent, whereas at a 125 °C exit temperature it was 48.0 percent. A sensitivity analysis of selected influential components of the electricity tariff was also carried out.

  4. Combining OLAP and data mining for analysis on trainee dataset

    OpenAIRE

    Borokshinova, Anastasia

    2015-01-01

    The aim of this thesis is to show the possibility of combining two data analysis techniques, OLAP and data mining, in a certain area. The principal method of achieving the aim is continuous comparison and checking of the results acquired with the two techniques. A practice dataset on credits provided to physical persons is used for the practical application. The data analysis is performed using the Power Pivot MS Excel add-in and the LISp-Miner system. For work with LISp-Miner the 4ft-Miner procedure ...

  5. Cost-benefit analysis for combined heat and power plant

    International Nuclear Information System (INIS)

    The paper presents a methodology and a practical application of Cost-Benefit Analysis for a Combined Heat and Power Plant (cogeneration facility). The methodology includes up-to-date and real data for a cogeneration plant in accordance with the trends in development of CHP technology. As a case study, a CHP plant that could be built in the Republic of Macedonia is analyzed. The main economic parameters for project evaluation, such as NPV and IRR, are calculated for a number of possible scenarios. The analysis presents the economic outputs that could be used as a basis for the decision to accept a CHP project for investment. (Author)
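
    The NPV and IRR metrics named above can be sketched in a few lines; this is a generic textbook illustration (bisection root-finding, invented cash flows), not the paper's evaluation model:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    """Internal rate of return: the rate at which NPV = 0.

    Bisection on the sign of NPV; assumes exactly one sign change in [lo, hi].
    """
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid   # root lies in the lower half
        else:
            lo = mid   # root lies in the upper half
    return (lo + hi) / 2

# Illustrative project: 1000 invested, two annual returns of 600.
flows = [-1000.0, 600.0, 600.0]
```

    A project is conventionally accepted when NPV at the required discount rate is positive, equivalently when the IRR exceeds that rate.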

  6. The Split-Apply-Combine Strategy for Data Analysis

    Directory of Open Access Journals (Sweden)

    Hadley Wickham

    2011-04-01

    Full Text Available Many data analysis problems involve the application of a split-apply-combine strategy, where you break up a big problem into manageable pieces, operate on each piece independently and then put all the pieces back together. This insight gives rise to a new R package that allows you to smoothly apply this strategy, without having to worry about the type of structure in which your data is stored. The paper includes two case studies showing how these insights make it easier to work with batting records for veteran baseball players and a large 3d array of spatio-temporal ozone measurements.
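
    The strategy itself is language-agnostic. As a sketch (in Python rather than the paper's R, with invented toy batting data), the three steps look like:

```python
from collections import defaultdict

def split_apply_combine(records, key, apply):
    """Split records by key, apply a function to each group, combine the results."""
    groups = defaultdict(list)
    for rec in records:                               # split
        groups[key(rec)].append(rec)
    return {k: apply(v) for k, v in groups.items()}   # apply + combine

# Hypothetical batting records: total hits per player.
batting = [
    {"player": "a", "hits": 2},
    {"player": "b", "hits": 3},
    {"player": "a", "hits": 5},
]
totals = split_apply_combine(batting,
                             key=lambda r: r["player"],
                             apply=lambda g: sum(r["hits"] for r in g))
```

    Each group is processed independently, which is also what makes the strategy easy to parallelize.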

  7. Combination of emanation thermal analysis with evolved gas analysis and differential thermal analysis

    International Nuclear Information System (INIS)

    Interpretation of the results of emanation thermal analysis was improved by combination with other thermoanalytical methods: a combination of ETA, EGA and DTA applied to samples of CaCO3 and Ca(COO)2·H2O is given as an example. The samples were labelled with 228Th, the parent nuclide of 220Rn, the release of which was measured. Into the samples of CaCO3 the parent nuclide was introduced by impregnation, an alcoholic solution of 228Th and 224Ra in radioactive equilibrium being used. The samples of Ca(COO)2·H2O were labelled in the bulk by coprecipitation, 228Th and 224Ra being added to the initial calcium nitrate solution. (T.I.)

  8. Ensemble annealing of complex physical systems

    CERN Document Server

    Habeck, Michael

    2015-01-01

    Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that is high enough to ensure ergodicity and gradually decreases it until the destination temperature is reached. This idea is used in popular algorithms such as parallel tempering and simulated annealing. A general problem with annealing methods is that they require a temperature schedule. Choosing well-balanced temperature schedules can be tedious and time-consuming. Imbalanced schedules can have a negative impact on the convergence, runtime and success of annealing algorithms. This article outlines a unifying framework, ensemble annealing, that combines ideas from simulated annealing, histogram reweighting and nested sampling with concepts in thermodynamic control. Ensemble annealing simultaneously simulates a physical system and estimates its density of states. The...

  9. Research and Implementation of an Ensemble Forecast Product Analysis and Display Platform

    Institute of Scientific and Technical Information of China (English)

    于连庆; 李月安; 高嵩; 罗兵

    2015-01-01

    In response to the pressing requirement for ensemble forecast applications in modern weather forecast operations, an ensemble forecast product analysis and display platform named NUMBERS (NUmerical Model Blending and Ensemble foRecast System), with independent intellectual property rights, was developed. The design starts from two main requirements: the large volume of ensemble model output, and the high efficiency and quality demanded of meteorological chart display. The platform adopts a client-server architecture. On the server side, a data processing program converts the large volume of raw ensemble model output into product data, ensuring the performance of the client-side data visualization program. The key technologies of the platform are analyzed in detail. To cope with data latency, a polling data-processing technique checks the state of the raw data in real time and updates the products, and a producer-consumer mutual-exclusion scheme is used to avoid multi-thread deadlock. To improve the visual quality of charts, a dynamic page-layout display technique classifies all graphical elements and provides an abstract description of their display attributes; combined with graphics rendering, this enables dynamic switching between a chart-viewing mode and a chart-output mode. On the client side, a data visualization program provides ensemble product analysis and the blending of multiple deterministic models, alongside a management console program. The platform provides forecasters and decision-makers with valuable uncertainty information and has played an important role in forecasting mesoscale extreme weather and typhoon tracks.

  10. Towards Advanced Data Analysis by Combining Soft Computing and Statistics

    CERN Document Server

    Gil, María; Sousa, João; Verleysen, Michel

    2013-01-01

    Soft computing, as an engineering science, and statistics, as a classical branch of mathematics, emphasize different aspects of data analysis. Soft computing focuses on obtaining working solutions quickly, accepting approximations and unconventional approaches. Its strength lies in its flexibility to create models that suit the needs arising in applications. In addition, it emphasizes the need for intuitive and interpretable models, which are tolerant to imprecision and uncertainty. Statistics is more rigorous and focuses on establishing objective conclusions based on experimental data by analyzing the possible situations and their (relative) likelihood. It emphasizes the need for mathematical methods and tools to assess solutions and guarantee performance. Combining the two fields enhances the robustness and generalizability of data analysis methods, while preserving the flexibility to solve real-world problems efficiently and intuitively.

  11. Exergoeconomical analysis of coal gasification combined cycle power plants

    International Nuclear Information System (INIS)

    This paper reports on combined cycle power plants with integrated coal gasification, which have gained more and more importance for better utilization of primary energy sources. Established coal gasification technology offers various possibilities, e.g. the TEXACO or the PRENFLO method. Recommended processes using these gasification methods are evaluated energetically and exergetically. A purely thermodynamic analysis has the considerable disadvantage that the economic consequences of process improvement measures are not investigated. The connection of the exergetic with the economic evaluation is realized in an approach referred to as exergoeconomic analysis. This consideration of the mutual influence of exergy destruction and capital-dependent costs results in an optimization of the process and a minimization of the product costs.

  12. Modelling irrigated maize with a combination of coupled-model simulation and uncertainty analysis, in the northwest of China

    Directory of Open Access Journals (Sweden)

    Y. Li

    2012-05-01

    Full Text Available The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500–600 mm in this region. However, for regions without detailed observations, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. Therefore, we develop a method combining model ensemble simulations and uncertainty/sensitivity analysis to estimate the probability of crop production. In our study, the uncertainty analysis is used to reveal the risk of a loss of crop production as irrigation decreases. The global sensitivity analysis is used to test the coupled model and further quantitatively analyse the impact of the uncertainty of the coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.

  13. Modelling irrigated maize with a combination of coupled-model simulation and uncertainty analysis, in the northwest of China

    Science.gov (United States)

    Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.

    2012-05-01

    The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observations, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. Therefore, we develop a method combining model ensemble simulations and uncertainty/sensitivity analysis to estimate the probability of crop production. In our study, the uncertainty analysis is used to reveal the risk of a loss of crop production as irrigation decreases. The global sensitivity analysis is used to test the coupled model and further quantitatively analyse the impact of the uncertainty of the coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.

  14. Construction of High-accuracy Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Hedieh Sajedi

    2014-04-01

    Full Text Available Several methods have been developed to construct ensembles. Some of these methods, such as Bagging and Boosting, are meta-learners, i.e. they can be applied to any base classifier. The combination of methods should be selected so that the classifiers cover each other's weaknesses. In an ensemble, the output of several classifiers is useful only when they disagree on some inputs; the degree of disagreement is called the diversity of the ensemble. Another factor that plays a significant role in the performance of an ensemble is the accuracy of the base classifiers. It can be said that all procedures for constructing ensembles seek a balance between these two parameters, and successful methods achieve a better balance. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. In this paper, we present a new approach for generating ensembles. The proposed approach uses Bagging and Boosting as generators of base classifiers. Subsequently, the classifiers are partitioned by means of a clustering algorithm. We introduce a selection phase for constructing the final ensemble, and three different selection methods are proposed for this phase. The first selection method picks a classifier at random from each cluster; the second selects the most accurate classifier from each cluster; and the third selects the classifier nearest to the center of each cluster. The results of experiments on well-known datasets demonstrate the strength of our proposed approach, especially when selecting the most accurate classifier from each cluster and employing the Bagging generator.
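
    The second selection method described above (keep the most accurate classifier from each cluster) can be sketched as follows; classifier names, cluster assignments and accuracies are hypothetical, and the clustering step itself is assumed to have already run:

```python
def select_per_cluster(classifiers, clusters, accuracies):
    """From each cluster, keep the single most accurate classifier."""
    best = {}  # cluster id -> (classifier, accuracy)
    for clf, cl, acc in zip(classifiers, clusters, accuracies):
        if cl not in best or acc > best[cl][1]:
            best[cl] = (clf, acc)
    return [clf for clf, _ in best.values()]

# Four base classifiers partitioned into two clusters.
members = ["c1", "c2", "c3", "c4"]
cluster_of = [0, 0, 1, 1]
acc = [0.81, 0.85, 0.78, 0.90]
final = select_per_cluster(members, cluster_of, acc)
```

    Selecting one member per cluster keeps the ensemble diverse (clusters group similar classifiers) while the accuracy criterion keeps the retained members strong, which is exactly the diversity/accuracy balance the abstract discusses.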

  15. Analysis methodology and recent results of the IGS network combination

    Science.gov (United States)

    Ferland, R.; Kouba, J.; Hutchison, D.

    2000-11-01

    A working group of the International GPS Service (IGS) was created to look after Reference Frame (RF) issues and contribute to the densification and improvement of the International Terrestrial Reference Frame (ITRF). One important objective of the Reference Frame Working Group is to generate consistent IGS station coordinates and velocities, Earth Rotation Parameters (ERP) and geocenter estimates along with the appropriate covariance information. These parameters have a direct impact on other IGS products such as the estimation of GPS satellite ephemerides, as well as satellite and station clocks. The information required is available weekly from the Analysis Centers (AC) (cod, emr, esa, gfz, jpl, ngs, sio) and from the Global Network Associate Analysis Centers (GNAAC) (JPL, mit, ncl) using a "Software Independent Exchange Format" (SINEX). The AC are also contributing daily ERPs as part of their weekly submission. The procedure in place simultaneously combines the weekly station coordinates, geocenter and daily ERP estimates. A cumulative solution containing station coordinates and velocity is also updated with each weekly combination. This provides a convenient way to closely monitor the quality of the estimated station coordinates and to have an up to date cumulative solution available at all times. To provide some necessary redundancy, the weekly station coordinates solution is compared against the GNAAC solutions. Each of the 3 GNAAC uses its own software, allowing independent verification of the combination process. The RMS of the coordinate differences in the north, east and up components between the AC/GNAAC and the ITRF97 Reference Frame Stations are 4-10 mm, 5-20 mm and 6-25 mm. The station velocities within continental plates are compared to the NNR-NUVEL1A plate motion model (DeMets et al., 1994). The north, east and up velocity RMS are 2 mm/y, 3 mm/y and 8 mm/y. Note that NNR-NUVEL1A assumes a zero vertical velocity.

  16. Thermoeconomic Analysis of Advanced Solar-Fossil Combined Power Plants

    Directory of Open Access Journals (Sweden)

    Yassine Allani

    2000-12-01

    Full Text Available

    Hybrid solar thermal power plants (with parabolic trough solar collectors) featuring gas burners and Rankine steam cycles have been successfully demonstrated by California's Solar Electric Generating System (SEGS). This system has proven to be one of the most efficient and economical schemes to convert solar energy into electricity. Recent technological progress opens interesting prospects for advanced cycle concepts: (a) the ISCCS (Integrated Solar Combined Cycle System), which integrates the parabolic trough into a fossil-fired combined cycle and allows a larger part of the exergy potential of the fuel to be converted; (b) the HSTS (Hybrid Solar Tower System), which uses high-concentration optics (via a power tower generator) and high-temperature air receivers to drive the combined cycle power plant. In the latter case, solar energy is used at a higher exergy level as the heat source of the topping cycle. This paper presents the results of a thermoeconomic investigation of an ISCCS envisaged in Tunisia. The study is realized in two phases. In the first phase, a mixed approach based on pinch technology principles coupled with a mathematical optimization algorithm is used to minimize the heat transfer exergy losses in the steam generators, respecting the off-design operating conditions of the steam turbine (cone law). In the second phase, an economic analysis based on the Levelized Electricity Cost (LEC) approach was carried out for the configurations that provided the best concepts during the first phase. A comparison of the ISCCS with pure fossil-fueled plants (CC+GT) is reported for the same electrical power load. A sensitivity analysis based on the relative size of the solar field is presented.

    •  This paper was presented at the ECOS'00 Conference in Enschede, July 5-7, 2000

  17. A joint effort to deliver satellite retrieved atmospheric CO2 concentrations for surface flux inversions: the ensemble median algorithm EMMA

    OpenAIRE

    Pfeifer, S.; R. Parker; Oshchepkov, S.; Kikuchi, N.; Heymann, J; Hasekamp, O.; Guerlet, S.; C. W. O'Dell; J. P. Burrows; Butz, A.; Buchwitz, M.; Bril, A.; H. Bovensmann; H. Bösch; Reuter, M.

    2012-01-01

    We analyze an ensemble of seven XCO2 retrieval algorithms for SCIAMACHY and GOSAT. The ensemble spread can be interpreted as regional uncertainty and can help to identify locations for new TCCON validation sites. Additionally, we introduce the ensemble median algorithm EMMA combining individual soundings of the seven algorithms into one new dataset. The ensemble takes advantage of the algorithms' independent developments. We find ensemble spreads being often

  18. Protein Remote Homology Detection Based on an Ensemble Learning Approach

    Science.gov (United States)

    Chen, Junjie; Liu, Bingquan; Huang, Dong

    2016-01-01

    Protein remote homology detection is one of the central problems in bioinformatics. Although some computational methods have been proposed, the problem is still far from being solved. In this paper, an ensemble classifier for protein remote homology detection, called SVM-Ensemble, is proposed with a weighted voting strategy. SVM-Ensemble combines three basic classifiers based on different feature spaces, including Kmer, ACC, and SC-PseAAC. These features consider the characteristics of proteins from various perspectives, incorporating both the sequence composition and the sequence-order information along the protein sequences. Experimental results on a widely used benchmark dataset showed that the proposed SVM-Ensemble can significantly improve the predictive performance for protein remote homology detection. Moreover, it achieved the best performance and outperformed other state-of-the-art methods. PMID:27294123
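
    A weighted voting combiner of the general kind SVM-Ensemble uses can be sketched as below; the labels and weights are invented for illustration, and the paper's actual weighting scheme may differ:

```python
def weighted_vote(predictions, weights):
    """Combine base-classifier labels by weighted majority vote."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three hypothetical base classifiers, e.g. trained on Kmer, ACC and
# SC-PseAAC feature spaces, each with a per-classifier weight.
label = weighted_vote(["pos", "neg", "pos"], [0.5, 0.8, 0.4])
```

    Here the two weaker classifiers agreeing on "pos" (combined weight 0.9) outvote the single stronger one (0.8), which is the intended behaviour of weighted voting.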

  19. Hybrid Intrusion Detection Using Ensemble of Classification Methods

    Directory of Open Access Journals (Sweden)

    M.Govindarajan

    2014-01-01

    Full Text Available One of the major developments in machine learning in the past decade is the ensemble method, which finds highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed for homogeneous ensemble classifiers using bagging and heterogeneous ensemble classifiers using arcing classifier and their performances are analyzed in terms of accuracy. A Classifier ensemble is designed using Radial Basis Function (RBF and Support Vector Machine (SVM as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by the means of real and benchmark data sets of intrusion detection. The main originality of the proposed approach is based on three main parts: preprocessing phase, classification phase and combining phase. A wide range of comparative experiments are conducted for real and benchmark data sets of intrusion detection. The accuracy of base classifiers is compared with homogeneous and heterogeneous models for data mining problem. The proposed ensemble methods provide significant improvement of accuracy compared to individual classifiers and also heterogeneous models exhibit better results than homogeneous models for real and benchmark data sets of intrusion detection.

  20. The Ensembl gene annotation system.

    Science.gov (United States)

    Aken, Bronwen L; Ayling, Sarah; Barrell, Daniel; Clarke, Laura; Curwen, Valery; Fairley, Susan; Fernandez Banet, Julio; Billis, Konstantinos; García Girón, Carlos; Hourlier, Thibaut; Howe, Kevin; Kähäri, Andreas; Kokocinski, Felix; Martin, Fergal J; Murphy, Daniel N; Nag, Rishi; Ruffier, Magali; Schuster, Michael; Tang, Y Amy; Vogel, Jan-Hinnerk; White, Simon; Zadissa, Amonida; Flicek, Paul; Searle, Stephen M J

    2016-01-01

    The Ensembl gene annotation system has been used to annotate over 70 different vertebrate species across a wide range of genome projects. Furthermore, it generates the automatic alignment-based annotation for the human and mouse GENCODE gene sets. The system is based on the alignment of biological sequences, including cDNAs, proteins and RNA-seq reads, to the target genome in order to construct candidate transcript models. Careful assessment and filtering of these candidate transcripts ultimately leads to the final gene set, which is made available on the Ensembl website. Here, we describe the annotation process in detail. Database URL: http://www.ensembl.org/index.html. PMID:27337980

  1. Combined Analysis and Validation of Earth Rotation Models and Observations

    Science.gov (United States)

    Kutterer, Hansjoerg; Göttl, Franziska; Heiker, Andrea; Kirschner, Stephanie; Schmidt, Michael; Seitz, Florian

    2010-05-01

    Global dynamic processes cause changes in the Earth's rotation, gravity field and geometry. Thus, they can be traced in geodetic observations of these quantities. However, the sensitivity of the various geodetic observation techniques to specific processes in the Earth system differs. More meaningful conclusions with respect to contributions from individual Earth subsystems can be drawn from the combined analysis of highly precise and consistent parameter time series from heterogeneous observation types which carry partially redundant and partially complementary information. For the sake of a coordinated research in this field, the Research Unit FOR 584 "Earth Rotation and Global Dynamic Processes" is funded at present by the German Research Foundation (DFG). It is concerned with the refined and consistent modeling and data analysis. One of the projects (P9) within this Research Unit addresses the combined analysis and validation of Earth rotation models and observations. In P9 three main topics are addressed: (1) the determination and mutual validation of reliable consistent time series for Earth rotation parameters and gravity field coefficients due to the consideration of their physical connection by the Earth's tensor of inertia, (2) the separation of individual Earth rotation excitation mechanisms by merging all available relevant data from recent satellite missions (GRACE, Jason-1, …) and geodetic space techniques (GNSS, SLR, VLBI, …) in a highly consistent way, (3) the estimation of fundamental physical Earth parameters (Love numbers, …) by an inverse model using the improved geodetic observation time series as constraints. Hence, this project provides significant and unique contributions to the field of Earth system science in general; it corresponds with the goals of the Global Geodetic Observing System (GGOS). In this paper project P9 is introduced, the goals are summarized and a status report including a presentation and discussion of intermediate

  2. A past discharge assimilation system for ensemble streamflow forecasts over France - Part 2: Impact on the ensemble streamflow forecasts

    Science.gov (United States)

    Thirel, G.; Martin, E.; Mahfouf, J.-F.; Massart, S.; Ricci, S.; Regimbeau, F.; Habets, F.

    2010-08-01

    The use of ensemble streamflow forecasts is developing in the international flood forecasting services. Ensemble streamflow forecast systems can provide more accurate forecasts and useful information about the uncertainty of the forecasts, thus improving the assessment of risks. Nevertheless, these systems, like all hydrological forecasts, suffer from errors on initialization or on meteorological data, which lead to hydrological prediction errors. This article, which is the second part of a 2-part article, concerns the impacts of initial states, improved by a streamflow assimilation system, on an ensemble streamflow prediction system over France. An assimilation system was implemented to improve the streamflow analysis of the SAFRAN-ISBA-MODCOU (SIM) hydro-meteorological suite, which initializes the ensemble streamflow forecasts at Météo-France. This assimilation system, using the Best Linear Unbiased Estimator (BLUE) and modifying the initial soil moisture states, showed an improvement of the streamflow analysis with low soil moisture increments. The final states of this suite were used to initialize the ensemble streamflow forecasts of Météo-France, which are based on the SIM model and use the European Centre for Medium-range Weather Forecasts (ECMWF) 10-day Ensemble Prediction System (EPS). Two different configurations of the assimilation system were used in this study: the first with the classical SIM model and the second using improved soil physics in ISBA. The effects of the assimilation system on the ensemble streamflow forecasts were assessed for these two configurations, and a comparison was made with the original (i.e. without data assimilation and without the improved physics) ensemble streamflow forecasts. It is shown that the assimilation system improved most of the statistical scores usually computed for the validation of ensemble predictions (RMSE, Brier Skill Score and its decomposition, Ranked Probability Skill Score, False Alarm Rate, etc
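
    In its simplest scalar form (a single state variable observed directly, so the observation operator is 1), the BLUE update named above reduces to a background value corrected by a gain times the innovation. This is a textbook sketch with invented numbers, not the SIM suite's actual implementation:

```python
def blue_update(xb, y, B, R):
    """Scalar BLUE analysis.

    xb: background (model) estimate; y: observation;
    B, R: background and observation error variances.
    """
    K = B / (B + R)          # gain: how much to trust the observation
    return xb + K * (y - xb)  # analysis = background + gain * innovation

# Hypothetical soil-moisture-like values: small B relative to R means the
# analysis stays close to the background (a "low increment", as in the paper).
xa = blue_update(xb=0.30, y=0.40, B=0.01, R=0.03)
```

    With B = 0.01 and R = 0.03 the gain is 0.25, so only a quarter of the innovation is applied, illustrating why well-tuned assimilation produces low soil moisture increments.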

  3. A past discharge assimilation system for ensemble streamflow forecasts over France – Part 2: Impact on the ensemble streamflow forecasts

    Directory of Open Access Journals (Sweden)

    G. Thirel

    2010-08-01

    Full Text Available The use of ensemble streamflow forecasts is developing in the international flood forecasting services. Ensemble streamflow forecast systems can provide more accurate forecasts and useful information about the uncertainty of the forecasts, thus improving the assessment of risks. Nevertheless, these systems, like all hydrological forecasts, suffer from errors on initialization or on meteorological data, which lead to hydrological prediction errors. This article, which is the second part of a 2-part article, concerns the impacts of initial states, improved by a streamflow assimilation system, on an ensemble streamflow prediction system over France. An assimilation system was implemented to improve the streamflow analysis of the SAFRAN-ISBA-MODCOU (SIM) hydro-meteorological suite, which initializes the ensemble streamflow forecasts at Météo-France. This assimilation system, using the Best Linear Unbiased Estimator (BLUE) and modifying the initial soil moisture states, showed an improvement of the streamflow analysis with low soil moisture increments. The final states of this suite were used to initialize the ensemble streamflow forecasts of Météo-France, which are based on the SIM model and use the European Centre for Medium-range Weather Forecasts (ECMWF) 10-day Ensemble Prediction System (EPS). Two different configurations of the assimilation system were used in this study: the first with the classical SIM model and the second using improved soil physics in ISBA. The effects of the assimilation system on the ensemble streamflow forecasts were assessed for these two configurations, and a comparison was made with the original (i.e. without data assimilation and without the improved physics) ensemble streamflow forecasts. It is shown that the assimilation system improved most of the statistical scores usually computed for the validation of ensemble predictions (RMSE, Brier Skill Score and its decomposition, Ranked Probability Skill Score, False Alarm

  4. Ensemble-based analysis of Front Range severe convection on 6-7 June 2012: Forecast uncertainty and communication of weather information to Front Range decision-makers

    Science.gov (United States)

    Vincente, Vanessa

    The variation of topography in Colorado not only adds to the beauty of its landscape, but also tests our ability to predict warm season severe convection. Deficient radar coverage and limited observations make quantitative precipitation forecasting quite a challenge. Past studies have suggested that greater forecast skill of mesoscale convection initiation and precipitation characteristics are achievable considering an ensemble with explicitly predicted convection compared to one that has parameterized convection. The range of uncertainty and probabilities in these forecasts can help forecasters in their precipitation predictions and communication of weather information to emergency managers (EMs). EMs serve an integral role in informing and protecting communities in anticipation of hazardous weather. An example of such an event occurred on the evening of 6 June 2012, where areas to the lee of the Rocky Mountain Front Range were impacted by flash-flood-producing severe convection that included heavy rain and copious amounts of hail. Despite the discrepancy in the timing, location and evolution of convection, the convection-allowing ensemble forecasts generally outperformed those of the convection-parameterized ensemble in representing the mesoscale processes responsible for the 6-7 June severe convective event. Key features sufficiently reproduced by several of the convection-allowing ensemble members resembled the observations: 1) general location of a convergence boundary east of Denver, 2) convective initiation along the boundary, 3) general location of a weak cold front near the Wyoming/Nebraska border, and 4) cold pools and moist upslope characteristics that contributed to the backbuilding of convection. Members from the convection-parameterized ensemble that failed to reproduce these results displaced the convergence boundary, produced a cold front that moved southeast too quickly, and used the cold front for convective initiation. The convection

  5. Ensemble Clustering using Semidefinite Programming with Applications.

    Science.gov (United States)

    Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui

    2010-05-01

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0-1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain. PMID:21927539

  6. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Science.gov (United States)

    Cavrini, Francesco; Quitadamo, Lucia Rita; Saggio, Giovanni

    2016-01-01

    We evaluate the applicability of classifier combination based on fuzzy measures and integrals to electroencephalography-based Brain-Computer Interfaces (BCIs). In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from 5 subjects suggests that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. The proposed methodology therefore allows systems to be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control. PMID:26819595
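
    The fusion step can be sketched with a Sugeno fuzzy integral over a λ-fuzzy measure — a common choice for combining classifier supports, though the abstract does not specify which integral or measure the authors use. `lambda_measure` and `sugeno_fuse` are illustrative names, and the densities (per-classifier reliabilities) are assumed inputs.

    ```python
    import numpy as np

    def _bisect(f, lo, hi, iters=200):
        """Simple sign-change bisection."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    def lambda_measure(densities):
        """Solve prod(1 + lam * g_i) = 1 + lam for the lambda-fuzzy-measure parameter."""
        g = np.asarray(densities, dtype=float)
        s = g.sum()
        if abs(s - 1.0) < 1e-9:
            return 0.0                              # measure is already additive
        f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
        if s < 1.0:                                 # root lies in (0, inf)
            lo, hi = 1e-12, 1.0
            while f(hi) < 0.0:
                hi *= 2.0
        else:                                       # root lies in (-1, 0)
            lo, hi = -1.0 + 1e-12, -1e-12
        return _bisect(f, lo, hi)

    def sugeno_fuse(supports, densities):
        """Fuse per-classifier supports for one class with the Sugeno fuzzy integral."""
        h = np.asarray(supports, dtype=float)
        g = np.asarray(densities, dtype=float)
        lam = lambda_measure(g)
        g_acc, best = 0.0, 0.0
        for i in np.argsort(-h):                    # visit supports in descending order
            g_acc = g[i] + g_acc + lam * g[i] * g_acc   # measure of the top-i subset
            best = max(best, min(h[i], g_acc))
        return best
    ```

    Running `sugeno_fuse` once per class and picking the argmax gives the ensemble decision; abstention can be implemented by rejecting inputs whose best and second-best fused supports are too close.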

  7. Combined analysis of effective Higgs portal dark matter models

    CERN Document Server

    Beniwal, Ankit; Savage, Christopher; Scott, Pat; Weniger, Christoph; White, Martin; Williams, Anthony

    2015-01-01

    We combine and extend the analyses of effective scalar, vector, Majorana and Dirac fermion Higgs portal models of Dark Matter (DM), in which DM couples to the Standard Model (SM) Higgs boson via an operator of the form $\\mathcal{O}_{\\textrm{DM}}\\, H^\\dagger H$. For the fermion models, we take an admixture of scalar $\\overline{\\psi} \\psi$ and pseudoscalar $\\overline{\\psi} i\\gamma_5 \\psi$ interaction terms. For each model, we apply constraints on the parameter space based on the Planck measured DM relic density and the LHC limits on the Higgs invisible branching ratio. For the first time, we perform a consistent study of the indirect detection prospects for these models based on the WMAP7/Planck observations of the CMB, a combined analysis of 15 dwarf spheroidal galaxies by Fermi-LAT and the upcoming Cherenkov Telescope Array (CTA). We also perform a correct treatment of the momentum-dependent direct search cross-section that arises from the pseudoscalar interaction term in the fermionic DM theories. We find, i...

  8. Microflora analysis of a child with severe combined immune deficiency

    Science.gov (United States)

    Taylor, G. R.; Kropp, K. D.; Molina, T. C.

    1978-01-01

    The paper presents a microflora analysis of a 5-year-old male child with severe combined immune deficiency who was delivered by Caesarean section and continuously maintained in an isolator. Despite precautions, it was found that the child had come in contact with at least 54 different microbial contaminants. While his skin autoflora was similar to that of a reference group of healthy male adults in the number of different species and the number of viable cells per square centimeter of surface area, the subject's autoflora differed from the reference group in that significantly fewer anaerobic species were recovered from the patient's mouth and feces. It is suggested that the child's remaining disease-free shows that the reported bacteria are noninvasive or that the unaffected components of the child's immune defense mechanisms are important.

  9. Mouse Karyotype Obtained by Combining DAPI Staining with Image Analysis

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this study, mitotic metaphase chromosomes in mouse were identified by a new chromosome fluorescence banding technique combining DAPI staining with image analysis. Clear 4′,6-diamidino-2-phenylindole (DAPI) multiple bands, similar to G-bands, could be produced in mouse. The MetaMorph software was then used to generate linescans of pixel intensity for the banded chromosomes from the short arm to the long arm. These linescans were sufficient not only to identify each individual chromosome but also to analyze the physical positions of bands within each chromosome. Based on the results, a clear and accurate karyotype of mouse metaphase chromosomes was established. The technique is therefore considered to be a new method for cytological studies of mouse.

  10. Investigation of fish otoliths by combined ion beam analysis

    International Nuclear Information System (INIS)

    Complete text of publication follows. This work was implemented within the framework of the Hungarian Ion beam Physics Platform (http://hipp.atomki.hu/). Otoliths are small structures, the 'ear stones' of a fish, used to detect acceleration and orientation. They are composed of a combination of protein matrix and calcium carbonate (CaCO3) forming aragonite microcrystals. They have an annually deposited layered conformation with a microstructure corresponding to the seasonal and daily increments. Trace elements, such as Sr, Zn, Fe, etc., are also incorporated into the otolith from the environment and the nutrition. The elemental distribution of the otolith of the freshwater fish burbot (Lota lota L.) collected in Hungary was measured with Elastic Recoil Detection Analysis (ERDA), Rutherford backscattering spectrometry (RBS) and Particle Induced X-ray Emission (PIXE) at the Nuclear Microprobe Facility of HAS ATOMKI. The spatial 3D structure of the otolith could be observed with a sub-micrometer resolution. It is confirmed that the aragonite microcrystals are covered by an organic layer and that there are some protein-rich regions in the otolith, too. By applying the RBSMAST code, developed for RBS on macroscopic structures, it was proven that the orientation of the needle-shaped aragonite crystals differs considerably at adjacent locations in the otolith. The organic and inorganic components of the otolith could be set apart in the depth-selective hydrogen and calcium maps derived by micro-ERDA and micro-RBS. Similar structural analysis could be done near the surface by combining the C, O and Ca elemental maps determined by micro-PIXE measurements. It was observed that the trace metal Zn is bound to the protein component. Acknowledgements: This work was partially supported by the Hungarian OTKA Grant No. T046238 and the EU cofunded Economic Competitiveness Operative Programme (GVOP-3.2.1.-2004-04-0402/3.0)

  11. A CLUE for CLUster Ensembles

    OpenAIRE

    Kurt Hornik

    2005-01-01

    Cluster ensembles are collections of individual solutions to a given clustering problem which are useful or necessary to consider in a wide range of applications. The R package clue provides an extensible computational environment for creating and analyzing cluster ensembles, with basic data structures for representing partitions and hierarchies, and facilities for computing on these, including methods for measuring proximity and obtaining consensus and "secondary" clusterings....

  12. Similarity measures for protein ensembles

    DEFF Research Database (Denmark)

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations...... synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single...

  13. The entropy of network ensembles

    OpenAIRE

    Bianconi, Ginestra

    2008-01-01

    In this paper we generalize the concept of random networks to describe networks with nontrivial features by a statistical mechanics approach. This framework is able to describe ensembles of undirected, directed, as well as weighted networks. These networks might have nontrivial community structure or, in the case of networks embedded in a given space, a nontrivial distance dependence of the link probability. These ensembles are characterized by their entropy, which evaluates the cardinality of ...

  14. Quantum Gibbs ensemble Monte Carlo

    International Nuclear Information System (INIS)

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions

  15. Deformed Ginibre ensembles and integrable systems

    Energy Technology Data Exchange (ETDEWEB)

    Orlov, A.Yu., E-mail: orlovs@ocean.ru

    2014-01-17

    We consider three Ginibre ensembles (real, complex and quaternion-real) with deformed measures and relate them to known integrable systems by presenting partition functions of these ensembles in form of fermionic expectation values. We also introduce double deformed Dyson–Wigner ensembles and compare their fermionic representations with those of Ginibre ensembles.

  16. Thermodynamic analysis of a coal gasification and split Rankine combined cogeneration plant. Part 2: exergy analysis

    Energy Technology Data Exchange (ETDEWEB)

    De, S.; Biswal, S.K.

    2005-03-15

    In continuation of the energy analysis in Part 1 of this paper, an exergy analysis of the conceptualized advanced combined cogeneration plant is discussed in this part of the paper. This exergy analysis at the component level identifies the major sources of destruction of work potential. A parametric study has been carried out, for the same design and operating parameters as in Part 1, to explore the second-law performance of components of the plant against variations in these parameters. (Author)

  18. Hierarchical ensemble-based data fusion for structural health monitoring

    International Nuclear Information System (INIS)

    In structural health monitoring, damage detection results always carry uncertainty because of three factors: measurement noise, modeling error and environmental changes. Data fusion can lead to improved accuracy of a classification decision as compared to a decision based on any individual data source alone. Ensemble approaches constitute a relatively new breed of algorithms used for data fusion. In this paper, we introduce a hierarchical ensemble scheme to the data fusion field. The scheme is based on the Dempster–Shafer (DS) theory and the Rotation Forest (RF) method; it is called a hierarchical ensemble because the RF method is itself an ensemble method. The DS theory is used to combine the outputs of RF classifiers based on different data sources. The validation accuracy of the RF model is considered in improving the performance of the hierarchical ensemble. Health monitoring of a small-scale two-story frame structure with different damage states, subjected to shaking table tests, is used as an example to validate the efficiency of the proposed scheme. The experimental results indicate that the proposed scheme improves the identification accuracy and increases the reliability of identification.
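
    The DS combination step can be sketched with Dempster's rule for two mass functions over the same frame of discernment. This is the textbook rule, not the paper's full pipeline; the damage states and mass assignments below are hypothetical.

    ```python
    def dempster_combine(m1, m2):
        """Combine two mass functions via Dempster's rule.

        Masses are dicts mapping frozenset focal elements to belief mass;
        conflicting (empty-intersection) mass is discarded and renormalized away.
        """
        combined, conflict = {}, 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        k = 1.0 - conflict
        return {s: w / k for s, w in combined.items()}

    # two classifiers' evidence over hypothetical damage states {"d1", "d2", "d3"}
    frame = frozenset({"d1", "d2", "d3"})
    m1 = {frozenset({"d1"}): 0.6, frozenset({"d2"}): 0.1, frame: 0.3}
    m2 = {frozenset({"d1"}): 0.5, frozenset({"d3"}): 0.2, frame: 0.3}
    fused = dempster_combine(m1, m2)
    ```

    Mass assigned to the whole frame expresses ignorance, which is how the validation accuracy of each RF model could be folded in (less accurate source, more mass on the frame).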

  19. Evolutionary Ensemble for In Silico Prediction of Ames Test Mutagenicity

    Science.gov (United States)

    Chen, Huanhuan; Yao, Xin

    Driven by new regulations and animal welfare concerns, the need to develop in silico models has increased recently as an alternative approach to the safety assessment of chemicals without animal testing. This paper describes a novel machine learning ensemble approach to building an in silico model for the prediction of Ames test mutagenicity, one of a battery of the most commonly used experimental in vitro and in vivo genotoxicity tests for the safety evaluation of chemicals. Evolutionary random neural ensemble with negative correlation learning (ERNE) [1] was developed based on neural networks and evolutionary algorithms. ERNE combines bootstrap sampling on the training data with random subspace feature selection to ensure diversity when creating individuals for the initial ensemble. Furthermore, while evolving individuals within the ensemble, it makes use of negative correlation learning, enabling individual NNs to be trained to be as accurate as possible while remaining as diverse as possible. Therefore, the resulting individuals in the final ensemble are capable of cooperating collectively to achieve better generalization in prediction. The empirical experiments suggest that ERNE is an effective ensemble approach for predicting the Ames test mutagenicity of chemicals.
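
    The diversity-creation step named in the abstract (bootstrap sampling plus random subspace feature selection) can be sketched as below. The evolutionary search and negative correlation learning stages are omitted, and `make_diverse_views` with its parameters is a hypothetical name introduced for this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_diverse_views(X, y, n_members=5, feature_frac=0.6):
        """Per-member training views: bootstrap the rows, randomly sub-sample columns."""
        n, d = X.shape
        k = max(1, int(feature_frac * d))
        views = []
        for _ in range(n_members):
            rows = rng.integers(0, n, size=n)            # bootstrap sample (with replacement)
            cols = rng.choice(d, size=k, replace=False)  # random feature subspace
            views.append((X[rows][:, cols], y[rows], cols))
        return views

    X = rng.normal(size=(20, 10))
    y = rng.integers(0, 2, size=20)
    views = make_diverse_views(X, y)
    ```

    Each base network would then be trained on its own view; negative correlation learning additionally penalizes each member's agreement with the ensemble mean during training.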

  20. Localization of atomic ensembles via superfluorescence

    OpenAIRE

    Macovei, M.; Evers, J.; Keitel, C. H.; Zubairy, M. S.

    2006-01-01

    The sub-wavelength localization of an ensemble of atoms concentrated to a small volume in space is investigated. The localization relies on the interaction of the ensemble with a standing wave laser field. The light scattered in the interaction of standing wave field and atom ensemble depends on the position of the ensemble relative to the standing wave nodes. This relation can be described by a fluorescence intensity profile, which depends on the standing wave field parameters, the ensemble ...

  1. Thermodynamic analysis of a coal gasification and split Rankine combined cogeneration plant. Part 1: energy analysis

    Energy Technology Data Exchange (ETDEWEB)

    De, S.; Biswal, S.K. [Jadavpur University, Calcutta (India). Dept. of Mechanical Engineering

    2005-05-01

    The aim of this paper is to study the thermodynamic performance of a new combination of a coal gasification topping gas cycle and an 'externally coupled', 'split Rankine' bottoming steam cycle as a means of advanced clean coal combined cogeneration. Energy analysis of the conceptualized cogeneration scheme is presented in this part of the paper. The effects of the design and operating parameters of both the gas and the steam cycle on the performance of the combined heat and power plant are discussed.

  2. BagMOOV: A novel ensemble for heart disease prediction bootstrap aggregation with multi-objective optimized voting.

    Science.gov (United States)

    Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan

    2015-06-01

    Conventional clinical decision support systems are based on individual classifiers or simple combinations of these classifiers, which tend to show moderate performance. This research paper presents a novel classifier ensemble framework based on an enhanced bagging approach with a multi-objective weighted voting scheme for the prediction and analysis of heart disease. The proposed model overcomes the limitations of conventional approaches by utilizing an ensemble of five heterogeneous classifiers: Naïve Bayes, linear regression, quadratic discriminant analysis, instance-based learner and support vector machines. Five different datasets are used for experimentation, evaluation and validation. The datasets are obtained from publicly available data repositories. The effectiveness of the proposed ensemble is investigated by comparison of results with several classifiers. Prediction results of the proposed ensemble model are assessed by tenfold cross-validation and ANOVA statistics. The experimental evaluation shows that the proposed framework deals with all types of attributes and achieved a high diagnosis accuracy of 84.16%, 93.29% sensitivity, 96.70% specificity, and 82.15% f-measure. An f-ratio higher than f-critical and a p value less than 0.05 for a 95% confidence interval indicate that the results are statistically significant for most of the datasets. PMID:25750025
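
    The core of such a scheme is a weighted majority vote over the base classifiers' predictions. The sketch below assumes the weights are already available (the paper derives them by multi-objective optimization over several metrics, which is not reproduced here); `weighted_vote` and the toy predictions are hypothetical.

    ```python
    import numpy as np

    def weighted_vote(predictions, weights):
        """Weighted majority vote over hard class predictions from several classifiers."""
        predictions = np.asarray(predictions)        # shape: (n_classifiers, n_samples)
        weights = np.asarray(weights, dtype=float)
        classes = np.unique(predictions)
        # for each class, sum the weights of the classifiers that predicted it
        scores = np.array([(weights[:, None] * (predictions == c)).sum(axis=0)
                           for c in classes])
        return classes[np.argmax(scores, axis=0)]

    preds = [[1, 0, 1],
             [1, 1, 0],
             [0, 1, 1]]
    w = [0.5, 0.3, 0.2]   # hypothetical per-classifier weights, e.g. from validation metrics
    fused = weighted_vote(preds, w)
    ```

    With equal weights this reduces to plain majority voting; unequal weights let strong classifiers overrule weak ones.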

  3. Reliability analysis of the combined district heating systems

    Science.gov (United States)

    Sharapov, V. I.; Orlov, M. E.; Kunin, M. V.

    2015-12-01

    Technologies that improve the reliability and efficiency of the combined district heating systems in urban areas are considered. The calculation method of reliability of the CHP combined district heating systems is proposed. The comparative estimation of the reliability of traditional and combined district heating systems is performed.

  4. Quantifying Monte Carlo uncertainty in ensemble Kalman filter

    Energy Technology Data Exchange (ETDEWEB)

    Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar

    2009-01-15

    This report presents results obtained during Kristian Thulin's PhD study and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low-rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but because of the low-rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with a smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members (m × n) constant. The optimal combination of n and m is found by minimizing the integrated mean square error (MSE) of the CDFs, and we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should be applicable also to larger, more realistic models. (author). 12 refs., figs., tabs.
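
    The point-wise CDF confidence interval from m independent runs can be sketched as follows, assuming a normal approximation for the Monte Carlo variability of the empirical CDF across runs. The function name and the synthetic posterior samples are illustrative, not the report's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def cdf_confidence(runs, x_grid, z=1.96):
        """Point-wise normal confidence band for the posterior CDF from m independent runs."""
        cdfs = np.array([[np.mean(run <= x) for x in x_grid] for run in runs])
        mean = cdfs.mean(axis=0)
        se = cdfs.std(axis=0, ddof=1) / np.sqrt(len(runs))
        return mean, mean - z * se, mean + z * se

    m, n = 8, 50                          # m runs of n members each; m * n held constant
    runs = [rng.normal(size=n) for _ in range(m)]
    x_grid = np.linspace(-2.0, 2.0, 9)
    mean, band_lo, band_hi = cdf_confidence(runs, x_grid)
    ```

    Repeating this for several (m, n) splits with m * n fixed and integrating the squared band width over x gives one plausible way to compare the splits, in the spirit of the integrated-MSE criterion described above.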

  5. A past discharge assimilation system for ensemble streamflow forecasts over France – Part 2: Impact on the ensemble streamflow forecasts

    Directory of Open Access Journals (Sweden)

    G. Thirel

    2010-04-01

    Full Text Available The use of ensemble streamflow forecasts is developing in the international flood forecasting services. Such systems can provide more accurate forecasts and useful information about the uncertainty of the forecasts, thus improving the assessment of risks. Nevertheless, these systems, like all hydrological forecasts, suffer from errors on initialization or on meteorological data, which lead to hydrological prediction errors. This article, which is the second part of a 2-part article, concerns the impacts of initial states, improved by a streamflow assimilation system, on an ensemble streamflow prediction system over France. An assimilation system was implemented to improve the streamflow analysis of the SAFRAN-ISBA-MODCOU (SIM hydro-meteorological suite, which initializes the ensemble streamflow forecasts at Météo-France. This assimilation system, using the Best Linear Unbiased Estimator (BLUE and modifying the initial soil moisture states, showed an improvement of the streamflow analysis with low soil moisture increments. The final states of this suite were used to initialize the ensemble streamflow forecasts of Météo-France, which are based on the SIM model and use the European Centre for Medium-range Weather Forecasts (ECMWF 10-day Ensemble Prediction System (EPS. Two different configurations of the assimilation system were used in this study: the first with the classical SIM model and the second using improved soil physics in ISBA. The effects of the assimilation system on the ensemble streamflow forecasts were assessed for these two configurations, and a comparison was made with the original (i.e. without data assimilation and without the improved physics ensemble streamflow forecasts. It is shown that the assimilation system improved most of the statistical scores usually computed for the validation of ensemble predictions (RMSE, Brier Skill Score and its decomposition, Ranked Probability Skill Score, False Alarm Rate, etc., especially
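
    The BLUE analysis step referred to above has the standard form xa = xb + K (y − H xb) with gain K = B Hᵀ (H B Hᵀ + R)⁻¹. A minimal sketch follows, with a hypothetical two-component soil-moisture state and scalar streamflow observation; the matrices are illustrative, not the SIM system's.

    ```python
    import numpy as np

    def blue_update(xb, B, y, H, R):
        """BLUE analysis: xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^-1."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return xb + K @ (y - H @ xb), K

    xb = np.array([1.0, 2.0])           # hypothetical background soil-moisture state
    B = np.diag([0.5, 0.5])             # background error covariance
    H = np.array([[1.0, 0.0]])          # observation operator: observe first component
    R = np.array([[0.5]])               # observation error covariance
    y = np.array([2.0])                 # observed value
    xa, K = blue_update(xb, B, y, H, R)
    ```

    With equal background and observation error variances, the analysis moves the observed component halfway toward the observation and leaves the unobserved one unchanged (no cross-covariance in this toy B).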

  6. Analysis of the Bias on the Beidou GEO Multipath Combinations.

    Science.gov (United States)

    Ning, Yafei; Yuan, Yunbin; Chai, Yanju; Huang, Yong

    2016-01-01

    The Beidou navigation satellite system is a very important sensor for positioning in the Asia-Pacific region. The Beidou inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites have been analysed in some previous studies by other researchers; this paper seeks to gain more insight regarding the geostationary Earth orbit (GEO) satellites. Employing correlation analysis, Fourier transformation and wavelet decomposition, we validate whether there is a systematic bias in their multipath combinations. These biases can be observed clearly for satellites C01, C02 and C04 and correlate strongly with time rather than with elevation, being significantly different from those of the Beidou IGSO and MEO satellites. We propose a correction model to mitigate this bias based on its daily periodicity. After the model has been applied, the positioning performance at eight stations distributed in the Asia-Pacific region is evaluated and compared. The results show that the residuals of the multipath series behave as random noise; for the single point positioning (SPP) and precise point positioning (PPP) approaches, the positioning accuracy in the upward direction can be improved by 8 cm and 6 mm, respectively, and by 2 cm and 4 mm, respectively, for the horizontal component. PMID:27509503
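
    A daily-stacking correction exploiting the near-24 h geometry repeat of GEO satellites is one plausible reading of the proposed daily-periodicity model: average the multipath series at each time of day over several days, then subtract that template. The function name, sampling rate and noise level below are illustrative.

    ```python
    import numpy as np

    def daily_stack_correction(series, samples_per_day):
        """Remove the mean daily pattern from a multipath series (GEO repeat is ~24 h)."""
        days = np.asarray(series).reshape(-1, samples_per_day)
        template = days.mean(axis=0)      # average multipath at each time of day
        return (days - template).ravel(), template

    spd = 24                              # hypothetical: one sample per hour
    t = np.arange(4 * spd)                # four days of data
    bias = np.sin(2 * np.pi * (t % spd) / spd)   # synthetic daily-periodic multipath bias
    noise = 0.01 * np.random.default_rng(2).normal(size=t.size)
    residual, template = daily_stack_correction(bias + noise, spd)
    ```

    On the synthetic series the periodic bias is absorbed into the template, leaving residuals close to the injected noise floor.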

  7. Combined Thermo-Hydraulic Analysis of a Cryogenic Jet

    CERN Document Server

    Chorowski, M

    1999-01-01

    A cryogenic jet is a phenomenon encountered in different fields like some technological processes and cryosurgery. It may also be a result of cryogenic equipment rupture or a cryogen discharge from the cryostats following resistive transition in superconducting magnets. Heat exchange between a cold jet and a warm steel element (e.g. a buffer tank wall or a transfer line vacuum vessel wall) may result in an excessive localisation of thermal strains and stresses. The objective of the analysis is to get a combined (analytical and experimental) one-dimensional model of a cryogenic jet that will enable estimation of heat transfer intensity between the jet and steel plate with a suitable accuracy for engineering applications. The jet diameter can only be determined experimentally. The mean velocity profile can be calculated from the fact that the total flux of momentum along the jet axis is conserved. The proposed model allows deriving the jet crown area with respect to the distance from the vent and the mean veloc...

  8. A combined approach for comparative exoproteome analysis of Corynebacterium pseudotuberculosis

    Directory of Open Access Journals (Sweden)

    Scrivens James H

    2011-01-01

    Full Text Available. Background: Bacterial exported proteins represent key components of the host-pathogen interplay. Hence, we sought to implement a combined approach for characterizing the entire exoproteome of the pathogenic bacterium Corynebacterium pseudotuberculosis, the etiological agent of caseous lymphadenitis (CLA) in sheep and goats. Results: An optimized protocol of three-phase partitioning (TPP) was used to obtain the C. pseudotuberculosis exoproteins, and a newly introduced method of data-independent MS acquisition (LC-MSE) was employed for protein identification and label-free quantification. Additionally, the recently developed tool SurfG+ was used for in silico prediction of the sub-cellular localization of the identified proteins. In total, 93 different extracellular proteins of C. pseudotuberculosis were identified with high confidence by this strategy; 44 proteins were commonly identified in two different strains, isolated from distinct hosts, composing a core C. pseudotuberculosis exoproteome. Analysis with the SurfG+ tool showed that more than 75% (70/93) of the identified proteins could be predicted as containing signals for active exportation. Moreover, evidence could be found for probable non-classical export of most of the remaining proteins. Conclusions: Comparative analyses of the exoproteomes of two C. pseudotuberculosis strains, in addition to comparison with other experimentally determined corynebacterial exoproteomes, were helpful to gain novel insights into the contribution of the exported proteins to the virulence of this bacterium. The results presented here compose the most comprehensive coverage of the exoproteome of a corynebacterial species so far.

  9. Combined HRTEM and PEELS analysis of nanoporous and amorphous carbon

    International Nuclear Information System (INIS)

    Both the mass density (1.37 g/cm3) and the sp2+sp3 bonding fraction (0.15) were determined for an unusual nanoporous amorphous carbon consisting of curved single graphitic sheets. A combination of high-resolution transmission electron microscopy (HRTEM) and parallel electron energy loss spectroscopy (PEELS) was used. The values of these two parameters provide important constraints for the determination of the structure of this relatively low-density variety of nanoporous carbon. The results are relevant also in the search for negatively-curved Schwarzite-related carbon structures. New data are also presented for highly-oriented pyrolytic graphite (HOPG), chemically vapour deposited (CVD) diamond, C60, glassy carbon (GC) and evaporated amorphous carbon (EAC); these are compared with the results for NAC. Kramers-Kronig analysis (KKA) of the low-loss PEELS data shows that the band gaps of both NAC and EAC are collapsed relative to that of CVD diamond. 18 refs., 2 tabs., 3 figs

  10. Estimating preselected and postselected ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Massar, Serge [Laboratoire d' Information Quantique, C.P. 225, Universite libre de Bruxelles (U.L.B.), Av. F. D. Rooselvelt 50, B-1050 Bruxelles (Belgium); Popescu, Sandu [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ (United Kingdom)

    2011-11-15

    In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.

  11. Hydrogen adsorption on bimetallic PdAu(111) surface alloys:minimum adsorption ensemble, ligand and ensemble effects, and ensemble confinement

    OpenAIRE

    Takehiro, Naoki; Liu, Ping; Bergbreiter, Andreas; K. Nørskov, Jens; Behm, R. Juergen

    2014-01-01

    The adsorption of hydrogen on structurally well defined PdAu-Pd(111) monolayer surface alloys was investigated in a combined experimental and theoretical study, aiming at a quantitative understanding of the adsorption and desorption properties of individual PdAu nanostructures. Combining the structural information obtained by high resolution scanning tunneling microscopy (STM), in particular on the abundance of specific adsorption ensembles at different Pd surface concentrations, with informa...

  12. Effects of ensembles on methane hydrate nucleation kinetics.

    Science.gov (United States)

    Zhang, Zhengcai; Liu, Chan-Juan; Walsh, Matthew R; Guo, Guang-Jun

    2016-06-21

    By performing molecular dynamics simulations to form a hydrate with a methane nano-bubble in liquid water at 250 K and 50 MPa, we report how different ensembles, such as the NPT, NVT, and NVE ensembles, affect the nucleation kinetics of the methane hydrate. The nucleation trajectories are monitored using the face-saturated incomplete cage analysis (FSICA) and the mutually coordinated guest (MCG) order parameter (OP). The nucleation rate and the critical nucleus are obtained using the mean first-passage time (MFPT) method based on the FS cages and the MCG-1 OPs, respectively. The fitting results of MFPT show that hydrate nucleation and growth are coupled together, consistent with the cage adsorption hypothesis, which emphasizes that the cage adsorption of methane is a mechanism for both hydrate nucleation and growth. For the three different ensembles, the hydrate nucleation rate is quantitatively ordered as follows: NPT > NVT > NVE, while the sequence of hydrate crystallinity is exactly reversed. However, the largest size of the critical nucleus appears in the NVT ensemble, rather than in the NVE ensemble. These results are helpful for choosing a suitable ensemble when studying hydrate formation via computer simulations, and they emphasize the importance of the degree of order of the critical nucleus. PMID:27222203
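
    The MFPT estimate at the heart of the rate calculation can be sketched as follows: for each trajectory, record the first time the order parameter crosses a chosen critical size, then average over trajectories. The trajectories and threshold below are synthetic, and `mean_first_passage` is a hypothetical helper, not the FSICA/MCG machinery itself.

    ```python
    import numpy as np

    def mean_first_passage(trajectories, threshold):
        """Mean first index at which each order-parameter trajectory crosses a threshold."""
        times = []
        for traj in trajectories:
            traj = np.asarray(traj)
            crossed = np.nonzero(traj >= threshold)[0]
            if crossed.size:                 # ignore trajectories that never nucleate
                times.append(crossed[0])
        return float(np.mean(times))

    # synthetic "largest-nucleus" order parameters, one per trajectory
    trajs = [[0, 1, 2, 3, 4],
             [0, 0, 1, 3, 5],
             [0, 2, 2, 2, 6]]
    tau = mean_first_passage(trajs, threshold=3)
    ```

    A nucleation rate per unit volume then follows as roughly 1 / (tau × simulation volume), once tau is converted from frame index to physical time.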

  13. Ensemble Equivalence for Distinguishable Particles

    Directory of Open Access Journals (Sweden)

    Antonio Fernández-Peralta

    2016-07-01

    Full Text Available Statistics of distinguishable particles has become relevant in systems of colloidal particles and in the context of applications of statistical mechanics to complex networks. In this paper, we present evidence that a commonly used expression for the partition function of a system of distinguishable particles leads to huge fluctuations of the number of particles in the grand canonical ensemble and, consequently, to nonequivalence of statistical ensembles. We will show that the alternative definition of the partition function including, naturally, Boltzmann’s correct counting factor for distinguishable particles solves the problem and restores ensemble equivalence. Finally, we also show that this choice for the partition function does not produce any inconsistency for a system of distinguishable localized particles, where the monoparticular partition function is not extensive.

  14. Evaluating hydrological ensemble predictions using a large and varied set of catchments (Invited)

    Science.gov (United States)

    Ramos, M.; Andreassian, V.; Perrin, C.; Loumagne, C.

    2010-12-01

    It is widely accepted that local and national operational early warning systems can play a key role in mitigating flood damage and losses to society while improving risk awareness and flood preparedness. Over the last years, special attention has been paid to efficiently couple meteorological and hydrological warning systems to track uncertainty and achieve longer lead times in hydrological forecasting. Several national and international scientific programs have focused on the pre-operational test and development of ensemble hydrological forecasting. Based on the lumped soil-moisture-accounting type rainfall-runoff model GRP, developed at Cemagref, we have set up a research tool for ensemble forecasting and conducted several studies to evaluate the quality of streamflow forecasts. The model has been driven by available archives of weather ensemble prediction systems from different sources (Météo-France, ECMWF, TIGGE archive). Our approach has sought to combine overall validation under varied geographical and climate conditions (to assess model robustness and generality) and site-specific validation (to locally accept or reject the hydrologic forecast system and contribute to defining its limits of applicability). The general aim is to contribute to methodological developments concerning a wide range of key aspects in hydrological forecasting, including: the links between predictability skill and catchment characteristics, the magnitude and the distribution of forecasting errors, the analysis of nested or neighbouring catchments for prediction in ungauged basins, as well as the reliability of model predictions when forecasting under conditions not previously encountered during the period of setup and calibration of the system. This presentation will cover the aforementioned topics and present examples from studies carried out to evaluate and inter-compare ensemble forecasting systems using a large and varied set of catchments in France. The specific need to

  15. Comparison and combination of EAKF and SIR-PF in the Bayesian filter framework

    Institute of Scientific and Technical Information of China (English)

    SHEN Zheqi; ZHANG Xiangming; TANG Youmin

    2016-01-01

    Bayesian estimation theory provides a general approach to state estimation for linear or nonlinear, Gaussian or non-Gaussian systems. In this study, we first explore two Bayesian-based methods: the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using a well-known nonlinear and non-Gaussian model (the Lorenz '63 model). The EAKF, a deterministic scheme of the ensemble Kalman filter (EnKF), performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter when the ensemble size is large enough to avoid filter degeneracy, and vice versa. The impact of the probability density functions and effective ensemble sizes on assimilation performance is also explored. On the basis of these comparisons, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine the merits of both. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and SIR-PF. The same Lorenz '63 model is used as a testbed, showing that the EAKPF is able to overcome filter degeneracy while maintaining the non-Gaussian nature, and performs better than the EAKF given limited ensemble size.
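
    The resampling step whose degeneracy at small ensemble sizes is discussed above can be illustrated with a minimal sketch. The scalar Gaussian example and the systematic resampling variant are illustrative assumptions, not tied to the paper's Lorenz '63 setup.

```python
import numpy as np

def sir_resample(particles, weights, rng):
    """Systematic resampling: duplicate high-weight particles, drop low-weight ones."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)  # guard against floating-point round-off at the end
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 1.0, 500)     # prior ensemble
obs, obs_err = 1.2, 0.5
# Importance weights from a Gaussian observation likelihood
w = np.exp(-0.5 * ((particles - obs) / obs_err) ** 2)
w /= w.sum()
particles, w = sir_resample(particles, w, rng)
```

    With few particles, almost all weight can concentrate on one member before resampling, which is the filter degeneracy that motivates the EAKPF mixture.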

  16. Combined statistical analysis of landslide release and propagation

    Science.gov (United States)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by Typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. We therefore conclude that the method is valid in general, even though the concept of the zonal release probability carries some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when the largest, deep-seated landslides are excluded from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.

  17. Ensemble teleportation under suboptimal conditions

    International Nuclear Information System (INIS)

    The possibility of teleportation is certainly the most interesting consequence of quantum non-separability. In the present paper, the feasibility of teleportation is examined on the basis of the rigorous ensemble interpretation of quantum mechanics if non-ideal constraints are imposed on the teleportation scheme. Importance is attached both to the case of noisy Einstein-Podolsky-Rosen (EPR) ensembles and to the conditions under which automatic teleportation is still possible. The success of teleportation is discussed using a new fidelity measure which avoids the weaknesses of previous proposals

  18. Pattern classification using ensemble methods

    CERN Document Server

    Rokach, Lior

    2009-01-01

    Researchers from various disciplines such as pattern recognition, statistics, and machine learning have explored the use of ensemble methodology since the late seventies. Thus, they are faced with a wide variety of methods, given the growing interest in the field. This book aims to impose a degree of order upon this diversity by presenting a coherent and unified repository of ensemble methods, theories, trends, challenges and applications. The book describes in detail the classical methods, as well as the extensions and novel approaches developed recently. Along with algorithmic descriptions o

  19. Ensemble and constrained clustering with applications

    OpenAIRE

    Abdala, D.D. (Daniel)

    2011-01-01

    This thesis presents new developments in ensemble and constrained clustering and makes the following main contributions: 1) a unification of constrained and ensemble clustering in a single framework; 2) a new method for measuring and visualizing the variability of ensembles; 3) a new random-walker-based method for ensemble clustering; 4) an application of ensemble clustering to image segmentation; 5) a new consensus function for ensemble cluste...

  20. Investigation of predictability during the Extratropical Transition of Tropical Cyclones using the THORPEX Interactive Grand Global Ensemble (TIGGE)

    Science.gov (United States)

    Keller, J. H.; Jones, S. C.; Anwender, D.

    2010-09-01

    Several times per year, tropical cyclones recurve after their typical tropical life cycle. They begin to interact with the mid-latitude flow and may be transformed into extratropical systems. Such an extratropical transition (ET) of a tropical cyclone often reduces the predictability of the synoptic development of the cyclone itself as well as of the downstream region. Recent studies investigated predictability during ET events based on the variability among members of single operational medium-range ensemble forecasts. The new THORPEX Interactive Grand Global Ensemble (TIGGE) provides an opportunity to extend these previous studies. TIGGE was established in the World Weather Research Programme THORPEX and combines the forecasts of 10 different ensemble prediction systems (EPS) operated at weather services all over the world. These systems are based on different assumptions, initial perturbation methods, and resolutions, and they also differ in the number of ensemble members contained in the forecast. Thus TIGGE makes it possible to compare predictability during ET events across a number of different EPS. Furthermore, TIGGE may reveal possible development scenarios that could not be obtained using a single EPS. To extract the information contained in the ensemble forecasts, we perform an empirical orthogonal function (EOF) analysis on the variance-covariance matrix of the forecast field in question. By calculating the principal components, we obtain information on how each member of the ensemble forecast contributes to the obtained EOF distribution. Using fuzzy clustering, all members that show a related contribution are grouped together. This analysis thus allows us to extract possible development scenarios from the ensemble forecast while also gaining information about the probability of these scenarios. For our investigations, eight of the ten TIGGE EPS are used and interpolated to the same
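
    The EOF-plus-clustering procedure described above can be sketched in a few lines. The toy ensemble and the use of plain k-means (standing in for the fuzzy clustering of the study) are assumptions for illustration:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# Toy "ensemble forecast": 40 members x 500 grid points, two synthetic scenarios
members = np.vstack([rng.normal(+1.0, 1.0, (20, 500)),
                     rng.normal(-1.0, 1.0, (20, 500))])

anom = members - members.mean(axis=0)       # remove the ensemble mean
# EOFs are the right singular vectors of the anomaly matrix;
# the principal components give each member's contribution to them
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u[:, :2] * s[:2]                      # leading principal components

# Group members with related contributions (k-means stands in for fuzzy clustering)
centroids, labels = kmeans2(pcs, 2, seed=0, minit='++')
```

    Each cluster of members then represents one candidate development scenario, and its relative size indicates the probability of that scenario.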

  1. Exergy analysis, parametric analysis and optimization for a novel combined power and ejector refrigeration cycle

    International Nuclear Information System (INIS)

    A new combined power and refrigeration cycle is proposed, which combines the Rankine cycle and the ejector refrigeration cycle. This combined cycle produces both power output and refrigeration output simultaneously. It can be driven by the flue gas of a gas turbine or engine, by solar energy, by geothermal energy, or by industrial waste heat. An exergy analysis is performed to guide the thermodynamic improvement of this cycle, and a parametric analysis is conducted to evaluate the effects of the key thermodynamic parameters on the performance of the combined cycle. In addition, a parameter optimization is carried out by means of a genetic algorithm to reach the maximum exergy efficiency. The results show that the largest exergy loss due to irreversibility occurs in the heat addition processes, with the ejector causing the next-largest loss. It is also shown that the turbine inlet pressure, the turbine back pressure, the condenser temperature and the evaporator temperature have significant effects on the turbine power output, refrigeration output and exergy efficiency of the combined cycle. The optimized exergy efficiency is 27.10% under the given conditions.

  2. 3-D visualization of ensemble weather forecasts – Part 1: The visualization tool Met.3D (version 1.0)

    Directory of Open Access Journals (Sweden)

    M. Rautenhaus

    2015-02-01

    Full Text Available We present Met.3D, a new open-source tool for the interactive 3-D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns, but it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output – 3-D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 campaign.

  3. Sequential Combination Methods for Data Clustering Analysis

    Institute of Scientific and Technical Information of China (English)

    钱 涛; Ching Y.Suen; 唐远炎

    2002-01-01

    This paper proposes the use of more than one clustering method to improve clustering performance. Clustering is an optimization procedure based on a specific clustering criterion. Clustering combination can be regarded as a technique that constructs and processes multiple clustering criteria. Since the global and local clustering criteria are complementary rather than competitive, combining these two types of clustering criteria may enhance the clustering performance. In our past work, a multi-objective programming based simultaneous clustering combination algorithm has been proposed, which incorporates multiple criteria into an objective function by a weighting method, and solves this problem with constrained nonlinear optimization programming. But this algorithm has high computational complexity. Here a sequential combination approach is investigated, which first uses the global criterion based clustering to produce an initial result, then uses the local criterion based information to improve the initial result with a probabilistic relaxation algorithm or linear additive model. Compared with the simultaneous combination method, sequential combination has low computational complexity. Results on some simulated data and standard test data are reported. It appears that clustering performance improvement can be achieved at low cost through sequential combination.
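
    A minimal sketch of such a sequential combination, under illustrative assumptions: k-means serves as the global-criterion stage, and a nearest-neighbour majority vote stands in for the local probabilistic relaxation step.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
x = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(2.0, 0.3, (100, 2))])

# Stage 1: global-criterion clustering (k-means minimizes total within-cluster variance)
_, labels = kmeans2(x, 2, seed=0, minit='++')

# Stage 2: local refinement by relaxation: relabel each point to the
# majority label among its k nearest neighbours, for a few sweeps
tree = cKDTree(x)
_, nbrs = tree.query(x, k=6)                # each point plus 5 neighbours
for _ in range(3):
    labels = np.array([np.bincount(labels[row]).argmax() for row in nbrs])
```

    The refinement stage only touches points whose neighbourhood disagrees with their global assignment, which is why the sequential scheme stays cheap compared with simultaneous multi-criteria optimization.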

  4. Learning Outlier Ensembles

    DEFF Research Database (Denmark)

    Micenková, Barbora; McWilliams, Brian; Assent, Ira

    Years of research in unsupervised outlier detection have produced numerous algorithms to score data according to their exceptionality. However, the nature of outliers heavily depends on the application context and different algorithms are sensitive to outliers of different nature. This makes it very...... existing unsupervised algorithms. In this paper, we show how to use powerful machine learning approaches to combine labeled examples together with arbitrary unsupervised outlier scoring algorithms. We aim to get the best out of the two worlds—supervised and unsupervised. Our approach is also a viable...

  5. Multimodel ensembles of wheat growth

    DEFF Research Database (Denmark)

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold;

    2015-01-01

    such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24–38% for...

  6. Spectral Diagonal Ensemble Kalman Filters

    Czech Academy of Sciences Publication Activity Database

    Kasanický, Ivan; Mandel, Jan; Vejmelka, Martin

    2015-01-01

    Roč. 22, č. 4 (2015), s. 485-497. ISSN 1023-5809 R&D Projects: GA ČR GA13-34856S Grant ostatní: NSF(US) DMS-1216481 Institutional support: RVO:67985807 Keywords: data assimilation * ensemble Kalman filter * spectral representation Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 0.987, year: 2014

  7. Spectral Diagonal Ensemble Kalman Filters

    Czech Academy of Sciences Publication Activity Database

    Kasanický, Ivan; Mandel, Jan; Vejmelka, Martin

    2015-01-01

    Roč. 22, č. 4 (2015), s. 485-497. ISSN 1023-5809 R&D Projects: GA ČR GA13-34856S Grant ostatní: NSF(US) DMS-1216481 Institutional support: RVO:67985807 Keywords: data assimilation * ensemble Kalman filter * spectral representation Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 0.987, year: 2014

  8. Technical and financial analysis of combined cycle gas turbine

    OpenAIRE

    Khan Arshad Muhammad

    2013-01-01

    This paper presents technical and financial models which were developed in this study to predict the overall performance of combined cycle gas turbine plants in line with the needs of independent power producers in the liberalized market of the power sector. Three similar-sized combined cycle gas turbine power projects of up to 200 Megawatt by independent power producers in Pakistan were selected in order to develop and derive the basic assumptions for the inputs of the models in view of prev...

  9. Global Ensemble Forecast System (GEFS) [1 Deg.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...

  10. Hybrid ensemble 4DVar assimilation of stratospheric ozone using a global shallow water model

    Science.gov (United States)

    Allen, Douglas R.; Hoppel, Karl W.; Kuhl, David D.

    2016-07-01

    Wind extraction from stratospheric ozone (O3) assimilation is examined using a hybrid ensemble 4-D variational assimilation (4DVar) shallow water model (SWM) system coupled to the tracer advection equation. Stratospheric radiance observations are simulated using global observations of the SWM fluid height (Z), while O3 observations represent sampling by a typical polar-orbiting satellite. Four ensemble sizes were examined (25, 50, 100, and 1518 members), with the largest ensemble equal to the number of dynamical state variables. The optimal length scale for ensemble localization was found by tuning an ensemble Kalman filter (EnKF). This scale was then used for localizing the ensemble covariances that were blended with conventional covariances in the hybrid 4DVar experiments. Both optimal length scale and optimal blending coefficient increase with ensemble size, with optimal blending coefficients varying from 0.2-0.5 for small ensembles to 0.5-1.0 for large ensembles. The hybrid system outperforms conventional 4DVar for all ensemble sizes, while for large ensembles the hybrid produces similar results to the offline EnKF. Assimilating O3 in addition to Z benefits the winds in the hybrid system, with the fractional improvement in global vector wind increasing from ~35 % with 25 and 50 members to ~50 % with 1518 members. For the smallest ensembles (25 and 50 members), the hybrid 4DVar assimilation improves the zonal wind analysis over conventional 4DVar in the Northern Hemisphere (winter-like) region and also at the Equator, where Z observations alone have difficulty constraining winds due to lack of geostrophy. For larger ensembles (100 and 1518 members), the hybrid system results in both zonal and meridional wind error reductions, relative to 4DVar, across the globe.
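
    The covariance blending at the heart of such hybrid schemes can be sketched as follows. The Gaussian localization and the blending convention (alpha weighting the ensemble part) are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def hybrid_covariance(pb_clim, ensemble, alpha, loc):
    """Blend a static background covariance with a localized ensemble covariance.

    alpha: blending coefficient (0 = pure static, 1 = pure ensemble);
    loc:   localization matrix applied via a Schur (elementwise) product.
    """
    anom = ensemble - ensemble.mean(axis=0)
    pb_ens = anom.T @ anom / (ensemble.shape[0] - 1)
    return (1.0 - alpha) * pb_clim + alpha * (loc * pb_ens)

n = 50
rng = np.random.default_rng(0)
ens = rng.normal(size=(25, n))              # 25-member toy ensemble
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
loc = np.exp(-0.5 * (dist / 5.0) ** 2)      # Gaussian localization, 5-point scale
pb = hybrid_covariance(np.eye(n), ens, alpha=0.5, loc=loc)
```

    The tuning result reported above, larger optimal alpha and localization scale for larger ensembles, reflects the shrinking sampling error of pb_ens as members are added.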

  11. Quantum teleportation between remote atomic-ensemble quantum memories

    CERN Document Server

    Bao, Xiao-Hui; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-01-01

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel", quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of 100 million rubidium atoms and connected by a 150-meter optical fiber. The spinwave state of one atomic ensemble is mapped to a propagating photon, and subjected to Bell-state measurements with another single photon that is entangled with the spinwave state of the other ensemble. Two-photon detection events herald the succe...

  12. An ensemble of SVM classifiers based on gene pairs.

    Science.gov (United States)

    Tong, Muchenxuan; Liu, Kun-Hong; Xu, Chungui; Ju, Wenbin

    2013-07-01

    In this paper, a genetic algorithm (GA) based ensemble support vector machine (SVM) classifier built on gene pairs (GA-ESP) is proposed. The SVMs (base classifiers of the ensemble system) are trained on different informative gene pairs. These gene pairs are selected by the top scoring pair (TSP) criterion. Each of these pairs projects the original microarray expression onto a 2-D space. Extensive permutation of gene pairs may reveal more useful information and potentially lead to an ensemble classifier with satisfactory accuracy and interpretability. GA is further applied to select an optimized combination of base classifiers. The effectiveness of the GA-ESP classifier is evaluated on both binary-class and multi-class datasets. PMID:23668348
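
    The top scoring pair (TSP) selection step can be sketched as follows. The synthetic data and the brute-force ranking are an illustrative implementation of the TSP criterion, not the authors' code:

```python
import numpy as np

def top_scoring_pairs(x, y, n_pairs=5):
    """Rank gene pairs (i, j) by |P(x_i < x_j | class 0) - P(x_i < x_j | class 1)|."""
    genes = x.shape[1]
    scores = []
    for i in range(genes):
        for j in range(i + 1, genes):
            p0 = np.mean(x[y == 0, i] < x[y == 0, j])
            p1 = np.mean(x[y == 1, i] < x[y == 1, j])
            scores.append((abs(p0 - p1), i, j))
    scores.sort(reverse=True)
    return [(i, j) for _, i, j in scores[:n_pairs]]

rng = np.random.default_rng(0)
x = rng.normal(size=(60, 8))         # 60 samples x 8 "genes"
y = np.repeat([0, 1], 30)
x[y == 1, 0] += 3.0                  # make gene 0 informative: its rank order flips
pairs = top_scoring_pairs(x, y)
```

    Each selected pair then projects the expression data onto a 2-D space in which one base SVM is trained, and a GA searches over combinations of these base classifiers.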

  13. Enhanced Sampling in the Well-Tempered Ensemble

    Science.gov (United States)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi et al., J. Comput. Chem. 30, 1615 (2009), DOI 10.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  14. Ensembl Genomes 2013

    DEFF Research Database (Denmark)

    Kersey, Paul Julian; Allen, James E; Christensen, Mikkel;

    2014-01-01

    , and provides a complementary set of resources for non-vertebrate species through a consistent set of programmatic and interactive interfaces. These provide access to data including reference sequence, gene models, transcriptional data, polymorphisms and comparative analysis. This article provides an update...

  15. An adaptive additive inflation scheme for Ensemble Kalman Filters

    Science.gov (United States)

    Sommer, Matthias; Janjic, Tijana

    2016-04-01

    Data assimilation for atmospheric dynamics requires an accurate estimate of the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of the forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent. The additive schemes rely on samples of the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
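
    A basic additive inflation step of the kind discussed above can be sketched as follows. Drawing from a pre-computed bank of model-error samples is one common variant; centering the draws so the ensemble mean is unchanged is an illustrative choice:

```python
import numpy as np

def additive_inflation(ensemble, model_error_samples, scale, rng):
    """Add randomly drawn model-error samples to the ensemble perturbations."""
    k = ensemble.shape[0]
    draws = model_error_samples[rng.integers(0, len(model_error_samples), k)]
    draws = draws - draws.mean(axis=0)       # keep the ensemble mean unchanged
    return ensemble + scale * draws

rng = np.random.default_rng(0)
ens = rng.normal(size=(20, 40))              # 20 members, 40 state variables
# Hypothetical model-error bank, e.g. built from short-range forecast tendencies
err_bank = rng.normal(scale=0.3, size=(200, 40))
inflated = additive_inflation(ens, err_bank, scale=1.0, rng=rng)
```

    Because the draws come from a fixed, finite bank, they carry a fixed sampling error, which is exactly the limitation the proposed generalization aims to relax.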

  16. Ensemble Kalman filtering with residual nudging

    Directory of Open Access Journals (Sweden)

    Xiaodong Luo

    2012-10-01

    Full Text Available Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.

  17. Ensemble Kalman filtering with residual nudging

    KAUST Repository

    Luo, X.

    2012-10-03

    Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
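
    For linear observations, the residual-nudging rule described in this record can be sketched as below. The threshold form beta*sqrt(p)*r_std and the pseudo-inverse update direction are an illustrative reconstruction of the idea, not necessarily the paper's exact formula:

```python
import numpy as np

def residual_nudging(x_a, H, y, beta, r_std):
    """If the analysis residual norm exceeds the threshold, pull the analysis
    toward observation consistency so the new residual norm equals the threshold."""
    p = len(y)
    threshold = beta * np.sqrt(p) * r_std
    res = y - H @ x_a
    norm = np.linalg.norm(res)
    if norm <= threshold:
        return x_a                            # residual acceptable: keep the analysis
    c = threshold / norm
    # With full-row-rank H, H @ pinv(H) = I, so the new residual is c * res exactly
    return x_a + (1.0 - c) * np.linalg.pinv(H) @ res

rng = np.random.default_rng(0)
n, p = 10, 6
H = rng.normal(size=(p, n))                   # linear observation operator
x_a = rng.normal(size=n)                      # analysis state
y = rng.normal(size=p) * 5.0                  # deliberately inconsistent observations
x_new = residual_nudging(x_a, H, y, beta=1.0, r_std=1.0)
```

    Analyses whose residual is already below the bound pass through unchanged, so the technique only intervenes when the filter starts to diverge.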

  18. Novel algorithm for constructing support vector machine regression ensemble

    Institute of Scientific and Technical Information of China (English)

    Li Bo; Li Xinjun; Zhao Zhiyan

    2006-01-01

    A novel algorithm for constructing a support vector machine regression ensemble is proposed. For regression prediction, a support vector machine regression (SVMR) ensemble is built by resampling repeatedly from the given training data set and aggregating several independent SVMRs, each trained on a replicated training set. After training, the independently trained SVMRs need to be aggregated in an appropriate combination manner. Usually a linear weighting, like the expert weighting score in boosting regression, is used, but it has no optimization capacity. Three combination techniques are proposed: simple arithmetic mean, linear least-square-error weighting, and nonlinear hierarchical combining that uses another upper-layer SVMR to combine several lower-layer SVMRs. Finally, simulation experiments demonstrate the accuracy and validity of the presented algorithm.
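
    The first two combination strategies are easy to sketch. Here kernel ridge regression stands in for the SVMR base learners to keep the example self-contained; the bootstrap resampling, the arithmetic mean, and the linear least-squares weighting mirror the abstract, while the hierarchical variant is omitted for brevity:

```python
import numpy as np

def fit_krr(x, y, gamma=1.0, lam=1e-2):
    """Kernel ridge regression (stands in here for an SVMR base learner)."""
    k = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    alpha = np.linalg.solve(k + lam * np.eye(len(x)), y)
    return lambda xq: np.exp(-gamma * (xq[:, None] - x[None, :]) ** 2) @ alpha

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)

# Bagging: each base regressor sees a bootstrap replicate of the training set
models = []
for _ in range(5):
    idx = rng.integers(0, len(x), len(x))
    models.append(fit_krr(x[idx], y[idx]))

xq = np.linspace(-3, 3, 50)
preds = np.stack([m(xq) for m in models])
mean_pred = preds.mean(axis=0)                       # simple arithmetic mean

# Linear least-square-error weighting of the base learners
train_preds = np.stack([m(x) for m in models])
w, *_ = np.linalg.lstsq(train_preds.T, y, rcond=None)
ls_pred = w @ preds
```

    The least-squares weights are fitted on the training predictions, which is the "optimization capacity" that plain equal weighting lacks.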

  19. Enhancing COSMO-DE ensemble forecasts by inexpensive techniques

    Directory of Open Access Journals (Sweden)

    Zied Ben Bouallègue

    2013-02-01

    Full Text Available COSMO-DE-EPS, a convection-permitting ensemble prediction system based on the high-resolution numerical weather prediction model COSMO-DE, has been pre-operational since December 2010, providing probabilistic forecasts which cover Germany. This ensemble system comprises 20 members based on variations of the lateral boundary conditions, the physics parameterizations and the initial conditions. In order to increase the sample size in a computationally inexpensive way, COSMO-DE-EPS is combined with alternative ensemble techniques: the neighborhood method and the time-lagged approach. Their impact on the quality of the resulting probabilistic forecasts is assessed. Objective verification is performed over a six-month period; scores based on the Brier score and its decomposition are shown for June 2011. The combination of the ensemble system with the alternative approaches improves probabilistic forecasts of precipitation, in particular for high precipitation thresholds. Moreover, combining COSMO-DE-EPS with only the time-lagged approach improves the skill of area probabilities for precipitation and does not deteriorate the skill of 2 m-temperature and wind gust forecasts.

  20. Enhancing COSMO-DE ensemble forecasts by inexpensive techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ben Bouallegue, Zied; Theis, Susanne E.; Gebhardt, Christoph [Deutscher Wetterdienst, Offenbach am Main (Germany)

    2013-02-15

    COSMO-DE-EPS, a convection-permitting ensemble prediction system based on the high-resolution numerical weather prediction model COSMO-DE, has been pre-operational since December 2010, providing probabilistic forecasts which cover Germany. This ensemble system comprises 20 members based on variations of the lateral boundary conditions, the physics parameterizations and the initial conditions. In order to increase the sample size in a computationally inexpensive way, COSMO-DE-EPS is combined with alternative ensemble techniques: the neighborhood method and the time-lagged approach. Their impact on the quality of the resulting probabilistic forecasts is assessed. Objective verification is performed over a six-month period; scores based on the Brier score and its decomposition are shown for June 2011. The combination of the ensemble system with the alternative approaches improves probabilistic forecasts of precipitation, in particular for high precipitation thresholds. Moreover, combining COSMO-DE-EPS with only the time-lagged approach improves the skill of area probabilities for precipitation and does not deteriorate the skill of 2 m-temperature and wind gust forecasts. (orig.)

  1. Assessment of ENSEMBLES regional climate models for the representation of monthly wind characteristics in the Aegean Sea (Greece): Mean and extremes analysis

    Science.gov (United States)

    Anagnostopoulou, Christina; Tolika, Konstantia; Tegoulias, Ioannis; Velikou, Kondylia; Vagenas, Christos

    2013-04-01

    The main scope of the present study is the assessment of the ability of three of the most up-to-date regional climate models, developed under the frame of the European research project ENSEMBLES (http://www.ensembles-eu.org/), to simulate the wind characteristics in the Aegean Sea in Greece. The examined models are KNMI-RACMO2, MPI-M-REMO, and ICTP-RegCM3. They all have the same spatial resolution (25×25 km), and their future projections use the A1B SRES emission scenario. The simulated wind data (speed and direction) were compared with observational data from several stations over the domain of study for a period of 25 years, from 1980 to 2004, on a monthly basis. The primary data were available every three or six hours, from which we computed the mean daily wind speed and the prevailing daily wind direction. It should be mentioned that the comparison was made for the grid point closest to each station over land. Moreover, the extreme speed values were also calculated, both for the observational and the simulated data, in order to assess the ability of the models to capture the most intense wind conditions. The first results of the study show that the prevailing winds during the winter and spring months have a north-northeasterly or a south-southwesterly direction in most parts of the Aegean Sea. The examined models capture this pattern, as well as the general characteristics of the winds in this area, quite satisfactorily. During summer, winds in the Aegean Sea have a mainly northerly direction, and the models agree well in simulating both this direction and the wind speed. Concerning the extreme wind speeds (percentiles), it was found that for the stations in the northern Aegean all the models overestimate the extreme wind indices. For the eastern parts of the Aegean, the KNMI and MPI models underestimate the extreme wind speeds, while the ICTP model overestimates them. Finally for the

  2. The combinative analysis of spraying target image based on chroma

    Science.gov (United States)

    Huang, Jingyao; Zhang, Fajun

    2009-10-01

    Recently, intelligent spray systems with machine vision have become a research hotspot owing to their safety in application. This paper proposes the design of a novel spraying-target extraction system capable of identifying tree-crown structures, which are the main targets in the prevention and treatment of plant diseases and insect pests on urban tree lawns. To differentiate billboards on both sides of the street, especially green overhead billboard structures, from spray targets, the chroma parameters (three-primary-color factors) of spray targets and their combination characteristics were analyzed through normalization experiments. Comparative experiments verified the performance of the chroma combination operation based on 2G-R-B processing and showed that applying the normalization combination arithmetic before the simplification operator can effectively eliminate non-target regions of the image and segment the crown target from the background.
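    The 2G-R-B operation mentioned above is the classic excess-green (ExG) vegetation index. A minimal sketch of how such a chroma-based segmentation can be applied to an RGB image is shown below; the function name and threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def excess_green_mask(rgb, threshold=20):
    """Segment green vegetation via the excess-green index ExG = 2G - R - B.

    `rgb` is an (H, W, 3) uint8 image; the threshold is illustrative and
    would normally be tuned (or replaced by Otsu-style selection)."""
    # widen to a signed type so 2G - R - B cannot overflow uint8
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    exg = 2 * g - r - b
    return exg > threshold
```

A green pixel such as (10, 200, 10) yields ExG = 380 and is kept, while a gray billboard pixel (100, 100, 100) yields ExG = 0 and is suppressed.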

  3. Study on ETKF-Based Initial Perturbation Scheme for GRAPES Global Ensemble Prediction

    Institute of Scientific and Technical Information of China (English)

    MA Xulin; XUE Jishan; LU Weisong

    2009-01-01

    Initial perturbation scheme is one of the important problems for ensemble prediction. In this paper, an ensemble initial perturbation scheme for Global/Regional Assimilation and PrEdiction System (GRAPES) global ensemble prediction is developed in terms of the ensemble transform Kalman filter (ETKF) method. A new GRAPES global ensemble prediction system (GEPS) is also constructed. Spherical simplex 14-member ensemble prediction experiments, using a simulated observation network, the error characteristics of simulated observations, and innovation-based inflation, are carried out for about two months. The structural characteristics and perturbation amplitudes of the ETKF initial perturbations and the perturbation growth characteristics are analyzed, and their qualities and abilities as ensemble initial perturbations are assessed. The preliminary experimental results indicate that the ETKF-based GRAPES ensemble initial perturbations can identify the main normal structures of the analysis error variance and reflect the perturbation amplitudes. The initial perturbations and the spread are reasonable. The initial perturbation variance, which is approximately equal to the forecast error variance, is found to respond to changes in the observational spatial variations with simulated observational network density. The perturbations generated through the simplex method are also shown to exhibit a very high degree of consistency between initial analysis and short-range forecast perturbations. The appropriate growth and spread of ensemble perturbations can be maintained up to a 96-h lead time. The statistical results for 52-day ensemble forecasts show that the forecast scores of the ensemble average for the Northern Hemisphere are higher than those of the control forecast. With more ensemble members, a real-time observational network, and a more appropriate inflation factor, the ETKF-based initial scheme should perform even better.
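    The core of the ETKF is a transform of the background ensemble anomalies by a matrix built from the observation-space ensemble covariance. A generic sketch of that transform follows (dimensions, names, and the symmetric square-root form are textbook assumptions, not the GRAPES implementation).

```python
import numpy as np

def etkf_perturbations(Xb, HXb, R):
    """ETKF analysis perturbations (sketch).

    Xb:  (n, N) background ensemble in state space
    HXb: (m, N) background ensemble mapped to observation space
    R:   (m, m) observation-error covariance
    Returns the (n, N) analysis anomalies A @ T."""
    N = Xb.shape[1]
    A = Xb - Xb.mean(axis=1, keepdims=True)        # background anomalies
    Y = HXb - HXb.mean(axis=1, keepdims=True)      # obs-space anomalies
    S = Y.T @ np.linalg.solve(R, Y) / (N - 1)      # (N, N), symmetric PSD
    gamma, C = np.linalg.eigh(S)
    gamma = np.clip(gamma, 0.0, None)              # guard tiny negatives
    # symmetric square-root transform: T = C (I + Gamma)^{-1/2} C^T
    T = C @ np.diag(1.0 / np.sqrt(1.0 + gamma)) @ C.T
    return A @ T
```

By construction the transformed anomalies satisfy the ETKF analysis-covariance identity (A T)(A T)^T = A (I + S)^{-1} A^T.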

  4. Analysis of the Nonlinear Trends and Non-Stationary Oscillations of Regional Precipitation in Xinjiang, Northwestern China, Using Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Guo, Bin; Chen, Zhongsheng; Guo, Jinyun; Liu, Feng; Chen, Chuanfa; Liu, Kangli

    2016-03-01

    Changes in precipitation could have crucial influences on the regional water resources in arid regions such as Xinjiang. It is necessary to understand the intrinsic multi-scale variations of precipitation in different parts of Xinjiang in the context of climate change. In this study, based on precipitation data from 53 meteorological stations in Xinjiang during 1960-2012, we investigated the intrinsic multi-scale characteristics of precipitation variability using an adaptive method named ensemble empirical mode decomposition (EEMD). Obvious non-linear upward trends in precipitation were found in the north, south, east and the entire Xinjiang. Changes in precipitation in Xinjiang exhibited significant inter-annual (quasi-2- and quasi-6-year) and inter-decadal (quasi-12- and quasi-23-year) variability. Moreover, the 2-3-year quasi-periodic fluctuation was dominant in regional precipitation and the inter-annual variation had a considerable effect on the regional-scale precipitation variation in Xinjiang. We also found that there were distinctive spatial differences in variation trends and turning points of precipitation in Xinjiang. The results of this study indicated that compared to traditional decomposition methods, the EEMD method, without using any a priori determined basis functions, could effectively extract the reliable multi-scale fluctuations and reveal the intrinsic oscillation properties of climate elements. PMID:27007388
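    The noise-assisted idea behind EEMD can be sketched as follows: each ensemble member adds white noise to the signal, ordinary EMD sifting is run, and the resulting intrinsic mode functions (IMFs) are averaged across members. This is a deliberately simplified sketch; the stopping criteria and boundary handling are cruder than in production EEMD codes, and all names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _envelope(t, x, comparator):
    """Cubic-spline envelope through local extrema (endpoints anchored)."""
    idx = argrelextrema(x, comparator)[0]
    if len(idx) < 2:
        return None
    idx = np.concatenate(([0], idx, [len(x) - 1]))
    return CubicSpline(t[idx], x[idx])(t)

def emd(x, t, max_imfs=6, n_sifts=12):
    """Minimal EMD: peel off IMFs by repeatedly subtracting the local mean."""
    imfs, residue = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(n_sifts):
            up = _envelope(t, h, np.greater)
            lo = _envelope(t, h, np.less)
            if up is None or lo is None:
                break
            h = h - 0.5 * (up + lo)        # remove envelope mean
        if _envelope(t, h, np.greater) is None:  # residue near-monotonic
            break
        imfs.append(h)
        residue = residue - h
    return imfs, residue

def eemd(x, t, noise_frac=0.2, n_ensemble=50, max_imfs=6, seed=0):
    """Noise-assisted EEMD: average IMFs over noise-perturbed realizations."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((max_imfs, len(x)))
    for _ in range(n_ensemble):
        noisy = x + noise_frac * np.std(x) * rng.standard_normal(len(x))
        imfs, _ = emd(noisy, t, max_imfs=max_imfs)
        for k, imf in enumerate(imfs):
            acc[k] += imf
    return acc / n_ensemble
```

By construction the plain EMD is lossless: the IMFs plus the final residue sum back to the original series, which is what lets EEMD attribute trends and quasi-periodic modes to separate components.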

  5. Analysis of the Nonlinear Trends and Non-Stationary Oscillations of Regional Precipitation in Xinjiang, Northwestern China, Using Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Bin Guo

    2016-03-01

    Full Text Available Changes in precipitation could have crucial influences on the regional water resources in arid regions such as Xinjiang. It is necessary to understand the intrinsic multi-scale variations of precipitation in different parts of Xinjiang in the context of climate change. In this study, based on precipitation data from 53 meteorological stations in Xinjiang during 1960–2012, we investigated the intrinsic multi-scale characteristics of precipitation variability using an adaptive method named ensemble empirical mode decomposition (EEMD). Obvious non-linear upward trends in precipitation were found in the north, south, east and the entire Xinjiang. Changes in precipitation in Xinjiang exhibited significant inter-annual (quasi-2- and quasi-6-year) and inter-decadal (quasi-12- and quasi-23-year) variability. Moreover, the 2–3-year quasi-periodic fluctuation was dominant in regional precipitation and the inter-annual variation had a considerable effect on the regional-scale precipitation variation in Xinjiang. We also found that there were distinctive spatial differences in variation trends and turning points of precipitation in Xinjiang. The results of this study indicated that compared to traditional decomposition methods, the EEMD method, without using any a priori determined basis functions, could effectively extract the reliable multi-scale fluctuations and reveal the intrinsic oscillation properties of climate elements.

  6. Analysis of the Nonlinear Trends and Non-Stationary Oscillations of Regional Precipitation in Xinjiang, Northwestern China, Using Ensemble Empirical Mode Decomposition

    Science.gov (United States)

    Guo, Bin; Chen, Zhongsheng; Guo, Jinyun; Liu, Feng; Chen, Chuanfa; Liu, Kangli

    2016-01-01

    Changes in precipitation could have crucial influences on the regional water resources in arid regions such as Xinjiang. It is necessary to understand the intrinsic multi-scale variations of precipitation in different parts of Xinjiang in the context of climate change. In this study, based on precipitation data from 53 meteorological stations in Xinjiang during 1960–2012, we investigated the intrinsic multi-scale characteristics of precipitation variability using an adaptive method named ensemble empirical mode decomposition (EEMD). Obvious non-linear upward trends in precipitation were found in the north, south, east and the entire Xinjiang. Changes in precipitation in Xinjiang exhibited significant inter-annual (quasi-2- and quasi-6-year) and inter-decadal (quasi-12- and quasi-23-year) variability. Moreover, the 2–3-year quasi-periodic fluctuation was dominant in regional precipitation and the inter-annual variation had a considerable effect on the regional-scale precipitation variation in Xinjiang. We also found that there were distinctive spatial differences in variation trends and turning points of precipitation in Xinjiang. The results of this study indicated that compared to traditional decomposition methods, the EEMD method, without using any a priori determined basis functions, could effectively extract the reliable multi-scale fluctuations and reveal the intrinsic oscillation properties of climate elements. PMID:27007388

  7. COMBINING ABILITY ANALYSIS OF YIELD COMPONENTS IN CUCUMBER

    Science.gov (United States)

    Three U.S. adapted Cucumis sativus var. sativus L. lines and one C. sativus var. hardwickii (R.) Alef. derived line were mated in a half-diallel design to determine their relative combining ability for several yield-related traits (yield components). The resulting six F1 progenies were evaluated in a rando...

  8. Repeater Analysis for Combining Information from Different Assessments

    Science.gov (United States)

    Haberman, Shelby; Yao, Lili

    2015-01-01

    Admission decisions frequently rely on multiple assessments. As a consequence, it is important to explore rational approaches to combine the information from different educational tests. For example, U.S. graduate schools usually receive both TOEFL iBT® scores and GRE® General scores of foreign applicants for admission; however, little guidance…

  9. Analysis of induction machines with combined stator windings

    Czech Academy of Sciences Publication Activity Database

    Schreier, Luděk; Bendl, Jiří; Chomát, Miroslav

    2015-01-01

    Roč. 60, č. 2 (2015), s. 155-171. ISSN 0001-7043 R&D Projects: GA ČR GA13-35370S Institutional support: RVO:61388998 Keywords : induction machines * symmetrical components * combined stator winding Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  10. NASA combined pulsed neutron experiment for bulk elemental analysis

    International Nuclear Information System (INIS)

    All the component parts of the prototype Combined Pulsed Neutron Experiment system were completed during 1976 and the components fabricated elsewhere were shipped to Idaho National Engineering Laboratory for integration into the system. The component parts were assembled and tested, and the software was checked out and debugged

  11. Combining Formal Logic and Machine Learning for Sentiment Analysis

    DEFF Research Database (Denmark)

    Petersen, Niklas Christoffer; Villadsen, Jørgen

    2014-01-01

    This paper presents a formal logical method for deep structural analysis of the syntactical properties of texts using machine learning techniques for efficient syntactical tagging. To evaluate the method it is used for entity level sentiment analysis as an alternative to pure machine learning...

  12. The Impact of Mesoscale Environmental Uncertainty on the Prediction of a Tornadic Supercell Storm Using Ensemble Data Assimilation Approach

    Directory of Open Access Journals (Sweden)

    Nusrat Yussouf

    2013-01-01

    Full Text Available Numerical experiments over the past years indicate that incorporating environmental variability is crucial for successful very short-range convective-scale forecasts. To explore the impact of model physics on the creation of environmental variability and its uncertainty, combined mesoscale-convective scale data assimilation experiments are conducted for a tornadic supercell storm. Two 36-member WRF-ARW model-based mesoscale EAKF experiments are conducted to provide background environments using either fixed or multiple physics schemes across the ensemble members. Two 36-member convective-scale ensembles are initialized using background fields from either the fixed-physics or the multiple-physics mesoscale ensemble analyses. Radar observations from four operational WSR-88Ds are assimilated into the convective-scale ensembles using the ARPS model-based 3DVAR system, and ensemble forecasts are launched. Results show that the ensemble with background fields from the multiple-physics ensemble provides more realistic forecasts of the significant tornado parameter, dryline structure, and near-surface variables than the ensemble with fixed-physics background fields. The probabilities of strong low-level updraft helicity from the multiple-physics ensemble correlate better with observed tornado and rotation tracks than the probabilities from the fixed-physics ensemble. This suggests that incorporating physics diversity across the ensemble can be important to a successful probabilistic convective-scale forecast of supercell thunderstorms, which is the main goal of NOAA’s Warn-on-Forecast initiative.

  13. Symanzik flow on HISQ ensembles

    CERN Document Server

    Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R

    2013-01-01

    We report on a scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The lattice scale $w_0/a$, originally proposed by the BMW collaboration, is computed using Symanzik flow at four lattice spacings ranging from 0.15 to 0.06 fm. With a Taylor series ansatz, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We give a preliminary determination of the scale $w_0$ in physical units, along with associated systematic errors, and compare with results from other groups. We also present a first estimate of autocorrelation lengths as a function of flowtime for these ensembles.

  14. Manufacturability analysis to combine additive and subtractive processes

    OpenAIRE

    Kerbrat, Olivier; Mognol, Pascal; Hascoët, Jean-Yves

    2010-01-01

    Purpose - The purpose of this paper is to propose a methodology to estimate manufacturing complexity for both machining and layered manufacturing. The goal is to take into account manufacturing constraints at design stage in order to realize tools (dies and molds) by a combination of a subtractive process (high-speed machining) and an additive process (selective laser sintering). Design/methodology/approach - Manufacturability indexes are defined and calculated from ...

  15. State Ensembles and Quantum Entropy

    Science.gov (United States)

    Kak, Subhash

    2016-06-01

    This paper considers quantum communication involving an ensemble of states. Apart from the von Neumann entropy, it considers other measures one of which may be useful in obtaining information about an unknown pure state and another that may be useful in quantum games. It is shown that under certain conditions in a two-party quantum game, the receiver of the states can increase the entropy by adding another pure state.

  16. Simple Deep Random Model Ensemble

    OpenAIRE

    ZHANG, XIAO-LEI; Wu, Ji

    2013-01-01

    Representation learning and unsupervised learning are two central topics of machine learning and signal processing. Deep learning is one of the most effective unsupervised representation learning approaches. The main contributions of this paper to these topics are as follows. (i) We propose to view the representative deep learning approaches as special cases of the knowledge reuse framework of clustering ensemble. (ii) We propose to view sparse coding when used as a feature encoder as the consens...

  17. Random matrix ensembles with column/row constraints: I

    International Nuclear Information System (INIS)

    We analyze the statistical properties of a complex system subjected to conditions which manifest themselves through specific constraints on the column/row sums of the matrix elements of its Hermitian operators. The presence of additional constraints besides the real-symmetric nature leads to new correlations among their eigenfunctions, hinders a complete delocalization of dynamics, and affects the eigenvalues too. The statistical analysis of the latter indicates the presence of a new universality class analogous to that of a special type of Brownian ensemble appearing between the Poisson and Gaussian orthogonal ensembles. (paper)

  18. An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter

    KAUST Repository

    Song, Hajoon

    2010-07-01

    A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
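    The EnKF analysis step that the residual back projection augments can be sketched in a few lines. This is a textbook perturbed-observations (stochastic) EnKF formulation, not the authors' implementation; all names and dimensions are illustrative.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    Xf: (n, N) forecast ensemble   y: (m,) observation vector
    H:  (m, n) linear obs operator R: (m, m) obs-error covariance"""
    n, N = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample background cov
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # perturb the observation once per member so the analysis spread
    # matches the Kalman posterior covariance on average
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Y - H @ Xf)
```

With an undersampled ensemble, Pf misses directions of the true error; the adaptive scheme above addresses exactly this by appending OI-corrected members rather than inflating all of them.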

  19. Generalized Hypergeometric Ensembles: Statistical Hypothesis Testing in Complex Networks

    CERN Document Server

    Casiraghi, Giona; Scholtes, Ingo; Schweitzer, Frank

    2016-01-01

    Statistical ensembles define probability spaces of all networks consistent with given aggregate statistics and have become instrumental in the analysis of relational data on networked systems. Their numerical and analytical study provides the foundation for the inference of topological patterns, the definition of network-analytic measures, as well as for model selection and statistical hypothesis testing. Contributing to the foundation of these important data science techniques, in this article we introduce generalized hypergeometric ensembles, a framework of analytically tractable statistical ensembles of finite, directed and weighted networks. This framework can be interpreted as a generalization of the classical configuration model, which is commonly used to randomly generate networks with a given degree sequence or distribution. Our generalization rests on the introduction of dyadic link propensities, which capture the degree-corrected tendencies of pairs of nodes to form edges between each other. Studyin...

  20. Photon scattering from strongly driven atomic ensembles

    CERN Document Server

    Jin, Lu-ling; Macovei, Mihai

    2011-01-01

    The second order correlation function for light emitted from a strongly and near-resonantly driven dilute cloud of atoms is discussed. Because of the strong driving, the fluorescence spectrum separates into distinct peaks, for which the spectral properties can be defined individually. It is shown that the second-order correlations for various combinations of photons from different spectral lines exhibit bunching together with super- or sub-Poissonian photon statistics, tunable by the choice of the detector positions. Additionally, a Cauchy-Schwarz inequality is violated for photons emitted from particular spectral bands. The emitted light intensity is proportional to the square of the number of particles, and thus can potentially be intense. Three different averaging procedures to model ensemble disorder are compared.

  1. Ensemble Kalman Filtering without a Model

    Science.gov (United States)

    Hamilton, Franz; Berry, Tyrus; Sauer, Timothy

    2016-01-01

    Methods of data assimilation are established in physical sciences and engineering for the merging of observed data with dynamical models. When the model is nonlinear, methods such as the ensemble Kalman filter have been developed for this purpose. At the other end of the spectrum, when a model is not known, the delay coordinate method introduced by Takens has been used to reconstruct nonlinear dynamics. In this article, we merge these two important lines of research. A model-free filter is introduced based on the filtering equations of Kalman and the data-driven modeling of Takens. This procedure replaces the model with dynamics reconstructed from delay coordinates, while using the Kalman update formulation to reconcile new observations. We find that this combination of approaches results in comparable efficiency to parametric methods in identifying underlying dynamics, and may actually be superior in cases of model error.
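    The delay-coordinate reconstruction at the heart of this model-free filter can be sketched as a simple Takens embedding; the embedding dimension and lag below are user choices, not values from the article.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay-coordinate reconstruction of a scalar time series.

    Returns an array of shape (len(x) - (dim - 1) * tau, dim) whose row i
    is the delay vector [x[i], x[i + tau], ..., x[i + (dim - 1) * tau]]."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])
```

The filter then runs the Kalman update in this reconstructed space, with local neighbors of the current delay vector standing in for the missing dynamical model.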

  2. Optimal Spatial Prediction Using Ensemble Machine Learning.

    Science.gov (United States)

    Davies, Molly Margaret; van der Laan, Mark J

    2016-05-01

    Spatial prediction is an important problem in many scientific disciplines. Super Learner is an ensemble prediction approach related to stacked generalization that uses cross-validation to search for the optimal predictor amongst all convex combinations of a heterogeneous candidate set. It has been applied to non-spatial data, where theoretical results demonstrate it will perform asymptotically at least as well as the best candidate under consideration. We review these optimality properties and discuss the assumptions required in order for them to hold for spatial prediction problems. We present results of a simulation study confirming Super Learner works well in practice under a variety of sample sizes, sampling designs, and data-generating functions. We also apply Super Learner to a real world dataset. PMID:27130244
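    The cross-validated search over convex combinations can be illustrated for the two-candidate case. This is a toy sketch of the Super Learner idea with made-up learners (a least-squares line and a constant mean), not the actual Super Learner software; a real implementation handles arbitrary candidate sets and solves the weight optimization rather than grid-searching it.

```python
import numpy as np

def fit_linear(Xtr, ytr):
    """Candidate A: least-squares line (illustrative learner)."""
    a, b = np.polyfit(Xtr, ytr, 1)
    return lambda Xte: a * Xte + b

def fit_mean(Xtr, ytr):
    """Candidate B: constant mean predictor (illustrative learner)."""
    m = ytr.mean()
    return lambda Xte: np.full(len(Xte), m)

def cv_weight_two_learners(X, y, fit_a, fit_b, n_folds=5, grid=101, seed=0):
    """Pick the convex weight w for w*A + (1-w)*B that minimizes
    K-fold cross-validated squared error."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    pa, pb = np.empty(len(y)), np.empty(len(y))
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        pa[test] = fit_a(X[train], y[train])(X[test])   # out-of-fold preds
        pb[test] = fit_b(X[train], y[train])(X[test])
    ws = np.linspace(0.0, 1.0, grid)
    losses = [np.mean((y - (w * pa + (1 - w) * pb)) ** 2) for w in ws]
    return ws[int(np.argmin(losses))]
```

On perfectly linear data the cross-validated loss is minimized by putting all weight on the linear candidate, matching the oracle-selection property the paper reviews.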

  3. Combined photon-neutron radiography for nondestructive analysis of materials

    International Nuclear Information System (INIS)

    Combined photon-neutron radiography was investigated as a nondestructive method to determine the shape and material composition of complex objects. A system consisting of photon and neutron sources in a cone-beam configuration and a 2D detector array was modeled using the MCNP5 code. Photon-to-neutron transmission ratios were determined for a car engine using 0.1, 0.5, and 2.5 MeV neutrons and 0.2, 0.5, and 1 MeV photons. By focusing on the inherent differences between neutron and photon interactions with matter, it was possible to classify materials within the scanned object. (author)

  4. A hybrid nudging-ensemble Kalman filter approach to data assimilation. Part I: application in the Lorenz system

    Directory of Open Access Journals (Sweden)

    Lili Lei

    2012-05-01

    Full Text Available A hybrid data assimilation approach combining nudging and the ensemble Kalman filter (EnKF) for dynamic analysis and numerical weather prediction is explored here using the non-linear Lorenz three-variable model system with the goal of a smooth, continuous and accurate data assimilation. The hybrid nudging-EnKF (HNEnKF) computes the hybrid nudging coefficients from the flow-dependent, time-varying error covariance matrix from the EnKF's ensemble forecasts. It extends the standard diagonal nudging terms to additional off-diagonal statistical correlation terms for greater inter-variable influence of the innovations in the model's predictive equations to assist in the data assimilation process. The HNEnKF promotes a better fit of an analysis to data compared to that achieved by either nudging or incremental analysis update (IAU). When model error is introduced, it produces similar or better root mean square errors compared to the EnKF while minimising the error spikes/discontinuities created by the intermittent EnKF. It provides a continuous data assimilation with better inter-variable consistency and improved temporal smoothness than that of the EnKF. Data assimilation experiments are also compared to the ensemble Kalman smoother (EnKS). The HNEnKF has similar or better temporal smoothness than that of the EnKS, and with much smaller central processing unit (CPU) time and data storage requirements.

  5. Bayesian Model Averaging for Ensemble-Based Estimates of Solvation Free Energies

    CERN Document Server

    Gosink, Luke J; Reehl, Sarah M; Whitney, Paul D; Mobley, David L; Baker, Nathan A

    2016-01-01

    This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small-molecule solvation free energies. There is a wide range of methods for predicting solvation free energies, from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods is based on a set of conceptual assumptions that can affect its predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the SAMPL4 blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble and the evaluation process repeats until aggregate predictive performance can no longer be improv...

  6. Optimally choosing small ensemble members to produce robust climate simulations

    International Nuclear Information System (INIS)

    This study examines the subset climate model ensemble size required to reproduce certain statistical characteristics from a full ensemble. The ensemble characteristics examined are the root mean square error, the ensemble mean and standard deviation. Subset ensembles are created using measures that consider the simulation performance alone or include a measure of simulation independence relative to other ensemble members. It is found that the independence measure is able to identify smaller subset ensembles that retain the desired full ensemble characteristics than either of the performance based measures. It is suggested that model independence be considered when choosing ensemble subsets or creating new ensembles. (letter)

  7. Combining microsimulation and spatial interaction models for retail location analysis

    Science.gov (United States)

    Nakaya, Tomoki; Fotheringham, A. Stewart; Hanaoka, Kazumasa; Clarke, Graham; Ballas, Dimitris; Yano, Keiji

    2007-12-01

    Although the disaggregation of consumers is crucial in understanding the fragmented markets that are dominant in many developed countries, it is not always straightforward to carry out such disaggregation within conventional retail modelling frameworks due to the limitations of data. In particular, consumer grouping based on sampled data is not assured to link with the other statistics that are vital in estimating sampling biases and missing variables in the sampling survey. To overcome this difficulty, we propose a useful combination of spatial interaction modelling and microsimulation approaches for the reliable estimation of retail interactions based on a sample survey of consumer behaviour being linked with other areal statistics. We demonstrate this approach by building an operational retail interaction model to estimate expenditure flows from households to retail stores in a local city in Japan, Kusatsu City.

  8. An estimation of Erinaceidae phylogeny: a combined analysis approach.

    Directory of Open Access Journals (Sweden)

    Kai He

    Full Text Available BACKGROUND: Erinaceidae is a family of small mammals that includes the spiny hedgehogs (Erinaceinae) and the silky-furred moonrats and gymnures (Galericinae). These animals are widely distributed across Eurasia and Africa, from the tundra to the tropics and from the deserts to damp forests. The importance of these animals lies in the fact that they are the oldest known living placental mammals and are well represented in the fossil record, a rarity given their size and vulnerability to destruction during fossilization. Although the family has been well studied, its phylogenetic relationships remain controversial. To test previous phylogenetic hypotheses, we combined molecular and morphological data sets, including representatives of all the genera. METHODOLOGY AND PRINCIPAL FINDINGS: The analyses included 3,218 bp of mitochondrial genes, one hundred and thirty-five morphological characters, twenty-two extant erinaceid taxa, and five outgroup taxa. Phylogenetic relationships were reconstructed using both partitioned and combined data sets. As in previous analyses, our results strongly support the monophyly of both subfamilies (Galericinae and Erinaceinae), the Hylomys group (to include Neotetracus and Neohylomys), and a sister relationship of Atelerix and Erinaceus. As well, we verified that the extremely long branch lengths within the Galericinae are consistent with their fossil records. Not surprisingly, we found significant incongruence between the phylogenetic signals of the genes and the morphological characters, specifically in the case of Hylomys parvus, Mesechinus, and relationships between Hemiechinus and Paraechinus. CONCLUSIONS: Although we discovered new clues to understanding the evolutionary relationships within the Erinaceidae, our results nonetheless strongly suggest that more robust analyses employing more complete taxon sampling (to include fossils) and multiple unlinked genes would greatly enhance our understanding of the

  9. FMRI group analysis combining effect estimates and their variances.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W

    2012-03-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach ...
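
The core idea of weighting each subject's effect estimate by its precision can be sketched as a simple inverse-variance combination (a minimal illustration with made-up names; the actual MEMA estimator additionally models cross-subject variance and outliers):

```python
import numpy as np

def weighted_group_effect(betas, variances):
    """Inverse-variance (precision) weighting of per-subject effect estimates
    for one voxel: subjects with precise estimates count more toward the
    group estimate."""
    betas = np.asarray(betas, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # precision weights
    mean = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))                  # standard error of the weighted mean
    return mean, se, mean / se

# The subject with variance 0.1 pulls the group estimate toward its value.
mean, se, z = weighted_group_effect([1.0, 2.0, 3.0], [0.1, 1.0, 1.0])
```

A conventional unweighted average would give 2.0 here; the precision weighting yields 1.25, dominated by the low-variance subject.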

  10. FMRI group analysis combining effect estimates and their variances

    OpenAIRE

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Michael S Beauchamp; Cox, Robert W.

    2011-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an a...

  11. Data assimilation in integrated hydrological modeling using ensemble Kalman filtering: evaluating the effect of ensemble size and localization on filter performance

    Directory of Open Access Journals (Sweden)

    J. Rasmussen

    2015-02-01

    Full Text Available Groundwater head and stream discharge are assimilated using the Ensemble Transform Kalman Filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members, an adaptive localization method is used. The performance of the adaptive localization method is compared to the more common local analysis localization. The relationship between filter performance in terms of hydraulic head and discharge error and the number of ensemble members is investigated for varying numbers and spatial distributions of groundwater head observations and with or without discharge assimilation and parameter estimation. The study shows that (1) more ensemble members are needed when fewer groundwater head observations are assimilated, and (2) assimilating discharge observations and estimating parameters requires a much larger ensemble size than just assimilating groundwater head observations. However, the required ensemble size can be greatly reduced with the use of adaptive localization, which by far outperforms local analysis localization.
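
As a rough illustration of why localization lets small ensembles work, the following numpy sketch applies a Gaussian covariance taper in a stochastic EnKF update (all names are illustrative; the paper's adaptive localization is more sophisticated than this fixed taper):

```python
import numpy as np

rng = np.random.default_rng(0)

def localized_enkf_update(X, y, obs_idx, r_obs, loc_radius):
    """Stochastic (perturbed-observation) EnKF update with a Gaussian
    covariance taper applied elementwise (Schur product) to the sample
    covariance, damping spurious long-range correlations.

    X          : (n_state, n_members) forecast ensemble
    y          : (n_obs,) observed values at state indices obs_idx
    r_obs      : observation error variance
    loc_radius : localization length scale in grid units
    """
    n, m = X.shape
    k = len(obs_idx)
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    taper = np.exp(-0.5 * (dist / loc_radius) ** 2)
    P = np.cov(X) * taper                          # tapered sample covariance
    H = np.zeros((k, n))
    H[np.arange(k), obs_idx] = 1.0                 # point-observation operator
    S = H @ P @ H.T + r_obs * np.eye(k)
    K = P @ H.T @ np.linalg.inv(S)                 # localized Kalman gain
    Y = y[:, None] + np.sqrt(r_obs) * rng.standard_normal((k, m))
    return X + K @ (Y - H @ X)
```

With loc_radius small relative to the domain, sampling noise in long-range covariances from a small ensemble is suppressed, which is the effect the paper exploits to reduce the required ensemble size.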

  12. Thermodynamic Analysis of Ethanol Dry Reforming: Effect of Combined Parameters

    OpenAIRE

    Ganesh R. Kale; Gaikwad, Tejas M.

    2014-01-01

    The prospect of the ethanol dry reforming process to utilize CO2 for conversion to hydrogen, syngas, and carbon nanofilaments, using the abundantly available biofuel ethanol and the widely available environmental pollutant CO2, is highly promising. A thermodynamic analysis of the ethanol CO2 reforming process is done using Gibbs free energy minimization methodology within the temperature range 300–900°C, 1–10 bar pressure, and CO2 to carbon (in ethanol) ratio (CCER) 1–5. The effect of individual as well as c...

  13. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
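
The PF-like weight update and a moment-preserving deterministic resampling step can be sketched in one dimension as follows (illustrative names; the actual EnGMF operates on Gaussian mixtures built from the full ensemble with a kernel bandwidth):

```python
import numpy as np

def weight_update(ens, y, h, r_var):
    """PF-like step: reweight ensemble members by the Gaussian likelihood
    of the observation y under the (possibly nonlinear) operator h."""
    loglik = -0.5 * (y - h(ens)) ** 2 / r_var
    w = np.exp(loglik - loglik.max())        # subtract max for numerical stability
    return w / w.sum()

def moment_preserving_resample(ens, w):
    """Deterministic resampling sketch (1-D): shift and rescale the members so
    the new, equally weighted ensemble matches the weighted mean and variance."""
    mu = np.sum(w * ens)
    var = np.sum(w * (ens - mu) ** 2)
    centered = ens - ens.mean()
    return mu + np.sqrt(var / centered.var()) * centered
```

Matching the first two moments deterministically, instead of drawing random particles, is the ingredient the authors use to limit sampling error with small ensembles.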

  14. Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time

    KAUST Repository

    Kelly, D. T B

    2014-09-22

    The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a useable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so for small ensemble size. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, without and with variance inflation. Without variance inflation well-posedness of the filter is established; with variance inflation accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and the assumptions about it, are sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier-Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier-Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise.
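
The multiplicative variance inflation analyzed in the paper simply rescales the ensemble spread about its mean before the update; a minimal sketch (illustrative name, not the paper's code):

```python
import numpy as np

def inflate(X, rho):
    """Multiplicative variance inflation: spread the ensemble about its mean
    by the factor rho, so member variances scale by rho**2.

    X : (n_state, n_members) ensemble matrix
    """
    xbar = X.mean(axis=1, keepdims=True)
    return xbar + rho * (X - xbar)
```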

  15. Ensemble Methods Foundations and Algorithms

    CERN Document Server

    Zhou, Zhi-Hua

    2012-01-01

    An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
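
As a taste of the material, bagging with majority voting, one of the core algorithms the book covers, can be sketched with decision stumps on a single feature (illustrative toy code, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_stump(x, y):
    """Pick the threshold on a 1-D feature minimizing training error."""
    best_err, best_t = np.inf, 0.0
    for t in np.unique(x):
        err = np.mean((x > t) != y)
        if err < best_err:
            best_err, best_t = err, t
    return best_t

def bagged_predict(x_train, y_train, x_test, n_models=25):
    """Bagging: train each stump on a bootstrap resample of the training
    set, then combine the stumps' predictions by majority vote."""
    votes = np.zeros(len(x_test))
    for _ in range(n_models):
        idx = rng.integers(0, len(x_train), len(x_train))  # bootstrap sample
        t = train_stump(x_train[idx], y_train[idx])
        votes += (x_test > t)
    return (votes > n_models / 2).astype(int)
```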

  16. Characterizing RNA ensembles from NMR data with kinematic models

    DEFF Research Database (Denmark)

    Fonseca, Rasmus; Pachov, Dimitar V.; Bernauer, Julie;

    2014-01-01

    Functional mechanisms of biomolecules often manifest themselves precisely in transient conformational substates. Researchers have long sought to structurally characterize dynamic processes in non-coding RNA, combining experimental data with computer algorithms. However, adequate exploration of ... the conformational landscapes of 3D RNA encoded by NMR proton chemical shifts. KGSrna resolves motionally averaged NMR data into structural contributions; when coupled with residual dipolar coupling data, a KGSrna ensemble revealed a previously uncharacterized transient excited state of the HIV-1 trans-activation response element stem-loop. Ensemble-based interpretations of averaged data can aid in formulating and testing dynamic, motion-based hypotheses of functional mechanisms in RNAs with broad implications for RNA engineering and therapeutic intervention.

  17. Ensemble learning approaches to predicting complications of blood transfusion.

    Science.gov (United States)

    Murphree, Dennis; Ngufor, Che; Upadhyaya, Sudhindra; Madde, Nagesh; Clifford, Leanne; Kor, Daryl J; Pathak, Jyotishman

    2015-08-01

    Of the 21 million blood components transfused in the United States during 2011, approximately 1 in 414 resulted in a complication [1]. Two complications in particular, transfusion-related acute lung injury (TRALI) and transfusion-associated circulatory overload (TACO), are especially concerning. These two alone accounted for 62% of reported transfusion-related fatalities in 2013 [2]. We have previously developed a set of machine learning base models for predicting the likelihood of these adverse reactions, with the goal of better informing the clinician prior to a transfusion decision. Here we describe recent work incorporating ensemble learning approaches to predicting TACO/TRALI. In particular we describe combining base models via majority voting, stacking of model sets with varying diversity, as well as a resampling/boosting combination algorithm called RUSBoost. We find that while the performance of many models is very good, the ensemble models do not yield significantly better performance in terms of AUC. PMID:26737958
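
Stacking, one of the combination schemes mentioned, trains a meta-learner on held-out base-model predictions; a minimal least-squares sketch (illustrative names; the study used richer base models and meta-learners):

```python
import numpy as np

def fit_stacker(base_preds, y):
    """Least-squares meta-learner over held-out base-model probabilities.

    base_preds : (n_samples, n_models) out-of-fold predictions
    y          : binary labels (0/1)
    """
    A = np.column_stack([base_preds, np.ones(len(y))])   # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stack_predict(base_preds, coef):
    """Blend the base predictions with the learned weights, threshold at 0.5."""
    A = np.column_stack([base_preds, np.ones(len(base_preds))])
    return (A @ coef > 0.5).astype(int)
```

The meta-learner automatically down-weights uninformative base models, which is the property that makes stacking attractive when base models vary in diversity and quality.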

  18. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that earlier the detection of abnormalities, better the improvement in survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported with Computer Aided Diagnostic (CAD systems help the radiologists in taking reliable decisions. The proposed CAD system extracts wavelet features and spectral features for the better classification of mammograms. The Support Vector Machines classifier is used to analyze 206 mammogram images from Mias database pertaining to the severity of abnormality, i.e., benign and malign. The proposed system gives 93.14% accuracy for discrimination between normal-malign and 87.25% accuracy for normal-benign samples and 89.22% accuracy for benign-malign samples. The study reveals that features extracted in hybrid transform domain with SVM classifier proves to be a promising tool for analysis of mammograms.

  19. Combined QCD and electroweak analysis of HERA data

    CERN Document Server

    AUTHOR|(CDS)2075585; Adamczyk, L; Adamus, M; Antonelli, S; Aushev, V; Behnke, O; Behrens, U; Bertolin, A; Bloch, I; Boos, EG; Brock, I; Brook, NH; Brugnera, R; Bruni, A; Bussey, PJ; Caldwell, A; Capua, M; Catterall, CD; Chwastowski, J; Ciborowski, J; Ciesielski, R; Cooper-Sarkar, AM; Corradi, M; Dementiev, RK; Devenish, RCE; Dusini, S; Foster, B; Gach, G; Gallo, E; Garfagnini, A; Geiser, A; Gizhko, A; Gladilin, LK; Golubkov, Yu A; Grzelak, G; Guzik, M; Hain, W; Hochman, D; Hori, R; Ibrahim, ZA; Iga, Y; Ishitsuka, M; Januschek, F; Jomhari, NZ; Kadenko, I; Kananov, S; Karshon, U; Kaur, P; Kisielewska, D; Klanner, R; Klein, U; Korzhavina, IA; Kotański, A; Kötz, U; Kovalchuk, N; Kowalski, H; Krupa, B; Kuprash, O; Kuze, M; Levchenko, BB; Levy, A; Limentani, S; Lisovyi, M; Lobodzinska, E; Löhr, B; Lohrmann, E; Longhin, A; Lontkovskyi, D; Lukina, OYu; Makarenko, I; Malka, J; Mohamad Idris, F; Mohammad Nasir, N; Myronenko, V; Nagano, K; Nobe, T; Nowak, RJ; Onishchuk, Yu; Paul, E; Perlański, W; Pokrovskiy, NS; Przybycien, M; Roloff, P; Ruspa, M; Saxon, DH; Schioppa, M; Schneekloth, U; Schörner-Sadenius, T; Shcheglova, LM; Shevchenko, R; Shkola, O; Shyrma, Yu; Singh, I; Skillicorn, IO; Słomiński, W; Solano, A; Stanco, L; Stefaniuk, N; Stern, A; Stopa, P; Sztuk-Dambietz, J; Tassi, E; Tokushuku, K; Tomaszewska, J; Tsurugai, T; Turcato, M; Turkot, O; Tymieniecka, T; Verbytskyi, A; Wan Abdullah, WAT; Wichmann, K; Wing, M; Yamada, S; Yamazaki, Y; Zakharchuk, N; Żarnecki, AF; Zawiejski, L; Zenaiev, O; Zhautykov, BO; Zotkin, DS; Bhadra, S; Gwenlan, C; Hlushchenko, O; Polini, A; Mastroberardino, A

    2016-01-01

    A simultaneous fit of parton distribution functions (PDFs) and electroweak parameters to HERA data on deep inelastic scattering is presented. The input data are the neutral current and charged current inclusive cross sections which were previously used in the QCD analysis leading to the HERAPDF2.0 PDFs. In addition, the polarisation of the electron beam was taken into account for the ZEUS data recorded between 2004 and 2007. Results on the vector and axial-vector couplings of the Z boson to u- and d-type quarks, on the value of the electroweak mixing angle and the mass of the W boson are presented. The values obtained for the electroweak parameters are in agreement with Standard Model predictions.

  20. Combined QCD and electroweak analysis of HERA data

    Science.gov (United States)

    Abramowicz, H.; Abt, I.; Adamczyk, L.; Adamus, M.; Antonelli, S.; Aushev, V.; Behnke, O.; Behrens, U.; Bertolin, A.; Bhadra, S.; Bloch, I.; Boos, E. G.; Brock, I.; Brook, N. H.; Brugnera, R.; Bruni, A.; Bussey, P. J.; Caldwell, A.; Capua, M.; Catterall, C. D.; Chwastowski, J.; Ciborowski, J.; Ciesielski, R.; Cooper-Sarkar, A. M.; Corradi, M.; Dementiev, R. K.; Devenish, R. C. E.; Dusini, S.; Foster, B.; Gach, G.; Gallo, E.; Garfagnini, A.; Geiser, A.; Gizhko, A.; Gladilin, L. K.; Golubkov, Yu. A.; Grzelak, G.; Guzik, M.; Gwenlan, C.; Hain, W.; Hlushchenko, O.; Hochman, D.; Hori, R.; Ibrahim, Z. A.; Iga, Y.; Ishitsuka, M.; Januschek, F.; Jomhari, N. Z.; Kadenko, I.; Kananov, S.; Karshon, U.; Kaur, P.; Kisielewska, D.; Klanner, R.; Klein, U.; Korzhavina, I. A.; Kotański, A.; Kötz, U.; Kovalchuk, N.; Kowalski, H.; Krupa, B.; Kuprash, O.; Kuze, M.; Levchenko, B. B.; Levy, A.; Limentani, S.; Lisovyi, M.; Lobodzinska, E.; Löhr, B.; Lohrmann, E.; Longhin, A.; Lontkovskyi, D.; Lukina, O. Yu.; Makarenko, I.; Malka, J.; Mastroberardino, A.; Mohamad Idris, F.; Mohammad Nasir, N.; Myronenko, V.; Nagano, K.; Nobe, T.; Nowak, R. J.; Onishchuk, Yu.; Paul, E.; Perlański, W.; Pokrovskiy, N. S.; Polini, A.; Przybycień, M.; Roloff, P.; Ruspa, M.; Saxon, D. H.; Schioppa, M.; Schneekloth, U.; Schörner-Sadenius, T.; Shcheglova, L. M.; Shevchenko, R.; Shkola, O.; Shyrma, Yu.; Singh, I.; Skillicorn, I. O.; Słomiński, W.; Solano, A.; Stanco, L.; Stefaniuk, N.; Stern, A.; Stopa, P.; Sztuk-Dambietz, J.; Tassi, E.; Tokushuku, K.; Tomaszewska, J.; Tsurugai, T.; Turcato, M.; Turkot, O.; Tymieniecka, T.; Verbytskyi, A.; Wan Abdullah, W. A. T.; Wichmann, K.; Wing, M.; Yamada, S.; Yamazaki, Y.; Zakharchuk, N.; Żarnecki, A. F.; Zawiejski, L.; Zenaiev, O.; Zhautykov, B. O.; Zotkin, D. S.; ZEUS Collaboration

    2016-05-01

    A simultaneous fit of parton distribution functions (PDFs) and electroweak parameters to HERA data on deep inelastic scattering is presented. The input data are the neutral current and charged current inclusive cross sections which were previously used in the QCD analysis leading to the HERAPDF2.0 PDFs. In addition, the polarization of the electron beam was taken into account for the ZEUS data recorded between 2004 and 2007. Results on the vector and axial-vector couplings of the Z boson to u - and d -type quarks, on the value of the electroweak mixing angle and the mass of the W boson are presented. The values obtained for the electroweak parameters are in agreement with Standard Model predictions.

  1. Controllability of ensembles of linear dynamical systems

    OpenAIRE

    Schönlein, Michael; Helmke, Uwe

    2015-01-01

    We investigate the task of controlling ensembles of initial and terminal state vectors of parameter-dependent linear systems by applying parameter-independent open loop controls. Necessary, as well as sufficient, conditions for ensemble controllability are established, using tools from complex approximation theory. For real analytic families of linear systems it is shown that ensemble controllability holds only for systems with at most two independent parameters. We apply the results to netwo...

  2. Automatic Genre Classification of Latin Music Using Ensemble of Classifiers

    OpenAIRE

    Silla Jr, Carlos N.; Kaestner, Celso A.A.; Koerich, Alessandro L.

    2006-01-01

    This paper presents a novel approach to the task of automatic music genre classification which is based on ensemble learning. Feature vectors are extracted from three 30-second music segments from the beginning, middle and end of each music piece. Individual classifiers are trained to account for each music segment. During classification, the output provided by each classifier is combined with the aim of improving music genre classification accuracy. Experiments carried out on a dataset conta...

  3. Using ensemble data assimilation to forecast hydrological flumes

    OpenAIRE

    Amour, I.; Mussa, Z.; Bibov, A.; Kauranne, T.

    2013-01-01

    Data assimilation, commonly used in weather forecasting, means combining a mathematical forecast of a target dynamical system with simultaneous measurements from that system in an optimal fashion. We demonstrate the benefits obtainable from data assimilation with a dam break flume simulation in which a shallow-water equation model is complemented with wave meter measurements. Data assimilation is conducted with a Variational Ensemble Kalman Filter (VEnKF) algorithm. The resu...

  4. Learning Meta-Embeddings by Using Ensembles of Embedding Sets

    OpenAIRE

    Yin, Wenpeng; Schütze, Hinrich

    2015-01-01

    Word embeddings -- distributed representations of words -- in deep learning are beneficial for many tasks in natural language processing (NLP). However, different embedding sets vary greatly in quality and characteristics of the captured semantics. Instead of relying on a more advanced algorithm for embedding learning, this paper proposes an ensemble approach of combining different public embedding sets with the aim of learning meta-embeddings. Experiments on word similarity and analogy tasks...
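
A simple concatenation meta-embedding, one of the baseline combination methods in this line of work, can be sketched as follows (illustrative; the paper also learns projection-based meta-embeddings):

```python
import numpy as np

def l2norm(E):
    """Normalize each row (word vector) to unit length."""
    return E / np.linalg.norm(E, axis=1, keepdims=True)

def concat_meta_embeddings(sets):
    """Concatenation meta-embedding: given several embedding matrices whose
    rows are aligned to the same vocabulary, length-normalize each set so no
    single set dominates, then concatenate per word."""
    return np.concatenate([l2norm(E) for E in sets], axis=1)
```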

  5. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
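
A skill-weighted multi-model average, the common core of methods such as Reliability Ensemble Averaging, can be sketched as follows (illustrative names and weighting; not the exact formulations evaluated in the study):

```python
import numpy as np

def skill_weights(errors):
    """Weight each wake model by its inverse historical RMS error."""
    w = 1.0 / np.asarray(errors, dtype=float)
    return w / w.sum()

def combine(predictions, weights):
    """Weighted multi-model mean plus a spread estimate across models.

    predictions : (n_models, n_points) per-model forecasts
    weights     : (n_models,) normalized model weights
    """
    predictions = np.asarray(predictions, dtype=float)
    mean = np.sum(weights[:, None] * predictions, axis=0)
    spread = np.sqrt(np.sum(weights[:, None] * (predictions - mean) ** 2, axis=0))
    return mean, spread
```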

  6. Preconcentration procedures for phthalate esters combined with chromatographic analysis.

    Science.gov (United States)

    Lv, Xueju; Hao, Yi; Jia, Qiong

    2013-08-01

    Phthalate esters are endocrine disrupters or mutagens. They are widely used as plasticizers and can be usually found in environmental samples, such as food, soil and polluted air. However, it is difficult to directly determine phthalate esters owing to their relatively low concentration and complex matrices. Therefore, preconcentration and separation have become increasingly important. In recent years, many preconcentration methods have been successfully developed and widely used, such as liquid-liquid extraction, dispersive liquid-liquid microextraction and solid-phase extraction. These preconcentration methods for phthalate esters can be applied to various real samples, water, soil, air, food and cosmetics. The aim of this paper is to review recent literature studies (primarily from the last five years) about preconcentration techniques for phthalate esters coupled with chromatographic analysis. The following text describes several preconcentration approaches, including liquid-liquid extraction, dispersive liquid-liquid microextraction, cloud point extraction, solid-phase extraction, solid-phase microextraction and stir bar sorptive extraction. Their advantages and disadvantages are also summarized. PMID:23696389

  7. An Ensemble Method based on Particle of Swarm for the Reduction of Noise, Outlier and Core Point

    Directory of Open Access Journals (Sweden)

    Satish Dehariya,

    2013-04-01

    Full Text Available Majority voting and accurate prediction with classification algorithms in data mining are challenging tasks for data classification. To improve data classification, different classifiers are used together in an ensemble process. The ensemble process increases the classification ratio of the classification algorithms; such a combination of classification algorithms is called an ensemble classifier. Ensemble learning is a technique to improve the performance and accuracy of classification and prediction of machine learning algorithms. Many researchers have proposed models of ensemble classifiers that merge different classification algorithms, but the performance of ensemble algorithms suffers from outlier, noise and core point problems in the data arising from the feature selection process. In this paper we combine core, outlier and noise data (COB) in the feature selection process for the ensemble model. The selection of the best features with an appropriate classifier is carried out using particle swarm optimization.
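
A compact binary particle swarm optimization for feature selection, the kind of search described above, might look like the following sketch (all names and constants are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_feature_select(score, n_feats, n_particles=10, iters=30):
    """Binary PSO sketch: each particle is a 0/1 feature mask. Velocities move
    toward personal and global bests; a sigmoid of the velocity gives the
    probability of selecting each feature on the next step."""
    X = (rng.random((n_particles, n_feats)) > 0.5).astype(float)
    V = rng.standard_normal((n_particles, n_feats))
    pbest = X.copy()
    pbest_val = np.array([score(x) for x in X])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1 = rng.random(V.shape)
        r2 = rng.random(V.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
        vals = np.array([score(x) for x in X])
        better = vals > pbest_val
        pbest[better] = X[better]
        pbest_val[better] = vals[better]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest
```

In practice `score` would be the cross-validated accuracy of the ensemble classifier on the selected feature subset; here any mask-to-fitness function works.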

  8. Visualizing ensembles in structural biology.

    Science.gov (United States)

    Melvin, Ryan L; Salsbury, Freddie R

    2016-06-01

    Displaying a single representative conformation of a biopolymer rather than an ensemble of states mistakenly conveys a static nature rather than the actual dynamic personality of biopolymers. However, there are few apparent options due to the fixed nature of print media. Here we suggest a standardized methodology for visually indicating the distribution width, standard deviation and uncertainty of ensembles of states with little loss of the visual simplicity of displaying a single representative conformation. Of particular note is that the visualization method employed clearly distinguishes between isotropic and anisotropic motion of polymer subunits. We also apply this method to ligand binding, suggesting a way to indicate the expected error in many high throughput docking programs when visualizing the structural spread of the output. We provide several examples in the context of nucleic acids and proteins with particular insights gained via this method. Such examples include investigating a therapeutic polymer of FdUMP (5-fluoro-2-deoxyuridine-5-O-monophosphate) - a topoisomerase-1 (Top1), apoptosis-inducing poison - and nucleotide-binding proteins responsible for ATP hydrolysis from Bacillus subtilis. We also discuss how these methods can be extended to any macromolecular data set with an underlying distribution, including experimental data such as NMR structures. PMID:27179343
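
The isotropic-versus-anisotropic distinction the authors emphasize can be quantified directly from an ensemble, for example by the eigenvalue ratio of each subunit's positional covariance (an illustrative metric, not the paper's exact visualization pipeline):

```python
import numpy as np

def anisotropy(coords):
    """Ratio of smallest to largest eigenvalue of one subunit's positional
    covariance across an ensemble of states: values near 1 indicate isotropic
    motion, values near 0 indicate strongly anisotropic motion.

    coords : (n_states, 3) positions of one atom/subunit over the ensemble
    """
    cov = np.cov(np.asarray(coords, dtype=float).T)
    evals = np.linalg.eigvalsh(cov)          # ascending order
    return evals[0] / evals[-1]
```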

  9. Accounting for three sources of uncertainty in ensemble hydrological forecasting

    Science.gov (United States)

    Thiboult, Antoine; Anctil, François; Boucher, Marie-Amélie

    2016-05-01

    Seeking more accuracy and reliability, the hydrometeorological community has developed several tools to decipher the different sources of uncertainty in relevant modeling processes. Among them, the ensemble Kalman filter (EnKF), multimodel approaches and meteorological ensemble forecasting proved to have the capability to improve upon deterministic hydrological forecast. This study aims to untangle the sources of uncertainty by studying the combination of these tools and assessing their respective contribution to the overall forecast quality. Each of these components is able to capture a certain aspect of the total uncertainty and improve the forecast at different stages in the forecasting process by using different means. Their combination outperforms any of the tools used solely. The EnKF is shown to contribute largely to the ensemble accuracy and dispersion, indicating that the initial conditions uncertainty is dominant. However, it fails to maintain the required dispersion throughout the entire forecast horizon and needs to be supported by a multimodel approach to take into account structural uncertainty. Moreover, the multimodel approach contributes to improving the general forecasting performance and prevents this performance from falling into the model selection pitfall since models differ strongly in their ability. Finally, the use of probabilistic meteorological forcing was found to contribute mostly to long lead time reliability. Particular attention needs to be paid to the combination of the tools, especially in the EnKF tuning to avoid overlapping in error deciphering.

  10. Employing Neocognitron Neural Network Base Ensemble Classifiers To Enhance Efficiency Of Classification In Handwritten Digit Datasets

    Directory of Open Access Journals (Sweden)

    Neera Saxena

    2011-07-01

    Full Text Available This paper presents an ensemble of neo-cognitron neural network base classifiers to enhance the accuracy of the system, along with experimental results. The method requires less computational preprocessing than other ensemble techniques, as it omits a separate feature extraction step before feeding the data into the base classifiers. This is possible because of the basic nature of the neo-cognitron: it is a multilayer feed-forward neural network. An ensemble of such base classifiers produces a class label for each pattern, and these labels are in turn combined to give the final class label for that pattern. The purpose of this paper is not only to exemplify the learning behaviour of the neo-cognitron as a base classifier, but also to propose a better fashion in which to combine neural-network-based ensemble classifiers.

  11. Performance comparison of meso-scale ensemble wave forecasting systems for Mediterranean sea states

    Science.gov (United States)

    Pezzutto, Paolo; Saulter, Andrew; Cavaleri, Luigi; Bunney, Christopher; Marcucci, Francesca; Torrisi, Lucio; Sebastianelli, Stefano

    2016-08-01

    This paper compares the performance of two wind and wave short range ensemble forecast systems for the Mediterranean Sea. In particular, it describes a six month verification experiment carried out by the U.K. Met Office and Italian Air Force Meteorological Service, based on their respective systems: the Met Office Global-Regional Ensemble Prediction System and the Nettuno Ensemble Prediction System. The latter is the ensemble version of the operational Nettuno forecast system. Attention is focused on the differences between the two implementations (e.g. grid resolution and initial ensemble members sampling) and their effects on the prediction skill. The cross-verification of the two ensemble systems shows that from a macroscopic point of view the differences cancel out, suggesting similar skill. More in-depth analysis indicates that the Nettuno wave forecast is better resolved but, on average, slightly less reliable than the Met Office product. Assessment of the added value of the ensemble techniques at short range in comparison with the deterministic forecast from Nettuno, reveals that adopting the ensemble approach has small, but substantive, advantages.
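
Reliability of an ensemble forecast system, as compared between the two systems here, is commonly checked with a rank (Talagrand) histogram; a minimal sketch of the computation (illustrative, not the verification suite used in the study):

```python
import numpy as np

def rank_histogram(ens, obs):
    """Rank of each observation within its sorted ensemble; over many cases a
    flat histogram indicates a reliable (well-dispersed) ensemble, while a
    U-shape indicates under-dispersion.

    ens : (n_cases, n_members) forecast ensembles
    obs : (n_cases,) verifying observations
    """
    ranks = np.sum(ens < obs[:, None], axis=1)
    return np.bincount(ranks, minlength=ens.shape[1] + 1)
```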

  12. Joys of Community Ensemble Playing: The Case of the Happy Roll Elastic Ensemble in Taiwan

    Science.gov (United States)

    Hsieh, Yuan-Mei; Kao, Kai-Chi

    2012-01-01

    The Happy Roll Elastic Ensemble (HREE) is a community music ensemble supported by Tainan Culture Centre in Taiwan. With enjoyment and friendship as its primary goals, it aims to facilitate the joys of ensemble playing and the spirit of social networking. This article highlights the key aspects of HREE's development in its first two years…

  13. EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms

    KAUST Repository

    Rapakoulia, Trisevgeni

    2014-04-26

    Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs to neutral and disease associated. However, existing computational approaches fail to select relevant features by choosing them arbitrarily without sufficient documentation. Moreover, they are limited by the problem of missing values, imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close to optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting issues, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing values problem without loss of information. Confidence scores support all the predictions and the model becomes tunable by modifying the classification thresholds. An extensive study was performed for collecting the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance in the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features such as the solvent accessibility feature, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.

  14. Role of different Pd/Pt ensembles in determining CO chemisorption on Au-based bimetallic alloys: A first-principles study

    Energy Technology Data Exchange (ETDEWEB)

    Ham, Hyung Chul, E-mail: hchahm@kist.re.kr [Department of Chemical Engineering, The University of Texas at Austin, Austin, TX 78712 (United States); Fuel Cell Research Center, Korea Institute of Science and Technology, Seoul (Korea, Republic of); Manogaran, Dhivya [Department of Chemistry and Biochemistry, The University of Texas at Austin, Austin, TX 78712 (United States); Hwang, Gyeong S., E-mail: gshwang@che.utexas.edu [Department of Chemical Engineering, The University of Texas at Austin, Austin, TX 78712 (United States); Han, Jonghee; Kim, Hyoung-Juhn; Nam, Suk Woo; Lim, Tae Hoon [Fuel Cell Research Center, Korea Institute of Science and Technology, Seoul (Korea, Republic of)

    2015-03-30

    Graphical abstract: - Highlights: • Pd ensembles greatly reduce CO adsorption energy as compared to Pt ensembles. • The steeper potential energy surface of CO adsorption in Pd(1 1 1) than in Pt(1 1 1). • Switch of binding site preference in ensembles is key to determining CO adsorption. • Opposite electronic (ligand) effect in Pd and Pt ensemble. - Abstract: Using spin-polarized density functional calculations, we investigate the role of different Pd/Pt ensembles in determining CO chemisorption on Au-based bimetallic alloys through a study of the energetics, charge transfer, geometric and electronic structures of CO on various Pd/Pt ensembles (monomer/dimer/trimer/tetramer). We find that the effect of Pd ensembles on the reduction of CO chemisorption energy is much larger than the Pt ensemble case. In particular, small-sized Pd ensembles like monomer show a substantial reduction of CO chemisorption energy compared to the pure Pd (1 1 1) surface, while there are no significant size and shape effects of Pt ensembles on CO chemisorption energy. This is related to two factors: (1) the steeper potential energy surface (PES) of CO in Pd (1 1 1) than in Pt (1 1 1), indicating that the effect of switch of binding site preference on CO chemisorption energy is much larger in Pd ensembles than in Pt ensembles, and (2) down-shift of d-band in Pd ensembles/up-shift of d-band in Pt ensembles as compared to the corresponding pure Pd (1 1 1)/Pt (1 1 1) surfaces, suggesting more reduced activity of Pd ensembles toward CO adsorption than the Pt ensemble case. We also present the different bonding mechanism of CO on Pd/Pt ensembles by the analysis of orbital resolved density of state.

  15. Role of different Pd/Pt ensembles in determining CO chemisorption on Au-based bimetallic alloys: A first-principles study

    International Nuclear Information System (INIS)

    Graphical abstract: - Highlights: • Pd ensembles greatly reduce CO adsorption energy as compared to Pt ensembles. • The potential energy surface of CO adsorption is steeper in Pd(1 1 1) than in Pt(1 1 1). • The switch of binding-site preference in ensembles is key to determining CO adsorption. • Opposite electronic (ligand) effects in Pd and Pt ensembles. - Abstract: Using spin-polarized density functional calculations, we investigate the role of different Pd/Pt ensembles in determining CO chemisorption on Au-based bimetallic alloys through a study of the energetics, charge transfer, geometric and electronic structures of CO on various Pd/Pt ensembles (monomer/dimer/trimer/tetramer). We find that the effect of Pd ensembles on the reduction of CO chemisorption energy is much larger than in the Pt ensemble case. In particular, small Pd ensembles such as monomers show a substantial reduction of CO chemisorption energy compared to the pure Pd(1 1 1) surface, whereas Pt ensembles show no significant size or shape effects on CO chemisorption energy. This is related to two factors: (1) the steeper potential energy surface (PES) of CO on Pd(1 1 1) than on Pt(1 1 1), indicating that the switch of binding-site preference affects CO chemisorption energy much more strongly in Pd ensembles than in Pt ensembles, and (2) the down-shift of the d-band in Pd ensembles and up-shift of the d-band in Pt ensembles relative to the corresponding pure Pd(1 1 1)/Pt(1 1 1) surfaces, suggesting that the activity of Pd ensembles toward CO adsorption is more strongly reduced than that of Pt ensembles. We also present the different bonding mechanisms of CO on Pd/Pt ensembles through an analysis of the orbital-resolved density of states.

  16. Comet Methy-sens and DNMTs transcriptional analysis as a combined approach in epigenotoxicology

    Directory of Open Access Journals (Sweden)

    Alessio Perotti

    2015-05-01

    In conclusion, our data demonstrate that Comet Methy-sens, in combination with the analysis of transcriptional levels of DNA methyltransferases, represents a simple and multifunctional approach for implementing biomonitoring studies on the epigenotoxicological effects of known and unknown xenobiotics.

  17. Models of fragmentation phenomena based on the symmetric group Sn and combinatorial analysis

    International Nuclear Information System (INIS)

    Various models for fragmentation phenomena are developed using methods from permutation groups and combinatorial analysis. The appearance and properties of power laws in these models are discussed. Various exactly soluble cases are studied.

  18. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Full Text Available Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they allow one both to eliminate redundant information and to identify synthetic indices that maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis with a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the green, yellow and red intervals only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of the total variance; within the red-edge and NIR ranges, the first two PCs had eigenvalues higher than 1. Two canonical variables cumulatively explained more than 81% of the total variance, and the first was able to discriminate among differently fertilised wheat plants, as confirmed also by its significant correlation with aboveground biomass and grain yield parameters. The combined
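    The band-wise PCA step described above can be sketched numerically. The following is a minimal illustration (not the authors' code) of extracting principal components from reflectance spectra for one spectral band using NumPy; the toy spectra and all variable names are hypothetical:

```python
import numpy as np

def band_pca(spectra, n_components=2):
    """PCA via SVD on mean-centered reflectance spectra.

    spectra: (n_samples, n_wavelengths) array for one spectral band.
    Returns (scores, explained_variance_ratio) for the leading components.
    """
    X = spectra - spectra.mean(axis=0)          # center each wavelength
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(X) - 1)                 # component variances
    ratio = var / var.sum()
    scores = X @ Vt[:n_components].T            # project onto loadings
    return scores, ratio[:n_components]

# Toy example: 6 "plants", 20 wavelengths, one dominant direction of variation
rng = np.random.default_rng(0)
base = np.linspace(0.1, 0.5, 20)
X = base + rng.normal(0, 0.01, (6, 20)) + rng.normal(0, 0.2, (6, 1)) * np.ones(20)
scores, ratio = band_pca(X, n_components=1)
```

    In the paper's workflow, the scores from each band would then be passed to CDA to find the combinations that best separate the fertilisation levels.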

  19. Misusability Weight Measure Using Ensemble Approaches

    Directory of Open Access Journals (Sweden)

    Sridevi Sakhamuri; V. Ramachandran; Dr. Rupa

    2013-09-01

    Full Text Available Assigning a misusability weight to a given dataset is strongly related to the way the data is presented (e.g., tabular data, structured or free text) and is domain-specific. Therefore, no single measure of misusability weight can fit all types of data in every domain, but such a measure gives a fair idea of how to proceed in handling sensitive data. Previous approaches such as M-score models, which consider the number of entities, anonymity levels, number of properties and values of properties to estimate a misusability value for a data record, are efficient at detecting record sensitivity. Data quality, data quantity, and the distinguishing attributes are vital factors that can influence the M-score. Combined with record ranking and knowledge models, prior approaches used a single domain expert for detecting sensitive information. For better performance and accuracy, we propose to use the effect of combining knowledge from several experts (e.g., an ensemble of knowledge models). We also plan to extend the computation of the sensitivity level of sensitive attributes so that it is obtained objectively, by using machine learning techniques such as an SVM classifier along with expert scoring models. This approach fits the sensitive parameter values to the customer value based on customer activity, which is far more efficient than face-value specification with human involvement. A practical implementation of the proposed system validates our claim.

  20. Three-dimensional visualization of ensemble weather forecasts – Part 1: The visualization tool Met.3D (version 1.0)

    Directory of Open Access Journals (Sweden)

    M. Rautenhaus

    2015-07-01

    Full Text Available We present "Met.3D", a new open-source tool for the interactive three-dimensional (3-D) visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is applicable to further forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output – 3-D visualization, ensemble visualization and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 (THORPEX – North Atlantic Waveguide and Downstream Impact Experiment) campaign.

  1. Informatics for analysis of nuclear experiments: TOUTATIX; Ensemble des moyens informatiques pour l'aide a l'analyse des experiences de physique nucleaire: TOUTATIX

    Energy Technology Data Exchange (ETDEWEB)

    Rabasse, J.F.; Du, S.; Penillault, G.; Tassan-Got, L.; Givort, M. [Services Techniques, Inst. de Physique Nucleaire, Paris-11 Univ., 91 - Orsay (France)

    1999-11-01

    For several years, in connection with the migration towards the UNIX system, software tools have been developed in the laboratory. They allow the nuclear physicist community to achieve the complete analysis of experimental data. They comply with the requirements imposed by the development of multi-detectors. Special attention has been devoted to ergonomic aspects and configuration possibilities. (authors) 1 fig.

  2. Visualization of uncertainty and ensemble data: Exploration of climate modeling and weather forecast data with integrated ViSUS-CDAT systems

    International Nuclear Information System (INIS)

    Climate scientists and meteorologists are working towards a better understanding of atmospheric conditions and global climate change. To explore the relationships present in numerical predictions of the atmosphere, ensemble datasets are produced that combine time- and spatially-varying simulations generated using multiple numeric models, sampled input conditions, and perturbed parameters. These data sets mitigate as well as describe the uncertainty present in the data by providing insight into the effects of parameter perturbation, sensitivity to initial conditions, and inconsistencies in model outcomes. As such, massive amounts of data are produced, creating challenges both in data analysis and in visualization. This work presents an approach to understanding ensembles by using a collection of statistical descriptors to summarize the data, and displaying these descriptors using a variety of visualization techniques which are familiar to domain experts. The resulting techniques are integrated into the ViSUS/Climate Data and Analysis Tools (CDAT) system designed to provide a directly accessible, complex visualization framework to atmospheric researchers.
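    The statistical summarization this abstract refers to can be sketched compactly: for each grid point, reduce the member dimension to a handful of descriptors. The layout (members, ny, nx) and all names below are hypothetical, not taken from the ViSUS/CDAT code:

```python
import numpy as np

def summarize_ensemble(field):
    """Per-grid-point statistical descriptors of an ensemble.

    field: (n_members, ny, nx) array of one forecast variable.
    Returns a dict of (ny, nx) summary fields.
    """
    return {
        "mean":   field.mean(axis=0),
        "std":    field.std(axis=0, ddof=1),
        "min":    field.min(axis=0),
        "max":    field.max(axis=0),
        "median": np.median(field, axis=0),
    }

rng = np.random.default_rng(1)
ens = 280.0 + rng.normal(0, 1.5, (20, 4, 5))  # 20 members on a 4x5 grid
stats = summarize_ensemble(ens)
```

    Each summary field can then be rendered with familiar 2-D techniques (contour maps, color maps), which is the visualization strategy the abstract describes.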

  3. An adaptively fast ensemble empirical mode decomposition method and its applications to rolling element bearing fault diagnosis

    Science.gov (United States)

    Xue, Xiaoming; Zhou, Jianzhong; Xu, Yanhe; Zhu, Wenlong; Li, Chaoshun

    2015-10-01

    Ensemble empirical mode decomposition (EEMD) represents a significant improvement over the original empirical mode decomposition (EMD) method for eliminating the mode-mixing problem. However, the added white noises generate some tough problems, including the high computational cost, the determination of the two critical parameters (the amplitude of the added white noise and the number of ensemble trials), and the contamination of the reconstructed signal by residue noise. To solve these problems, an adaptively fast EEMD (AFEEMD) method combined with complementary EEMD (CEEMD) is proposed in this paper. In the proposed method, the two critical parameters are fixed as 0.01 times the standard deviation of the original signal and two ensemble trials, respectively. Instead, the upper frequency limit of the added white noise is the key parameter which needs to be prescribed beforehand. Unlike the original EEMD method, only two high-frequency white noises, added in anti-phase, are introduced into the signal under investigation in AFEEMD. Furthermore, an index termed the relative root-mean-square error is employed for the adaptive selection of the proper upper frequency limit of the added white noises. A simulation test and vibration-signal-based fault diagnosis of rolling element bearings under different fault types are utilized to demonstrate the feasibility and effectiveness of the proposed method. The analysis results indicate that the AFEEMD method represents a sound improvement over the original EEMD method and is highly practical.
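    Two elements of this method lend themselves to a compact sketch: the anti-phase noise pair (averaging the two noisy copies cancels the added noise exactly), and the relative root-mean-square error index. The snippet below is an illustrative reconstruction under those assumptions, not the authors' implementation, and it omits the EMD sifting itself:

```python
import numpy as np

def rrmse(signal, reconstruction):
    """Relative root-mean-square error between a signal and a reconstruction."""
    return np.sqrt(np.mean((signal - reconstruction) ** 2)) / np.sqrt(np.mean(signal ** 2))

# Anti-phase noise pair: decomposing s+n and s-n and averaging the two
# results cancels the added noise exactly in the ideal case.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 512, endpoint=False)
s = np.sin(2 * np.pi * 5 * t)
n = 0.01 * rng.standard_normal(512)   # 0.01 mirrors the "0.01 x std" parameter
avg = ((s + n) + (s - n)) / 2          # noise cancels by construction
err = rrmse(s, avg)
```

    In AFEEMD the RRMSE index would be evaluated over candidate upper frequency limits of the added noise, and the limit minimizing the index retained.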

  4. Visualization and Nowcasting for Aviation using online verified ensemble weather radar extrapolation.

    Science.gov (United States)

    Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan

    2013-04-01

    Nowcasting of precipitation events, especially thunderstorm events or winter storms, has a high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres and increase of awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts in a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new radar image a set of ensemble precipitation fields is collected by using different parameter sets, such as pattern match size, different time steps, filter methods, and an implementation of the history of tracking vectors and plausibility checks. This method accounts for the uncertainty in rain field displacement and for different scales in time and space. By manually validating a set of case studies, the best verification method and skill score is defined and implemented into an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To get information about the quality and reliability of the extrapolation process, additional information on data quality (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation
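    The echo-tracking-by-correlation step can be illustrated with a brute-force search over integer shifts between two radar fields. Operational trackers work on matching windows with quality control and sub-pixel refinement, so this is only a schematic with invented data:

```python
import numpy as np

def track_displacement(prev, curr, max_shift=3):
    """Estimate the (dy, dx) displacement between two radar fields by
    maximizing the correlation over integer shifts (brute force)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * curr)   # unnormalized correlation
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# A blob of "precipitation" moved 1 cell down and 2 cells right
prev = np.zeros((12, 12)); prev[4:7, 3:6] = 1.0
curr = np.roll(np.roll(prev, 1, axis=0), 2, axis=1)
motion = track_displacement(prev, curr)
```

    Varying the match window size, time step and filtering, as the abstract describes, would yield a set of displacement estimates and hence an ensemble of extrapolated fields.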

  5. ENCORE: Software for Quantitative Ensemble Comparison.

    Directory of Open Access Journals (Sweden)

    Matteo Tiberti

    2015-10-01

    Full Text Available There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering broader application and further development. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, each of which can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. 
ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can
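    ENCORE's comparison methods estimate the overlap between the probability distributions underlying two ensembles. As a loose one-dimensional analogue (not one of ENCORE's actual estimators), one can histogram a collective variable over each ensemble and compare the histograms with the Jensen-Shannon divergence, which is 0 for identical distributions and bounded by 1 (in bits):

```python
import numpy as np

def js_divergence(a, b, bins=30, value_range=(0.0, 1.0)):
    """Jensen-Shannon divergence between two 1-D samples via shared histograms."""
    p, _ = np.histogram(a, bins=bins, range=value_range)
    q, _ = np.histogram(b, bins=bins, range=value_range)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(x, y):
        mask = x > 0                       # 0 * log 0 contributes nothing
        return np.sum(x[mask] * np.log2(x[mask] / y[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(3)
ens_a = rng.uniform(0.0, 1.0, 5000)        # reference ensemble
ens_b = rng.uniform(0.0, 1.0, 5000)        # same underlying distribution
ens_c = rng.uniform(0.5, 1.0, 5000)        # shifted, partially overlapping
d_same = js_divergence(ens_a, ens_b)
d_diff = js_divergence(ens_a, ens_c)
```

    Ensembles drawn from the same distribution give a divergence near zero, while a shifted ensemble gives a clearly larger value, which is the qualitative behavior a similarity measure for conformational ensembles needs.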

  6. Modelling machine ensembles with discrete event dynamical system theory

    Science.gov (United States)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for the future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. A local model, from the perspective of DEDS theory, is described by the following: a set of system and transition states; an event alphabet that portrays the actions that take a submachine from one state to another; an initial system state; a partial function that maps the current state and event alphabet to the next state; and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that they can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or a feedback DEDS controller (closed-loop control).
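    A DEDS local model as enumerated above (states, event alphabet, partial transition function, initial state, event timing) can be written down almost directly. The submachine, its events and its durations below are invented purely for illustration:

```python
# A tiny local model of one submachine: states, an event alphabet, a
# partial transition map, and a duration for each event.
transitions = {
    ("idle",    "start"):   "moving",
    ("moving",  "grip"):    "holding",
    ("holding", "release"): "idle",
}
durations = {"start": 2.0, "grip": 1.5, "release": 0.5}  # time units per event

def run(initial, events):
    """Drive the local model through a sequence of events.

    Returns (final_state, elapsed_time). An event undefined in the current
    state raises KeyError, reflecting the *partial* transition function.
    """
    state, elapsed = initial, 0.0
    for ev in events:
        state = transitions[(state, ev)]
        elapsed += durations[ev]
    return state, elapsed

final, t = run("idle", ["start", "grip", "release"])
```

    A global model would compose several such automata and restrict their joint event sequences; a supervisory controller then schedules events subject to the combined constraints.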

  7. Probability Maps for the Visualization of Assimilation Ensemble Flow Data

    KAUST Repository

    Hollt, Thomas

    2015-05-25

    Ocean forecasts nowadays are created by running ensemble simulations in combination with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. This means that in a time series, after resampling, every member can follow up on any of the members before resampling. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general a single possible path is not of interest, but only the probability that any point in space might be reached by a particle at some point in time. In this work we present an approach using probability-weighted piecewise particle trajectories to allow such a mapping interactively, instead of tracing quadrillions of individual particles. We achieve interactive rates by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next time step. As a result we lose the possibility of tracking individual particles, but can create probability maps for any desired seed at interactive rates.
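    The bin-merging idea reduces particle tracing to propagating a probability vector through one row-stochastic transition per assimilation cycle, which is why the cost stays linear rather than exponential in the number of cycles. A minimal sketch with an invented 4-bin domain:

```python
import numpy as np

def propagate(prob, transition):
    """One assimilation cycle: particles landing in the same bin are merged,
    so the state is just a probability vector over bins."""
    return prob @ transition   # (n_bins,) x (n_bins, n_bins)

# Row-stochastic transition: from each bin, where can a particle go next cycle?
T = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
p = np.array([1.0, 0.0, 0.0, 0.0])    # seed: all probability in bin 0
for _ in range(3):                     # three assimilation cycles
    p = propagate(p, T)
```

    After each cycle the vector p is exactly the probability map for the seed; tracking which individual member produced which contribution is lost, as the abstract notes.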

  8. Statistical properties of daily ensemble variables in the Chinese stock markets

    CERN Document Server

    Gu, Gao-Feng; Zhou, Wei-Xing

    2006-01-01

    We study the dynamical behavior of the Chinese stock markets by investigating the statistical properties of daily ensemble returns and varieties, defined respectively as the mean and the standard deviation of the daily price returns of a portfolio of stocks traded in China's stock markets on a given day. The distribution of the daily ensemble returns has an exponential form in the center and power-law tails, while the variety distribution is log-Gaussian in the bulk followed by a power-law tail for large varieties. Based on detrended fluctuation analysis, R/S analysis and modified R/S analysis, we find evidence of long memory in the ensemble returns and strong evidence of long memory in the evolution of variety.
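    The two daily ensemble variables are simple cross-sectional statistics: on each day, take the mean and the standard deviation of the returns of all stocks in the portfolio. A sketch with a synthetic return matrix (the shapes and parameters are invented):

```python
import numpy as np

def ensemble_stats(returns):
    """Daily ensemble return and variety for a (n_days, n_stocks) return matrix:
    the cross-sectional mean and standard deviation on each day."""
    ens_return = returns.mean(axis=1)
    variety = returns.std(axis=1, ddof=1)
    return ens_return, variety

rng = np.random.default_rng(4)
R = rng.normal(0.0005, 0.02, (250, 100))   # 250 trading days, 100 stocks
r_ens, v = ensemble_stats(R)
```

    The paper's distributional and long-memory analyses (DFA, R/S) would then be applied to the two resulting time series r_ens and v.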

  9. Ensemble polarimetric SAR image classification based on contextual sparse representation

    Science.gov (United States)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of a reasonable and effective image classification technique is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any single classifier, it is imperfect in several respects. Ensemble learning is therefore introduced to address this issue: a plurality of different learners is trained, and the integrated result is obtained by combining the individual learners to achieve more accurate and reliable learning results. Therefore, this paper presents a polarimetric SAR image classification method based on ensemble learning with sparse representation to achieve optimal classification.
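    The combination step of ensemble learning can be as simple as a per-pixel majority vote over the individual learners' label maps. This sketch abstracts away the sparse-representation classifiers themselves and uses invented predictions:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-learner label maps by majority vote.

    predictions: (n_learners, n_pixels) integer class labels.
    Returns (n_pixels,) combined labels (the lowest label wins ties).
    """
    n_classes = predictions.max() + 1
    votes = np.stack([(predictions == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three hypothetical classifiers disagreeing on a few pixels
preds = np.array([[0, 1, 2, 1],
                  [0, 1, 2, 2],
                  [0, 2, 2, 2]])
combined = majority_vote(preds)
```

    Weighted voting or stacking would follow the same pattern, replacing the raw counts with learner-specific weights.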

  10. Examining Combinations of Social Physique Anxiety and Motivation Regulations Using Latent Profile Analysis

    Science.gov (United States)

    Ullrich-French, Sarah; Cox, Anne E.; Cooper, Brittany Rhoades

    2016-01-01

    Previous research has used cluster analysis to examine how social physique anxiety (SPA) combines with motivation in physical education. This study utilized a more advanced analytic approach, latent profile analysis (LPA), to identify profiles of SPA and motivation regulations. Students in grades 9-12 (N = 298) completed questionnaires at two time…

  11. Ensemble Kalman filtering with residual nudging

    CERN Document Server

    Luo, Xiaodong; 10.3402/tellusa.v64i0.17130

    2012-01-01

    Covariance inflation and localization are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than that value. Otherwise the analysis is considered a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in the case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/o...
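    The nudging rule (replace an analysis whose residual norm exceeds a pre-specified value with one whose residual norm equals that value) can be sketched for the simplest case of an identity observation operator; the paper derives the general linear-observation case, which this snippet does not reproduce:

```python
import numpy as np

def residual_nudging(x_analysis, y_obs, beta):
    """If the residual norm ||y - x|| exceeds beta, pull the analysis toward
    the observation so the new residual norm equals beta (identity observation
    operator assumed for simplicity)."""
    r = y_obs - x_analysis
    norm = np.linalg.norm(r)
    if norm <= beta:
        return x_analysis             # estimate already acceptable
    return y_obs - (beta / norm) * r  # rescale the residual to length beta

y = np.array([1.0, 2.0, 3.0])        # observation
x = np.array([4.0, 2.0, 3.0])        # analysis with residual norm 3
x_new = residual_nudging(x, y, beta=1.0)
```

    The nudged estimate moves along the residual direction only as far as needed, so analyses that already fit the observations are left untouched.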

  12. Preliminary Assessment of Tecplot Chorus for Analyzing Ensemble of CTH Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Agelastos, Anthony Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stevenson, Joel O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Attaway, Stephen W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peterson, David

    2015-04-01

    The exploration of large parameter spaces in search of problem solutions and uncertainty quantification produces very large ensembles of data. Processing ensemble data will continue to require more resources as simulation complexity and HPC platform throughput increase. More tools are needed to help provide rapid insight into these data sets, to decrease manual processing time by the analyst and to increase the knowledge the data can provide. One such tool is Tecplot Chorus, whose strengths are visualizing ensemble metadata and linked images. This report contains the analysis and conclusions from evaluating Tecplot Chorus with an example problem that is relevant to Sandia National Laboratories.

  13. Causal modeling using network ensemble simulations of genetic and gene expression data predicts genes involved in rheumatoid arthritis.

    Science.gov (United States)

    Xing, Heming; McDonagh, Paul D; Bienkowska, Jadwiga; Cashorali, Tanya; Runge, Karl; Miller, Robert E; Decaprio, Dave; Church, Bruce; Roubenoff, Ronenn; Khalil, Iya G; Carulli, John

    2011-03-01

    Tumor necrosis factor α (TNF-α) is a key regulator of inflammation and rheumatoid arthritis (RA). TNF-α blocker therapies can be very effective for a substantial number of patients, but fail to work in one third of patients who show no or minimal response. It is therefore necessary to discover new molecular intervention points involved in TNF-α blocker treatment of rheumatoid arthritis patients. We describe a data analysis strategy for predicting gene expression measures that are critical for rheumatoid arthritis using a combination of comprehensive genotyping, whole blood gene expression profiles and the component clinical measures of the arthritis Disease Activity Score 28 (DAS28) score. Two separate network ensembles, each comprised of 1024 networks, were built from molecular measures from subjects before and 14 weeks after treatment with TNF-α blocker. The network ensemble built from pre-treated data captures TNF-α dependent mechanistic information, while the ensemble built from data collected under TNF-α blocker treatment captures TNF-α independent mechanisms. In silico simulations of targeted, personalized perturbations of gene expression measures from both network ensembles identify transcripts in three broad categories. Firstly, 22 transcripts are identified to have new roles in modulating the DAS28 score; secondly, there are 6 transcripts that could be alternative targets to TNF-α blocker therapies, including CD86--a component of the signaling axis targeted by Abatacept (CTLA4-Ig), and finally, 59 transcripts that are predicted to modulate the count of tender or swollen joints but not sufficiently enough to have a significant impact on DAS28. PMID:21423713

  14. Causal modeling using network ensemble simulations of genetic and gene expression data predicts genes involved in rheumatoid arthritis.

    Directory of Open Access Journals (Sweden)

    Heming Xing

    2011-03-01

    Full Text Available Tumor necrosis factor α (TNF-α) is a key regulator of inflammation and rheumatoid arthritis (RA). TNF-α blocker therapies can be very effective for a substantial number of patients, but fail to work in one third of patients who show no or minimal response. It is therefore necessary to discover new molecular intervention points involved in TNF-α blocker treatment of rheumatoid arthritis patients. We describe a data analysis strategy for predicting gene expression measures that are critical for rheumatoid arthritis using a combination of comprehensive genotyping, whole blood gene expression profiles and the component clinical measures of the arthritis Disease Activity Score 28 (DAS28) score. Two separate network ensembles, each comprised of 1024 networks, were built from molecular measures from subjects before and 14 weeks after treatment with TNF-α blocker. The network ensemble built from pre-treated data captures TNF-α dependent mechanistic information, while the ensemble built from data collected under TNF-α blocker treatment captures TNF-α independent mechanisms. In silico simulations of targeted, personalized perturbations of gene expression measures from both network ensembles identify transcripts in three broad categories. Firstly, 22 transcripts are identified to have new roles in modulating the DAS28 score; secondly, there are 6 transcripts that could be alternative targets to TNF-α blocker therapies, including CD86--a component of the signaling axis targeted by Abatacept (CTLA4-Ig), and finally, 59 transcripts that are predicted to modulate the count of tender or swollen joints but not sufficiently enough to have a significant impact on DAS28.

  15. Causal Modeling Using Network Ensemble Simulations of Genetic and Gene Expression Data Predicts Genes Involved in Rheumatoid Arthritis

    Science.gov (United States)

    Xing, Heming; McDonagh, Paul D.; Bienkowska, Jadwiga; Cashorali, Tanya; Runge, Karl; Miller, Robert E.; DeCaprio, Dave; Church, Bruce; Roubenoff, Ronenn; Khalil, Iya G.; Carulli, John

    2011-01-01

    Tumor necrosis factor α (TNF-α) is a key regulator of inflammation and rheumatoid arthritis (RA). TNF-α blocker therapies can be very effective for a substantial number of patients, but fail to work in one third of patients who show no or minimal response. It is therefore necessary to discover new molecular intervention points involved in TNF-α blocker treatment of rheumatoid arthritis patients. We describe a data analysis strategy for predicting gene expression measures that are critical for rheumatoid arthritis using a combination of comprehensive genotyping, whole blood gene expression profiles and the component clinical measures of the arthritis Disease Activity Score 28 (DAS28) score. Two separate network ensembles, each comprised of 1024 networks, were built from molecular measures from subjects before and 14 weeks after treatment with TNF-α blocker. The network ensemble built from pre-treated data captures TNF-α dependent mechanistic information, while the ensemble built from data collected under TNF-α blocker treatment captures TNF-α independent mechanisms. In silico simulations of targeted, personalized perturbations of gene expression measures from both network ensembles identify transcripts in three broad categories. Firstly, 22 transcripts are identified to have new roles in modulating the DAS28 score; secondly, there are 6 transcripts that could be alternative targets to TNF-α blocker therapies, including CD86 - a component of the signaling axis targeted by Abatacept (CTLA4-Ig), and finally, 59 transcripts that are predicted to modulate the count of tender or swollen joints but not sufficiently enough to have a significant impact on DAS28. PMID:21423713

  16. Addressing preference heterogeneity in public health policy by combining Cluster Analysis and Multi-Criteria Decision Analysis

    DEFF Research Database (Denmark)

    Kaltoft, Mette Kjer; Turner, Robin; Cunich, Michelle;

    2015-01-01

    population in relation to the importance assigned to relevant criteria. It involves combining Cluster Analysis (CA), to generate the subgroup sets of preferences, with Multi-Criteria Decision Analysis (MCDA), to provide the policy framework into which the clustered preferences are entered. We employ three...

  17. Ensemble prediction of floods – catchment non-linearity and forecast probabilities

    Directory of Open Access Journals (Sweden)

    C. Reszler

    2007-07-01

    Full Text Available Quantifying the uncertainty of flood forecasts by ensemble methods is becoming increasingly important for operational purposes. The aim of this paper is to examine how the ensemble distribution of precipitation forecasts propagates in the catchment system, and to interpret the flood forecast probabilities relative to the forecast errors. We use the 622 km2 Kamp catchment in Austria as an example where a comprehensive data set, including a 500 yr and a 1000 yr flood, is available. A spatially-distributed continuous rainfall-runoff model is used along with ensemble and deterministic precipitation forecasts that combine rain gauge data, radar data and the forecast fields of the ALADIN and ECMWF numerical weather prediction models. The analyses indicate that, for long lead times, the variability of the precipitation ensemble is amplified as it propagates through the catchment system as a result of non-linear catchment response. In contrast, for lead times shorter than the catchment lag time (e.g. 12 h and less), the variability of the precipitation ensemble is decreased as the forecasts are mainly controlled by observed upstream runoff and observed precipitation. Assuming that all ensemble members are equally likely, the statistical analyses for five flood events at the Kamp showed that the ensemble spread of the flood forecasts is always narrower than the distribution of the forecast errors. This is because the ensemble forecasts focus on the uncertainty in forecast precipitation as the dominant source of uncertainty, and other sources of uncertainty are not accounted for. However, a number of analyses, including Relative Operating Characteristic diagrams, indicate that the ensemble spread is a useful indicator to assess potential forecast errors for lead times larger than 12 h.

  18. Nonequilibrium representative ensembles for isolated quantum systems

    International Nuclear Information System (INIS)

    An isolated quantum system is considered, prepared in a nonequilibrium initial state. In order to uniquely define the system dynamics, one has to construct a representative statistical ensemble. From the principle of least action it follows that the role of the evolution generator is played by a grand Hamiltonian, but not merely by its energy part. A theorem is proved expressing the commutators of field operators with operator products through variational derivatives of these products. A consequence of this theorem is the equivalence of the variational equations for field operators with the Heisenberg equations for the latter. A finite quantum system cannot equilibrate in the strict sense. But it can tend to a quasi-stationary state characterized by ergodic averages and the appropriate representative ensemble depending on initial conditions. Microcanonical ensemble, arising in the eigenstate thermalization, is just a particular case of representative ensembles. Quasi-stationary representative ensembles are defined by the principle of minimal information. The latter also implies the minimization of an effective thermodynamic potential. -- Highlights: → The evolution of a nonequilibrium isolated quantum system is considered. → The grand Hamiltonian is shown to be the evolution generator. → A theorem is proved connecting operator commutators with variational derivatives. → Quasi-stationary states are described by representative ensembles. → These ensembles, generally, depend on initial conditions.

  19. Near-infrared spectroscopy combined with equidistant combination partial least squares applied to multi-index analysis of corn

    Science.gov (United States)

    Lyu, Ning; Chen, Jiemei; Pan, Tao; Yao, Lijun; Han, Yun; Yu, Jing

    2016-05-01

The development of small, dedicated, reagentless, and low-cost spectrometers has broad application prospects in large-scale agriculture. An appropriate wavelength selection method is a key, albeit difficult, technical aspect. A novel wavelength selection method, named equidistant combination partial least squares (EC-PLS), was applied to wavenumber selection for near-infrared analysis of crude protein, moisture, and crude fat in corn. Based on EC-PLS, a model set that includes various models equivalent to the optimal model was proposed to select independent and joint-analyses models. The independent analysis models for crude protein, moisture, and crude fat contained only 16, 12, and 22 wavenumbers, whereas the joint-analyses model for the three indicators contained only 27 wavenumbers. Random validation samples excluded from the modeling process were used to validate the four selected models. For the independent analysis models, the validation root mean square errors (V_SEP), validation correlation coefficients (V_RP), and relative validation root mean square errors (V_RSEP) were 0.271%, 0.946, and 2.8% for crude protein, 0.275%, 0.936, and 2.6% for moisture, and 0.183%, 0.924, and 4.5% for crude fat, respectively. For the joint-analyses model, the V_SEP, V_RP, and V_RSEP were 0.302%, 0.934, and 3.2% for crude protein, 0.280%, 0.935, and 2.7% for moisture, and 0.228%, 0.910, and 5.6% for crude fat, respectively. The results indicated good validation effects and low complexity. Thus, the established models were simple and efficient. The proposed wavenumber selection method also provides a valuable reference for designing small, dedicated spectrometers for corn. Moreover, the methodological framework and optimization algorithm are universal, so they can be applied to other fields.
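The validation statistics quoted above are straightforward to compute from held-out samples. A minimal sketch (the function name and the reference/predicted values are illustrative, not from the paper):

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """Validation RMSE (V_SEP), correlation coefficient (V_RP), and relative
    RMSE (V_RSEP = V_SEP over the mean reference value), as commonly used to
    assess NIR calibration models on samples excluded from modeling."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sep = np.sqrt(np.mean((y_pred - y_true) ** 2))   # root mean square error
    rp = np.corrcoef(y_true, y_pred)[0, 1]           # correlation coefficient
    rsep = sep / np.mean(y_true)                     # relative error
    return sep, rp, rsep

# Hypothetical crude-protein reference vs NIR-predicted values (%)
ref = [9.0, 10.0, 11.0]
pred = [9.1, 9.9, 11.2]
sep, rp, rsep = validation_metrics(ref, pred)
print(f"V_SEP = {sep:.3f}%, V_RP = {rp:.3f}, V_RSEP = {rsep * 100:.1f}%")
```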

  20. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    Science.gov (United States)

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152
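The single-split ensemble learner (EL) described above can be caricatured on simulated data. In this sketch the data, the two base learners, and the grid search for the mixing weight are all illustrative assumptions; the actual EL uses a richer library of candidate algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: covariate L, binary treatment A ~ Bernoulli(expit(1.5 * L))
n, split = 2000, 1000
L = rng.normal(size=n)
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-1.5 * L)))

# Two hypothetical base learners for P(A = 1 | L): a constant (marginal)
# model and a logistic form with an assumed, mis-scaled coefficient.
def model_marginal(l):
    return np.full_like(l, A[:split].mean())

def model_logistic(l):
    return 1.0 / (1.0 + np.exp(-l))

def log_loss(p, a):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -np.mean(a * np.log(p) + (1 - a) * np.log(1 - p))

# EL-style single split: choose the convex mixing weight on one validation
# set (super learning would instead use V-fold cross validation).
Lv, Av = L[split:], A[split:]
alphas = np.linspace(0.0, 1.0, 101)
alpha = alphas[np.argmin([
    log_loss(a * model_logistic(Lv) + (1 - a) * model_marginal(Lv), Av)
    for a in alphas])]

# Stabilized weights: marginal P(A) over the ensemble estimate of P(A | L)
p_cond = alpha * model_logistic(L) + (1 - alpha) * model_marginal(L)
p_marg = A.mean()
sw = np.where(A == 1, p_marg / p_cond, (1 - p_marg) / (1 - p_cond))
print(f"mixing weight = {alpha:.2f}, mean stabilized weight = {sw.mean():.3f}")
```

The resulting stabilized weights would then be used to fit the marginal structural model; a mean weight near one is a common sanity check.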

  1. A Bayesian Ensemble Regression Framework on the Angry Birds Game

    OpenAIRE

    Tziortziotis, Nikolaos; Papagiannis, Georgios; Blekas, Konstantinos

    2014-01-01

An ensemble inference mechanism is proposed for the Angry Birds domain. It is based on an efficient tree structure for encoding and representing game screenshots, exploiting that structure's enhanced modeling capability. This has the advantage of establishing an informative feature space and recasting the task of game playing as a regression analysis problem. To this end, we assume that each type of object material and bird pair has its own Bayesian linear regression model. In this way, a multi-mode...

  2. Classification of Subcellular Phenotype Images by Decision Templates for Classifier Ensemble

    Science.gov (United States)

    Zhang, Bailing

    2010-01-01

Subcellular localization is a key functional characteristic of proteins. An automatic, reliable and efficient prediction system for protein subcellular localization is needed for large-scale genome analysis. The automated cell phenotype image classification problem is an interesting "bioimage informatics" application. It can be used to establish knowledge of the spatial distribution of proteins within living cells and permits the screening of systems for drug discovery or for early diagnosis of a disease. In this paper, three well-known texture feature extraction methods, namely local binary patterns (LBP), Gabor filtering and the Gray Level Co-occurrence Matrix (GLCM), have been applied to cell phenotype images, and the multiple layer perceptron (MLP) method has been used to classify the cell phenotype images. After classification of the extracted features, the decision-templates ensemble algorithm (DT) is used to combine base classifiers built on the different feature sets. Different texture feature sets can provide sufficient diversity among base classifiers, which is known as a necessary condition for improvement in ensemble performance. For the HeLa cells, the human classification error rate on this task is 17%, as reported in previous publications. With our method we obtain an error rate of 4.8%.
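The decision-templates combiner works on "decision profiles": the stacked soft outputs of all base classifiers for a sample. A minimal sketch with toy numbers (the soft outputs below are hypothetical, not HeLa data):

```python
import numpy as np

def fit_decision_templates(profiles, labels, n_classes):
    """Decision template for class c = mean decision profile over the
    training samples of class c.

    profiles: array (n_samples, n_classifiers, n_classes) of soft outputs."""
    templates = np.zeros((n_classes,) + profiles.shape[1:])
    for c in range(n_classes):
        templates[c] = profiles[labels == c].mean(axis=0)
    return templates

def predict_dt(profiles, templates):
    """Assign each sample to the class whose template is closest to its
    decision profile (squared Euclidean distance)."""
    d = ((profiles[:, None, :, :] - templates[None, :, :, :]) ** 2).sum(axis=(2, 3))
    return d.argmin(axis=1)

# Toy example: 2 base classifiers (e.g. LBP- and GLCM-based), 2 classes
train = np.array([[[0.9, 0.1], [0.8, 0.2]],   # class 0 samples
                  [[0.7, 0.3], [0.9, 0.1]],
                  [[0.2, 0.8], [0.1, 0.9]],   # class 1 samples
                  [[0.3, 0.7], [0.2, 0.8]]])
y = np.array([0, 0, 1, 1])
T = fit_decision_templates(train, y, n_classes=2)
test = np.array([[[0.85, 0.15], [0.75, 0.25]]])
print(predict_dt(test, T))  # → [0]
```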

  3. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Francesco Cavrini

    2016-01-01

Full Text Available We evaluate the applicability of classifier combination using fuzzy measures and integrals to Brain-Computer Interfaces (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from 5 subjects suggests that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
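One standard fuzzy-integral combiner is the Choquet integral of the per-class classifier supports with respect to a fuzzy measure. The sketch below uses made-up measure values and a simple margin-based abstention rule standing in for the paper's uncertainty detection (real systems would learn the measure from data):

```python
import numpy as np

def choquet_integral(h, mu):
    """Choquet integral of classifier supports h (one value per classifier)
    w.r.t. a fuzzy measure mu: a dict mapping frozensets of classifier
    indices to measure values, monotone with mu[full set] = 1."""
    order = np.argsort(h)[::-1]          # sort supports in descending order
    subset = frozenset()
    total, prev = 0.0, 0.0
    for idx in order:
        subset = subset | {int(idx)}
        g = mu[subset]
        total += h[idx] * (g - prev)
        prev = g
    return total

# Hypothetical (monotone) fuzzy measure over 3 base classifiers
mu = {frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
      frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
      frozenset({0, 1, 2}): 1.0}

def classify_with_abstention(supports, mu, margin=0.1):
    """supports: (n_classifiers, n_classes) soft outputs; abstain when the
    top two fused class scores are closer than `margin` (uncertain case)."""
    fused = np.array([choquet_integral(supports[:, c], mu)
                      for c in range(supports.shape[1])])
    best = np.argsort(fused)[::-1]
    if fused[best[0]] - fused[best[1]] < margin:
        return None                      # abstain instead of guessing
    return int(best[0])

supports = np.array([[0.9, 0.1],
                     [0.7, 0.3],
                     [0.8, 0.2]])        # three classifiers, P300 vs non-P300
print(classify_with_abstention(supports, mu))  # → 0
```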

  4. Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms

    Science.gov (United States)

    Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin

    2013-01-01

Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) and the ensemble Kalman filter (EnKF) algorithms can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. The behavior of the algorithm is also tested in the presence of model error.
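The constrained update can be illustrated in its simplest form: the Euclidean projection of an analysis state onto the set {x ≥ 0, Σx = mass}. This is only a special case of the paper's quadratic programming formulation (it assumes an identity weighting), implemented here with the standard sort-based simplex projection:

```python
import numpy as np

def project_mass_positive(x, mass):
    """Euclidean projection of an analysis state x onto
    {v : v >= 0, sum(v) = mass} - a sketch of the QP step that restores
    total mass and positivity after an EnKF measurement update."""
    x = np.asarray(x, dtype=float)
    u = np.sort(x)[::-1]                         # descending sort
    css = np.cumsum(u) - mass
    rho = np.nonzero(u * np.arange(1, len(x) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)               # uniform shift
    return np.maximum(x - theta, 0.0)

# Analysis member with a spurious negative value; total mass must stay 6.0
xa = np.array([3.0, 2.5, 1.0, -0.5])
xp = project_mass_positive(xa, mass=6.0)
print(xp, xp.sum())
```

The projected member is nonnegative and its components again sum to the prescribed total mass.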

  5. '3-Dimensional' TEM silicon-device analysis by combining plan-view and FIB sample preparation

    International Nuclear Information System (INIS)

Cross-sectional transmission electron microscopy (TEM) analysis has become routinely used in the semiconductor industry to support failure and yield analysis. Plan-view transmission electron microscopy analysis, however, is performed much less frequently. In this paper it is illustrated that plan-view transmission electron microscopy analysis can add valuable information in yield analysis studies, especially when crystal defects are involved. '3-Dimensional' information can be obtained by combining cross-sectional transmission electron microscopy analysis with plan-view analysis. If the available material is limited, it can become a difficult choice whether to go for a cross-sectional or a plan-view analysis. We therefore explored whether a cross-sectional specimen could still be made out of a plan-view specimen, using the plan-view analysis to locate the failure site precisely. This has recently been done successfully using the in-situ lift-out technique in the focused ion beam machine

  6. A variational ensemble scheme for noisy image data assimilation

    Science.gov (United States)

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2014-05-01

Data assimilation techniques aim at recovering the trajectory of a system's state variables, denoted X, over time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple the dynamics with noisy measurements of the system, indeed fulfill a twofold objective. On the one hand, they provide a denoising (or reconstruction) procedure for the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess: $J(\eta(x)) = \frac{1}{2}\|X_b(x) - X(t_0,x)\|^2_B + \frac{1}{2}\int_{t_0}^{t_f}\|H(X(t,x)) - Y(t,x)\|^2_R\,\mathrm{d}t$ (1), where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as: $B \equiv \langle (X_b - \langle X_b \rangle)(X_b - \langle X_b \rangle)^T \rangle$ (2). Thus, it works in an off-line smoothing mode rather than on the fly like sequential filters. The resulting ensemble variational data assimilation technique corresponds to a relatively new family of methods [1,2,3]. It presents two main advantages: first, it no longer requires constructing the adjoint of the dynamics' tangent linear operator, which is a considerable advantage with respect to the method's implementation; and second, it enables the handling of a flow
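The empirical ensemble-based background-error covariance B and the quadratic background term of the cost function can be sketched as follows. This is a toy illustration with random states, not the authors' implementation; the small ridge regularization is an added assumption to keep the (possibly low-rank) sample covariance invertible:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble of N background states of dimension n
N, n = 20, 5
Xb_ens = rng.normal(size=(N, n))

# Empirical ensemble background-error covariance:
# B = <(Xb - <Xb>)(Xb - <Xb>)^T>
anom = Xb_ens - Xb_ens.mean(axis=0)
B = anom.T @ anom / (N - 1)

def background_term(x0, xb, B, eps=1e-8):
    """First term of the variational cost: 0.5 * (xb - x0)^T B^{-1} (xb - x0).
    The eps ridge is an assumption for numerical invertibility."""
    d = xb - x0
    return 0.5 * d @ np.linalg.solve(B + eps * np.eye(len(d)), d)

xb = Xb_ens.mean(axis=0)
print(background_term(xb + 0.1, xb, B))
```

The observation term would be accumulated analogously over the assimilation window, with R-weighted innovations in place of the B-weighted background discrepancy.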

  7. Ensemble estimators for multivariate entropy estimation

    CERN Document Server

    Sricharan, Kumar

    2012-01-01

The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the exponent in the MSE rate of convergence decays increasingly slowly as the dimension $d$ of the samples increases. In particular, the rate is often glacially slow of order $O(T^{-\gamma/d})$, where $T$ is the number of samples, and $\gamma>0$ is a rate parameter. Examples of such estimators include kernel density estimators, $k$-NN density estimators, $k$-NN entropy estimators, intrinsic dimension estimators and other examples. In this paper, we propose a weighted convex combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension invariant rate of $O(T^{-1})$. Furthermore, we show that these optimal weights can be determined by so...
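The weighted convex combination can be illustrated with a simple ensemble of plug-in estimators. Here the base estimators are histogram entropy estimates with different bin counts and the weights are fixed illustrative values; the paper's ensemble uses k-NN-type base estimators and chooses the weights by solving an optimization problem:

```python
import numpy as np

def histogram_entropy(samples, bins):
    """Plug-in (histogram) differential entropy estimate; each bin count
    plays the role of one base estimator in the ensemble."""
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz] / widths[nz])))

def ensemble_entropy(samples, bin_grid, weights):
    """Weighted convex combination of the base estimators; the weights here
    are fixed illustrative values summing to one."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return sum(w * histogram_entropy(samples, b)
               for w, b in zip(weights, bin_grid))

rng = np.random.default_rng(7)
x = rng.normal(size=20000)   # true entropy = 0.5 * ln(2*pi*e) ≈ 1.4189 nats
est = ensemble_entropy(x, bin_grid=[20, 40, 80], weights=[0.2, 0.5, 0.3])
print(round(est, 3))
```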

  8. Derivation of Mayer Series from Canonical Ensemble

    Science.gov (United States)

    Xian-Zhi, Wang

    2016-02-01

    Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.

  9. Generating precipitation ensembles for flood alert and risk management

    Science.gov (United States)

    Caseri, Angelica; Javelle, Pierre; Ramos, Maria-Helena; Leblois, Etienne

    2015-04-01

    Floods represent one of the major natural disasters that are often responsible for fatalities and economic losses. Flood warning systems are needed to anticipate the arrival of severe events and mitigate their impacts. Flood alerts are particularly important for risk management and response in the nowcasting of flash floods. In this case, precipitation fields observed in real time play a crucial role and observational uncertainties must be taken into account. In this study, we investigate the potential of a framework which combines a geostatistical conditional simulation method that considers information from precipitation radar and rain gauges, and a distributed rainfall-runoff model to generate an ensemble of precipitation fields and produce probabilistic flood alert maps. We adapted the simulation method proposed by Leblois and Creutin (2013), based on the Turning Band Method (TBM) and a conditional simulation approach, to consider the temporal and spatial characteristics of radar data and rain gauge measurements altogether and generate precipitation ensembles. The AIGA system developed by Irstea and Météo-France for predicting flash floods in the French Mediterranean region (Javelle et al., 2014) was used to transform the generated precipitation ensembles into ensembles of discharge at the outlet of the studied catchments. Finally, discharge ensembles were translated into maps providing information on the probability of exceeding a given flood threshold. A total of 19 events that occurred between 2009 and 2013 in the Var region (southeastern France), a region prone to flash floods, was used to illustrate the approach. Results show that the proposed method is able to simulate an ensemble of realistic precipitation fields and capture peak flows of flash floods. This was shown to be particularly useful at ungauged catchments, where uncertainties on the evaluation of flood peaks are high. The results obtained also show that the approach developed can be used to
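The final step described above (translating discharge ensembles into threshold-exceedance probability maps) can be sketched with toy numbers. The gamma-distributed peak flows and the alert thresholds below are hypothetical placeholders for the TBM precipitation ensembles routed through the AIGA system:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ensemble of simulated peak discharges: 50 precipitation
# members routed through a rainfall-runoff model for 4 catchments (m^3/s)
n_members, n_catchments = 50, 4
peaks = rng.gamma(shape=4.0, scale=[10.0, 25.0, 40.0, 60.0],
                  size=(n_members, n_catchments))

# Alert thresholds per catchment (m^3/s)
thresholds = np.array([60.0, 120.0, 200.0, 300.0])

# Probabilistic alert map: fraction of members exceeding each threshold
alert_prob = (peaks > thresholds).mean(axis=0)
for i, p in enumerate(alert_prob):
    print(f"catchment {i}: P(exceed threshold) = {p:.2f}")
```

In an operational map, each catchment polygon would be colored by its exceedance probability rather than by a single deterministic forecast.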

  10. Psychological treatment versus combined treatment of depression: A meta-analysis.

    OpenAIRE

    Cuijpers, P.; Straten, van, A.; Warmerdam, E.H.; Andersson, G.

    2009-01-01

    Background: A large number of studies have shown that psychological treatments have significant effects on depression. Although several studies have examined the relative effects of psychological and combined treatments, this has not been studied satisfactorily in recent statistical meta-analyses. Method: We conducted a meta-analysis of randomized studies in which a psychological treatment was compared to a combined treatment consisting of the same psychological treatment with a pharmacologic...

  11. Nondestructive analysis by combined X-ray tomography on a synchrotron radiation facility

    Institute of Scientific and Technical Information of China (English)

    DENG Biao; YU Xiaohan; LI Aiguo; XU Hongjie

    2007-01-01

    A nondestructive X-ray analysis technique combining transmission tomography, fluorescence tomography and Compton tomography based on synchrotron radiation is described. This novel technique will be an optional experimental technique at SSRF's hard X-ray micro-focusing beamline under construction at present. An experimental result of combined X-ray tomography is obtained in NE-5A station of PF. The reconstructed images of test objects are given.

  12. Atomic clock ensemble in space

    International Nuclear Information System (INIS)

    Atomic Clock Ensemble in Space (ACES) is a mission using high-performance clocks and links to test fundamental laws of physics in space. Operated in the microgravity environment of the International Space Station, the ACES clocks, PHARAO and SHM, will generate a frequency reference reaching instability and inaccuracy at the 1 · 10−16 level. A link in the microwave domain (MWL) and an optical link (ELT) will make the ACES clock signal available to ground laboratories equipped with atomic clocks. Space-to-ground and ground-to-ground comparisons of atomic frequency standards will be used to test Einstein's theory of general relativity including a precision measurement of the gravitational red-shift, a search for time variations of fundamental constants, and Lorentz Invariance tests. Applications in geodesy, optical time transfer, and ranging will also be supported. ACES has now reached an advanced technology maturity, with engineering models completed and successfully tested and flight hardware under development. This paper presents the ACES mission concept and the status of its main instruments.

  13. Cooperative effects of neuronal ensembles.

    Science.gov (United States)

    Rose, G; Siebler, M

    1995-01-01

    Electrophysiological properties of neurons as the basic cellular elements of the central nervous system and their synaptic connections are well characterized down to a molecular level. However, the behavior of complex noisy networks formed by these constituents usually cannot simply be derived from the knowledge of its microscopic parameters. As a consequence, cooperative phenomena based on the interaction of neurons were postulated. This is a report on a study of global network spike activity as a function of synaptic interaction. We performed experiments in dissociated cultured hippocampal neurons and, for comparison, simulations of a mathematical model closely related to electrophysiology. Numeric analyses revealed that at a critical level of synaptic connectivity the firing behavior undergoes a phase transition. This cooperative effect depends crucially on the interaction of numerous cells and cannot be attributed to the spike threshold of individual neurons. In the experiment a drastic increase in the firing level was observed upon increase of synaptic efficacy by lowering of the extracellular magnesium concentration, which is compatible with our theoretical predictions. This "on-off" phenomenon demonstrates that even in small neuronal ensembles collective behavior can emerge which is not explained by the characteristics of single neurons. PMID:8542966
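The postulated cooperative transition can be caricatured with a mean-field toy model (not the authors' electrophysiological model): the population firing fraction a must satisfy a self-consistency equation, and sweeping the synaptic coupling produces an abrupt on-off jump in the steady firing level. All parameter values below are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def steady_activity(coupling, theta=0.5, temp=0.12, a0=0.05, iters=500):
    """Mean-field sketch of an all-to-all network: iterate the firing
    fraction a = sigmoid((coupling * a - theta) / temp) from a low,
    noise-level initial state. Below a critical coupling the activity
    stays near zero; above it, the network ignites."""
    a = a0
    for _ in range(iters):
        a = sigmoid((coupling * a - theta) / temp)
    return a

for c in [1.0, 2.0, 3.0, 4.0]:
    print(c, round(steady_activity(c), 3))
```

The jump between low and high firing as the coupling crosses a threshold mimics the experimentally observed increase in firing when synaptic efficacy is raised by lowering extracellular magnesium.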

  14. Parametric analysis for a new combined power and ejector-absorption refrigeration cycle

    International Nuclear Information System (INIS)

    A new combined power and ejector-absorption refrigeration cycle is proposed, which combines the Rankine cycle and the ejector-absorption refrigeration cycle, and could produce both power output and refrigeration output simultaneously. This combined cycle, which originates from the cycle proposed by authors previously, introduces an ejector between the rectifier and the condenser, and provides a performance improvement without greatly increasing the complexity of the system. A parametric analysis is conducted to evaluate the effects of the key thermodynamic parameters on the cycle performance. It is shown that heat source temperature, condenser temperature, evaporator temperature, turbine inlet pressure, turbine inlet temperature, and basic solution ammonia concentration have significant effects on the net power output, refrigeration output and exergy efficiency of the combined cycle. It is evident that the ejector can improve the performance of the combined cycle proposed by authors previously.

  15. The SFM/ToF-SIMS combination for advanced chemically-resolved analysis at the nanoscale

    International Nuclear Information System (INIS)

    The combination of Time-of-flight Secondary Ion Mass Spectrometry (ToF-SIMS) and Scanning Force Microscopy (SFM) allows the 3D-compositional analysis of samples or devices. Typically, the topographical data obtained by SFM is used to determine the initial sample topography and the absolute depth of the ToF-SIMS analysis. Here ToF-SIMS and SFM data sets obtained on 2 prototypical samples are explored to go beyond conventional 3D-compositional analysis. SFM topographical and material contrast maps are combined with ToF-SIMS retrospective analysis to detect features that would have escaped a conventional ToF-SIMS data analysis. In addition, SFM data is used to extrapolate the chemical information beyond the spatial resolution of ToF-SIMS, allowing the mapping of the chemical composition at the nanoscale

  16. Direct Correlation of Cell Toxicity to Conformational Ensembles of Genetic Aβ Variants

    DEFF Research Database (Denmark)

    Somavarapu, Arun Kumar; Kepp, Kasper Planeta

    2015-01-01

    We report a systematic analysis of conformational ensembles generated from multiseed molecular dynamics simulations of all 15 known genetic variants of Aβ42. We show that experimentally determined variant toxicities are largely explained by random coil content of the amyloid ensembles (correlatio...

  17. A unified MGF-based capacity analysis of diversity combiners over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-03-01

    Unified exact ergodic capacity results for L-branch coherent diversity combiners including equal-gain combining (EGC) and maximal-ratio combining (MRC) are not known. This paper develops a novel generic framework for the capacity analysis of L-branch EGC/MRC over generalized fading channels. The framework is used to derive new results for the gamma-shadowed generalized Nakagami-m fading model which can be a suitable model for the fading environments encountered by high frequency (60 GHz and above) communications. The mathematical formalism is illustrated with some selected numerical and simulation results confirming the correctness of our newly proposed framework. © 2012 IEEE.
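For intuition, the ergodic capacity of L-branch MRC can be checked by Monte Carlo simulation. This is a sketch for plain i.i.d. Nakagami-m fading without shadowing (the paper's MGF framework covers the more general gamma-shadowed generalized Nakagami-m model analytically):

```python
import numpy as np

rng = np.random.default_rng(11)

def mrc_ergodic_capacity(L, m, snr_db, n_trials=200000):
    """Monte Carlo ergodic capacity (bits/s/Hz) of L-branch maximal-ratio
    combining over i.i.d. Nakagami-m fading: the MRC output SNR is the sum
    of branch SNRs, and branch power gains are Gamma(m, 1/m) (unit mean)."""
    snr = 10 ** (snr_db / 10)
    gains = rng.gamma(shape=m, scale=1.0 / m, size=(n_trials, L))
    gamma_out = snr * gains.sum(axis=1)
    return float(np.mean(np.log2(1.0 + gamma_out)))

# Diversity gain: more branches -> higher ergodic capacity at the same SNR
for L in [1, 2, 4]:
    print(L, round(mrc_ergodic_capacity(L, m=2.0, snr_db=10.0), 2))
```

Such simulations are exactly the kind of check used in the paper to confirm the analytical framework.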

  18. Application of sensitivity analysis in building energy simulations: combining first and second order elementary effects Methods

    CERN Document Server

    Sanchez, David Garcia; Musy, Marjorie; Bourges, Bernard

    2012-01-01

Sensitivity analysis plays an important role in the understanding of complex models. It helps to identify the influence of input parameters in relation to the outputs. It can also be a tool for understanding the behavior of a model and can therefore help during its development stage. This study aims to analyze and illustrate the potential usefulness of combining first- and second-order sensitivity analysis, applied to a building energy model (ESP-r). Through the example of a collective building, a sensitivity analysis is performed using the method of elementary effects (also known as the Morris method), including an analysis of interactions between the input parameters (second-order analysis). The importance of higher-order analysis in supporting the results of the first-order analysis is highlighted, especially for such a complex model. Several aspects are tackled to implement the multi-order sensitivity analysis efficiently: the interval size of the variables, the management of non-linearity, and the usefulness of various outputs.
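A minimal sketch of elementary-effects (Morris) screening on a hypothetical toy model (the trajectory construction is simplified relative to standard implementations): mu* (the mean absolute elementary effect) ranks first-order influence, while sigma flags the non-linearity and interactions that the second-order analysis above targets.

```python
import numpy as np

rng = np.random.default_rng(2)

def morris_elementary_effects(model, k, r=20, delta=0.5):
    """For r random trajectories in [0, 1]^k, perturb one input at a time
    by +/- delta and record the elementary effect of each input."""
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.random(k)
        for i in rng.permutation(k):
            step = delta if x[i] + delta <= 1.0 else -delta
            x_new = x.copy()
            x_new[i] += step
            effects[t, i] = (model(x_new) - model(x)) / step
            x = x_new
    mu_star = np.abs(effects).mean(axis=0)   # influence ranking
    sigma = effects.std(axis=0)              # non-linearity / interactions
    return mu_star, sigma

# Hypothetical building-energy-like model: x0 linear, x1 strong,
# x2 acting mainly through an interaction with x1
def toy_model(x):
    return 2.0 * x[0] + 10.0 * x[1] + 5.0 * x[1] * x[2]

mu_star, sigma = morris_elementary_effects(toy_model, k=3)
print(np.round(mu_star, 1), np.round(sigma, 2))
```

Here the purely linear input has sigma near zero, while the interacting inputs show non-zero sigma, which is what motivates the second-order analysis.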

  19. Method to detect gravitational waves from an ensemble of known pulsars

    CERN Document Server

    Fan, Xilong; Messenger, Christopher

    2016-01-01

Combining information from weak sources, such as known pulsars, for gravitational wave detection, is an attractive approach to improve detection efficiency. We propose an optimal statistic for a general ensemble of signals and apply it to an ensemble of known pulsars. Our method combines $\mathcal F$-statistic values from individual pulsars using weights proportional to each pulsar's expected optimal signal-to-noise ratio to improve the detection efficiency. We also point out that to detect at least one pulsar within an ensemble, different thresholds should be designed for each source based on the expected signal strength. The performance of our proposed detection statistic is demonstrated using simulated sources, with the assumption that all pulsars' ellipticities belong to a common (yet unknown) distribution. Comparing with an equal-weight strategy and with individual source approaches, we show that the weighted-combination of all known pulsars, where weights are assigned based on the pulsars' known informa...
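A toy version of the weighted combination can be simulated directly. The chi-squared model for the per-pulsar detection statistic (2F: chi-squared with 4 degrees of freedom, non-centrality equal to the squared SNR) is a standard assumption, but the per-pulsar SNR values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def weighted_statistic(f_stats, expected_snr2):
    """Combine per-pulsar detection-statistic values with weights
    proportional to each pulsar's expected (squared) optimal SNR."""
    w = np.asarray(expected_snr2, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * np.asarray(f_stats)))

# Hypothetical per-pulsar expected squared SNRs for a 4-pulsar ensemble
snr2 = np.array([9.0, 4.0, 1.0, 0.25])
n_trials = 5000
noise = np.array([weighted_statistic(rng.chisquare(4, size=4), snr2)
                  for _ in range(n_trials)])
signal = np.array([weighted_statistic(rng.noncentral_chisquare(4, snr2), snr2)
                   for _ in range(n_trials)])

# Detection probability at the threshold giving 1% false-alarm probability
thresh = np.quantile(noise, 0.99)
print(f"P(detect) at 1% FAP: {np.mean(signal > thresh):.2f}")
```

Repeating the experiment with equal weights would show the lower detection probability that motivates the SNR-proportional weighting.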

  20. The Parallel Data Assimilation Framework PDAF - a flexible software framework for ensemble data assimilation

    OpenAIRE

    Nerger, Lars; Hiller, Wolfgang; Schröter, Jens

    2012-01-01

    Ensemble filter algorithms can be implemented in a generic way such that they can be applied with various models with only a minimum amount of recoding. This is possible due to the fact that ensemble filters can operate on abstract state vectors and require only limited information about the numerical model and the observational data used for a data assimilation application. To build an assimilation system, the analysis step of a filter algorithm needs to be connected to t...

  1. Quantum teleportation between remote atomic-ensemble quantum memories.

    Science.gov (United States)

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-12-11

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10(8) rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing. PMID:23144222

  3. Cloud-Aerosol-Radiation (CAR ensemble modeling system

    Directory of Open Access Journals (Sweden)

    X.-Z. Liang

    2013-04-01

    A Cloud-Aerosol-Radiation (CAR) ensemble modeling system has been developed to incorporate the largest choice of alternative parameterizations for cloud properties (cover, water, radius, optics, geometry), aerosol properties (type, profile, optics), radiation transfers (solar, infrared), and their interactions. These schemes form the most comprehensive collection currently available in the literature, including those used by the world's leading general circulation models (GCMs). CAR provides a unique framework to determine (via intercomparison across all schemes), reduce (via optimized ensemble simulations), and attribute (via physical process sensitivity analyses) specific key factors in the model discrepancies and uncertainties in representing greenhouse gas, aerosol and cloud radiative forcing effects. This study presents a general description of the CAR system and illustrates its capabilities for climate modeling applications, especially in the context of estimating climate sensitivity and the uncertainty range caused by cloud-aerosol-radiation interactions. For demonstration purposes, the evaluation is based on several CAR standalone and coupled climate model experiments, each comparing a limited subset of the full system ensemble with up to 896 members. It is shown that the quantification of radiative forcings and climate impacts strongly depends on the choices of the cloud, aerosol and radiation schemes. The prevailing schemes used in current GCMs are likely insufficient in variety and physically biased in a significant way. There exists large room for improvement by optimally combining radiation transfer with cloud property schemes.

  4. Gradient flow and scale setting on MILC HISQ ensembles

    CERN Document Server

    Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Komijani, J; Laiho, J; Levkova, L; Sugar, R L; Toussaint, D; Van de Water, R S

    2015-01-01

    We report on a scale determination with gradient-flow techniques on the $N_f=2+1+1$ HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from approximately 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales $\sqrt{t_0}/a$ and $w_0/a$ and their tree-level improvements, $\sqrt{t_{0,{\rm imp}}}$ and $w_{0,{\rm imp}}$, are computed on each ensemble using Symanzik flow and the cloverleaf definition of the energy density $E$. Using a combination of continuum chiral perturbation theory and a Taylor-series ansatz for the lattice-spacing and strong-coupling dependence, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We determine the scales $\sqrt{t_0} = 0.1416({}_{-5}^{+8})$ fm and $w_0 = 0.1717({}_{-11}^{+12})$ fm, where the errors are sums, in quadrature, of statistical and all systematic errors. The precision of $w_0$ and $\sqrt{t_0}$ is comparable to or more precise than...
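The flow scales above are defined implicitly: t0 is the flow time at which the dimensionless combination t²⟨E(t)⟩ reaches the reference value 0.3. As a hedged illustration (the toy flow data and the function name `t0_scale` are our assumptions, not the collaboration's analysis code), a minimal sketch of reading t0 off tabulated flow measurements:

```python
import numpy as np

def t0_scale(ts, t2E, target=0.3):
    """Find the flow time t0 where t^2 * <E(t)> first crosses `target`
    (0.3 in the standard definition), by linear interpolation between
    tabulated flow measurements (assumed monotonically increasing)."""
    ts, t2E = np.asarray(ts), np.asarray(t2E)
    i = np.searchsorted(t2E, target)
    if i == 0 or i == len(ts):
        raise ValueError("target not bracketed by the data")
    frac = (target - t2E[i - 1]) / (t2E[i] - t2E[i - 1])
    return ts[i - 1] + frac * (ts[i] - ts[i - 1])

# toy flow data with t^2 <E> = 0.2 * t, so the crossing is at t0 = 1.5
ts = np.linspace(0.1, 3.0, 300)
t0 = t0_scale(ts, 0.2 * ts)  # -> 1.5
```

On real lattice data t²⟨E⟩ carries statistical errors, and w0 is obtained analogously from the crossing of the logarithmic derivative t d/dt [t²⟨E⟩] = 0.3.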

  5. Local Ensemble Kalman Particle Filters for efficient data assimilation

    CERN Document Server

    Robert, Sylvain

    2016-01-01

    Ensemble methods such as the Ensemble Kalman Filter (EnKF) are widely used for data assimilation in large-scale geophysical applications, for example in numerical weather prediction (NWP). There is growing interest in physical models with higher and higher resolution, which brings new challenges for data assimilation techniques because of the presence of non-linear and non-Gaussian features that are not adequately treated by the EnKF. We propose two new localized algorithms based on the Ensemble Kalman Particle Filter (EnKPF), a hybrid method combining the EnKF and the Particle Filter (PF) in a way that maintains scalability and sample diversity. Localization is a key element of the success of EnKFs in practice, but it is much more challenging to apply to PFs. The algorithms that we introduce in the present paper provide a compromise between the EnKF and the PF while avoiding some of the problems of localization for pure PFs. Numerical experiments with a simplified model of cumulus convection based on a...
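The EnKF building block that the EnKPF hybridizes with a particle filter can be sketched in a few lines. This is a generic stochastic (perturbed-observation) EnKF analysis step, not the authors' localized algorithm; the toy dimensions and the name `enkf_analysis` are assumptions:

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n, N) forecast ensemble, y: (p,) observation,
    H: (p, n) observation operator, R: (p, p) observation error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # each member is updated against its own perturbed copy of the observation
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Yp - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2, 200))            # prior ensemble around 0
Xa = enkf_analysis(X, np.array([1.0, 1.0]), np.eye(2), 0.25 * np.eye(2), rng)
# the analysis mean moves toward the observation and the spread shrinks
```

In practice the gain is localized (tapered with distance) rather than computed from the full sample covariance as here, which is exactly the step that is hard to transfer to particle filters.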

  6. Transition from Poisson to circular unitary ensemble

    Indian Academy of Sciences (India)

    Vinayak; Akhilesh Pandey

    2009-09-01

    Transitions to universality classes of random matrix ensembles have been useful in the study of weakly-broken symmetries in quantum chaotic systems. Transitions involving Poisson as the initial ensemble have been particularly interesting. The exact two-point correlation function was derived by one of the present authors for the Poisson to circular unitary ensemble (CUE) transition with uniform initial density. This is given in terms of a rescaled symmetry breaking parameter Λ. The same result was obtained for the Poisson to Gaussian unitary ensemble (GUE) transition by Kunz and Shapiro, using the contour-integral method of Brezin and Hikami. We show that their method is applicable to the Poisson to CUE transition with arbitrary initial density. Their method is also applicable to the more general ℓCUE to CUE transition, where ℓCUE refers to the superposition of ℓ independent CUE spectra in arbitrary ratio.

  7. Data assimilation the ensemble Kalman filter

    CERN Document Server

    Evensen, Geir

    2006-01-01

    Covers data assimilation and inverse methods, including both traditional state estimation and parameter estimation. This text and reference focuses on various popular data assimilation methods, such as weak and strong constraint variational methods and ensemble filters and smoothers.

  8. Irreplaceability of Neuronal Ensembles after Memory Allocation

    Directory of Open Access Journals (Sweden)

    Naoki Matsuo

    2015-04-01

    Lesion studies suggest that an alternative system can compensate for damage to the primary region employed when animals acquire a memory. However, it is unclear whether functional compensation occurs at the cellular ensemble level. Here, we inhibited the activities of a specific subset of neurons activated during initial learning by utilizing a transgenic mouse that expresses tetanus toxin (TeNT) under the control of the c-fos promoter. Notably, suppression interfered with relearning while sparing the ability to acquire and express fear memory for a distinct context. These results suggest that the activity of the initial ensemble is preferentially dedicated to the same learning and that it is not replaceable once it is allocated. Our results provide substantial insights into the machinery underlying how the brain allocates individual memories to discrete neuronal ensembles and how it ensures that repetitive learning strengthens memory by reactivating the same neuronal ensembles.

  9. Ensemble Machine Learning Methods and Applications

    CERN Document Server

    Ma, Yunqian

    2012-01-01

    It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...

  10. Sensitivity of regional ensemble data assimilation spread to perturbations of lateral boundary conditions

    Directory of Open Access Journals (Sweden)

    Rachida El Ouaraini

    2015-12-01

    The implementation of a regional ensemble data assimilation and forecasting system requires the specification of appropriate perturbations of lateral boundary conditions (LBCs), in order to simulate associated errors. The sensitivity of analysis and 6-h forecast ensemble spread to these perturbations is studied here formally and experimentally by comparing three different LBC configurations for the ensemble data assimilation system of the ALADIN-France limited-area model (LAM). While perturbed initial LBCs are provided by the perturbed LAM analyses in each ensemble, the three ensemble configurations differ with respect to the LBCs used at 3- and 6-h forecast ranges, which respectively correspond to: (1) perturbed LBCs provided by the operational global ensemble data assimilation system (GLBC), which is considered as a reference configuration; (2) unperturbed LBCs (ULBC) obtained from the global deterministic model; (3) perturbed LBCs obtained by adding random draws from an error covariance model (PLBC) to the global deterministic system. A formal analysis of error and perturbation equations is first carried out, in order to provide insight into the relative effects of observation perturbations and of LBC perturbations at different ranges in the various ensemble configurations. Horizontal variations of time-averaged ensemble spread are then examined for 6-h forecasts. Despite the use of perturbed initial LBCs, the regional ensemble ULBC is underdispersive not only near the lateral boundaries, but also in approximately one-third of the inner area, due to advection during the data assimilation cycle. This artefact is avoided in PLBC through the additional use of non-zero LBC perturbations at 3- and 6-h ranges, and the sensitivity to the amplitude scaling of the covariance model is illustrated for this configuration. Some aspects of the temporal variation of ensemble spread and associated sensitivities to LBC perturbations are also studied. These results

  11. BRAID: A Unifying Paradigm for the Analysis of Combined Drug Action.

    Science.gov (United States)

    Twarog, Nathaniel R; Stewart, Elizabeth; Hammill, Courtney Vowell; A Shelat, Anang

    2016-01-01

    With combination therapies becoming increasingly vital to understanding and combatting disease, a reliable method for analyzing combined dose response is essential. The importance of combination studies both in basic and translational research necessitates a method that can be applied to a wide range of experimental and analytical conditions. However, despite increasing demand, no such unified method has materialized. Here we introduce the Bivariate Response to Additive Interacting Doses (BRAID) model, a response surface model that combines the simplicity and intuitiveness needed for basic interaction classifications with the versatility and depth needed to analyze a combined response in the context of pharmacological and toxicological constraints. We evaluate the model in a series of simulated combination experiments, a public combination dataset, and several experiments on Ewing's Sarcoma. The resulting interaction classifications are more consistent than those produced by traditional index methods, and show a strong relationship between compound mechanisms and nature of interaction. Furthermore, analysis of fitted response surfaces in the context of pharmacological constraints yields a more concrete prediction of combination efficacy that better agrees with in vivo evaluations. PMID:27160857
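For orientation, the classical baseline that response-surface models such as BRAID are compared against can be sketched directly. The snippet below implements Bliss independence from two Hill dose-response curves (a hedged illustration; the BRAID surface itself has its own parametric form, which is not reproduced here):

```python
import numpy as np

def hill(d, ec50, h):
    """Fractional effect of a single drug at dose d (Hill curve, in [0, 1))."""
    return d**h / (ec50**h + d**h)

def bliss_surface(d1, d2, ec50_1, ec50_2, h1=1.0, h2=1.0):
    """Expected combined effect under Bliss independence,
    E = E1 + E2 - E1*E2, i.e. the two drugs act as independent 'events'."""
    e1, e2 = hill(d1, ec50_1, h1), hill(d2, ec50_2, h2)
    return e1 + e2 - e1 * e2

# each drug alone at its EC50 gives effect 0.5; Bliss predicts 0.75 combined,
# and a measured effect above/below that suggests synergy/antagonism
expected = bliss_surface(1.0, 1.0, 1.0, 1.0)  # -> 0.75
```

Index methods classify interaction by comparing measured effects to such a reference surface point by point; a response-surface model instead fits the whole dose-dose-effect surface at once, which is what makes the classification more consistent.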

  12. An ensemble of dynamic neural network identifiers for fault detection and isolation of gas turbine engines.

    Science.gov (United States)

    Amozegar, M; Khorasani, K

    2016-04-01

    In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. PMID:26881999

  13. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR) Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

    Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than field-measurement-based data assimilation at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias correction methods do not resolve the problems very well. Thus, apart from the bias correction process of satellite observation, it is important to assess the inherent capability of satellite data assimilation in such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that the SAR retrievals are more sensitive to surface roughness than to measurement errors, it is within the scope of this study to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between the SAR overestimation and the model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model, showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. As compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. Such inherent nature of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitation of currently available bias correction methods.
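The contrast between the two schemes reduces to where the gain comes from: the EnKF recomputes it from an evolving ensemble each cycle, while EnOI reuses a fixed, optionally scaled background covariance. A minimal sketch under toy assumptions (1-D state, identity observation operator; the function names are ours, not from the paper):

```python
import numpy as np

def kalman_gain(P, H, R):
    return P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

def enkf_step(X, y, H, R, rng):
    """Sequential (EnKF) update: the gain is recomputed from the *current*
    ensemble, and the whole ensemble evolves from cycle to cycle."""
    A = X - X.mean(axis=1, keepdims=True)
    K = kalman_gain(A @ A.T / (X.shape[1] - 1), H, R)
    Yp = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), (len(y), X.shape[1]))
    return X + K @ (Yp - H @ X)

def enoi_step(x, y, H, R, B, alpha=1.0):
    """Stationary (EnOI) update: one state estimate x, gain from a fixed,
    scaled background covariance B; no ensemble is propagated in time."""
    K = kalman_gain(alpha * B, H, R)
    return x + K @ (y - H @ x)

# with B = R, EnOI splits the innovation in half: x_a = 0 + 0.5 * (2 - 0)
x_a = enoi_step(np.zeros(1), np.array([2.0]), np.eye(1), np.eye(1), np.eye(1))
```

Because the EnKF spread tightens as observations are assimilated, the filter can progressively discount a biased observation stream, which is one way to read the paper's finding that sequential ensembles tolerate SAR retrieval errors better.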

  14. Orbital magnetism in ensembles of ballistic billiards

    International Nuclear Information System (INIS)

    The magnetic response of ensembles of small two-dimensional structures at finite temperatures is calculated. Using semiclassical methods and numerical calculation it is demonstrated that only short classical trajectories are relevant. The magnetic susceptibility is enhanced in regular systems, where these trajectories appear in families. For ensembles of squares large paramagnetic susceptibility is obtained, in good agreement with recent measurements in the ballistic regime. (authors). 20 refs., 2 figs

  15. Coherent ensemble averaging techniques for impedance cardiography

    OpenAIRE

    Hurwitz, Barry E.; Shyu, Liang-Yu; Reddy, Sridhar P; Schneiderman, Neil; Nagel, Joachim H.

    1990-01-01

    EKG synchronized ensemble averaging of the impedance cardiogram tends to blur or suppress signal events due to signal jitter or event latency variability. Although ensemble averaging provides some improvement in the stability of the signal and signal to noise ratio under conditions of nonperiodic influences of respiration and motion, coherent averaging techniques were developed to determine whether further enhancement of the impedance cardiogram could be obtained. Physiological signals were o...
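The idea behind coherent averaging, aligning each beat to a template before averaging so that latency jitter does not blur the mean waveform, can be sketched as follows (a toy illustration using cross-correlation alignment, not the authors' algorithm):

```python
import numpy as np

def coherent_average(beats, template):
    """Align each beat to `template` by the lag that maximizes their
    cross-correlation, then average. Plain EKG-triggered ensemble averaging
    would smear events whose latency jitters from beat to beat."""
    aligned = []
    for b in beats:
        lag = np.argmax(np.correlate(b, template, mode="full")) - (len(template) - 1)
        aligned.append(np.roll(b, -lag))          # undo the estimated jitter
    return np.mean(aligned, axis=0)

# toy "beats": the same spike jittered by a few samples
template = np.zeros(16)
template[8] = 1.0
beats = [np.roll(template, s) for s in (-2, 0, 1, 2)]
avg = coherent_average(beats, template)           # spike restored at index 8
```

With plain averaging the four jittered spikes would smear into a low, broad bump; after alignment they stack coherently.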

  16. Calibrating ensemble reliability whilst preserving spatial structure

    Directory of Open Access Journals (Sweden)

    Jonathan Flowerdew

    2014-03-01

    Ensemble forecasts aim to improve decision-making by predicting a set of possible outcomes. Ideally, these would provide probabilities which are both sharp and reliable. In practice, the models, data assimilation and ensemble perturbation systems are all imperfect, leading to deficiencies in the predicted probabilities. This paper presents an ensemble post-processing scheme which directly targets local reliability, calibrating both climatology and ensemble dispersion in one coherent operation. It makes minimal assumptions about the underlying statistical distributions, aiming to extract as much information as possible from the original dynamic forecasts and support statistically awkward variables such as precipitation. The output is a set of ensemble members preserving the spatial, temporal and inter-variable structure from the raw forecasts, which should be beneficial to downstream applications such as hydrological models. The calibration is tested on three leading 15-d ensemble systems, and their aggregation into a simple multimodel ensemble. Results are presented for 12 h, 1° scale over Europe for a range of surface variables, including precipitation. The scheme is very effective at removing unreliability from the raw forecasts, whilst generally preserving or improving statistical resolution. In most cases, these benefits extend to the rarest events at each location within the 2-yr verification period. The reliability and resolution are generally equivalent or superior to those achieved using a Local Quantile-Quantile Transform, an established calibration method which generalises bias correction. The value of preserving spatial structure is demonstrated by the fact that 3×3 averages derived from grid-scale precipitation calibration perform almost as well as direct calibration at 3×3 scale, and much better than a similar test neglecting the spatial relationships. Some remaining issues are discussed regarding the finite size of the output
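The Local Quantile-Quantile Transform used as the benchmark maps each raw forecast value to the observed-climatology value at the quantile matching its rank in the forecast climatology. A hedged single-location sketch (the empirical climatologies and the name `qq_transform` are our assumptions):

```python
import numpy as np

def qq_transform(forecast, fcst_climo, obs_climo):
    """Local quantile-quantile transform: find the forecast value's rank in
    the local forecast climatology, then return the observed-climatology
    value at the same quantile (generalises additive bias correction)."""
    fcst_sorted = np.sort(fcst_climo)
    p = np.searchsorted(fcst_sorted, forecast, side="right") / len(fcst_sorted)
    return np.quantile(np.sort(obs_climo), np.clip(p, 0.0, 1.0))

# forecast climatology carries a +2 bias relative to observations,
# so a forecast of 52 is calibrated back to roughly 50
calibrated = qq_transform(52.0, np.arange(100.0) + 2.0, np.arange(100.0))
```

Applied member by member and grid point by grid point, such a transform corrects the local climatology but, unlike the paper's scheme, does nothing to target ensemble dispersion directly.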

  17. Current path in light emitting diodes based on nanowire ensembles

    International Nuclear Information System (INIS)

    Light emitting diodes (LEDs) have been fabricated using ensembles of free-standing (In, Ga)N/GaN nanowires (NWs) grown on Si substrates in the self-induced growth mode by molecular beam epitaxy. Electron-beam-induced current analysis, cathodoluminescence as well as biased μ-photoluminescence spectroscopy, transmission electron microscopy, and electrical measurements indicate that the electroluminescence of such LEDs is governed by the differences in the individual current densities of the single-NW LEDs operated in parallel, i.e. by the inhomogeneity of the current path in the ensemble LED. In addition, the optoelectronic characterization leads to the conclusion that these NWs exhibit N-polarity and that the (In, Ga)N quantum well states in the NWs are subject to a non-vanishing quantum confined Stark effect. (paper)

  18. Hippocampal ensemble dynamics timestamp events in long-term memory.

    Science.gov (United States)

    Rubin, Alon; Geva, Nitzan; Sheintuch, Liron; Ziv, Yaniv

    2015-01-01

    The capacity to remember temporal relationships between different events is essential to episodic memory, but little is currently known about its underlying mechanisms. We performed time-lapse imaging of thousands of neurons over weeks in the hippocampal CA1 of mice as they repeatedly visited two distinct environments. Longitudinal analysis exposed ongoing environment-independent evolution of episodic representations, despite stable place field locations and constant remapping between the two environments. These dynamics time-stamped experienced events via neuronal ensembles that had cellular composition and activity patterns unique to specific points in time. Temporally close episodes shared a common timestamp regardless of the spatial context in which they occurred. Temporally remote episodes had distinct timestamps, even if they occurred within the same spatial context. Our results suggest that days-scale hippocampal ensemble dynamics could support the formation of a mental timeline in which experienced events could be mnemonically associated or dissociated based on their temporal distance. PMID:26682652

  19. A Multiresolution Ensemble Kalman Filter using Wavelet Decomposition

    CERN Document Server

    Hickmann, Kyle S

    2015-01-01

    We present a method of using classical wavelet based multiresolution analysis to separate scales in model and observations during data assimilation with the ensemble Kalman filter. In many applications, the underlying physics of a phenomena involve the interaction of features at multiple scales. Blending of observational and model error across scales can result in large forecast inaccuracies since large errors at one scale are interpreted as inexact data at all scales. Our method uses a transformation of the observation operator in order to separate the information from different scales of the observations. This naturally induces a transformation of the observation covariance and we put forward several algorithms to efficiently compute the transformed covariance. Another advantage of our multiresolution ensemble Kalman filter is that scales can be weighted independently to adjust each scale's effect on the forecast. To demonstrate feasibility we present applications to a one dimensional Kuramoto-Sivashinsky (...
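The scale separation underlying such a multiresolution filter can be illustrated with one level of a Haar wavelet transform, which splits a signal into coarse and detail bands that could then be weighted or assimilated independently (a generic sketch, not the paper's transformed observation operator):

```python
import numpy as np

def haar_analysis(x):
    """One level of the orthonormal Haar transform: split an even-length
    signal into a coarse band (local means) and a detail band (local
    differences). Each band can then be given its own weight in the update."""
    x = np.asarray(x, dtype=float)
    coarse = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return coarse, detail

def haar_synthesis(coarse, detail):
    """Invert one Haar level (the transform gives perfect reconstruction)."""
    x = np.empty(2 * len(coarse))
    x[0::2] = (coarse + detail) / np.sqrt(2.0)
    x[1::2] = (coarse - detail) / np.sqrt(2.0)
    return x

signal = np.arange(8.0)
coarse, detail = haar_analysis(signal)
```

Because the transform is orthonormal, energy is preserved across the bands and the split can be iterated on the coarse band to reach coarser scales; applying the same transform to the observation operator is what induces the transformed observation covariance discussed in the abstract.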

  20. Ensemble meteorological reconstruction using circulation analogues of 1781–1785

    Directory of Open Access Journals (Sweden)

    P. Yiou

    2013-09-01

    This paper uses a method of atmospheric flow analogues to reconstruct an ensemble of atmospheric variables (namely sea-level pressure, surface temperature and wind speed) between 1781 and 1785. The properties of this ensemble are investigated and tested against observations of temperature. The goal of the paper is to assess whether the atmospheric circulation during the Laki volcanic eruption (in 1783) and the subsequent winter were similar to the conditions that prevailed in the winter of 2009/2010 and during spring 2010. We find that the three months following the Laki eruption in June 1783 barely have analogues in 2010. The cold winter of 1783/1784 yields circulation analogues in 2009/2010. Our analysis suggests that it is unlikely that the Laki eruption was responsible for the cold winter of 1783/1784, because of the relatively short memory of the atmospheric circulation.

  1. Gradient Flow and Scale Setting on MILC HISQ Ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Bazavov, A. [Brookhaven National Lab. (BNL), Upton, NY (United States); Bernard, C. [Washington Univ., St. Louis, MO (United States); Brown, N. [Washington Univ., St. Louis, MO (United States); Komijani, J. [Washington Univ., St. Louis, MO (United States); DeTar, C. [Univ. of Utah, Salt Lake City, UT (United States); Foley, J. [Univ. of Utah, Salt Lake City, UT (United States); Levkova, L. [Univ. of Utah, Salt Lake City, UT (United States); Gottlieb, Steven [Indiana Univ., Bloomington, IN (United States); Heller, U. M. [American Physical Society (APS), Ridge, NY (United States); Laiho, J. [Syracuse Univ., NY (United States); Sugar, R. L. [Univ. of California, Santa Barbara, CA (United States); Toussaint, D. [Univ. of Arizona, Tucson, AZ (United States); Van de Water, R. S. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2015-03-25

    We report on a scale determination with gradient-flow techniques on the Nf = 2+1+1 HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from approximately 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales √t0/a and w0/a and their tree-level improvements, √t0,imp and w0,imp, are computed on each ensemble using Symanzik flow and the cloverleaf definition of the energy density E. Using a combination of continuum chiral perturbation theory and a Taylor-series ansatz for the lattice-spacing and strong-coupling dependence, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We determine the scales √t0 = 0.1416(+8/−5) fm and w0 = 0.1717(+12/−11) fm, where the errors are sums, in quadrature, of statistical and all systematic errors. The precision of w0 and √t0 is comparable to or better than that of the best previous estimates. We also find the continuum mass-dependence of w0, which will be useful for estimating the scales of other ensembles. Furthermore, we estimate the integrated autocorrelation length of ⟨E⟩. For long flow times, the autocorrelation length of ⟨E⟩ appears to be comparable to or smaller than that of the topological charge.

  2. Stochastic ensembles, conformationally adaptive teamwork, and enzymatic detoxification.

    Science.gov (United States)

    Atkins, William M; Qian, Hong

    2011-05-17

    It has been appreciated for a long time that enzymes exist as conformational ensembles throughout multiple stages of the reactions they catalyze, but there is renewed interest in the functional implications. The energy landscape that results from conformationally diverse proteins is a complex surface with an energetic topography in multiple dimensions, even at the transition state(s) leading to product formation, and this represents a new paradigm. At the same time as this renewed interest in conformational ensembles, a new paradigm concerning enzyme function has emerged, wherein catalytic promiscuity has clear biological advantages in some cases. "Useful", or biologically functional, promiscuity or the related behavior of "multifunctionality" can be found in the immune system, enzymatic detoxification, signal transduction, and the evolution of new function from an existing pool of folded protein scaffolds. Experimental evidence supports the widely held assumption that conformational heterogeneity promotes functional promiscuity. The common link between these coevolving paradigms is the inherent structural plasticity and conformational dynamics of proteins that, on one hand, lead to complex but evolutionarily selected energy landscapes and, on the other hand, promote functional promiscuity. Here we consider a logical extension of the overlap between these two nascent paradigms: functionally promiscuous and multifunctional enzymes such as detoxification enzymes are expected to have an ensemble landscape with more states accessible on multiple time scales than substrate-specific enzymes. Two attributes of detoxification enzymes become important in the context of conformational ensembles: these enzymes metabolize multiple substrates, often in substrate mixtures, and they can form multiple products from a single substrate. These properties, combined with complex conformational landscapes, lead to the possibility of interesting time-dependent, or emergent

  3. Ensemble of ground subsidence hazard maps using fuzzy logic

    Science.gov (United States)

    Park, Inhye; Lee, Jiyeong; Saro, Lee

    2014-06-01

    Hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok, Korea, were constructed using fuzzy ensemble techniques and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, groundwater, and ground subsidence maps. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 70/30 for training and validation of the models. The relationships between the detected ground-subsidence area and the factors were identified and quantified by frequency ratio (FR), logistic regression (LR) and artificial neural network (ANN) models. The relationships were used as factor ratings in the overlay analysis to create ground-subsidence hazard indexes and maps. The three GSH maps were then used as new input factors and integrated using fuzzy-ensemble methods to make better hazard maps. All of the hazard maps were validated by comparison with known subsidence areas that were not used directly in the analysis. As a result, the ensemble model was found to be more effective in terms of prediction accuracy than the individual models.
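A common way to integrate several hazard-index layers with fuzzy logic is the fuzzy gamma operator, which interpolates between the fuzzy algebraic product and the fuzzy algebraic sum. A hedged sketch of such an overlay (the specific operator and gamma value used by the authors are not stated in the abstract):

```python
import numpy as np

def fuzzy_gamma(memberships, gamma=0.9):
    """Combine hazard-index layers (memberships in [0, 1]) with the fuzzy
    gamma operator: (fuzzy algebraic sum)**gamma * (fuzzy product)**(1-gamma).
    gamma=1 recovers the fuzzy sum, gamma=0 the fuzzy product."""
    m = np.asarray(memberships, dtype=float)
    fsum = 1.0 - np.prod(1.0 - m, axis=0)   # fuzzy algebraic sum
    fprod = np.prod(m, axis=0)              # fuzzy algebraic product
    return fsum**gamma * fprod**(1.0 - gamma)

# three hazard layers (e.g. FR, LR and ANN indexes), two map cells each
layers = np.array([[0.2, 0.8],
                   [0.3, 0.7],
                   [0.1, 0.9]])
combined = fuzzy_gamma(layers, gamma=0.9)   # higher hazard in the second cell
```

The gamma value trades off the "decreasive" product against the "increasive" sum, which is why such ensembles can outperform any single input model once gamma is tuned against known subsidence locations.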

  4. The assisted prediction modelling frame with hybridisation and ensemble for business risk forecasting and an implementation

    Science.gov (United States)

    Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie

    2015-08-01

    The business failure of numerous companies results in financial crises. The high social costs associated with such crises have led people to search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced, namely, a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines outperformed the pure support vector machine and the support vector machine ensemble.

  5. Meaning of temperature in different thermostatistical ensembles.

    Science.gov (United States)

    Hänggi, Peter; Hilbert, Stefan; Dunkel, Jörn

    2016-03-28

    Depending on the exact experimental conditions, the thermodynamic properties of physical systems can be related to one or more thermostatistical ensembles. Here, we survey the notion of thermodynamic temperature in different statistical ensembles, focusing in particular on subtleties that arise when ensembles become non-equivalent. The 'mother' of all ensembles, the microcanonical ensemble, uses entropy and internal energy (the most fundamental, dynamically conserved quantity) to derive temperature as a secondary thermodynamic variable. Over the past century, some confusion has been caused by the fact that several competing microcanonical entropy definitions are used in the literature, most commonly the volume and surface entropies introduced by Gibbs. It can be proved, however, that only the volume entropy satisfies exactly the traditional form of the laws of thermodynamics for a broad class of physical systems, including all standard classical Hamiltonian systems, regardless of their size. This mathematically rigorous fact implies that negative 'absolute' temperatures and Carnot efficiencies more than 1 are not achievable within a standard thermodynamical framework. As an important offspring of microcanonical thermostatistics, we shall briefly consider the canonical ensemble and comment on the validity of the Boltzmann weight factor. We conclude by addressing open mathematical problems that arise for systems with discrete energy spectra. PMID:26903095

  6. MANAGEMENT AND COMPARATIVE ANALYSIS OF DATASET ENSEMBLES

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Senior Director, Scientific Computing

    2010-05-17

    The primary Phase I technical objective was to develop a prototype that demonstrates the functionality of all components required for an end-to-end meta-data management and comparative visualization system.

  7. Opportunities of fundamental and technical analysis combination in the forecasting of the Ukrainian stock prices

    OpenAIRE

    Inna Voloshyna

    2015-01-01

    The opportunities of combining fundamental and technical analysis under conditions of volatility on the Ukrainian stock market were analysed in the article. The study determined that political and economic news, as macroeconomic factors, are the main factors explaining stock price movements in an unstable situation in the country. Also, the use of technical analysis in the prediction of price movements on the stock market is confirmed and the ways of technical and fundamental market ...

  8. Analysis of general and specific combining abilities of popcorn populations, including selfed parents

    OpenAIRE

    José Marcelo Soriano Viana; Frederico de Pina Matta

    2003-01-01

    Estimation of general and specific combining ability effects in a diallel analysis of cross-pollinating populations, including the selfed parents, is presented in this work. The restrictions considered satisfy the parametric values of the GCA and SCA effects. The method is extended to self-pollinating populations (suitable for other species, without the selfed parents). The analysis of changes in population means due to inbreeding (sensitivity to inbreeding) also permits assessment of the predomi...

  9. An Integrated Strategy Framework (ISF) for Combining Porter's 5-Forces, Diamond, PESTEL, and SWOT Analysis

    OpenAIRE

    Anton, Roman

    2015-01-01

    INTRODUCTION Porter's Five-Forces, Porter's Diamond, PESTEL, the 6th-Forths, and Humphrey's SWOT analysis are among the most important and popular concepts taught in business schools around the world. A new integrated strategy framework (ISF) combines all major concepts. PURPOSE Porter's Five-Forces, Porter's Diamond, PESTEL, the 6th-Forths, and Humphrey's SWOT analysis are among the most important and popular concepts taught in business schools around the world. A new integrated strategy fr...

  10. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline

    OpenAIRE

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C.

    2013-01-01

    Background As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the lit...

  11. Meta-Analysis of Pathway Enrichment: Combining Independent and Dependent Omics Data Sets

    OpenAIRE

    Kaever, Alexander; Landesfeind, Manuel; Feussner, Kirstin; Morgenstern, Burkhard; Feussner, Ivo; Meinicke, Peter

    2014-01-01

    A major challenge in current systems biology is the combination and integrative analysis of large data sets obtained from different high-throughput omics platforms, such as mass spectrometry based Metabolomics and Proteomics or DNA microarray or RNA-seq-based Transcriptomics. Especially in the case of non-targeted Metabolomics experiments, where it is often impossible to unambiguously map ion features from mass spectrometry analysis to metabolites, the integration of more reliable omics techn...

  12. LINE X TESTER ANALYSIS IN COMBINING ABILITIES ESTIMATION OF SUNFLOWER OIL CONTENT

    OpenAIRE

    Ivica Liović; Miroslav Krizmanić; Anto Mijić; Miroslav Bilandžić; Antonela Markulj; Radovan Marinković; Drena Gadžo

    2012-01-01

    Results of line x tester analysis for 15 sunflower genotypes of the Agricultural Institute Osijek are presented in this paper. Three A lines (cms) and three Rf testers with different oil content level (low, medium, high) in all combinations (nine crosses) were crossed in 2010. In 2011, the lines, testers and their crosses were sown in field trials at two locations (Karanac and Osijek). The oil content was determined after harvesting, whereas line x tester analysis was conducted based on the o...

  13. Fusion Algorithm for Hyperspectral Remote Sensing Image Combined with Harmonic Analysis and Gram-Schmidt Transform

    OpenAIRE

    Zhang, Tao; LIU Jun; Yang, Keming; LUO Wenshan; Zhang, Yuyu

    2015-01-01

    To address the defect that the harmonic analysis fusion algorithm for hyperspectral images (HAF) disregards spectral reflectance curves during image fusion, an improved fusion algorithm for hyperspectral remote sensing images combining harmonic analysis and the Gram-Schmidt transform (GSHAF) is proposed in this paper. While completely retaining the waveform of the spectral curve of each fused image pixel, the GSHAF algorithm can simplify hyperspectral image fusion to fusion between two-dimensional images by harmonic r...

  14. Orchestrating Distributed Resource Ensembles for Petascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Baldin, Ilya; Mandal, Anirban; Ruth, Paul; Yufeng, Xin

    2014-04-24

    Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable that mechanisms are designed that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.

  15. Analysis of the Interactions of Botanical Extract Combinations Against the Viability of Prostate Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Lynn S. Adams

    2006-01-01

    Full Text Available Herbal medicines are often combinations of botanical extracts that are assumed to have additive or synergistic effects. The purpose of this investigation was to compare the effect of individual botanical extracts with combinations of extracts on prostate cell viability. We then modeled the interactions between botanical extracts in combination isobolographically. Scutellaria baicalensis, Rabdosia rubescens, Panax pseudoginseng, Dendranthema morifolium, Glycyrrhiza uralensis and Serenoa repens were collected, taxonomically identified and extracts prepared. Effects of the extracts on cell viability were quantitated in prostate cell lines using a luminescent ATP cell viability assay. Combinations of two of the four most active extracts were tested in the 22Rv1 cell line and their interactions assessed using isobolographic analysis. Each extract significantly inhibited the proliferation of prostate cell lines in a time- and dose-dependent manner except S. repens. The most active extracts, S. baicalensis, D. morifolium, G. uralensis and R. rubescens, were tested as two-extract combinations. S. baicalensis and D. morifolium when combined were additive with a trend toward synergy, whereas D. morifolium and R. rubescens together were additive. The remaining two-extract combinations showed antagonism. The four extracts together were significantly more effective than the two-by-two combinations and the individual extracts alone. Combining the four herbal extracts significantly enhanced their activity in the cell lines tested compared with extracts alone. The less predictable nature of the two-way combinations suggests a need for careful characterization of the effects of each individual herb based on their intended use.

  16. Reliability analysis of production ships with emphasis on load combination and ultimate strength

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaozhi

    1995-05-01

    This thesis deals with ultimate strength and reliability analysis of offshore production ships, accounting for stochastic load combinations, using a typical North Sea production ship for reference. A review of methods for structural reliability analysis is presented. Probabilistic methods are established for the still water and vertical wave bending moments. Linear stress analysis of a midships transverse frame is carried out; four different finite element models are assessed. Upon verification of the general finite element code ABAQUS with a typical ship transverse girder example, for which test results are available, ultimate strength analysis of the reference transverse frame is made to obtain the ultimate load factors associated with the specified pressure loads in Det norske Veritas Classification rules for ships and rules for production vessels. Reliability analysis is performed to develop appropriate design criteria for the transverse structure. It is found that the transverse frame failure mode does not seem to contribute to the system collapse. Ultimate strength analysis of the longitudinally stiffened panels is performed, accounting for the combined biaxial and lateral loading. Reliability based design of the longitudinally stiffened bottom and deck panels is accomplished regarding the collapse mode under combined biaxial and lateral loads. 107 refs., 76 figs., 37 tabs.

  17. Robustness of Ensemble Climate Projections Analyzed with Climate Signal Maps: Seasonal and Extreme Precipitation for Germany

    Directory of Open Access Journals (Sweden)

    Susanne Pfeifer

    2015-05-01

    Full Text Available Climate signal maps can be used to identify regions where robust climate changes can be derived from an ensemble of climate change simulations. Here, robustness is defined as a combination of model agreement and the significance of the individual model projections. Climate signal maps do not show all information available from the model ensemble, but give a condensed view in order to be useful for non-climate scientists who have to assess climate change impact during the course of their work. Three different ensembles of regional climate projections have been analyzed regarding changes of seasonal mean and extreme precipitation (defined as the number of days exceeding the 95th percentile threshold of daily precipitation) for Germany, using climate signal maps. Although the models used and the scenario assumptions differ for the three ensembles (representative concentration pathway (RCP) 4.5 vs. RCP8.5 vs. A1B), some similarities in the projections of future seasonal and extreme precipitation can be seen. For the winter season, both mean and extreme precipitation are projected to increase. The strength, robustness and regional pattern of this increase, however, depends on the ensemble. For summer, a robust decrease of mean precipitation can be detected only for small regions in southwestern Germany and only from two of the three ensembles, whereas none of them projects a robust increase of summer extreme precipitation.

  18. Forecasting and Analysis of Agricultural Product Logistics Demand in Tibet Based on Combination Forecasting Model

    Institute of Scientific and Technical Information of China (English)

    Wenfeng; YANG

    2015-01-01

    Over the years, logistics development in Tibet has lagged behind transport development. Since the opening of the Qinghai-Tibet Railway in 2006, the opportunity for development of modern logistics has been brought to Tibet. Logistics demand analysis and forecasting is a prerequisite for regional logistics planning. By establishing an indicator system for the logistics demand of agricultural products, an agricultural product logistics principal component regression model, a gray forecasting model, and a BP neural network forecasting model are built. Because of each single model's limitations, a quadratic-linear programming model is used to build a combination forecasting model to predict the logistics demand scale of agricultural products in Tibet over the next five years. The empirical analysis results show that the combination forecasting model is superior to the single forecasting models and has higher precision, so the combination forecasting model will have a much wider application foreground and development potential in the field of logistics.
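    The idea of weighting single-model forecasts to minimise combined error can be sketched numerically. With two base models the quadratic programme has a closed form; the series and forecasts below are hypothetical toy numbers, not the Tibet data, and the two "models" merely stand in for the grey/regression/neural forecasts in the record.

    ```python
    import numpy as np

    # Toy demand series and two single-model forecasts (hypothetical values).
    y  = np.array([10.0, 12.0, 13.5, 15.0, 17.0])   # observed demand
    f1 = np.array([ 9.5, 12.5, 13.0, 15.5, 16.5])   # forecast of model 1
    f2 = np.array([11.0, 11.0, 14.5, 14.0, 18.0])   # forecast of model 2

    # Combined forecast f = w*f1 + (1-w)*f2. Minimising the squared error is a
    # quadratic programme in w; with a single free weight it has a closed form,
    # clipped to [0, 1] to keep the combination convex.
    d = f1 - f2
    w = float(np.clip(d @ (y - f2) / (d @ d), 0.0, 1.0))
    combined = w * f1 + (1 - w) * f2

    sse = lambda f: float(np.sum((f - y) ** 2))
    print(w, sse(f1), sse(f2), sse(combined))
    ```

    Because the endpoints w = 0 and w = 1 recover the single models, the combined forecast can never have a larger in-sample squared error than the better single model, which is the record's central claim.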

  19. Adoptive immunotherapy combined chemoradiotherapy for non-small-cell lung cancer: a meta-analysis.

    Science.gov (United States)

    Qian, Haili; Wang, Haijuan; Guan, Xiuwen; Yi, Zongbi; Ma, Fei

    2016-06-01

    The aim of this study was to compare the efficacies between adoptive immunotherapy combined with chemoradiotherapy and chemoradiotherapy alone in patients with non-small-cell lung cancer (NSCLC). The databases PubMed, EMBASE, and the Cochrane database were searched to identify eligible clinical trials. Data analyses were carried out using a comprehensive meta-analysis program, version 2 software. A total of seven articles were finally included in the analysis. Meta-analyses showed that compared with chemoradiotherapy alone, adoptive immunotherapy combined with chemoradiotherapy could improve the 2-year overall survival [odds ratio (OR)=2.45, 95% confidence interval (CI): 1.60-3.75]. Reported adverse effects included shiver, nausea, fatigue, etc., and severe toxicities were not observed. Adoptive immunotherapy combined with chemoradiotherapy can delay the recurrence of NSCLC and improve survival in patients, where the benefits are even more significant in patients with early-stage NSCLC. PMID:26872311

  20. Circulating antibodies against faecal bacteria assessed by immunomorphometry: combining quantitative immunofluorescence and image analysis.

    OpenAIRE

    Apperloo-Renkema, H.Z.; Wilkinson, M. H. F.; van der Waaij, D

    1992-01-01

    A new technique to study the prevalence of circulating antibodies directed against different morphological groups ('morphotypes') of bacteria in fresh faeces is presented. The technique combines quantitative indirect immunofluorescence with digital image analysis. Plasma antibody titres and patterns of IgA, IgG and IgM isotype against morphotypes of faecal bacteria were determined in ten healthy individuals.

  1. Energetic analysis and optimisation of an integrated coal gasification-combined cycle power plant

    NARCIS (Netherlands)

    Vlaswinkel, E.E.

    1992-01-01

    Methods are presented to analyse and optimise the energetic performance of integrated coal gasification-combined cycle (IGCC) power plants. The methods involve exergy analysis and pinch technology and can be used to identify key process parameters and to generate alternative design options for impro

  2. System for Structural Synthesis Combines Finite-Element Analysis and Optimization Programs

    Science.gov (United States)

    Rogers, J. L., Jr.

    1984-01-01

    Programming System for Structural Synthesis, EAL/PROSSS, provides structural-synthesis capability by combining the EAL and CONMIN computer programs with a set of interface procedures. EAL is a general-purpose finite-element structural-analysis program; CONMIN is a general-purpose optimization program. The user supplies two smaller problem-dependent programs to define design variables, constraints, and the objective function.

  3. Urban Saturated Power Load Analysis Based on a Novel Combined Forecasting Model

    Directory of Open Access Journals (Sweden)

    Huiru Zhao

    2015-03-01

    Full Text Available Analysis of urban saturated power loads is helpful to coordinate urban power grid construction and economic social development. There are two different kinds of forecasting models: the logistic curve model focuses on the growth law of the data itself, while the multi-dimensional forecasting model considers several influencing factors as the input variables. To improve forecasting performance, a novel combined forecasting model for saturated power load analysis was proposed in this paper, which combined the above two models. Meanwhile, the weights of these two models in the combined forecasting model were optimized by employing a fruit fly optimization algorithm. Using Hubei Province as the example, the effectiveness of the proposed combined forecasting model was verified, demonstrating a higher forecasting accuracy. The analysis result shows that the power load of Hubei Province will reach saturation in 2039, and the annual maximum power load will reach about 78,630 MW. The results obtained from this proposed hybrid urban saturated power load analysis model can serve as a reference for sustainable development for urban power grids, regional economies, and society at large.

  4. Combined sequence-based and genetic mapping analysis of complex traits in outbred rats

    NARCIS (Netherlands)

    Baud, A.; Hermsen, R.; Guryev, V.; Stridh, P.; Graham, D.; McBride, M.W.; Foroud, T.; Calderari, S.; Diez, M.; Ockinger, J.; Beyeen, A.D.; Gillett, A.; Abdelmagid, N.; Guerreiro-Cacais, A.O.; Jagodic, M.; Tuncel, J.; Norin, U.; Beattie, E.; Huynh, N.; Miller, W.H.; Koller, D.L.; Alam, I.; Falak, S.; Osborne-Pellegrin, M.; Martinez-Membrives, E.; Canete, T.; Blazquez, G.; Vicens-Costa, E.; Mont-Cardona, C.; Diaz-Moran, S.; Tobena, A.; Hummel, O.; Zelenika, D.; Saar, K.; Patone, G.; Bauerfeind, A.; Bihoreau, M.T.; Heinig, M.; Lee, Y.A.; Rintisch, C.; Schulz, H.; Wheeler, D.A.; Worley, K.C.; Muzny, D.M.; Gibbs, R.A.; Lathrop, M.; Lansu, N.; Toonen, P.; Ruzius, F.P.; de Bruijn, E.; Hauser, H.; Adams, D.J.; Keane, T.; Atanur, S.S.; Aitman, T.J.; Flicek, P.; Malinauskas, T.; Jones, E.Y.; Ekman, D.; Lopez-Aumatell, R.; Dominiczak, A.F.; Johannesson, M.; Holmdahl, R.; Olsson, T.; Gauguier, D.; Hubner, N.; Fernandez-Teruel, A.; Cuppen, E.; Mott, R.; Flint, J.

    2013-01-01

    Genetic mapping on fully sequenced individuals is transforming understanding of the relationship between molecular variation and variation in complex traits. Here we report a combined sequence and genetic mapping analysis in outbred rats that maps 355 quantitative trait loci for 122 phenotypes. We i

  5. Process Monitoring by combining several signal-analysis results using fuzzy logic

    International Nuclear Information System (INIS)

    In order to improve reliability in detecting anomalies in nuclear power plant performance, a method is presented which is based on acquiring various characteristics of signal data using autoregressive, wavelet and fractal-analysis techniques. These characteristics are combined using a decision making approach based on fuzzy logic. This approach is able to detect and distinguish several system states
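    A fuzzy-logic combination of several signal-analysis indicators can be sketched as follows. The membership function, thresholds, and aggregation rule here are illustrative assumptions, not the ones used in the record; only the overall structure (fuzzify each indicator, then aggregate to a decision) follows the described approach.

    ```python
    import numpy as np

    # Hypothetical normalised anomaly indicators from three signal-analysis
    # techniques (autoregressive residual, wavelet energy, fractal dimension).
    indicators = {"ar": 0.82, "wavelet": 0.65, "fractal": 0.30}

    def mu_high(v, lo=0.3, hi=0.8):
        """Ramp membership in the fuzzy set 'anomalous' (assumed shape)."""
        return float(np.clip((v - lo) / (hi - lo), 0.0, 1.0))

    memberships = {k: mu_high(v) for k, v in indicators.items()}

    # Simple aggregation: flag an anomaly if the fuzzy OR (max) of the
    # individual memberships crosses a decision threshold.
    anomaly_degree = max(memberships.values())
    flagged = anomaly_degree >= 0.5
    print(memberships, anomaly_degree, flagged)
    ```

    Replacing `max` with `min` (fuzzy AND) or a weighted mean changes how conservative the combined detector is, which is the kind of design choice such a decision-making layer encodes.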

  6. Coex-Rank: An approach incorporating co-expression information for combined analysis of microarray data

    OpenAIRE

    Cai, Jinlu; Keen, Henry L.; Sigmund, Curt D.; Casavant, Thomas L.

    2012-01-01

    Microarrays have been widely used to study differential gene expression at the genomic level. They can also provide genome-wide co-expression information. Biologically related datasets from independent studies are publicly available, which requires robust combined approaches for integration and validation. Previously, meta-analysis has been adopted to solve this problem.

  7. Ensemble postprocessing for probabilistic quantitative precipitation forecasts

    Science.gov (United States)

    Bentzien, S.; Friederichs, P.

    2012-12-01

    Precipitation is one of the most difficult weather variables to predict in hydrometeorological applications. In order to assess the uncertainty inherent in deterministic numerical weather prediction (NWP), meteorological services around the globe develop ensemble prediction systems (EPS) based on high-resolution NWP systems. With non-hydrostatic model dynamics and without parameterization of deep moist convection, high-resolution NWP models are able to describe convective processes in more detail and provide more realistic mesoscale structures. However, precipitation forecasts are still affected by displacement errors, systematic biases and fast error growth on small scales. Probabilistic guidance can be achieved from an ensemble setup which accounts for model error and uncertainty of initial and boundary conditions. The German Meteorological Service (Deutscher Wetterdienst, DWD) provides such an ensemble system based on the German-focused limited-area model COSMO-DE. With a horizontal grid-spacing of 2.8 km, COSMO-DE is the convection-permitting high-resolution part of the operational model chain at DWD. The COSMO-DE-EPS consists of 20 realizations of COSMO-DE, driven by initial and boundary conditions derived from 4 global models and 5 perturbations of model physics. Ensemble systems like COSMO-DE-EPS are often limited with respect to ensemble size due to the immense computational costs. As a consequence, they can be biased and exhibit insufficient ensemble spread, and probabilistic forecasts may not be well calibrated. In this study, probabilistic quantitative precipitation forecasts are derived from COSMO-DE-EPS and evaluated at more than 1000 rain gauges located all over Germany. COSMO-DE-EPS is a frequently updated ensemble system, initialized 8 times a day. We use the time-lagged approach to inexpensively increase ensemble spread, which results in more reliable forecasts especially for extreme precipitation events. Moreover, we will show that statistical
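    The raw ensemble exceedance probability and the time-lagged pooling used in this record can be sketched with synthetic numbers (the forecast values below are random draws, not COSMO-DE-EPS data; the 20-member size matches the record, the gamma distribution is an assumption):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 6-h precipitation forecasts (mm) from a 20-member ensemble,
    # for the two most recent initialisations of a frequently updated system.
    run_latest = rng.gamma(shape=0.8, scale=3.0, size=20)
    run_lagged = rng.gamma(shape=0.8, scale=3.0, size=20)

    def prob_exceed(members, threshold):
        """Raw ensemble probability that precipitation exceeds a threshold."""
        return float(np.mean(members > threshold))

    p_single = prob_exceed(run_latest, 5.0)

    # Time-lagged ensemble: pool members of successive runs to cheaply
    # enlarge the ensemble (40 members instead of 20), increasing spread.
    p_lagged = prob_exceed(np.concatenate([run_latest, run_lagged]), 5.0)
    print(p_single, p_lagged)
    ```

    With only 20 members the raw probabilities are quantised in steps of 0.05; pooling lagged runs halves the step size, which is one reason the time-lagged approach improves reliability for rare, extreme events.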

  8. Combined use of factor analysis and cluster analysis in classification of traditional Chinese medical syndromes in patients with posthepatitic cirrhosis

    OpenAIRE

    Zhang, Qin; Liu, Ping

    2005-01-01

    Objective: To explore the significance of the combination of factor analysis and systematic cluster analysis in classification of traditional Chinese medical syndromes in patients with posthepatitic cirrhosis, and to provide a scientific basis for the criterion of the classification. Methods: We designed a clinical questionnaire according to the clinical characteristics and the demands of traditional Chinese medical information collection for patients with posthepatitic cirrhosis. By means of...

  9. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models

    Science.gov (United States)

    Simidjievski, Nikola; Todorovski, Ljupčo; Džeroski, Sašo

    2016-01-01

    Ensembles are a well established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting), significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles with sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to the one of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient. PMID:27078633

  10. A Unification of Ensemble Square Root Kalman Filters

    OpenAIRE

    Nerger, Lars; Janjic Pfander, Tijana; Schröter, Jens; Hiller, Wolfgang

    2012-01-01

    In recent years, several ensemble-based Kalman filter algorithms have been developed that have been classified as ensemble square-root Kalman filters. Parallel to this development, the SEIK (Singular ``Evolutive'' Interpolated Kalman) filter has been introduced and applied in several studies. Some publications note that the SEIK filter is an ensemble Kalman filter or even an ensemble square-root Kalman filter. This study examines the relation of the SEIK filter to ensemble square-root filters...

  11. On the Convergence of the Ensemble Kalman Filter

    OpenAIRE

    Mandel, Jan; Cobb, Loren; Beezley, Jonathan D.

    2009-01-01

    Convergence of the ensemble Kalman filter to the Kalman filter in the limit of large ensembles is proved. In each step of the filter, convergence of the ensemble sample covariance follows from a weak law of large numbers for exchangeable random variables, the continuous mapping theorem gives convergence in probability of the ensemble members, and $L^p$ bounds on the ensemble then give $L^p$ convergence.
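    The large-ensemble limit can be illustrated with a minimal analysis step. This sketch uses a scalar state, a direct observation, and the perturbed-observation EnKF variant (an assumption for simplicity; the paper treats the general filter), and checks that the analysis mean approaches the exact Kalman estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Scalar state, direct observation: one stochastic EnKF analysis step.
    n = 10000                             # large ensemble, per the convergence result
    prior = rng.normal(0.0, 2.0, n)       # forecast ensemble drawn from N(0, 4)
    r = 1.0                               # observation error variance
    obs = 3.0

    # Sample covariance of the ensemble; by the law of large numbers it
    # converges to the true forecast covariance.
    P = np.var(prior, ddof=1)
    K = P / (P + r)                       # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(r), n)
    analysis = prior + K * (perturbed_obs - prior)

    # Exact Kalman posterior mean for prior N(0, 4) and this observation.
    exact = 4.0 * obs / (4.0 + r)
    print(analysis.mean(), exact)
    ```

    Rerunning with smaller `n` shows the sampling error in both the gain and the analysis mean shrink roughly like 1/sqrt(n), consistent with the weak law of large numbers argument cited in the abstract.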

  12. Synthesizing complex movement fragment representations from motor cortical ensembles

    Science.gov (United States)

    Hatsopoulos, Nicholas G.; Amit, Yali

    2011-01-01

    We have previously shown that the responses of primary motor cortical neurons are more accurately predicted if one assumes that individual neurons encode temporally-extensive movement fragments or preferred trajectories instead of static movement parameters (Hatsopoulos et al., 2007). Building on these findings, we examine here how these preferred trajectories can be combined to generate a rich variety of preferred movement trajectories when neurons fire simultaneously. Specifically, we used a generalized linear model to fit each neuron’s spike rate to an exponential function of the inner product between the actual movement trajectory and the preferred trajectory; then, assuming conditional independence, when two neurons fire simultaneously their spiking probabilities multiply implying that their preferred trajectories add. We used a similar exponential model to fit the probability of simultaneous firing and found that the majority of neuron pairs did combine their preferred trajectories using a simple additive rule. Moreover, a minority of neuron pairs that engaged in significant synchronization combined their preferred trajectories through a small scaling adjustment to the additive rule in the exponent, while preserving the predicted trajectory representation from the additive rule. These results suggest that complex movement representations can be synthesized in simultaneously firing neuronal ensembles by adding the trajectory representations of the constituents in the ensemble. PMID:21939762
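    The additive rule described in this record follows directly from the exponential GLM: if each neuron's rate is an exponential of the inner product between the actual and preferred trajectory, then under conditional independence the joint firing probability multiplies, so the exponents (and hence the preferred trajectories) add. A numerical sketch with random vectors standing in for trajectory fragments (the offsets are arbitrary hypothetical baselines):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Discretised movement trajectory and two neurons' preferred trajectories.
    x  = rng.normal(size=50)    # actual trajectory fragment
    k1 = rng.normal(size=50)    # preferred trajectory of neuron 1
    k2 = rng.normal(size=50)    # preferred trajectory of neuron 2

    rate = lambda k, b: np.exp(b + k @ x)   # exponential GLM firing rate

    # Conditional independence: joint firing probability is the product of the
    # individual probabilities, i.e. an exponential GLM whose effective
    # preferred trajectory is the sum k1 + k2.
    p_joint = rate(k1, -1.0) * rate(k2, -1.5)
    p_sum   = rate(k1 + k2, -2.5)
    print(np.isclose(p_joint, p_sum))
    ```

    The synchronously firing pairs discussed in the record deviate from this identity only by a scaling adjustment in the exponent, leaving the summed trajectory representation intact.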

  13. Thermodynamics and thermo-economic analysis of simple combined cycle with inlet fogging

    International Nuclear Information System (INIS)

    The present study deals with thermodynamic and thermo-economic analysis of a simple combined cycle power plant (CCPP), incorporating gas turbine (GT) blade cooling by means of bleeding of compressed air, and compressor inlet air cooling by means of fogging. This study has been carried out for five configurations, namely, a simple gas turbine combined cycle using a single pressure heat recovery steam generator (HRSG) (SGTCC1P), a simple gas turbine combined cycle using a double pressure HRSG (SGTCC2P), a simple gas turbine combined cycle using a double pressure HRSG with reheat (SGTCC2PR), a simple gas turbine combined cycle using a triple pressure HRSG (SGTCC3P) and a simple gas turbine combined cycle using a triple pressure HRSG with reheat (SGTCC3PR), considering the hot day condition (HDC) at an ambient temperature of 40 °C and relative humidity of 50%. The thermodynamic analysis has shown that the SGTCC3P configuration of the simple CCPP incurs the lowest operating cost per kWh of power generation, as determined in Fig. 19. This is evident in Figs. 5, 6 and 9 of the thermodynamic analysis, which show that for a given operating range of GTIT and Cpr, the SGTCC3P configuration provides the highest specific work and efficiency among all configurations. -- Highlights: ► The present study deals with thermodynamic and thermo-economic analysis of a simple combined cycle power plant (CCPP), incorporating gas turbine (GT) blade cooling by means of bleeding of compressed air, and compressor inlet air cooling by means of fogging. ► Compressor inlet air cooling by means of fogging has increased the specific power output of all configurations due to the increased mass flow rate of air and the decreased inlet temperature. ► The thermodynamic analysis has shown that the simple gas turbine combined cycle using a triple pressure HRSG (SGTCC3P) configuration of the simple CCPP incurs the lowest operating cost per kWh of power generation, as determined in Fig. 19. ► For a given operating range of gas turbine inlet

  14. Combined Water-Oxygen Pinch Analysis with Mathematical Programming for Wastewater Treatment

    Institute of Scientific and Technical Information of China (English)

    宋丽丽; 都健; 柴绍斌; 姚平经

    2006-01-01

    Water-oxygen pinch analysis is an effective method to decrease wastewater quantity and improve wastewater quality. But when multiple contaminants are present, the method is difficult to carry out. In this paper, a method that combines water-oxygen pinch analysis with mathematical programming is proposed. It obtains the general optimal solution and leads to reuse streams that cannot be found by pinch analysis alone. The new method is illustrated by an example, and the annual cost is reduced by 8.43% compared with the solution in the literature.

  15. The analysis of uranium in environmental sample by mass spectrometer combined with isotopic dilution

    International Nuclear Information System (INIS)

    Uranium in environmental samples was analyzed by mass spectrometry combined with isotope dilution. Before mass spectrometric analysis, samples were dissolved in a concentrated acidic solution containing HNO3, HF and HClO4 and chemically processed to suit the analysis requirements. The analysis results indicated that the uranium content was 0.08 μg/g in river water, 0.1 μg/g in evergreen foliage, and 5-11 μg/g in surface soil, respectively. (authors)

  16. Phosphoproteomic analysis of the response of maize leaves to drought, heat and their combination stress

    Directory of Open Access Journals (Sweden)

    Xiuli eHu

    2015-05-01

    Full Text Available Drought and heat stress, especially their combination, greatly affect crop production. Many studies have described transcriptome, proteome and phosphoproteome changes in the response of plants to drought or heat stress. However, studies of the phosphoproteomic changes in the response of crops to the combined stress are scarce. To understand the mechanism of maize responses to the combined drought and heat stress, phosphoproteomic analysis was performed on maize leaves using multiplex iTRAQ-based quantitative proteomic and LC-MS/MS methods. Five-leaf-stage maize was subjected to drought, heat or their combination, and the leaves were collected. Globally, heat, drought and the combined stress significantly changed the phosphorylation levels of 172, 149 and 144 phosphopeptides, respectively. These phosphopeptides corresponded to 282 proteins. Among them, 23 responded only to the combined stress and could not be predicted from their responses to the single stressors; 30 and 75 responded only to drought and heat, respectively. Notably, 19 proteins were phosphorylated on different sites in response to the single and combined stresses. Of the seven significantly enriched phosphorylation motifs identified, two were common to all stresses, two were common to heat and the combined stress, and one was specific to the combined stress. The signaling pathways in which the phosphoproteins were involved clearly differed among the three stresses. Functional characterization of the phosphoproteins and pathways identified here could lead to new targets for the enhancement of crop stress tolerance, which will be particularly important in the face of climate change and the increasing prevalence of abiotic stressors.

  17. Combined Analysis of all Three Phases of Solar Neutrino Data from the Sudbury Neutrino Observatory

    CERN Document Server

    Aharmim, B; Anthony, A E; Barros, N; Beier, E W; Bellerive, A; Beltran, B; Bergevin, M; Biller, S D; Boudjemline, K; Boulay, M G; Cai, B; Chan, Y D; Chauhan, D; Chen, M; Cleveland, B T; Cox, G A; Dai, X; Deng, H; Detwiler, J A; DiMarco, M; Doe, P J; Doucas, G; Drouin, P -L; Duncan, F A; Dunford, M; Earle, E D; Elliott, S R; Evans, H C; Ewan, G T; Farine, J; Fergani, H; Fleurot, F; Ford, R J; Formaggio, J A; Gagnon, N; Goon, J TM; Graham, K; Guillian, E; Habib, S; Hahn, R L; Hallin, A L; Hallman, E D; Harvey, P J; Hazama, R; Heintzelman, W J; Heise, J; Helmer, R L; Hime, A; Howard, C; Huang, M; Jagam, P; Jamieson, B; Jelley, N A; Jerkins, M; Keeter, K J; Klein, J R; Kormos, L L; Kos, M; Kraus, C; Krauss, C B; Kruger, A; Kutter, T; Kyba, C C M; Lange, R; Law, J; Lawson, I T; Lesko, K T; Leslie, J R; Loach, J C; MacLellan, R; Majerus, S; Mak, H B; Maneira, J; Martin, R; McCauley, N; McDonald, A B; McGee, S R; Miller, M L; Monreal, B; Monroe, J; Nickel, B G; Noble, A J; O'Keeffe, H M; Oblath, N S; Ollerhead, R W; Gann, G D Orebi; Oser, S M; Ott, R A; Peeters, S J M; Poon, A W P; Prior, G; Reitzner, S D; Rielage, K; Robertson, B C; Robertson, R G H; Rosten, R C; Schwendener, M H; Secrest, J A; Seibert, S R; Simard, O; Simpson, J J; Skensved, P; Sonley, T J; Stonehill, L C; Tešić, G; Tolich, N; Tsui, T; Van Berg, R; VanDevender, B A; Virtue, C J; Tseung, H Wan Chan; Wark, D L; Watson, P J S; Wendland, J; West, N; Wilkerson, J F; Wilson, J R; Wouters, J M; Wright, A; Yeh, M; Zhang, F; Zuber, K

    2011-01-01

    We report results from a combined analysis of solar neutrino data from all phases of the Sudbury Neutrino Observatory. By exploiting particle identification information obtained from the proportional counters installed during the third phase, this analysis improved background rejection in that phase of the experiment. The combined analysis resulted in a total flux of active neutrino flavors from 8B decays in the Sun of (5.25 ± 0.16(stat.) +0.11/−0.13(syst.)) × 10^6 cm^−2 s^−1. A two-flavor neutrino oscillation analysis yielded Δm²₂₁ = (5.6 +1.9/−1.4) × 10^−5 eV² and tan²θ₁₂ = 0.427 +0.033/−0.029. A three-flavor neutrino oscillation analysis combining this result with results of all other solar neutrino experiments and the KamLAND experiment yielded Δm²₂₁ = (7.41 +0.21/−0.19) × 10^−5 eV², tan²θ₁₂ = 0.446 +0.030/−0.029, and sin²θ₁₃ = (2.5 +1.8/−1.5) × 10^−2. This implied an upper bound of sin²θ₁₃ < 0.053 a...

  18. Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time

    International Nuclear Information System (INIS)

    The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a useable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so for small ensemble size. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, without and with variance inflation. Without variance inflation well-posedness of the filter is established; with variance inflation accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and assumptions about it, is sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier–Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier–Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise. (paper)
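The perturbed-observation analysis step studied in this record, with multiplicative variance inflation, can be sketched in a few lines. This is an illustration under the paper's setting of a linear observation operator and additive Gaussian noise, not the authors' code; all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y, H, R, inflation=1.0):
    """Perturbed-observation EnKF analysis step.

    ensemble : (N, d) array of forecast members
    y        : (m,) observation vector
    H        : (m, d) linear observation operator
    R        : (m, m) observation error covariance
    inflation: multiplicative variance inflation factor (>= 1)
    """
    N, d = ensemble.shape
    mean = ensemble.mean(axis=0)
    # Inflate the spread about the ensemble mean before forming covariances.
    X = np.sqrt(inflation) * (ensemble - mean)
    ensemble = mean + X
    C = X.T @ X / (N - 1)                                # sample covariance
    S = H @ C @ H.T + R                                  # innovation covariance
    K = C @ H.T @ np.linalg.solve(S, np.eye(len(y)))     # Kalman gain
    # Each member assimilates an independently perturbed observation.
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Toy example: scalar state observed directly, prior centred at 2.
ens = rng.normal(2.0, 1.0, size=(200, 1))
analysis = enkf_update(ens, y=np.array([0.0]), H=np.eye(1),
                       R=0.25 * np.eye(1), inflation=1.05)
```

The analysis ensemble is pulled toward the observation and its spread shrinks, which is exactly the behaviour whose well-posedness and accuracy the paper analyzes for small ensemble sizes.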

  19. Individual differences in ensemble perception reveal multiple, independent levels of ensemble representation.

    Science.gov (United States)

    Haberman, Jason; Brady, Timothy F; Alvarez, George A

    2015-04-01

    Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space. PMID:25844624

  20. Kanglaite injection combined with hepatic arterial intervention for unresectable hepatocellular carcinoma: A meta-analysis

    Directory of Open Access Journals (Sweden)

    Fei Fu

    2014-01-01

    Full Text Available Objective: The purpose of this study was to evaluate Kanglaite (KLT) injection combined with hepatic arterial intervention for the treatment of unresectable hepatocellular carcinoma (HCC) by meta-analysis. Materials and Methods: Computerized bibliographic searches were undertaken to identify all eligible published studies of KLT injection combined with hepatic arterial intervention for unresectable HCC. PubMed, EMBASE, Chinese National Knowledge Infrastructure (CNKI) and Wanfang databases were all searched to include suitable trials. Odds ratios (ORs) and their corresponding 95% confidence intervals (95% CIs) were calculated as the effect size with fixed-effect or random-effect models according to the heterogeneity test across the studies. Results: Nine trials were finally included in this meta-analysis. The objective response rate (ORR) was significantly improved in the group receiving KLT injection combined with hepatic arterial intervention compared to hepatic arterial intervention alone (OR = 1.80, 95% CI: 1.18-2.75, P < 0.05); the combined treatment also significantly improved the KPS score (OR = 3.22, 95% CI: 1.36-7.60, P < 0.05) and relieved the pain of patients compared to the single treatment (OR = 2.57, 95% CI: 1.65-3.99, P < 0.05). Conclusion: KLT injection combined with hepatic arterial intervention can improve the short-term clinical efficacy and quality of life, and decrease the pain of patients with unresectable HCC.
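The pooled odds ratios in records like this one come from standard inverse-variance weighting on the log scale under a fixed-effect model. A sketch of that computation; the 2×2 counts below are invented for illustration, not taken from the nine included trials:

```python
import math

# (events_treat, n_treat, events_ctrl, n_ctrl) per trial -- illustrative only.
trials = [(30, 40, 20, 40), (25, 35, 18, 36), (28, 42, 19, 40)]

def pooled_or_fixed(trials):
    """Inverse-variance fixed-effect pooling of odds ratios on the log scale."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d      # Woolf variance of log-OR
        w = 1 / var
        num += w * log_or
        den += w
    log_pooled = num / den
    se = math.sqrt(1 / den)
    ci = (math.exp(log_pooled - 1.96 * se), math.exp(log_pooled + 1.96 * se))
    return math.exp(log_pooled), ci

or_pooled, (lo, hi) = pooled_or_fixed(trials)
```

A random-effects pooling (e.g. DerSimonian-Laird) would additionally inflate each trial's variance by an estimated between-trial component before weighting.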

  1. Meta-analysis of individual and combined effects of mycotoxins on growing pigs

    Directory of Open Access Journals (Sweden)

    Ines Andretta

    2016-08-01

    Full Text Available ABSTRACT Little is known about the toxicity of concomitantly occurring mycotoxins in pig diets. This study was conducted to evaluate, through meta-analysis, the individual and the combined effects of mycotoxins on pig performance. The meta-analysis followed three sequential analyses (graphical, correlation, and variance-covariance) based on a database composed of 85 published papers, 1,012 treatments and 13,196 animals. Contamination of diets with individual mycotoxins reduced (p < 0.05) feed intake by 14 % and weight gain by 17 %, while combined mycotoxins reduced the same responses by 42 % and 45 %, respectively, in comparison with the non-challenged group. The correlation (p < 0.05) between reduction in weight gain (ΔG) and reduction in feed intake (ΔFI) was 0.67 in individual challenges and 0.93 in combined challenges. The estimated ΔG was –6 % in individual challenges and –7 % in combined challenges when ΔFI was zero, suggesting an increase in the maintenance requirements of challenged animals. Most of ΔG (58 % in individual challenges and 84 % in combined challenges) was attributed to changes in feed efficiency. The association of mycotoxins enhances the individual toxic effects, and ΔFI is important in explaining the deleterious effects on the growth of challenged pigs.

  2. A Bayes fusion method based ensemble classification approach for Brown cloud application

    Directory of Open Access Journals (Sweden)

    M.Krishnaveni

    2014-03-01

    Full Text Available Classification is the recurrent task of determining a target function that maps each attribute set to one of the predefined class labels. Ensemble fusion is a classifier model fusion technique which combines multiple classifiers to achieve higher classification accuracy than individual classifiers. The main objective of this paper is to combine base classifiers using the ensemble fusion methods Decision Template, Dempster-Shafer and Bayes, and to compare the accuracy of each fusion method on the brown cloud dataset. The base classifiers KNN, MLP and SVM have been considered in the ensemble classification, each with four different function parameters. The experimental study shows that the Bayes fusion method achieves a better classification accuracy of 95% than Decision Template (80%) and Dempster-Shafer (85%) on a Brown Cloud image dataset.
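A Bayes fusion rule of the kind compared in this record can be sketched by treating the base classifiers' posteriors as conditionally independent given the class. The three posterior dictionaries below are hypothetical stand-ins for KNN, MLP and SVM outputs, not the paper's data:

```python
def bayes_fuse(posteriors, priors):
    """Naive-Bayes fusion of per-classifier posteriors P(class | x).

    Assuming the classifier outputs are conditionally independent given the
    class, the fused posterior is proportional to
    prod_j P_j(c | x) / P(c)^(J - 1), renormalized over classes.
    """
    fused = {}
    for c in priors:
        prod = 1.0
        for post in posteriors:
            prod *= post[c]
        fused[c] = prod / (priors[c] ** (len(posteriors) - 1))
    z = sum(fused.values())
    return {c: v / z for c, v in fused.items()}

# Three hypothetical base classifiers scoring a two-class problem.
posteriors = [{"cloud": 0.7, "clear": 0.3},
              {"cloud": 0.6, "clear": 0.4},
              {"cloud": 0.8, "clear": 0.2}]
priors = {"cloud": 0.5, "clear": 0.5}
fused = bayes_fuse(posteriors, priors)
```

Because the product sharpens agreement, three moderately confident "cloud" votes fuse into a much more confident combined posterior than any single classifier's.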

  3. Using probabilistic climate change information from a multimodel ensemble for water resources assessment

    Science.gov (United States)

    Manning, L. J.; Hall, J. W.; Fowler, H. J.; Kilsby, C. G.; Tebaldi, C.

    2009-11-01

    Increasing availability of ensemble outputs from general circulation models (GCMs) and regional climate models (RCMs) permits fuller examination of the implications of climate uncertainties in hydrological systems. A Bayesian statistical framework is used to combine projections by weighting and to generate probability distributions of local climate change from an ensemble of RCM outputs. A stochastic weather generator produces corresponding daily series of rainfall and potential evapotranspiration, which are input into a catchment rainfall-runoff model to estimate future water abstraction availability. The method is applied to the Thames catchment in the United Kingdom, where comparison with previous studies shows that different downscaling methods produce significantly different flow predictions and that this is partly attributable to potential evapotranspiration predictions. An extended sensitivity test exploring the effect of the weights and assumptions associated with combining climate model projections illustrates that under all plausible assumptions the ensemble implies a significant reduction in catchment water resource availability.

  4. A standardised, holistic framework for concept-map analysis combining topological attributes and global morphologies

    Directory of Open Access Journals (Sweden)

    Stefan Yoshi Buhmann

    2015-03-01

    Full Text Available Motivated by the diverse uses of concept maps in teaching and educational research, we have developed a systematic approach to their structural analysis. The basis for our method is a unique topological normalisation procedure whereby a concept map is first stripped of its content and subsequently geometrically re-arranged into a standardised layout as a maximally balanced tree following set rules. This enables a quantitative analysis of the normalised maps to read off basic structural parameters: numbers of concepts and links, diameter, in- and ex-radius and degree sequence and subsequently calculate higher parameters: cross-linkage, balance and dimension. Using these parameters, we define characteristic global morphologies: ‘Disconnected’, ‘Imbalanced’, ‘Broad’, ‘Deep’ and ‘Interconnected’ in the normalised map structure. Our proposed systematic approach to concept-map analysis combining topological normalisation, determination of structural parameters and global morphological classification is a standardised, easily applicable and reliable framework for making the inherent structure of a concept map tangible. It overcomes some of the subjectivity inherent in analysing and interpreting maps in their original form while also avoiding the pitfalls of an atomistic analysis often accompanying quantitative concept-map analysis schemes. Our framework can be combined and cross-compared with a content analysis to obtain a coherent view of the two key elements of a concept map: structure and content. The informed structural analysis may form the starting point for interpreting the underlying knowledge structures and pedagogical meanings.
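Several of the structural parameters named in this record (diameter, in-radius, degree sequence, concept and link counts) can be read directly off a normalised map treated as a tree. A sketch assuming an undirected adjacency-list representation; the five-node map is invented for illustration:

```python
from collections import deque

# Hypothetical normalised concept map as an undirected tree (adjacency list).
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}

def bfs_depths(graph, start):
    """Breadth-first search returning the distance from start to every node."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return depth

# Eccentricity of a node = distance to the farthest node reachable from it.
ecc = {u: max(bfs_depths(tree, u).values()) for u in tree}
diameter = max(ecc.values())        # longest shortest path in the map
radius = min(ecc.values())          # in-radius of the normalised map
degree_sequence = sorted((len(nbrs) for nbrs in tree.values()), reverse=True)
n_concepts = len(tree)
n_links = sum(degree_sequence) // 2
```

Higher parameters such as cross-linkage would then compare `n_links` against the `n_concepts - 1` links of a pure tree, since any surplus links are cross-links.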

  5. Dynamic infrared imaging in identification of breast cancer tissue with combined image processing and frequency analysis.

    Science.gov (United States)

    Joro, R; Lääperi, A-L; Soimakallio, S; Järvenpää, R; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Dastidar, P

    2008-01-01

    Five combinations of image-processing algorithms were applied to dynamic infrared (IR) images of six breast cancer patients preoperatively to establish optimal enhancement of cancer tissue before frequency analysis. Mid-wave photovoltaic (PV) IR cameras with 320x254 and 640x512 pixels were used. The signal-to-noise ratio and the specificity for breast cancer were evaluated with the image-processing combinations from the image series of each patient. Before image processing and frequency analysis, the effect of patient movement was minimized with a stabilization program developed and tested in the study, which stabilizes image slices using surface markers set as measurement points on the skin of the imaged breast. A mathematical equation for a superiority value was developed for comparison of the key ratios of the image-processing combinations. The ability of each combination to locate the mammography finding of breast cancer in each patient was compared. Our results show that data collected with a 640x512-pixel mid-wave PV camera, applying image-processing methods that optimize the signal-to-noise ratio, morphological image processing and linear image restoration before frequency analysis, possess the greatest superiority value, showing the cancer area most clearly, also in the match centre of the mammography estimation. PMID:18666012

  6. Cost-effectiveness analysis of combination therapies for visceral leishmaniasis in the Indian subcontinent.

    Directory of Open Access Journals (Sweden)

    Filip Meheus

    Full Text Available BACKGROUND: Visceral leishmaniasis is a systemic parasitic disease that is fatal unless treated. We assessed the cost and cost-effectiveness of alternative strategies for the treatment of visceral leishmaniasis in the Indian subcontinent. In particular we examined whether combination therapies are a cost-effective alternative compared to monotherapies. METHODS AND FINDINGS: We assessed the cost-effectiveness of all possible mono- and combination therapies for the treatment of visceral leishmaniasis in the Indian subcontinent (India, Nepal and Bangladesh) from a societal perspective using a decision analytical model based on a decision tree. Primary data collected in each country were combined with data from the literature and an expert poll (Delphi method). The cost per patient treated and the average and incremental cost-effectiveness ratios, expressed as cost per death averted, were calculated. Extensive sensitivity analysis was done to evaluate the robustness of our estimations and conclusions. With a cost of US$92 per death averted, the combination miltefosine-paromomycin was the most cost-effective treatment strategy. The next best alternative was a combination of liposomal amphotericin B with paromomycin, with an incremental cost-effectiveness of $652 per death averted. All other strategies were dominated, with the exception of a single dose of 10 mg per kg of liposomal amphotericin B. While strategies based on liposomal amphotericin B (AmBisome) were found to be the most effective, its current drug cost of US$20 per vial resulted in a higher average cost-effectiveness. Sensitivity analysis showed the conclusion to be robust to variations in the input parameters over their plausible range. CONCLUSIONS: Combination treatments are a cost-effective alternative to current monotherapy for VL. Given their expected impact on the emergence of drug resistance, a switch to combination therapy should be considered once final results from clinical trials are
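The incremental cost-effectiveness ratios quoted in this record follow the usual definition: strategies are ranked by cost and each ICER is the extra cost divided by the extra effect relative to the next-cheapest option. A minimal sketch; the tuples reuse the $92 and $652 figures from the abstract, but the effect values are invented to make the arithmetic concrete, and dominance checks are omitted:

```python
def icer(strategies):
    """Rank strategies by cost and compute ICERs against the previous one.

    strategies: list of (name, cost_usd, deaths_averted) tuples.
    Returns (name, icer_vs_previous) pairs; the baseline is "do nothing"
    at zero cost and zero effect.
    """
    ranked = sorted(strategies, key=lambda s: s[1])
    out = []
    prev_cost, prev_eff = 0.0, 0.0
    for name, cost, eff in ranked:
        dc, de = cost - prev_cost, eff - prev_eff
        out.append((name, dc / de if de > 0 else float("inf")))
        prev_cost, prev_eff = cost, eff
    return out

# Illustrative effect values only (not the study's data).
result = icer([("miltefosine+paromomycin", 92.0, 1.0),
               ("L-AmB+paromomycin", 418.0, 1.5)])
```

A full analysis would first remove dominated strategies (more costly and less effective) and extended-dominated ones before reading the ICERs off the efficiency frontier.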

  7. Genetic Analysis and Combining Ability Studies for Yield Related Characters in Rapeseed

    Directory of Open Access Journals (Sweden)

    Aamar Shehzad

    2015-09-01

    Full Text Available Combining ability analysis has a key position in rapeseed breeding. To estimate the combining ability effects for yield-controlling traits in rapeseed, three testers and five lines were crossed using a line × tester design in a randomized complete block design with three replications. Mean sums of squares of the analysis of variance for genotypes were significant for all of the traits, indicating the presence of significant genetic variation. All the interactions between lines and testers exhibited significant mean sums of squares for combining ability. The line 'Duncled' was found to be a good general combiner for decreased plant height (PH: −2.0), days taken to 50% flowering (DF: −15.8) and days taken to maturity (DM: −3.4), while the tester 'Punjab Sarson' was a good general combiner for increased number of seeds/siliqua (SS: 2.2), number of siliquae/plant (SP: 2.2) and decreased DF (−3.0). Significant general and specific combining ability effects were observed. The best hybrid combination on the basis of specific combining ability effects was 'Durre-NIFA × ZN-M-6' for seed yield/plant (SY: 2.7), DF (−6.1) and DM (−3.5). PH (−0.2), siliqua length (SL: −0.1), SS (−0.03) and SY (0.2) showed non-additive genetic effects. Half of the characters revealed additive and the remaining half non-additive genetic effects. The present study unveiled the importance of both types of genetic effects, demanding the application of integrated breeding approaches for exploiting the variability. 'Punjab Sarson × ZN-M-6' showed the maximum SS (30) and SP (837). The maximum SY (75.9 g) and minimum DF (64) were shown by 'Legend × Duncled'.

  8. A Hybrid Ensemble Learning Approach to Star-Galaxy Classification

    CERN Document Server

    Kim, Edward J; Kind, Matias Carrasco

    2015-01-01

    There exist a variety of star-galaxy classification techniques, each with their own strengths and weaknesses. In this paper, we present a novel meta-classification framework that combines and fully exploits different techniques to produce a more robust star-galaxy classification. To demonstrate this hybrid, ensemble approach, we combine a purely morphological classifier, a supervised machine learning method based on random forest, an unsupervised machine learning method based on self-organizing maps, and a hierarchical Bayesian template fitting method. Using data from the CFHTLenS survey, we consider different scenarios: when a high-quality training set is available with spectroscopic labels from DEEP2, SDSS, VIPERS, and VVDS, and when the demographics of sources in a low-quality training set do not match the demographics of objects in the test data set. We demonstrate that our Bayesian combination technique improves the overall performance over any individual classification method in these scenarios. Thus, s...

  9. Combining ability analysis for seed and seedling vigor traits in rice (oryza sativa l.)

    International Nuclear Information System (INIS)

    Combining ability analysis was made in a 7x7 diallel cross for rate of germination index, seedling root length, seedling shoot length and seedling dry weight. Variances of general and specific combining ability were highly significant for all the traits, indicating both additive and non-additive types of gene action. The higher magnitude of variances due to 'gca' suggested a preponderance of additive gene action, except for seedling dry weight. IR25924-92-1-3 and TNAU (AD) 103 were good general combiners for rate of germination index, whereas IR50 and IR31779-19-3-3-2 were good general combiners for most of the traits studied. TNAU (AD) 103 x M148 and IR50 x IR9764-45-2-2 were the best specific combinations for R.G. index. The best combinations for R.G. index, seedling root length, seedling shoot length and seedling dry weight were TNAU (AD) 103 x M148, IR9764-45-2-2 x M148, IR21820-154-3-2-2 x M148 and IR9764-45-2-2 x M148 respectively. (author)
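The general (GCA) and specific (SCA) combining ability effects discussed in this and the preceding record are, at their simplest, deviations from marginal means of a line × tester (or diallel) table of cross means. A sketch with an invented 4 × 3 table; real analyses additionally test these effects against the replication error from the ANOVA:

```python
import numpy as np

# Hypothetical mean yields of line x tester crosses (rows: lines, cols: testers).
means = np.array([[12.0, 14.0, 10.0],
                  [11.0, 15.0, 12.0],
                  [13.0, 12.0, 11.0],
                  [10.0, 13.0, 12.0]])

grand = means.mean()
gca_lines = means.mean(axis=1) - grand     # general combining ability of lines
gca_testers = means.mean(axis=0) - grand   # general combining ability of testers
# Specific combining ability: cross mean minus both marginal means plus grand mean.
sca = (means
       - means.mean(axis=1, keepdims=True)
       - means.mean(axis=0, keepdims=True)
       + grand)
```

By construction the GCA effects sum to zero within lines and within testers, and the SCA effects sum to zero overall, which is a quick sanity check on any implementation.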

  10. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    Science.gov (United States)

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-01

    Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded style searches can only be made accessible if data analysis becomes not only unified but also and most importantly scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed using the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem. PMID:26709623

  11. Long term Combination of Tide Gauge Benchmark Monitoring (TIGA) Analysis Center Products

    Science.gov (United States)

    Teferle, F. N.; Hunegnaw, A.

    2015-12-01

    The International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Working Group (WG) has recently finalized their reprocessing campaign, using all relevant Global Positioning System (GPS) observations from 1995 to 2014. This re-processed dataset will provide high-quality estimates of land motions, enabling regional and global high-precision geophysical/geodetic studies. Several of the individual TIGA Analysis Centers (TACs) have completed processing the full history of GPS observations recorded by the IGS global network, as well as many other GPS stations at or close to tide gauges, which are available from the TIGA data centre at the University of La Rochelle (www.sonel.org). The TAC solutions contain a total of over 700 stations. Following the recent improvements in processing models and strategies, this is the first complete reprocessing attempt by the TIGA WG to provide homogeneous position time series. The TIGA Combination Centre (TCC) at the University of Luxembourg (UL) has computed a first multi-year weekly combined solution using two independent combination software packages: CATREF and GLOBK. These combinations allow an evaluation of any effects from the combination software and of the individual TAC contributions and their influences on the combined solution. In this study we will present the first UL TIGA multi-year combination results and discuss these in terms of geocentric sea level changes.

  12. A Unified MGF-Based Capacity Analysis of Diversity Combiners over Generalized Fading Channels

    CERN Document Server

    Yilmaz, Ferkan

    2010-01-01

    Unified exact average capacity results for L-branch coherent diversity receivers, including equal-gain combining (EGC) and maximal-ratio combining (MRC), are not known. This paper develops a novel generic framework for the capacity analysis of L-branch EGC/MRC over generalized fading channels. The framework is used to derive new results for the Gamma-shadowed generalized Nakagami-m fading model, which can be a suitable model for the fading environments encountered by high-frequency (60 GHz and above) communications. The mathematical formalism is illustrated with selected numerical and simulation results confirming the correctness of the newly proposed framework.
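MGF-based capacity analyses of this kind typically rest on a Laplace-transform identity for the ergodic capacity. A commonly used form, stated here as background and not as the paper's exact framework, is:

```latex
% Ergodic capacity of a receiver with output SNR \gamma and bandwidth B,
% expressed through the MGF \mathcal{M}_\gamma(s) = \mathbb{E}\!\left[e^{-s\gamma}\right]:
C = \frac{B}{\ln 2}\,\mathbb{E}\!\left[\ln(1+\gamma)\right]
  = \frac{B}{\ln 2}\int_0^{\infty} \frac{e^{-s}}{s}\,
    \bigl(1-\mathcal{M}_\gamma(s)\bigr)\,ds .
```

The appeal for MRC over independent branches is that the output SNR is the sum of the branch SNRs, so its MGF factorizes into the product of per-branch MGFs, reducing the L-branch capacity to a single integral.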

  13. Simulations in generalized ensembles through noninstantaneous switches

    Science.gov (United States)

    Giovannelli, Edoardo; Cardini, Gianni; Chelli, Riccardo

    2015-10-01

    Generalized-ensemble simulations, such as replica exchange and serial generalized-ensemble methods, are powerful simulation tools to enhance sampling of free energy landscapes in systems with high energy barriers. In these methods, sampling is enhanced through instantaneous transitions of replicas, i.e., copies of the system, between different ensembles characterized by some control parameter associated with thermodynamical variables (e.g., temperature or pressure) or collective mechanical variables (e.g., interatomic distances or torsional angles). An interesting evolution of these methodologies has been proposed by replacing the conventional instantaneous (trial) switches of replicas with noninstantaneous switches, realized by varying the control parameter in a finite time and accepting the final replica configuration with a Metropolis-like criterion based on the Crooks nonequilibrium work (CNW) theorem. Here we revise these techniques focusing on their correlation with the CNW theorem in the framework of Markovian processes. An outcome of this report is the derivation of the acceptance probability for noninstantaneous switches in serial generalized-ensemble simulations, where we show that explicit knowledge of the time dependence of the weight factors entering such simulations is not necessary. A generalized relationship of the CNW theorem is also provided in terms of the underlying equilibrium probability distribution at a fixed control parameter. Illustrative calculations on a toy model are performed with serial generalized-ensemble simulations, especially focusing on the different behavior of instantaneous and noninstantaneous replica transition schemes.
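For context, the Crooks nonequilibrium work (CNW) theorem underlying the acceptance rule relates the forward and reverse work distributions; with inverse temperature β and free-energy difference ΔF between the two ensembles, a Metropolis-like acceptance for a noninstantaneous switch takes the form (a standard statement, not the paper's exact expression for generalized ensembles):

```latex
\frac{P_F(W)}{P_R(-W)} = e^{\beta\,(W - \Delta F)},
\qquad
P_{\mathrm{acc}} = \min\!\left\{1,\; e^{-\beta\,(W - \Delta F)}\right\},
```

where W is the work performed on the replica while the control parameter is driven between its initial and final values in finite time. In the instantaneous limit W reduces to the potential-energy difference and the rule recovers the conventional replica-exchange criterion.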

  14. The Hydrologic Ensemble Prediction Experiment (HEPEX)

    Science.gov (United States)

    Wood, A. W.; Thielen, J.; Pappenberger, F.; Schaake, J. C.; Hartman, R. K.

    2012-12-01

    The Hydrologic Ensemble Prediction Experiment was established in March, 2004, at a workshop hosted by the European Center for Medium Range Weather Forecasting (ECMWF). With support from the US National Weather Service (NWS) and the European Commission (EC), the HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support in emergency management and water resources sectors. The strategy to meet this goal includes meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. HEPEX has organized about a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Today, the HEPEX mission is to demonstrate the added value of hydrological ensemble prediction systems (HEPS) for emergency management and water resources sectors to make decisions that have important consequences for economy, public health, safety, and the environment. HEPEX is now organised around six major themes that represent core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.

  15. Optimization of Quantitative MGMT Promoter Methylation Analysis Using Pyrosequencing and Combined Bisulfite Restriction Analysis

    OpenAIRE

    Mikeska, Thomas; Bock, Christoph; El-Maarri, Osman; Hübner, Anika; Ehrentraut, Denise; Schramm, Johannes; Felsberg, Jörg; Kahl, Philip; Büttner, Reinhard; Pietsch, Torsten; Waha, Andreas

    2007-01-01

    Resistance to chemotherapy is a major complication during treatment of cancer patients. Hypermethylation of the MGMT gene alters DNA repair and is associated with longer survival of glioblastoma patients treated with alkylating agents. Therefore, MGMT promoter methylation plays an important role as a predictive biomarker for chemotherapy resistance. To adopt this established correlation into a molecular diagnosis procedure, we compared and optimized three experimental techniques [combined bis...

  16. An Online NIR Sensor for the Pilot-Scale Extraction Process in Fructus Aurantii Coupled with Single and Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Xiaoning Pan

    2015-04-01

    Full Text Available Model performance of the partial least squares method (PLS) alone and of bagging-PLS was investigated in online near-infrared (NIR) sensor monitoring of a pilot-scale extraction process in Fructus aurantii. High-performance liquid chromatography (HPLC) was used as a reference method to identify the active pharmaceutical ingredients: naringin, hesperidin and neohesperidin. Several preprocessing methods and the synergy interval partial least squares (SiPLS) and moving window partial least squares (MWPLS) variable selection methods were compared. Single quantification models (PLS) and ensemble methods combined with partial least squares (bagging-PLS) were developed for quantitative analysis of naringin, hesperidin and neohesperidin. SiPLS was compared to SiPLS combined with bagging-PLS. Final results showed the root mean square error of prediction (RMSEP) of bagging-PLS to be lower than that of PLS regression alone. For this reason, an ensemble method for the online NIR sensor is here proposed as a means of monitoring the pilot-scale extraction process in Fructus aurantii, which may also constitute a suitable strategy for online NIR monitoring of CHM.

  17. Combination of Tide Gauge Benchmark Monitoring (TIGA) Analysis Center from repro2 solutions

    Science.gov (United States)

    Hunegnaw, Addisu; Teferle, Felix Norman

    2016-04-01

    Recently the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Working Group (WG) has completed its repro2 solutions by re-analyzing the full history of all relevant Global Positioning System (GPS) observations from 1995 to 2015. This re-processed data set will provide high-quality estimates of vertical land movements for more than 500 stations, enabling regional and global high-precision geophysical/geodetic studies. All the TIGA Analysis Centres (TACs) have processed the observations recorded by GPS stations at or close to tide gauges, which are available from the TIGA Data Center at the University of La Rochelle (www.sonel.org) besides those of the global IGS core network used for its reference frame implementations. Following the recent improvements in processing models and strategies (http://acc.igs.org/reprocess2.html), this is the first complete re-processing attempt by the TIGA WG to provide homogeneous position time series relevant to sea level changes. In this study we report on a first multi-year daily combined solution from the TIGA Combination Centre (TCC) at the University of Luxembourg (UL) with respect to the latest International Terrestrial Reference Frame (ITRF2014). Using two independent combination software packages, CATREF and GLOBK, we have computed a first daily combined solution from TAC solutions already available to the TIGA WG. These combinations allow an evaluation of any effects from the combination software and of the individual TAC parameters and their influences on the combined solution with respect to the latest ITRF2014. Some results of the UL TIGA multi-year combinations in terms of geocentric sea level changes will be presented and discussed.

  18. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    Science.gov (United States)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records. Each of these algorithms also provides uncertainty information at pixel level. In the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. A validation against ground-based AERONET measurements still shows a good correlation for the ensemble compared to the single algorithms. Annual mean maps show the global aerosol distribution based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensemble for aerosol optical depth will be presented and discussed. The results are validated against ground-based AERONET measurements. A higher spatial coverage on a daily basis allows better results in annual mean maps. The benefit of using pixel-level uncertainties is analysed.
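
    In its simplest form, the uncertainty-weighted combination described above is inverse-variance weighting per grid box. The sketch below is purely illustrative: the three retrieval values and their pixel-level uncertainties are made-up numbers standing in for the three algorithms, not aerosol_cci data.

```python
import numpy as np

# Hypothetical AOD retrievals for one grid box from three algorithms
# (stand-ins for Swansea, ADV and ORAC), each with a pixel-level
# 1-sigma uncertainty; all numbers are invented for illustration.
aod = np.array([0.21, 0.25, 0.18])
sigma = np.array([0.03, 0.05, 0.02])

# Inverse-variance weights: retrievals with lower uncertainty count more.
w = 1.0 / sigma**2
w = w / w.sum()
aod_ens = float(np.sum(w * aod))

# Uncertainty of the weighted ensemble: smaller than any single retrieval.
sigma_ens = float(np.sqrt(1.0 / np.sum(1.0 / sigma**2)))
```

    The ensemble uncertainty is always below that of the best single member, which is the "reduce the uncertainty of the ensemble" effect the abstract mentions.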

  19. CALA: A Web Analysis Algorithm Combined with Content Correlation Analysis Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ling(张岭); MA FanYuan(马范援); YE YunMing(叶允明); CHEN JianGuo(陈建国)

    2003-01-01

    Web hyperlink structure analysis algorithms play a significant role in improving the precision of Web information retrieval. Current link algorithms employ an iteration function to compute the Web resource weight. The major drawback of this approach is that every Web document has a fixed rank which is independent of Web queries. This paper proposes an improved algorithm that ranks the quality and the relevance of a page dynamically according to the user's query. The experiments show that the current link analysis algorithm is improved.
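
    For context, the fixed-rank scheme the paper criticizes is the standard iterative link-analysis computation; a query-dependent score can then be formed by modulating it with a per-query content-relevance term. The sketch below assumes a toy four-page link graph and invented relevance scores; it illustrates the general idea, not the CALA algorithm itself.

```python
import numpy as np

# Toy 4-page hyperlink graph (hypothetical): A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def link_rank(A, d=0.85, iters=50):
    """Classic iterative link-analysis score: a query-independent fixed rank."""
    n = A.shape[0]
    M = A / A.sum(axis=1, keepdims=True)  # row-stochastic (no dangling pages here)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M.T @ r)   # damped power iteration
    return r

static_rank = link_rank(A)

# Query-dependent ranking: combine the static link score with a
# per-query content-relevance score (values purely illustrative).
relevance = np.array([0.9, 0.1, 0.5, 0.2])
score = static_rank * relevance
```

    With the fixed rank alone, every query returns the same ordering; the relevance factor is what makes the final score query-dependent.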

  20. A CAD system to analyse mammogram images using fully complex-valued relaxation neural network ensembled classifier.

    Science.gov (United States)

    Saraswathi, D; Srinivasan, E

    2014-10-01

    This paper presents a new improved classification technique using the Fully Complex-Valued Relaxation Neural Network (FCRN) based ensemble technique for classifying mammogram images. The system is developed based on the three stages of breast cancer, namely Normal, Benign and Malignant, defined by the MIAS database. Features such as binary object features, RST-invariant features, histogram features, texture features and spectral features are extracted from the MIAS database. The extracted features are then given to the proposed FCRN-based ensemble classifier, in which FCRN networks are ensembled together to improve the classification rate. Receiver Operating Characteristic (ROC) analysis is used to evaluate the system. The results illustrate the superior classification performance of the ensembled FCRN. Performance comparisons for various sets of training and testing vectors are provided for the FCRN classifier. The resultant ensembled FCRN approximates the desired output more accurately with a lower computational effort. PMID:25101825

  1. A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment

    Science.gov (United States)

    Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.; Liu, X.; Nielsen, B. J.; Petch, J.; Plant, R. S.; Singh, M. S.; Shi, X.; Song, X.; Wang, W.; Whithall, M. A.; Wolf, A.; Xie, S.; Zhang, G.

    2013-01-01

    Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale observations prescribed. Errors in estimating the observations will result in uncertainty in the modeled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using the more traditional single best-estimate simulation only.

  2. A Novel Bias Correction Method for Soil Moisture and Ocean Salinity (SMOS) Soil Moisture: Retrieval Ensembles

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2015-12-01

    Full Text Available Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval-algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present a probabilistic representation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on imperfect reference data. From the validation at two semi-arid sites, Benin (a moderately wet and vegetated area) and Niger (dry and sandy bare soils), it was shown that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approaches. In Benin, the Root Mean Square Error (RMSE) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSE decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
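
    For reference, the CDF-matching baseline that the ensemble approach is compared against maps each satellite value onto the reference distribution by matching empirical quantiles. A minimal sketch on invented synthetic soil-moisture series (the bias and noise levels are assumptions, not SMOS data):

```python
import numpy as np

def cdf_match(sat, ref):
    """Map each satellite value onto the reference distribution by
    matching empirical quantiles (standard CDF-matching correction)."""
    ranks = np.argsort(np.argsort(sat))         # rank of each satellite value
    q = (ranks + 0.5) / len(sat)                # its empirical quantile
    return np.quantile(ref, q)                  # same quantile of the reference

rng = np.random.default_rng(1)
# Invented soil-moisture series (m3/m3): a retrieval biased wet vs. in-situ data.
ref = rng.beta(2, 5, size=500) * 0.4            # in-situ reference
sat = ref + 0.05 + 0.02 * rng.normal(size=500)  # overestimating satellite product

corrected = cdf_match(sat, ref)
bias_before = float(np.mean(sat - ref))
bias_after = float(np.mean(corrected - ref))
```

    CDF matching removes the systematic offset here only because the synthetic reference is perfect; the paper's point is that with imperfect reference data this mapping can inject new biases, which motivates their retrieval-ensemble alternative.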

  3. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah

    2013-06-01

    Full Text Available The classification of free-radical genes for cancer diseases is a challenging job in biomedical data engineering. Various classifiers are used to improve the classification of gene selection for cancer diseases, but individual classifiers are not validated, so an ensemble classifier is used for cancer gene classification, combining a neural network classifier with a random forest tree. The random forest tree is an ensembling technique in which a number of classifiers are combined through the leaf-node classes of the trees. In this paper we combined a neural network with a random forest ensemble classifier for the classification of cancer gene selection for the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble-classifier methods, which follow an input-output paradigm of neural networks, in that the members of the ensemble are selected from a set of neural network classifiers and the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of our proposed method we used UCI cancer disease datasets for classification. Our experimental results show better results in comparison with random forest tree classification.

  4. Clustering-based selective neural network ensemble

    Institute of Scientific and Technical Information of China (English)

    FU Qiang; HU Shang-xu; ZHAO Sheng-ying

    2005-01-01

    An effective ensemble should consist of a set of networks that are both accurate and diverse. We propose a novel clustering-based selective algorithm for constructing a neural network ensemble, where clustering technology is used to classify trained networks according to similarity, and the most accurate individual network is optimally selected from each cluster to make up the ensemble. Empirical studies on regression with four typical datasets showed that this approach yields a significantly smaller ensemble achieving better performance than traditional ones such as Bagging and Boosting. The bias-variance decomposition of the predictive error shows that the success of the proposed approach may lie in its properly tuning the bias/variance trade-off to reduce the prediction error (the sum of bias² and variance).
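
    A minimal sketch of the clustering-based selection procedure described above, assuming scikit-learn and a toy regression task (the pool size, cluster count and network architecture are illustrative, not the paper's settings): train a pool of networks, cluster them by the similarity of their validation predictions, keep the most accurate member of each cluster, and average.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Toy regression task standing in for the paper's benchmark datasets.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=200)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

# 1. Train a pool of networks made diverse by different random seeds.
pool = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=s)
        .fit(X_tr, y_tr) for s in range(8)]
preds = np.array([m.predict(X_val) for m in pool])   # shape (8, 50)

# 2. Cluster the networks by the similarity of their validation predictions.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(preds)

# 3. Keep only the most accurate member of each cluster.
selected = []
for c in range(3):
    members = np.flatnonzero(labels == c)
    errs = [np.mean((preds[i] - y_val) ** 2) for i in members]
    selected.append(int(members[int(np.argmin(errs))]))

# 4. The ensemble output is the mean of the selected networks.
y_ens = preds[selected].mean(axis=0)
```

    Selecting one representative per cluster discards near-duplicate networks, which is how the approach keeps the ensemble small while preserving diversity.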

  5. Luminescence simulations of ensembles of silicon nanocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Lockwood, Ross; Meldrum, Al [Department of Physics, University of Alberta, Edmonton (Canada)

    2009-05-15

    The luminescence of silicon nanocrystals (NCs) has attracted a great deal of interest due to the numerous potential photonic applications of light-emitting silicon. However, the excitation mechanisms and cluster-cluster interactions in densely-packed ensembles, as well as the recombination processes that influence the emission spectrum and lifetime, are not yet well understood. In order to generate a more complete picture of the controlling parameters in the luminescence, a dynamic Monte Carlo model that incorporates several key physical processes for luminescent nanocrystal ensembles is developed. The model simulates Förster-type multipole energy transfer, tunnelling interactions, radiative decay and non-radiative trapping in physically realistic (lognormal) distributions of silicon NCs. The results of the simulation illustrate the effects of the NC size distribution, homogeneous and inhomogeneous broadening, NC packing density, and non-radiative trapping on the ensemble luminescence spectrum.

  6. Matrix averages relating to Ginibre ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Forrester, Peter J [Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia); Rains, Eric M [Department of Mathematics, California Institute of Technology, Pasadena, CA 91125 (United States)], E-mail: p.forrester@ms.unimelb.edu.au

    2009-09-25

    The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument AX, where A is a fixed matrix and X is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko (2009 J. Phys. A: Math. Theor. 42 222002), and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.

  7. Control and Synchronization of Neuron Ensembles

    CERN Document Server

    Li, Jr-Shin; Ruths, Justin

    2011-01-01

    Synchronization of oscillations is a phenomenon prevalent in natural, social, and engineering systems. Controlling synchronization of oscillating systems is motivated by a wide range of applications from neurological treatment of Parkinson's disease to the design of neurocomputers. In this article, we study the control of an ensemble of uncoupled neuron oscillators described by phase models. We examine controllability of such a neuron ensemble for various phase models and, furthermore, study the related optimal control problems. In particular, by employing Pontryagin's maximum principle, we analytically derive optimal controls for spiking single- and two-neuron systems, and analyze the applicability of the latter to an ensemble system. Finally, we present a robust computational method for optimal control of spiking neurons based on pseudospectral approximations. The methodology developed here is universal to the control of general nonlinear phase oscillators.

  8. Embedded random matrix ensembles in quantum physics

    CERN Document Server

    Kota, V K B

    2014-01-01

    Although used with increasing frequency in many branches of physics, random matrix ensembles are not always sufficiently specific to account for important features of the physical system at hand. One refinement which retains the basic stochastic approach but allows for such features consists in the use of embedded ensembles.  The present text is an exhaustive introduction to and survey of this important field. Starting with an easy-to-read introduction to general random matrix theory, the text then develops the necessary concepts from the beginning, accompanying the reader to the frontiers of present-day research. With some notable exceptions, to date these ensembles have primarily been applied in nuclear spectroscopy. A characteristic example is the use of a random two-body interaction in the framework of the nuclear shell model. Yet, topics in atomic physics, mesoscopic physics, quantum information science and statistical mechanics of isolated finite quantum systems can also be addressed using these ensemb...

  9. Age estimation in forensic sciences: Application of combined aspartic acid racemization and radiocarbon analysis

    Energy Technology Data Exchange (ETDEWEB)

    Alkass, K; Buchholz, B A; Ohtani, S; Yamamoto, T; Druid, H; Spalding, S L

    2009-11-02

    Age determination of unknown human bodies is important in the setting of a crime investigation or a mass disaster, since the age at death, birth date and year of death, as well as gender, can guide investigators to the correct identity among a large number of possible matches. Traditional morphological methods used by anthropologists to determine age are often imprecise, whereas chemical analysis of tooth dentin, such as aspartic acid racemization, has shown reproducible and more precise results. In this paper we analyze teeth from Swedish individuals using both aspartic acid racemization and radiocarbon methodologies. The rationale behind using radiocarbon analysis is that above-ground testing of nuclear weapons during the cold war (1955-1963) caused an extreme increase in global levels of carbon-14 (¹⁴C), which have been carefully recorded over time. Forty-four teeth from 41 individuals were analyzed using aspartic acid racemization analysis of tooth crown dentin or radiocarbon analysis of enamel, and ten of these were split and subjected to both radiocarbon and racemization analysis. Combined analysis showed that the two methods correlated well (R² = 0.66, p < 0.05). Radiocarbon analysis showed an excellent precision, with an overall absolute error of 0.6 ± 0.4 years. Aspartic acid racemization also showed a good precision, with an overall absolute error of 5.4 ± 4.2 years. Whereas radiocarbon analysis gives an estimated year of birth, racemization analysis indicates the chronological age of the individual at the time of death. We show how these methods in combination can also assist in the estimation of the date of death of an unidentified victim. This strategy can be of significant assistance in forensic casework involving dead victim identification.

  10. Ensemble Kalman methods for inverse problems

    International Nuclear Information System (INIS)

    The ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 (Evensen 1994 J. Geophys. Res. 99 10143–62) as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness and ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method for the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function that we obtain with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded above by the error found from the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants on traditional least-squares approaches, restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with comparable accuracy to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid. (paper)

  11. Basis of combined Pinch Technology and exergy analysis and its application to energy industry in Mexico

    International Nuclear Information System (INIS)

    The energy industry scheme in Mexico has enormous potential for retrofits intended to increase the efficiency of energy use. One of the most modern engineering tools for making such retrofits consists of a suitable combination of exergy analysis and Pinch technology. In this work, the basis of this new technology is presented, and the potential areas of application in the Mexican energy industry are also considered. It is shown that a combined analysis of exergy and Pinch technology (ACETP) is useful for analyzing, in a conceptual and easy-to-understand way, systems that involve heat and power. The potential areas of application of ACETP are cryogenic processes, power generation systems and cogeneration systems. (Author)

  12. Analysis of elliptically polarized cavity backed antennas using a combined FEM/MoM/GTD technique

    Science.gov (United States)

    Reddy, C. J.; Deshpande, M. D.; Fralick, D. T.

    1995-01-01

    Radiation pattern prediction analysis of elliptically polarized cavity backed aperture antennas in a finite ground plane is carried out using a combined finite element method (FEM)/method of moments (MoM)/geometrical theory of diffraction (GTD) technique. The magnetic current on the cavity-backed aperture in an infinite ground plane is calculated using the combined FEM/MoM analysis. GTD, including the slope diffraction contribution, is used to calculate the diffracted fields due to both soft and hard polarizations at the edges of the finite ground plane. Numerical results for the radiation patterns of a cavity backed circular spiral microstrip patch antenna excited by a coaxial probe in a finite rectangular ground plane are computed and compared with experimental results.

  13. Evaluation of different methods for combined thermodynamic and optical analysis of combustion in spark ignition engines

    International Nuclear Information System (INIS)

    Highlights: • A combined optical and thermodynamic analysis was performed on a DISI engine. • Accurate correlations of flame area and burned volume fractions were obtained. • Image processing methods had a reduced impact in the initial combustion phase. • Towards the end of flame propagation, entropy thresholding was necessary. • Results of a more complex thermodynamic model were closer to the optical analysis. - Abstract: Studies concerning the combustion analysis in spark ignition engines generally feature measurements of in-cylinder pressure traces and exhaust emissions. Combined thermodynamic and optical investigations can provide significant insight into specific phenomena and a more complete understanding of combustion processes. While the latter category of investigative techniques gives information on local flame and fluid characteristics, measurements of in-cylinder pressure ensure quick and cost-competitive analysis of complex processes that take place inside the combustion chamber. By using both methods, valuable correlations between different phenomena can be obtained, thus providing a complete view based on experimental trials. This work aims to evaluate the capacity of different data analysis procedures to deliver accurate and pertinent results on combustion development, as well as the correspondence between the two types of measurements. Three thermodynamic models for in-cylinder pressure analysis and three imaging techniques were compared within each category of investigative methods; results were also evaluated in a combined way in order to assess each procedure. The more complex thermodynamic model that included a heat transfer correlation was found to offer improved accuracy in the initial combustion phase, as compared to the other two simpler methods.

  14. Dynamic and Static Combination Analysis Method of Slope Stability Analysis during Earthquake

    Directory of Open Access Journals (Sweden)

    Liang Lu

    2014-01-01

    Full Text Available The results of laboratory model tests simulating slope failure due to vibration, for both an unreinforced slope and a slope reinforced with geotextile, show that slope failure occurs when the cumulative plastic displacement exceeds a certain critical value. To overcome the defects of conventional stability analysis, which evaluates the slope characteristics by strength parameters alone, a numerical procedure considering the stiffness and deformation of materials and geosynthetics is proposed to evaluate seismic slope stability. In the proposed procedure, slope failure is deemed to occur when the cumulative plastic displacement calculated by a dynamic response analysis using an actual seismic wave exceeds the critical displacement estimated by a static stability analysis considering the seismic coefficient. The proposed procedure is applied to the laboratory model tests and to an actual slope failure in an earthquake. The case study shows that the proposed procedure can give a realistic evaluation of seismic slope stability.

  15. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    Science.gov (United States)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

    In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performances. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As one statistical technique could not be found to outperform the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied. They showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainties, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.

  16. The ZEW combined microsimulation-CGE model : innovative tool for applied policy analysis

    OpenAIRE

    Clauss, Markus; Schubert, Stefanie

    2009-01-01

    This contribution describes the linkage of microsimulation models and computable general equilibrium (CGE) models using two established models, "STSM" and "PACE-L", developed at the Centre for European Economic Research. This state-of-the-art research method for applied policy analysis combines the advantages of both model types: on the one hand, microsimulation models allow for detailed labor supply and distributional effects due to policy measures, as individual household data is us...

  17. Combined PIXE and XPS analysis on republican and imperial Roman coins

    International Nuclear Information System (INIS)

    A combined PIXE and XPS analysis has been performed on a few Roman coins of the republican and imperial age. The purpose was to investigate via XPS the nature and extent of patina in order to be capable of extracting PIXE data relative to the coins bulk. The inclusion of elements from the surface layer, altered by oxidation and inclusion, is a known source of uncertainty in PIXE analyses of coins, performed to assess the composition and the provenance.

  18. Combined PIXE and XPS analysis on republican and imperial Roman coins

    Science.gov (United States)

    Daccà, A.; Prati, P.; Zucchiatti, A.; Lucarelli, F.; Mandò, P. A.; Gemme, G.; Parodi, R.; Pera, R.

    2000-03-01

    A combined PIXE and XPS analysis has been performed on a few Roman coins of the republican and imperial age. The purpose was to investigate via XPS the nature and extent of patina in order to be capable of extracting PIXE data relative to the coins bulk. The inclusion of elements from the surface layer, altered by oxidation and inclusion, is a known source of uncertainty in PIXE analyses of coins, performed to assess the composition and the provenance.

  19. Measuring selectivity of feeding by estuarine copepods using image analysis combined with microscopic and Coulter counting

    OpenAIRE

    Tackx, M.L.M.; Zhu, L.; De Coster, W.; Billones, R.G.; Daro, M.H.

    1995-01-01

    Although estuarine zooplankters are generally believed to be detritivorous, high clearance rates by the estuarine copepods Eurytemora affinis and Acartia tonsa on natural estuarine microplankton have been reported in the literature. In order to enable detection of possible selectivity for these microplankton organisms over detritus, a method that measures clearance rates on total particulate matter is proposed. Image analysis is used to measure copepod gut contents, and combined with Coulter ...

  20. Combining microsimulation with CGE and macro modelling for distributional analysis in developing and transition countries

    OpenAIRE

    James B. Davies

    2009-01-01

    This paper overviews recent work that has attempted to bring together microsimulation, Computable General Equilibrium (CGE) and macro models to perform distributional analysis in developing and transition countries. Particular attention is paid to applications relating to aspects of economic growth and political economy. Applications in which macro, CGE and microsimulation models are either layered or integrated are considered. It is demonstrated that different combinations of such models, in...