WorldWideScience

Sample records for multi-step density forecasts

  1. Multi-step wind speed forecasting based on a hybrid forecasting architecture and an improved bat algorithm

    International Nuclear Information System (INIS)

    Xiao, Liye; Qian, Feng; Shao, Wei

    2017-01-01

    Highlights: • Propose a hybrid architecture based on a modified bat algorithm for multi-step wind speed forecasting. • Improve the accuracy of multi-step wind speed forecasting. • Modify the bat algorithm with CG to improve optimization performance. - Abstract: As one of the most promising sustainable energy sources, wind energy plays an important role in energy development because it is clean and non-polluting. Wind speed forecasting, which has an essential influence on wind power systems, is generally regarded as a challenging task. Single-step wind speed forecasts have been widely used, but they are insufficient for ensuring the reliability and controllability of wind power systems. In this paper, a new forecasting architecture based on decomposing algorithms and modified neural networks is developed for multi-step wind speed forecasting. The architecture contains four different hybrid models, and to further improve forecasting performance, a modified bat algorithm (BA) with the conjugate gradient (CG) method is developed to optimize the initial weights between layers and the thresholds of the hidden layer of the neural networks. To investigate the forecasting abilities of the four models, wind speed data collected from four wind power stations in Penglai, China, were used as a case study. The numerical experiments showed that the hybrid model combining singular spectrum analysis and a general regression neural network with CG-BA (SSA-CG-BA-GRNN) achieved the most accurate results in one-step to three-step wind speed forecasting.

  2. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    Science.gov (United States)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Highlights: • Discrete wavelet transform was applied to decompose ANN and ANFIS inputs. • A novel WNF approach with subtractive clustering was applied to flow forecasting. • Forecasting was performed 1–5 steps ahead using multivariate inputs. • Forecasting accuracy for peak values and longer lead times was significantly improved.

  3. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g., several weeks or months. Most existing time series models are inherently one-step predictors, forecasting only one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each forecast horizon as an independent modeling task; they generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
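
The iterative and independent (direct) strategies contrasted above can be sketched on a toy AR(1) series. This is an illustrative stand-in, not the paper's VLM models: the series, coefficients and least-squares fits below are made up.

```python
# Iterative vs. direct (independent) multi-step forecasting on a synthetic
# AR(1) series x_t = 0.8 * x_{t-1} + noise. Illustrative sketch only.
import random

random.seed(0)

x = [0.0]
for _ in range(500):
    x.append(0.8 * x[-1] + random.gauss(0, 1))

def fit_ar1(series):
    """Least-squares estimate of the lag-1 coefficient (no intercept)."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

def iterative_forecast(series, h):
    """One-step model applied recursively h times."""
    phi = fit_ar1(series)
    f = series[-1]
    for _ in range(h):
        f = phi * f
    return f

def direct_forecast(series, h):
    """Separate model regressing x_{t+h} directly on x_t."""
    num = sum(a * b for a, b in zip(series[h:], series[:-h]))
    den = sum(a * a for a in series[:-h])
    return (num / den) * series[-1]

h = 3
print(iterative_forecast(x, h), direct_forecast(x, h))
```

For a true AR(1) process both strategies agree in expectation (the direct lag-h coefficient estimates phi**h); they diverge when the one-step model is misspecified, which is the gap the VLM mixture targets.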

  4. Multi-step ahead forecasts for electricity prices using NARX: A new approach, a critical analysis of one-step ahead forecasts

    International Nuclear Information System (INIS)

    Andalib, Arash; Atry, Farid

    2009-01-01

    The prediction of electricity prices is very important to participants in deregulated markets. Among many properties, a successful prediction tool should be able to capture long-term dependencies in a market's historical data. The nonlinear autoregressive model with exogenous inputs (NARX) has proven better at capturing such dependencies than other learning machines, but it had not previously been examined for electricity price forecasting. In this paper, we employ a NARX network for forecasting electricity prices. Our prediction model is then compared with two currently used methods, namely multivariate adaptive regression splines (MARS) and a wavelet neural network. All the models are built on the reconstructed state space of the market's historical data, which either improves the results or decreases the complexity of the learning algorithms. Here, we also criticize one-step ahead forecasts of electricity prices, which may suffer a one-term delay, and we explain why the mean square error criterion does not guarantee a functional prediction result in this case. To tackle the problem, we pursue multi-step ahead predictions. Results for the Ontario electricity market are presented.
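
The one-step-ahead critique can be illustrated with a toy example (all numbers synthetic, not Ontario data): on a random-walk price series, pure persistence ("forecast = last observed value") earns a small mean square error relative to the series' variance, yet the "forecast" is nothing but the actual series delayed by one step.

```python
# Low MSE alone does not certify a useful one-step forecast: persistence
# on a random walk scores well while carrying no predictive information.
import random

random.seed(1)

prices = [50.0]
for _ in range(300):
    prices.append(prices[-1] + random.gauss(0, 1))

actual = prices[1:]
forecast = prices[:-1]              # persistence: repeat today's price

n = len(actual)
mse = sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n
mean = sum(actual) / n
var = sum((a - mean) ** 2 for a in actual) / n

print(mse, var)                     # mse ~ noise variance, far below var
```

The delayed-copy effect is exactly what multi-step evaluation exposes: a persistence-like model cannot keep hiding behind the last observation when forced to predict several steps ahead.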

  5. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed for accurate wind speed forecasting. In this architecture, four different hybrid models are built by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to quantify how much each of these mainstream decomposing algorithms improves the Extreme Learning Machines in multi-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) Extreme Learning Machines are suitable for wind speed forecasting; (2) with the decomposing algorithms, all the proposed hybrid models perform better than a single Extreme Learning Machine; (3) among the decomposing algorithms in the proposed hybrid architecture, Fast Ensemble Empirical Mode Decomposition performs best in the three-step forecasts while Wavelet Packet Decomposition performs best in the one- and two-step forecasts; at the same time, Wavelet Packet Decomposition and Fast Ensemble Empirical Mode Decomposition outperform Wavelet Decomposition and Empirical Mode Decomposition, respectively, at all steps; and (4) the proposed algorithms are effective for accurate wind speed prediction.
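
The decompose-then-forecast pattern behind these hybrids can be sketched minimally. As a stand-in for wavelet/EMD decomposition and Extreme Learning Machines, the toy below splits a signal into a moving-average trend and a residual, forecasts each band naively, and sums the band forecasts; the signal and window are made up.

```python
# Minimal decompose-then-forecast sketch: trend band + residual band,
# each forecast separately, recombined by summation.
import math

series = [10 + 3 * math.sin(0.3 * t) + 0.5 * ((-1) ** t) for t in range(100)]

def moving_average(x, w=5):
    """Causal moving average (shorter window at the start of the series)."""
    return [sum(x[max(0, i - w + 1):i + 1]) / (i - max(0, i - w + 1) + 1)
            for i in range(len(x))]

trend = moving_average(series)
residual = [a - b for a, b in zip(series, trend)]

# Naive per-band forecasts: linear extrapolation of the smooth trend,
# repetition of the period-2 residual pattern.
trend_fc = trend[-1] + (trend[-1] - trend[-2])
resid_fc = residual[-2]
forecast = trend_fc + resid_fc
print(forecast)
```

The real architecture replaces each naive band forecaster with an Extreme Learning Machine and uses a proper multi-resolution decomposition, but the recombination step is the same additive idea.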

  6. Particle-hole state densities for statistical multi-step compound reactions

    International Nuclear Information System (INIS)

    Oblozinsky, P.

    1986-01-01

    An analytical relation is derived for the density of particle-hole bound states by applying the equidistant-spacing approximation and the Darwin-Fowler statistical method. The Pauli exclusion principle as well as the finite depth of the potential well are taken into account. The set of densities needed for calculations of multi-step compound reactions is completed by deriving the densities of accessible final states for escape and damping. (orig.)

  7. Densities of accessible final states for multi-step compound reactions

    International Nuclear Information System (INIS)

    Maoming De; Guo Hua

    1993-01-01

    The densities of accessible final states for calculations of multi-step compound reactions are derived. The Pauli exclusion principle is taken into account in the calculations. The results are compared with a previous author's results and the effect of the Pauli exclusion principle is investigated. (Author)

  8. Multi-step polynomial regression method to model and forecast malaria incidence.

    Directory of Open Access Journals (Sweden)

    Chandrajit Chatterjee

    Malaria is one of the most severe problems faced by the world even today. Understanding causative factors such as age, sex, social factors and environmental variability, as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a smaller one-year time series of deaths due to Plasmodium vivax; and spatial data (zonal distribution of P. vivax deaths) for the city, along with climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified initial relationships between variables through non-linear curve fitting, and used multi-step methods to introduce variables into the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the predictions, validity and confidence intervals. The results demonstrate the applicability of our method to different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city.

  9. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological process forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are ML methods. Twelve simulation experiments are performed, each using 2000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to accurate forecasting of the testing set, capturing its variation, and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts.

  10. On density forecast evaluation

    NARCIS (Netherlands)

    Diks, C.

    2008-01-01

    Traditionally, probability integral transforms (PITs) have been popular means for evaluating density forecasts. For an ideal density forecast, the PITs should be uniformly distributed on the unit interval and independent. However, this is only a necessary condition, and not a sufficient one, as
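
The PIT diagnostic mentioned above is easy to sketch: for each observation y_t and forecast CDF F_t, compute z_t = F_t(y_t) and check uniformity on [0, 1]. The example below uses Gaussian data and a matching Gaussian density forecast; the sample size and crude moment checks are illustrative choices.

```python
# Probability integral transform (PIT) check for Gaussian density forecasts
# of Gaussian data: for an ideal forecast the PITs should look uniform.
import math, random

random.seed(2)

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Observations drawn from N(0, 1); the density forecast is also N(0, 1).
obs = [random.gauss(0, 1) for _ in range(2000)]
pit = [norm_cdf(y) for y in obs]

# Crude uniformity check: mean near 1/2, variance near 1/12.
mean = sum(pit) / len(pit)
var = sum((z - mean) ** 2 for z in pit) / len(pit)
print(mean, var)
```

As the abstract stresses, passing such a uniformity (and independence) check is necessary but not sufficient: non-ideal forecasts can also produce uniform, independent PITs.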

  11. Short-term wind power combined forecasting based on error forecast correction

    International Nuclear Information System (INIS)

    Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng

    2016-01-01

    Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
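
The error-forecast-correction idea above can be reduced to a minimal sketch: a base forecaster (persistence here, standing in for the AR/SVM base models) is paired with a second model that predicts the base model's next error from its previous error, and the corrected forecast adds the two. The signal and coefficients are synthetic.

```python
# Error forecast correction in miniature: corrected = base + predicted error.
import random

random.seed(3)

# Toy signal whose persistence-forecast errors are autocorrelated.
y = [0.0]
for t in range(1, 400):
    y.append(0.7 * y[-1] + 0.5 * random.gauss(0, 1) + 0.3)

base = y[:-1]                          # persistence forecast of y[1:]
errors = [a - f for a, f in zip(y[1:], base)]

# Fit error_t ~ rho * error_{t-1} by least squares (the "error forecast model").
num = sum(a * b for a, b in zip(errors[1:], errors[:-1]))
den = sum(a * a for a in errors[:-1])
rho = num / den

corrected = [f + rho * e for f, e in zip(base[1:], errors[:-1])]
mse_base = sum((a - f) ** 2 for a, f in zip(y[2:], base[1:])) / len(corrected)
mse_corr = sum((a - f) ** 2 for a, f in zip(y[2:], corrected)) / len(corrected)
print(mse_base, mse_corr)
```

Because rho is fitted by least squares on exactly the residuals being corrected, the in-sample MSE of the corrected forecast can never exceed that of the base model; the paper's correlation analysis is what justifies expecting a gain out of sample as well.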

  12. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    Science.gov (United States)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

    This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement over the single-temporal approach. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, a probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge about future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead-time and when, most likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operational deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a six-year time series. MCP-MT improves over the original models' forecasts: the peak overestimation characterizing MISDc and the delayed rising-limb forecast characterizing STAFOM-RCM are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making, because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.

  13. The Probability of Default Under IFRS 9: Multi-period Estimation and Macroeconomic Forecast

    Directory of Open Access Journals (Sweden)

    Tomáš Vaněk

    2017-01-01

    In this paper we propose a straightforward, flexible and intuitive computational framework for multi-period probability of default estimation incorporating macroeconomic forecasts. The concept is based on Markov models, an estimated economic adjustment coefficient and the official economic forecasts of the Czech National Bank. The economic forecasts are taken into account in a separate step to better distinguish between idiosyncratic and systemic risk. This approach is also attractive from an interpretational point of view. The proposed framework can be used especially when calculating lifetime expected credit losses under IFRS 9.
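
The Markov backbone of such a framework can be sketched with a made-up rating-transition matrix (the paper's economic adjustment step is omitted here): with Default as an absorbing state, the cumulative t-year PD is the Default-column entry of the t-th matrix power.

```python
# Multi-period PD from a toy rating-transition Markov chain.
# States: Performing, Watch, Default (absorbing); probabilities invented.
P = [
    [0.90, 0.08, 0.02],   # Performing -> P / W / D
    [0.30, 0.55, 0.15],   # Watch      -> P / W / D
    [0.00, 0.00, 1.00],   # Default is absorbing
]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pt = P
cumulative_pd = [P[0][2]]            # 1-year PD for a Performing loan
for _ in range(4):                   # years 2..5
    Pt = matmul(Pt, P)
    cumulative_pd.append(Pt[0][2])

print(cumulative_pd)
```

The sequence is non-decreasing by construction (mass can only flow into the absorbing Default state), which is the property a lifetime expected-credit-loss calculation under IFRS 9 relies on; the paper additionally scales transitions with the macroeconomic forecast before taking powers.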

  14. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    Science.gov (United States)

    Zhang, J; Nawata, K

    2018-05-01

    Influenza results in approximately 3–5 million annual cases of severe illness and 250 000–500 000 deaths. We urgently need an accurate multi-step-ahead time-series forecasting model to help hospitals dynamically assign beds to influenza patients in each year's varying influenza season, and to help pharmaceutical companies formulate flexible vaccine manufacturing plans. In this study, we utilised four different multi-step prediction algorithms within the long short-term memory (LSTM) framework. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy, with low mean absolute percentage errors from two- to 13-step-ahead prediction of US influenza-like illness rates. The LSTM has thus been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. Hopefully, this modelling methodology can be applied in other countries and therefore help prevent and control influenza worldwide.

  15. Price Density Forecasts in the U.S. Hog Market: Composite Procedures

    NARCIS (Netherlands)

    Trujillo Barrera, A.A.; Garcia, P.; Mallory, M.

    2013-01-01

    We develop and evaluate quarterly out-of-sample individual and composite density forecasts for U.S. hog prices using data from 1975.I to 2010.IV. Individual forecasts are generated from time series models and the implied distribution of USDA outlook forecasts. Composite density forecasts

  16. Uncertainty Analysis of Multi-Model Flood Forecasts

    Directory of Open Access Journals (Sweden)

    Erich J. Plate

    2015-12-01

    This paper demonstrates, by means of a systematic uncertainty analysis, that the use of outputs from more than one model can significantly improve conditional forecasts of discharges or water stages, provided the models are structurally different. Discharge forecasts from two models and the actual forecasted discharge are assumed to form a three-dimensional joint probability density function (jpdf), calibrated on long time series of data. The jpdf is decomposed into conditional probability density functions (cpdfs) by means of Bayes' formula, as suggested and explored by Krzysztofowicz in a series of papers. In this paper his approach is simplified to optimize conditional forecasts for any pair of forecast models. Its application is demonstrated by means of models developed in a study of flood forecasting for station Stung Treng on the middle reach of the Mekong River in South-East Asia. Four different forecast models were used and pairwise combined: forecast with no model, with a persistence model, with a regression model, and with a rainfall-runoff model. Working with cpdfs requires determining the dependency among variables, for which linear regressions are required, as was done by Krzysztofowicz. His Bayesian approach, based on transforming observed probability distributions of discharges and forecasts into normal distributions, is also explored. Results obtained with his method for normal prior and likelihood distributions are identical to results from direct multiple regressions. Furthermore, it is shown that in the present case forecast accuracy is only marginally improved when the Weibull-distributed basic data are converted into normally distributed variables.

  17. Statistical post-processing of seasonal multi-model forecasts: Why is it so hard to beat the multi-model mean?

    Science.gov (United States)

    Siegert, Stefan

    2017-04-01

    Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging 1) because the cross-correlation structure between multiple models and observations can be complicated, 2) because the amount of training data to fit the post-processing parameters is very limited, and 3) because the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to out-perform the simplest post-processing method, which just recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts. I explore possible directions for improvements, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
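
The hard-to-beat baseline named above — recalibrating the multi-model ensemble mean by linear regression — fits in a few lines. Everything below is synthetic (invented model count, biases and noise levels), purely to show the mechanics.

```python
# Recalibrate a multi-model ensemble mean against observations by OLS:
# obs ~ a + b * ensemble_mean.
import random

random.seed(4)

n_models, n_years = 5, 30
truth = [random.gauss(0, 1) for _ in range(n_years)]
# Each model: shared signal + common bias + model noise (all made up).
ens = [[0.6 * truth[t] + 0.4 + random.gauss(0, 0.8) for t in range(n_years)]
       for _ in range(n_models)]
mean = [sum(m[t] for m in ens) / n_models for t in range(n_years)]

# Ordinary least squares for obs = a + b * mean.
mx = sum(mean) / n_years
my = sum(truth) / n_years
b = sum((x - mx) * (y - my) for x, y in zip(mean, truth)) / \
    sum((x - mx) ** 2 for x in mean)
a = my - b * mx

recal = [a + b * x for x in mean]
mse_raw = sum((y - x) ** 2 for x, y in zip(mean, truth)) / n_years
mse_cal = sum((y - x) ** 2 for x, y in zip(recal, truth)) / n_years
print(mse_raw, mse_cal)
```

With only ~30 training pairs, as is typical for seasonal hindcasts, this two-parameter fit is hard to out-perform: richer hierarchical post-processing models spend their extra parameters on effects the short record cannot constrain.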

  18. Multi-parametric variational data assimilation for hydrological forecasting

    Science.gov (United States)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to take better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states, or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes parametric model uncertainty into account in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on lead-time performance for perfect forecasts (i.e., observed data as forcing variables) as well as for deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.

  19. Forecasting Long Memory Series Subject to Structural Change

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Papailias, Fotis

    A two-stage forecasting approach for long memory time series is introduced. In the first step we estimate the fractional exponent and, applying the fractional differencing operator, we obtain the underlying weakly dependent series. In the second step, we perform the multi-step ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural change and yields good forecasting results.
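
The fractional differencing operator (1 - L)^d at the heart of the first stage has binomial weights that can be generated recursively: w_0 = 1 and w_k = w_{k-1} * (k - 1 - d) / k. The sketch below applies the truncated operator to a short made-up series.

```python
# Fractional differencing (1 - L)^d via its recursive binomial weights.

def frac_diff_weights(d, n):
    """First n weights of the (1 - L)^d expansion."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(series, d):
    """Apply (1 - L)^d, truncating the expansion at the series start."""
    w = frac_diff_weights(d, len(series))
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(frac_diff(x, 1.0))   # d = 1 reduces to ordinary first differences
```

For d = 1 the weights collapse to (1, -1, 0, 0, ...), recovering ordinary differencing; the two-stage method uses an estimated fractional d, forecasts the resulting weakly dependent series, and inverts the operator (fractional cumulation) to map the forecasts back.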

  20. Forecasting energy consumption of multi-family residential buildings using support vector regression: Investigating the impact of temporal and spatial monitoring granularity on performance accuracy

    International Nuclear Information System (INIS)

    Jain, Rishee K.; Smith, Kevin M.; Culligan, Patricia J.; Taylor, John E.

    2014-01-01

    Highlights: • We develop a building energy forecasting model using support vector regression. • The model is applied to data from a multi-family residential building in New York City. • We extend sensor-based energy forecasting to multi-family residential buildings. • We examine the impact that temporal and spatial granularity has on model accuracy. • Optimal granularity occurs at the by-floor level in hourly intervals. - Abstract: Buildings are the dominant source of energy consumption and environmental emissions in urban areas. Therefore, the ability to forecast and characterize building energy consumption is vital to implementing the urban energy management and efficiency initiatives required to curb emissions. Advances in smart metering technology have enabled researchers to develop "sensor-based" approaches to forecasting building energy consumption that require less input data than traditional methods. Sensor-based forecasting utilizes machine learning techniques to infer the complex relationships between consumption and influencing variables (e.g., weather, time of day, previous consumption). While sensor-based forecasting has been studied extensively for commercial buildings, there is a paucity of research applying this data-driven approach to the multi-family residential sector. In this paper, we build a sensor-based forecasting model using Support Vector Regression (SVR), a commonly used machine learning technique, and apply it to an empirical dataset from a multi-family residential building in New York City. We expand our study to examine the impact that temporal (i.e., daily, hourly, 10-min intervals) and spatial (i.e., whole building, by floor, by unit) granularity has on the predictive power of our single-step model. Results indicate that sensor-based forecasting models can be extended to multi-family residential buildings and that the optimal monitoring granularity occurs at the by-floor level in hourly intervals. In addition to implications for
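
The sensor-based setup described above — lagged consumption plus time-of-day features predicting the next reading — can be sketched with a plain least-squares linear model standing in for the paper's SVR; the hourly consumption profile and feature set below are invented for illustration.

```python
# Sensor-based single-step load forecasting sketch: lagged reading plus
# hour-of-day harmonics, fitted by OLS (a stand-in for SVR).
import math, random

random.seed(5)

# Synthetic hourly consumption: daily cycle plus noise.
hours = 24 * 30
load = [5 + 2 * math.sin(2 * math.pi * t / 24) + random.gauss(0, 0.3)
        for t in range(hours)]

# Features: intercept, previous reading, and the hour-of-day cycle.
X = [[1.0, load[t - 1],
      math.sin(2 * math.pi * t / 24), math.cos(2 * math.pi * t / 24)]
     for t in range(1, hours)]
y = load[1:]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

k = 4
AtA = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
Atb = [sum(x[i] * yi for x, yi in zip(X, y)) for i in range(k)]
beta = solve(AtA, Atb)

pred = [sum(b * f for b, f in zip(beta, x)) for x in X]
rmse = math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, y)) / len(y))
print(rmse)
```

The granularity question in the paper amounts to re-running this kind of fit with the series aggregated to different temporal resolutions (10 min, hourly, daily) and spatial levels (unit, floor, building) and comparing the resulting errors.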

  1. Power density forecasting device for nuclear power plant

    International Nuclear Information System (INIS)

    Fukuzaki, Takaharu; Kiguchi, Takashi.

    1978-01-01

    Purpose: To attain effective reactor operation in a BWR-type reactor by forecasting the power density of the reactor after an adjustment and comparing it with the present status of the reactor by on-line calculation in a short time. Constitution: The present status of the reactor is estimated in a present-status decision section based on a measurement signal from the reactor, and it is stored in an operation results collection section. The reactor status after the forecast period is estimated in a forecasting section based on a setting signal from a forecasting condition setting section, and it is compared with the result value from the operation results collection section. If the forecast value does not coincide with the result value, the setting value in the forecast condition setting section is changed in the control section. These procedures are repeated so as to minimize the difference between the forecast value and the result value, thereby forecasting the reactor status exactly and operating the reactor effectively. (Moriyama, K.)

  2. Enhancing multi-step quantum state tomography by PhaseLift

    Science.gov (United States)

    Lu, Yiping; Zhao, Qing

    2017-09-01

    Multi-photon systems have been studied by many groups; however, the biggest challenge is that the available copies of an unknown state are limited and far from sufficient for detecting quantum entanglement. The difficulty of preparing copies of the state is even more serious for quantum state tomography. One possible way to address this problem is adaptive quantum state tomography, which obtains a preliminary density matrix in a first step and revises it in a second step. To improve the performance of adaptive quantum state tomography, we develop a new distribution scheme for samples and extend the procedure to three steps, that is, we correct the estimate once again based on the density matrix obtained in traditional adaptive quantum state tomography. Our numerical results show that the mean square error of the density matrix reconstructed by our new method is improved from the 10^-4 level to the 10^-9 level for several tested states. In addition, PhaseLift is applied to reduce the required storage space of the measurement operators.

  3. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, and the result is used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model error statistics, leading to a crude approximation of the forecast covariance that strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of EnKF-OSA based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation is to avoid the observation perturbations of the EnKF and thereby improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated.
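The forecast/analysis cycle described above can be sketched in a few lines of NumPy. The following is a minimal stochastic EnKF analysis step (function and variable names are my own, not from the paper); the observation-perturbation line is exactly the step a deterministic variant such as SEIK-OSA avoids:

```python
import numpy as np

def enkf_analysis(ens, y, H, R, rng):
    """Stochastic EnKF analysis step.
    ens: (n, N) forecast ensemble; y: (m,) observation;
    H: (m, n) observation operator; R: (m, m) observation-error covariance."""
    n, N = ens.shape
    X = ens - ens.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf = X @ X.T / (N - 1)                             # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    # Perturbed observations: one noisy copy per member (avoided by SEIK-type schemes)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ens + K @ (Y - H @ ens)                     # analysis ensemble
```

With a small observation error, the analysis ensemble mean is pulled close to the observation, which is the Kalman-like correction the abstract refers to.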

  4. Forecasting Exchange Rate Density Using Parametric Models: the Case of Brazil

    Directory of Open Access Journals (Sweden)

    Benjamin Miranda Tabak

    2007-06-01

    Full Text Available This paper employs a recently developed parametric technique to obtain density forecasts for the Brazilian exchange rate, using the exchange rate options market. Empirical results suggest that the options market contains useful information about the future exchange rate density. These results suggest that density forecasts using options markets may add value for portfolio and risk management, and may be useful for financial regulators in assessing financial stability.

  5. Multi-site solar power forecasting using gradient boosted regression trees

    DEFF Research Database (Denmark)

    Persson, Caroline Stougård; Bacher, Peder; Shiga, Takahiro

    2017-01-01

    The challenges to optimally utilize weather dependent renewable energy sources call for powerful tools for forecasting. This paper presents a non-parametric machine learning approach used for multi-site prediction of solar power generation on a forecast horizon of one to six hours. Historical pow...

  6. Multi-Model Prediction for Demand Forecast in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo Lopez Farias

    2018-03-01

    Full Text Available This paper presents a multi-model predictor called the Qualitative Multi-Model Predictor Plus (QMMP+) for demand forecast in water distribution networks. QMMP+ is based on the decomposition of the quantitative and qualitative information of the time series. The quantitative component (i.e., the daily consumption prediction) is forecasted, and the pattern mode is estimated, using a Nearest Neighbor (NN) classifier and a Calendar. The patterns are updated via a simple Moving Average scheme. The NN classifier and the Calendar are executed simultaneously every period, and the most suitable model for prediction is selected using a probabilistic approach. The proposed solution for water demand forecast is compared against Radial Basis Function Artificial Neural Networks (RBF-ANN), the statistical Autoregressive Integrated Moving Average (ARIMA), and Double Seasonal Holt-Winters (DSHW) approaches, providing the best results when applied to real demand of the Barcelona Water Distribution Network. QMMP+ has demonstrated that the special modelling treatment of water consumption patterns improves the forecasting accuracy.
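The pattern-update idea (a simple moving average over recently observed daily profiles, which then distributes a predicted daily total across the day) can be sketched as follows. This is an illustration only; the function names and the window length are mine, not the actual QMMP+ implementation:

```python
import numpy as np

def update_pattern(recent_days, k=7):
    """Estimate the consumption pattern as a simple moving average of the
    last k observed daily profiles, each normalised to sum to 1."""
    shapes = [day / day.sum() for day in recent_days[-k:]]
    return np.mean(shapes, axis=0)

def hourly_forecast(daily_total, pattern):
    """Distribute a predicted daily total across the hours using the pattern."""
    return daily_total * pattern
```

The normalisation step separates the qualitative information (the shape of the day) from the quantitative information (the daily total), mirroring the decomposition described in the abstract.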

  7. Effect of One-Step and Multi-Steps Polishing System on Enamel Roughness

    Directory of Open Access Journals (Sweden)

    Cynthia Sumali

    2013-07-01

    Full Text Available The final procedures of orthodontic treatment are bracket debonding and cleaning of the remaining adhesive. The multi-step polishing system is the most commonly used method. Its disadvantage is a long working time, because of the number of stages involved. Therefore, dental material manufacturers have improved the system by reducing several stages to a single one. This new system is known as the one-step polishing system. Objective: To compare the effect of one-step and multi-step polishing systems on enamel roughness after orthodontic bracket debonding. Methods: A randomized controlled trial was conducted on twenty-eight maxillary premolars randomized into two polishing systems: one-step OptraPol (Ivoclar Vivadent) and multi-step AstroPol (Ivoclar Vivadent). After bracket debonding, the remaining adhesive in each group was cleaned with the respective polishing system for ninety seconds using a low-speed handpiece. The enamel roughness was measured with a profilometer, registering two roughness parameters (Ra, Rz). An independent t-test was used to analyze the mean enamel roughness in each group. Results: There was no significant difference in enamel roughness between the one-step and multi-step polishing systems (p>0.005). Conclusion: The one-step polishing system can produce enamel roughness similar to the multi-step polishing system after bracket debonding and adhesive cleaning. DOI: 10.14693/jdi.v19i3.136

  8. Statistical theory of multi-step compound and direct reactions

    International Nuclear Information System (INIS)

    Feshbach, H.; Kerman, A.; Koonin, S.

    1980-01-01

    The theory of nuclear reactions is extended so as to include a statistical treatment of multi-step processes. Two types are distinguished, the multi-step compound and the multi-step direct. The wave functions for the system are grouped according to their complexity. The multi-step direct process involves explicitly those states which are open, while the multi-step compound involves those which are bound. In addition to the random phase assumption, which is applied differently to the multi-step direct and to the multi-step compound cross-sections, it is assumed that the residual interaction will have non-vanishing matrix elements between states whose complexities differ by at most one unit. This is referred to as the chaining hypothesis. Explicit expressions for the double differential cross-section giving the angular distribution and energy spectrum are obtained for both reaction types. The statistical multi-step compound cross-sections are symmetric about 90°. The classical statistical theory of nuclear reactions is a special limiting case. The cross-section for the statistical multi-step direct reaction consists of a set of convolutions of single-step direct cross-sections. For the many-step case it is possible to derive a diffusion equation in momentum space. Application is made to the reaction ¹⁸¹Ta(p,n)¹⁸¹W using the statistical multi-step compound formalism

  9. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas; Høg, Esben

      The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994-2006 period. Moving beyond standard ARCH models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices ... as for regions and intervals that are of special interest for the economic agent. We find that non-parametric adjustments of risk-neutral density forecasts perform significantly better than their parametric counterparts. Goodness-of-fit tests and out-of-sample likelihood comparisons favor forecast densities ...

  10. Ensemble Forecasts with Useful Skill-Spread Relationships for African meningitis and Asia Streamflow Forecasting

    Science.gov (United States)

    Hopson, T. M.

    2014-12-01

    One potential benefit of an ensemble prediction system (EPS) is its capacity to forecast its own forecast error through the ensemble spread-error relationship. In practice, an EPS is often quite limited in its ability to represent the variable expectation of forecast error through the variable dispersion of the ensemble, and perhaps more fundamentally, in its ability to provide enough variability in the ensemble's dispersion to make the skill-spread relationship even potentially useful (irrespective of whether the EPS is well-calibrated or not). In this paper we examine the ensemble skill-spread relationship of an ensemble constructed from the TIGGE (THORPEX Interactive Grand Global Ensemble) dataset of global forecasts and a combination of multi-model and post-processing approaches. Both the multi-model and post-processing techniques are based on quantile regression (QR) under a step-wise forward selection framework, leading to ensemble forecasts with both good reliability and sharpness. The methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. A context for these concepts is provided by assessing the constructed ensemble in forecasting district-level humidity impacting the incidence of meningitis in the meningitis belt of Africa, and in forecasting flooding events in the Brahmaputra and Ganges basins of South Asia.
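A spread-error relationship of the kind discussed here is commonly quantified by correlating the ensemble dispersion with the magnitude of the ensemble-mean error across forecast cases. The following is a minimal illustrative sketch (my own code, not the paper's quantile-regression machinery):

```python
import numpy as np

def spread_error_correlation(ens, obs):
    """ens: (T, N) array of N-member forecasts over T cases; obs: (T,) verifying
    observations. Returns the correlation between ensemble spread (std-dev)
    and the absolute error of the ensemble mean."""
    spread = ens.std(axis=1, ddof=1)
    abs_err = np.abs(ens.mean(axis=1) - obs)
    return float(np.corrcoef(spread, abs_err)[0, 1])
```

A well-calibrated EPS with genuinely variable dispersion should show a clearly positive correlation; an under-dispersive EPS with near-constant spread cannot, whatever its mean skill.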

  11. A note on the multi model super ensemble technique for reducing forecast errors

    International Nuclear Information System (INIS)

    Kantha, L.; Carniel, S.; Sclavo, M.

    2008-01-01

    The multi model super ensemble (SE) technique has been used with considerable success to improve meteorological forecasts and is now being applied to ocean models. Although the technique has been shown to produce deterministic forecasts that can be superior to the individual models in the ensemble or a simple multi model ensemble forecast, there is a clear need to understand its strengths and limitations. This paper is an attempt to do so in simple, easily understood contexts. The results demonstrate that the SE forecast is almost always better than the simple ensemble forecast, the degree of improvement depending on the properties of the models in the ensemble. However, the skill of the SE forecast with respect to the true forecast depends on a number of factors, principal among which is the skill of the models in the ensemble. As can be expected, if the ensemble consists of models with poor skill, the SE forecast will also be poor, although better than the ensemble forecast. On the other hand, the inclusion of even a single skillful model in the ensemble increases the forecast skill significantly.
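The superensemble technique assigns regression weights to the member models over a training period and applies them to new forecasts. A minimal least-squares version is sketched below (illustrative; operational superensembles typically regress on anomalies from each model's climatology):

```python
import numpy as np

def se_train(fcst, obs):
    """fcst: (T, M) training-period forecasts from M models; obs: (T,)
    observations. Returns [intercept, w_1, ..., w_M] by ordinary least squares."""
    A = np.column_stack([np.ones(len(obs)), fcst])
    coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return coef

def se_predict(fcst, coef):
    """Apply the trained weights to new (T, M) multi-model forecasts."""
    return coef[0] + fcst @ coef[1:]
```

Because the regression can down-weight poor models and remove systematic biases, the weighted combination generally beats the equal-weight ensemble mean, consistent with the behaviour described in the abstract.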

  12. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

      The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting ...

  13. How uncertain are day-ahead wind forecasts?

    Energy Technology Data Exchange (ETDEWEB)

    Grimit, E. [3TIER Environmental Forecast Group, Seattle, WA (United States)

    2006-07-01

    Recent advances in the combination of weather forecast ensembles with Bayesian statistical techniques have helped to address uncertainties in wind forecasting. Weather forecast ensembles are a collection of numerical weather predictions. The combination of several equally-skilled forecasts typically results in a consensus forecast with greater accuracy. The distribution of forecasts also provides an estimate of forecast inaccuracy. However, weather forecast ensembles tend to be under-dispersive, and not all forecast uncertainties can be taken into account. In order to address these issues, a multi-variate linear regression approach was used to correct the forecast bias for each ensemble member separately. Bayesian model averaging was used to provide a predictive probability density function to allow for multi-modal probability distributions. A test location in eastern Canada was used to demonstrate the approach. Results of the test showed that the method improved wind forecasts and generated reliable prediction intervals. Prediction intervals were much shorter than comparable intervals based on a single forecast or on historical observations alone. It was concluded that the approach will provide economic benefits to both wind energy developers and investors. refs., tabs., figs.
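The Bayesian model averaging step described here yields a predictive density that is a weighted mixture of distributions centred on the bias-corrected ensemble members. A minimal sketch with Gaussian kernels and a common spread follows (the function name and the equal-variance simplification are mine; BMA implementations estimate the weights and spread from training data):

```python
import numpy as np

def bma_pdf(x, members, weights, sigma):
    """Evaluate a BMA predictive density at points x: a mixture of normals
    centred on the bias-corrected ensemble members, with mixture weights
    summing to 1 and a common kernel spread sigma."""
    x = np.asarray(x, dtype=float)
    comps = np.exp(-0.5 * ((x - members[:, None]) / sigma) ** 2) \
            / (sigma * np.sqrt(2.0 * np.pi))
    return weights @ comps
```

Because each member contributes its own kernel, the mixture can be multi-modal, which is exactly the property the abstract highlights for wind forecasting.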

  14. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    Full Text Available The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud cover. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs grouped in a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1)(b2,s2,r2)(b3,s3,r3)(b4,s4,r4)(pn,qn) = (0,0,0)(23,0,0)(1,2,0)(0,0,0)([5,8],2) and shows that air temperature on day t affects rainfall on day t, rainfall on day t is influenced by air humidity in the previous 23 days, rainfall on day t is affected by wind speed on the previous day, and rainfall on day t is affected by cloud cover on day t. The results of rainfall forecasting in Batu City with the multi input transfer function model can be considered accurate, because it produces relatively small RMSE values. The RMSE for the training data is 7.7921, while for the testing data it is 4.2184. The multi input transfer function model is suitable for rainfall in Batu City.

  15. Multi-model forecast skill for mid-summer rainfall over southern Africa

    CSIR Research Space (South Africa)

    Landman, WA

    2012-02-01

    Full Text Available The multi-model forecasts outperform the single-model forecasts, the two multi-model schemes produce about equally skilful forecasts, and the forecasts perform better during El Niño and La Niña seasons than during neutral years. ... The region tends to be anomalously dry during El Niño years and anomalously wet during La Niña years, although wet El Niño seasons and dry La Niña seasons are not uncommon. Indian and Atlantic Ocean SST also have a statistically detectable influence on South...

  16. Forecasting the density of oil futures returns using model-free implied volatility and high-frequency data

    International Nuclear Information System (INIS)

    Ielpo, Florian; Sevi, Benoit

    2013-09-01

    Forecasting the density of returns is useful for many purposes in finance, such as risk management activities, portfolio choice or derivative security pricing. Existing methods to forecast the density of returns either use prices of the asset of interest or option prices on this same asset. The latter method needs to convert the risk-neutral estimate of the density into a physical measure, which is computationally cumbersome. In this paper, we take the view of a practitioner who observes the implied volatility under the form of an index, namely the recent OVX, to forecast the density of oil futures returns for horizons going from 1 to 60 days. Using the recent methodology in Maheu and McCurdy (2011) to compute density predictions, we compare the performance of time series models using implied volatility and either daily or intra-daily futures prices. Our results indicate that models based on implied volatility deliver significantly better density forecasts at all horizons, which is in line with numerous studies delivering the same evidence for volatility point forecasts. (authors)

  17. Two-Step Forecast of Geomagnetic Storm Using Coronal Mass Ejection and Solar Wind Condition

    Science.gov (United States)

    Kim, R.-S.; Moon, Y.-J.; Gopalswamy, N.; Park, Y.-D.; Kim, Y.-H.

    2014-01-01

    To forecast geomagnetic storms, we had examined initially observed parameters of coronal mass ejections (CMEs) and introduced an empirical storm forecast model in a previous study. Now we suggest a two-step forecast considering not only CME parameters observed in the solar vicinity but also solar wind conditions near Earth to improve the forecast capability. We consider the empirical solar wind criteria derived in this study (Bz ≤ -5 nT or Ey ≥ 3 mV/m for Δt ≥ 2 h for moderate storms with minimum Dst < -50 nT) and a Dst model developed by Temerin and Li (2002, 2006) (TL model). Using 55 CME-Dst pairs during 1997 to 2003, our solar wind criteria produce slightly better forecasts for 31 storm events (90 percent) than the forecasts based on the TL model (87 percent). However, the latter produces better forecasts for 24 nonstorm events (88 percent), while the former correctly forecasts only 71 percent of them. We then performed the two-step forecast. The results are as follows: (i) for 15 events that are incorrectly forecasted using CME parameters, 12 cases (80 percent) can be properly predicted based on solar wind conditions; (ii) if we forecast a storm when both the CME and solar wind conditions are satisfied (∩), the critical success index becomes higher than that from the forecast using CME parameters alone; however, only 25 storm events (81 percent) are correctly forecasted; and (iii) if we forecast a storm when either set of these conditions is satisfied (∪), ...
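The solar-wind step of this two-step scheme is a threshold-persistence rule, and the combination step is a simple intersection or union of the two flags. A sketch, assuming hourly samples of Bz and Ey (function names are mine):

```python
def solar_wind_storm_flag(bz_series, ey_series, hours_required=2):
    """Empirical criterion from the abstract: Bz <= -5 nT or Ey >= 3 mV/m,
    sustained for at least `hours_required` consecutive hourly samples."""
    run = 0
    for bz, ey in zip(bz_series, ey_series):
        run = run + 1 if (bz <= -5.0 or ey >= 3.0) else 0
        if run >= hours_required:
            return True
    return False

def two_step_forecast(cme_flag, bz_series, ey_series, mode="or"):
    """Combine the CME-based flag with the solar-wind flag:
    mode="and" is the intersection rule, mode="or" the union rule."""
    sw = solar_wind_storm_flag(bz_series, ey_series)
    return (cme_flag and sw) if mode == "and" else (cme_flag or sw)
```

As the abstract's results illustrate, the intersection rule trades hit rate for fewer false alarms, while the union rule does the opposite.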

  18. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    International Nuclear Information System (INIS)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias; Zhang, Jie

    2017-01-01

    Highlights: • An ensemble model is developed to produce both deterministic and probabilistic wind forecasts. • A deep feature selection framework is developed to optimally determine the inputs to the forecasting methodology. • The developed ensemble methodology has improved the forecasting accuracy by up to 30%. - Abstract: With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated on 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to the single-algorithm models, the developed multi-model framework with the deep feature selection procedure has improved the forecasting accuracy by up to 30%.

  19. Determining effective forecast horizons for multi-purpose reservoirs with short- and long-term operating objectives

    Science.gov (United States)

    Luchner, Jakob; Anghileri, Daniela; Castelletti, Andrea

    2017-04-01

    Real-time control of multi-purpose reservoirs can benefit significantly from hydro-meteorological forecast products. Because of their reliability, the most used forecasts range on time scales from hours to a few days and are suitable for short-term operation targets such as flood control. In recent years, hydro-meteorological forecasts have become more accurate and reliable on longer time scales, which are more relevant to long-term reservoir operation targets such as water supply. While the forecast quality of such products has been studied extensively, the forecast value, i.e. the operational effectiveness of using forecasts to support water management, has been comparatively little explored. It is comparatively easy to identify the most effective forecasting information needed to design reservoir operation rules for flood control, but it is not straightforward to identify which forecast variable and lead time are needed to define effective hedging rules for operational targets with slow dynamics such as water supply. The task is even more complex when multiple targets, with diverse slow and fast dynamics, are considered at the same time. In these cases, the relative importance of different pieces of information, e.g. magnitude and timing of peak flow rate and accumulated inflow on different time lags, may vary depending on the season or the hydrological conditions. In this work, we analyze the relationship between operational forecast value and streamflow forecast horizon for different multi-purpose reservoir trade-offs. We use the Information Selection and Assessment (ISA) framework to identify the most effective forecast variables and horizons for informing multi-objective reservoir operation over short- and long-term temporal scales. The ISA framework is an automatic iterative procedure to discriminate the information with the highest potential to improve multi-objective reservoir operating performance. Forecast variables and horizons are selected using a feature

  20. Randomness in multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    The authors propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. They present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplification of the leading-particle statistics theory

  1. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) by two-step processes is reported. The controllable generation of different lengths and the selective growth of aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling the experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied in developing novel nanotube-based devices.

  2. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

    A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural ...
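The two operators in this scheme are truncated binomial filters: fractional differencing (1 - L)^d in the first stage, and its inverse, fractional cumulation (1 - L)^(-d), in the second. A minimal sketch, assuming zero pre-sample values (names are mine):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n binomial coefficients of (1 - L)^d:
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_filter(x, d):
    """Apply the truncated filter (1 - L)^d to a series x (zeros pre-sample).
    Use d = d_hat to difference; d = -d_hat to cumulate forecasts back."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])
```

Under the zero pre-sample convention, cumulating with -d exactly undoes differencing with d, which is what lets the second stage map forecasts of the weakly dependent series back to the long memory scale.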

  3. Stochastic rainfall-runoff forecasting: parameter estimation, multi-step prediction, and evaluation of overflow risk

    DEFF Research Database (Denmark)

    Löwe, Roland; Mikkelsen, Peter Steen; Madsen, Henrik

    2014-01-01

    Probabilistic runoff forecasts generated by stochastic greybox models can be notably useful for the improvement of the decision-making process in real-time control setups for urban drainage systems, because the prediction-risk relationships in these systems are often highly nonlinear. To date ... the identification of models for cases with noisy in-sewer observations. For the prediction of the overflow risk, no improvement was demonstrated through the application of stochastic forecasts instead of point predictions, although this result is thought to be caused by the notably simplified setup used ...

  4. The statistics of multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    We propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. We present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplifications of the leading-particle statistics theory. A more comprehensive exposition will appear before long. (author). 32 refs, 4 figs

  5. Daily Reservoir Inflow Forecasting using Deep Learning with Downscaled Multi-General Circulation Models (GCMs) Platform

    Science.gov (United States)

    Li, D.; Fang, N. Z.

    2017-12-01

    Dallas-Fort Worth Metroplex (DFW) has a population of over 7 million depending on many water supply reservoirs. The reservoir inflow plays a vital role in the water supply decision-making process and long-term strategic planning for the region. This paper demonstrates a method of utilizing deep learning algorithms and a multi-general circulation model (GCM) platform to forecast reservoir inflow for three reservoirs within the DFW: Eagle Mountain Lake, Lake Benbrook and Lake Arlington. Ensemble empirical mode decomposition was first employed to extract the features, which were then represented by deep belief networks (DBNs). The first 75 years of the historical data (1940-2015) were used to train the model, while the last 2 years of the data (2016-2017) were used for model validation. The weights of each DBN gained from the training process were then applied to establish a neural network (NN) that was able to forecast reservoir inflow. Feature predictors used for the forecasting model were generated from weather forecast results of the downscaled multi-GCM platform for the North Texas region. By comparing root mean square error (RMSE) and mean bias error (MBE) against the observed data, the authors found that deep learning with the downscaled multi-GCM platform is an effective approach for reservoir inflow forecasting.
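The two validation metrics used here are standard and easy to state precisely; for reference, a minimal sketch:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def mbe(pred, obs):
    """Mean bias error: positive values indicate over-prediction on average."""
    return float(np.mean(np.asarray(pred) - np.asarray(obs)))
```

RMSE penalises large errors quadratically, while MBE keeps the sign and so reveals systematic over- or under-prediction that RMSE alone hides.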

  6. Comparing Multi-Step IMAC and Multi-Step TiO2 Methods for Phosphopeptide Enrichment

    Science.gov (United States)

    Yue, Xiaoshan; Schunter, Alissa; Hummon, Amanda B.

    2016-01-01

    Phosphopeptide enrichment from complicated peptide mixtures is an essential step for mass spectrometry-based phosphoproteomic studies to reduce sample complexity and ionization suppression effects. Typical methods for enriching phosphopeptides include immobilized metal affinity chromatography (IMAC) or titanium dioxide (TiO2) beads, which have selective affinity for phosphopeptides. In this study, the IMAC enrichment method was compared with the TiO2 enrichment method, using a multi-step enrichment strategy from whole cell lysate, to evaluate their abilities to enrich for different types of phosphopeptides. The peptide-to-bead ratios were optimized for both IMAC and TiO2 beads. Both IMAC and TiO2 enrichments were performed for three rounds to enable the maximum extraction of phosphopeptides from the whole cell lysates. The phosphopeptides that are unique to IMAC enrichment, unique to TiO2 enrichment, and identified with both IMAC and TiO2 enrichment were analyzed for their characteristics. Both IMAC and TiO2 enriched similar amounts of phosphopeptides with comparable enrichment efficiency. However, phosphopeptides that are unique to IMAC enrichment showed a higher percentage of multi-phosphopeptides, as well as a higher percentage of longer, basic, and hydrophilic phosphopeptides. Also, the IMAC and TiO2 procedures clearly enriched phosphopeptides with different motifs. Finally, further enriching with two rounds of TiO2 from the supernatant after IMAC enrichment, or further enriching with two rounds of IMAC from the supernatant after TiO2 enrichment, does not fully recover the phosphopeptides that are not identified with the corresponding multi-step enrichment. PMID:26237447

  7. A meteo-hydrological prediction system based on a multi-model approach for precipitation forecasting

    Directory of Open Access Journals (Sweden)

    S. Davolio

    2008-02-01

    Full Text Available The precipitation forecasted by a numerical weather prediction model, even at high resolution, suffers from errors which can be considerable at the scales of interest for hydrological purposes. In the present study, a fraction of the uncertainty related to meteorological prediction is taken into account by implementing a multi-model forecasting approach, aimed at providing multiple precipitation scenarios driving the same hydrological model. Therefore, the estimate of the uncertainty associated with the quantitative precipitation forecast (QPF), conveyed by the multi-model ensemble, can be exploited by the hydrological model, propagating the error into the hydrological forecast.

    The proposed meteo-hydrological forecasting system is implemented and tested in a real-time configuration for several episodes of intense precipitation affecting the Reno river basin, a medium-sized basin located in northern Italy (Apennines. These episodes are associated with flood events of different intensity and are representative of different meteorological configurations responsible for severe weather affecting northern Apennines.

    The simulation results show that the coupled system is promising in the prediction of discharge peaks (both in terms of amount and timing) for warning purposes. The ensemble hydrological forecasts provide a range of possible flood scenarios that proved to be useful for the support of civil protection authorities in their decisions.

  8. Medium-range reference evapotranspiration forecasts for the contiguous United States based on multi-model numerical weather predictions

    Science.gov (United States)

    Medina, Hanoi; Tian, Di; Srivastava, Puneet; Pelosi, Anna; Chirico, Giovanni B.

    2018-07-01

    Reference evapotranspiration (ET0) plays a fundamental role in agronomic, forestry, and water resources management. Estimating and forecasting ET0 have long been recognized as a major challenge for researchers and practitioners in these communities. This work explored the potential of multiple leading numerical weather predictions (NWPs) for estimating and forecasting summer ET0 at 101 U.S. Regional Climate Reference Network stations over nine climate regions across the contiguous United States (CONUS). Three leading global NWP model forecasts from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset were used in this study: the single-model ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (EC), the National Centers for Environmental Prediction Global Forecast System (NCEP), and the United Kingdom Meteorological Office (MO), as well as multi-model ensemble forecasts built from combinations of these NWP models. A regression calibration was employed to bias correct the ET0 forecasts. The impact of individual forecast variables on ET0 forecasts was also evaluated. The results showed that the EC forecasts provided the least error and the highest skill and reliability, followed by the MO and NCEP forecasts. The multi-model ensembles constructed from the combination of EC and MO forecasts performed slightly better than the single-model EC forecasts. The regression process greatly improved ET0 forecast performance, particularly for regions with stations near the coast or with complex orography. The performance of the EC forecasts was only slightly influenced by the number of ensemble members, particularly at short lead times. Even with fewer ensemble members, EC still performed better than the other two NWPs. Errors in the radiation forecasts, followed by those in the wind, had the most detrimental effects on ET0 forecast performance.
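    The regression calibration described in this record amounts to fitting observed values against raw forecasts on a training window and then applying the fitted line to new forecasts. A minimal sketch under that assumption, with purely synthetic numbers (not data from the study):

```python
# Minimal sketch of a regression-style bias correction: fit observations
# against raw NWP-derived ET0 forecasts, then correct new forecasts with
# the fitted line. All numbers are synthetic illustrations.

def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Raw forecasts that systematically overestimate the observed ET0.
raw = [3.0, 4.2, 5.1, 6.0, 4.8, 5.5]
obs = [2.5, 3.5, 4.3, 5.0, 4.0, 4.6]

a, b = fit_linear(raw, obs)
corrected = [a + b * f for f in raw]

# The corrected forecasts should track the observations more closely.
mae_raw = sum(abs(f - o) for f, o in zip(raw, obs)) / len(obs)
mae_cor = sum(abs(c - o) for c, o in zip(corrected, obs)) / len(obs)
print(mae_raw, mae_cor)
```

    In practice the fit would be done on a held-out training period per station and lead time, not on the evaluation data itself.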

  9. Forecasting electricity market pricing using artificial neural networks

    International Nuclear Information System (INIS)

    Pao, Hsiao-Tien

    2007-01-01

    Electricity price forecasting is extremely important for all market players, in particular for generating companies: in the short term, they must set up bids for the spot market; in the medium term, they have to define contract policies; and in the long term, they must define their expansion plans. For forecasting long-term electricity market pricing, in order to avoid excessive round-off and prediction errors, this paper proposes a new artificial neural network (ANN) with a single-output-node structure using a direct forecasting approach. The potential of ANNs is investigated by employing a rolling cross-validation scheme. Out-of-sample performance evaluated with three criteria across five forecasting horizons shows that the proposed ANNs are a more robust multi-step-ahead forecasting method than autoregressive error models. Moreover, ANN predictions are quite accurate even when the length of the forecast horizon is relatively short or long.
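    The "direct" multi-step strategy mentioned here trains one single-output model per horizon h, mapping current inputs to y[t+h], so forecast errors are not fed back into later steps as in recursive forecasting. A sketch with a one-lag linear model standing in for the ANN, on a synthetic upward-trending series:

```python
# Direct multi-step forecasting sketch: one single-output model per horizon.
# A one-lag linear least-squares model stands in for the paper's ANN.

def fit_linear(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

series = [10.0, 10.5, 11.2, 11.8, 12.6, 13.1, 13.9, 14.5, 15.2, 15.8]

models = {}
for h in (1, 2, 3):
    inputs = series[:-h]      # x_t
    targets = series[h:]      # y_{t+h}: each horizon gets its own model
    models[h] = fit_linear(inputs, targets)

last = series[-1]
forecasts = {h: a + b * last for h, (a, b) in models.items()}
print(forecasts)              # one direct forecast per horizon
```

    A recursive model would instead iterate the h=1 model three times, compounding its errors; the direct scheme avoids that at the cost of training one model per horizon.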

  10. Combining Step Gradients and Linear Gradients in Density.

    Science.gov (United States)

    Kumar, Ashok A; Walz, Jenna A; Gonidec, Mathieu; Mace, Charles R; Whitesides, George M

    2015-06-16

    Combining aqueous multiphase systems (AMPS) and magnetic levitation (MagLev) provides a method to produce hybrid gradients in apparent density. AMPS—solutions of different polymers, salts, or surfactants that spontaneously separate into immiscible but predominantly aqueous phases—offer thermodynamically stable steps in density that can be tuned by the concentration of solutes. MagLev—the levitation of diamagnetic objects in a paramagnetic fluid within a magnetic field gradient—can be arranged to provide a near-linear gradient in effective density where the height of a levitating object above the surface of the magnet corresponds to its density; the strength of the gradient in effective density can be tuned by the choice of paramagnetic salt and its concentrations and by the strength and gradient in the magnetic field. Including paramagnetic salts (e.g., MnSO4 or MnCl2) in AMPS, and placing them in a magnetic field gradient, enables their use as media for MagLev. The potential to create large steps in density with AMPS allows separations of objects across a range of densities. The gradients produced by MagLev provide resolution over a continuous range of densities. By combining these approaches, mixtures of objects with large differences in density can be separated and analyzed simultaneously. Using MagLev to add an effective gradient in density also enables tuning the range of densities captured at an interface of an AMPS by simply changing the position of the container in the magnetic field. Further, by creating AMPS in which phases have different concentrations of paramagnetic ions, the phases can provide different resolutions in density. These results suggest that combining steps in density with gradients in density can enable new classes of separations based on density.

  11. Deformation dependent TUL multi-step direct model

    International Nuclear Information System (INIS)

    Wienke, H.; Capote, R.; Herman, M.; Sin, M.

    2008-01-01

    The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, 'deformed' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the 'spherical' MSD calculations and JEFF-3.1 and JENDL-3.3 evaluations. (authors)

  12. The Research of Regression Method for Forecasting Monthly Electricity Sales Considering Coupled Multi-factor

    Science.gov (United States)

    Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui

    2018-01-01

    Monthly electricity sales forecasting is fundamental to ensuring the secure operation of the power system. This paper presents a monthly electricity sales forecasting method which comprehensively considers the coupled multi-factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.

  13. Supervised pre-processing approaches in multiple class variables classification for fish recruitment forecasting

    KAUST Repository

    Fernandes, José Antonio

    2013-02-01

    A multi-species approach to fisheries management requires taking into account the interactions between species in order to improve recruitment forecasting of the fish species. Recent advances in Bayesian networks allow models with several interrelated variables to be learned and forecasted simultaneously. These models are known as multi-dimensional Bayesian network classifiers (MDBNs). Pre-processing steps are critical for the posterior learning of the model in these kinds of domains. Therefore, in the present study, a set of 'state-of-the-art' uni-dimensional pre-processing methods, within the categories of missing data imputation, feature discretization and feature subset selection, are adapted to be used with MDBNs. A framework that includes the proposed multi-dimensional supervised pre-processing methods, coupled with an MDBN classifier, is tested with synthetic datasets and the real domain of fish recruitment forecasting. The rate of correctly forecasting three fish species (anchovy, sardine and hake) simultaneously is doubled (from 17.3% to 29.5%) using the multi-dimensional approach, in comparison to mono-species models. The probability assessments also improve considerably, with the average error (estimated by means of the Brier score) reduced from 0.35 to 0.27. Finally, these results also surpass those obtained when forecasting the species in pairs. © 2012 Elsevier Ltd.
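    The Brier score used above to assess the probability forecasts is simply the mean squared difference between predicted probabilities and the 0/1 outcomes. A minimal sketch with made-up values:

```python
# Brier score: mean squared error between forecast probabilities and
# binary outcomes. Lower is better; 0 is a perfect forecast.

def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# e.g. forecast probabilities of "high recruitment" vs. what happened
probs    = [0.9, 0.2, 0.7, 0.4]
outcomes = [1,   0,   1,   1]
print(brier_score(probs, outcomes))  # → 0.125
```

    For multi-dimensional classifiers the score is typically averaged over the class variables (one per species), which is how a single number like 0.27 summarizes the joint forecast.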

  14. AN EVALUATION OF POINT AND DENSITY FORECASTS FOR SELECTED EU FARM GATE MILK PRICES

    Directory of Open Access Journals (Sweden)

    Dennis Bergmann

    2018-01-01

    Fundamental changes to the Common Agricultural Policy (CAP) have led to greater market orientation, which in turn has resulted in sharply increased variability of EU farm gate milk prices and thus of farmers' income. In this market environment, reliable forecasts of farm gate milk prices are extremely important, as farmers can make improved decisions with regard to cash flow management and budget preparation. In addition, these forecasts may be used in setting fixed-price contracts between dairy farmers and processors, thus providing certainty and reducing risk. In this study, both point and density forecasts from various time series models for farm gate milk prices in Germany, Ireland and for an average EU price series are evaluated using a rolling window framework. Additionally, forecasts of the individual models are combined using different combination schemes. The results of the out-of-sample evaluation show that ARIMA-type models perform well on short forecast horizons (1 to 3 months), while the structural time series approach performs well on longer forecast horizons (12 months). Finally, combining the individual forecasts of different models significantly improves the forecast performance for all forecast horizons.
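    The simplest of the combination schemes evaluated in studies like this is the equally-weighted average of the individual models' point forecasts, which often beats each member when their biases differ. A sketch with illustrative numbers (not milk-price data):

```python
# Equal-weight forecast combination: average several models' forecasts
# period by period, then compare RMSE against the individual models.

def combine(forecast_lists):
    return [sum(fs) / len(fs) for fs in zip(*forecast_lists)]

def rmse(f, y):
    return (sum((fi - yi) ** 2 for fi, yi in zip(f, y)) / len(y)) ** 0.5

actual  = [30.0, 31.0, 29.5, 32.0]
model_a = [31.0, 32.5, 30.5, 33.5]   # biased high
model_b = [28.5, 29.8, 28.0, 30.8]   # biased low

combined = combine([model_a, model_b])
print(rmse(model_a, actual), rmse(model_b, actual), rmse(combined, actual))
```

    More elaborate schemes weight members by their recent out-of-sample performance within the rolling window, but the averaging step itself looks the same.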

  15. DEFORMATION DEPENDENT TUL MULTI-STEP DIRECT MODEL

    International Nuclear Information System (INIS)

    WIENKE, H.; CAPOTE, R.; HERMAN, M.; SIN, M.

    2007-01-01

    The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended in order to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, 'deformed' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the 'spherical' MSD calculations and JEFF-3.1 and JENDL-3.3 evaluations

  16. A Multi-Classification Method of Improved SVM-based Information Fusion for Traffic Parameters Forecasting

    Directory of Open Access Journals (Sweden)

    Hongzhuan Zhao

    2016-04-01

    With the enrichment of perception methods, the modern transportation system has many physical objects whose states are influenced by numerous information factors, making it a typical Cyber-Physical System (CPS). Thus, traffic information is generally multi-sourced, heterogeneous and hierarchical. Existing research results show that accurately classifying multi-sourced traffic information in the process of information fusion can achieve better parameter forecasting performance. To solve the problem of accurate traffic information classification, this paper analyses the characteristics of multi-sourced traffic information and uses a redefined binary tree to overcome the shortcomings of the original Support Vector Machine (SVM) classification in information fusion, proposing a multi-classification method using an improved SVM in information fusion for traffic parameter forecasting. An experiment was conducted to examine the performance of the proposed scheme, and the results reveal that the method achieves more accurate and practical outcomes.

  17. Multi step FRET among three laser dyes Pyrene, Acriflavine and Rhodamine B

    International Nuclear Information System (INIS)

    Saha, Jaba; Dey, Dibyendu; Roy, Arpan Datta; Bhattacharjee, D.; Hussain, Syed Arshad

    2016-01-01

    A Fluorescence Resonance Energy Transfer (FRET) system using three dyes has been demonstrated. It has been observed that multi-step energy transfer occurs from Pyrene to Rhodamine B via Acriflavine. Here Acriflavine acts as an antenna to receive energy from Pyrene and transfer it to Rhodamine B. This multi-step FRET system is advantageous compared to conventional FRET, as it can be used to study molecular-level interactions beyond the conventional FRET distance (1–10 nm) as well as multi-branched macromolecules. The introduction of clay enhances the FRET efficiencies among the dye pairs, which is an advantage that makes the multi-step system more useful. A similar approach can be used to increase FRET efficiencies with other dyes. - Highlights: • Multi-step FRET occurred from Pyrene (Py) to Rhodamine B (RhB) via Acriflavine (Acf). • Acf acts as an antenna to receive energy from Py and to transfer energy to RhB. • Multi-step FRET can be used to study molecular level interaction beyond 1–10 nm. • Incorporation of nanoclay laponite enhances the energy transfer efficiency.
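    The range advantage of a relay can be seen from the standard single-step FRET efficiency formula E = 1/(1 + (r/R0)^6). Treating the two hops as independent (an idealization) and assuming a typical Förster radius of R0 = 5 nm (not a value from this paper), a two-hop relay still transfers appreciably where a single long hop is nearly dead:

```python
# Single-step FRET efficiency falls off as the sixth power of distance,
# so a 10 nm direct hop is almost dead at R0 = 5 nm, while two 5 nm hops
# (idealized as independent, overall efficiency = product) still work.
# R0 = 5 nm is an assumed typical Forster radius, not from this study.

def fret_efficiency(r, r0):
    return 1.0 / (1.0 + (r / r0) ** 6)

R0 = 5.0  # nm, assumed
direct  = fret_efficiency(10.0, R0)        # one 10 nm hop: 1/65
relayed = fret_efficiency(5.0, R0) ** 2    # two 5 nm hops: 0.5 * 0.5
print(direct, relayed)
```

    Real relay efficiencies also depend on spectral overlap and orientation factors for each donor-acceptor pair, so the product rule is only a first approximation.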

  18. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-01-01

    KF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same

  19. An application of ensemble/multi model approach for wind power production forecasting

    Science.gov (United States)

    Alessandrini, S.; Pinson, P.; Hagedorn, R.; Decimi, G.; Sperati, S.

    2011-02-01

    Three-day-ahead wind power forecasts are becoming ever more useful and important for easing grid integration and energy price trading as wind power penetration increases. The accuracy of this forecast is therefore one of the most important requirements for a successful application. The wind power forecast applied in this study is based on meteorological models that provide the 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological models. For this purpose, a Neural Network (NN) has been trained to link the forecasted meteorological data directly to the power data. One wind farm, located in a mountain area in the south of Italy (Sicily), has been examined. First, we compare the performance of a prediction based on meteorological data coming from a single model with that obtained by a combination of models (RAMS, ECMWF deterministic, LAMI). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error (normalized by nominal power) by at least 1% compared to the single-model approach. Finally, we have focused on the possibility of using the ensemble prediction system (EPS by ECMWF) to estimate the accuracy of the hourly, three-day-ahead power forecast. Contingency diagrams between the RMSE of the deterministic power forecast and the spread of the ensemble members' wind forecasts have been produced. From this first analysis it seems that the ensemble spread could be used as an indicator of the forecast's accuracy, at least for the first three days ahead.

  20. Improved perovskite phototransistor prepared using multi-step annealing method

    Science.gov (United States)

    Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan

    2018-02-01

    Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films often suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, restricting their device performance and application potential. Here we demonstrate a straightforward strategy based on a multi-step annealing process to improve the performance of a perovskite photodetector. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of the perovskite, which in turn determine the device properties of the phototransistor. Perovskite films treated with the multi-step annealing method tend to be highly uniform, well-crystallized and of high surface coverage, and exhibit stronger ultraviolet-visible absorption and photoluminescence compared to perovskites prepared by the conventional one-step annealing process. The field-effect mobility of the perovskite photodetector treated by the one-step direct annealing method is 0.121 (0.062) cm2 V-1 s-1 for holes (electrons), which increases to 1.01 (0.54) cm2 V-1 s-1 for the device treated with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. In general, this work focuses on the influence of annealing methods on the perovskite phototransistor rather than on finding its best parameters. These findings show that multi-step annealing is a feasible way to prepare high-performance perovskite-based photodetectors.

  1. Application Of Multi-grid Method On China Seas' Temperature Forecast

    Science.gov (United States)

    Li, W.; Xie, Y.; He, Z.; Liu, K.; Han, G.; Ma, J.; Li, D.

    2006-12-01

    Correlation scales have been used in the traditional scheme of 3-dimensional variational (3D-Var) data assimilation to estimate the background error covariance for the numerical forecast and reanalysis of the atmosphere and ocean for decades. However, this scheme still has some drawbacks. First, the correlation scales are difficult to determine accurately. Second, the positive definiteness of the first-guess error covariance matrix cannot be guaranteed unless the correlation scales are sufficiently small. Xie et al. (2005) indicated that a traditional 3D-Var corrects errors only at certain wavelengths and that its accuracy depends on the accuracy of the first-guess covariance. In general, short-wavelength errors cannot be well corrected until the long-wavelength ones are, and an inaccurate first-guess covariance may mistake long-wave errors for short-wave ones and result in an erroneous analysis. For the purpose of quickly minimizing the errors of long and short waves successively, a new 3D-Var data assimilation scheme, called the multi-grid data assimilation scheme, is proposed in this paper. By assimilating shipboard SST and temperature profile data into a numerical model of the China Seas, we applied this scheme in a two-month data assimilation and forecast experiment, which ended in a favorable result. Compared with the traditional 3D-Var scheme, the new scheme has higher forecast accuracy and a lower forecast Root-Mean-Square (RMS) error. Furthermore, this scheme was applied to assimilate shipboard SST, AVHRR Pathfinder Version 5.0 SST and temperature profiles at the same time, and a ten-month forecast experiment on the sea temperature of the China Seas was carried out, in which a successful forecast result was obtained. In particular, the new scheme demonstrated great numerical efficiency in these analyses.
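    The coarse-to-fine idea can be illustrated with a toy one-dimensional example: a first pass on a coarse grid removes the long-wavelength misfit, and a second pass on the fine grid absorbs the remaining short-wavelength residual. This is only a schematic of the sequence of scales, not the operational 3D-Var minimization:

```python
# Toy multi-grid analysis: correct long waves with coarse (block-averaged)
# increments first, then treat the short-wave residual on the fine grid.
# Synthetic pseudo-observations; purely schematic.

obs = [2.0, 2.2, 1.9, 2.1, 5.0, 5.1, 4.9, 5.2]   # two regimes + small noise
analysis = [0.0] * len(obs)                       # first guess (background)

def rms_error(field):
    return (sum((o - a) ** 2 for o, a in zip(obs, field)) / len(obs)) ** 0.5

err_background = rms_error(analysis)

# Pass 1 (coarse grid): one increment per block of 4 points -> long waves.
block = 4
for start in range(0, len(obs), block):
    inc = sum(obs[j] - analysis[j] for j in range(start, start + block)) / block
    for j in range(start, start + block):
        analysis[j] += inc
err_coarse = rms_error(analysis)

# Pass 2 (fine grid): damped pointwise increments absorb the short waves.
analysis = [a + 0.8 * (o - a) for a, o in zip(analysis, obs)]
err_fine = rms_error(analysis)

print(err_background, err_coarse, err_fine)  # error shrinks at each pass
```

    In the real scheme each pass is a full variational minimization on its grid level, but the ordering (long waves first, then short) is the same.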

  2. Multi-step wrought processing of TiAl-based alloys

    International Nuclear Information System (INIS)

    Fuchs, G.E.

    1997-04-01

    Wrought processing will likely be needed for fabrication of a variety of TiAl-based alloy structural components. Laboratory and development work has usually relied on one-step forging to produce test material. Attempts to scale up TiAl-based alloy processing have indicated that multi-step wrought processing is necessary. The purpose of this study was to examine potential multi-step processing routes, such as two-step isothermal forging and extrusion + isothermal forging. The effects of processing (I/M versus P/M), intermediate recrystallization heat treatments and processing route on the tensile and creep properties of Ti-48Al-2Nb-2Cr alloys were examined. The results of the testing were then compared to samples from the same heats of material processed by one-step routes. Finally, by evaluating the effect of processing on microstructure and properties, optimized and potentially lower-cost processing routes could be identified

  3. Forecasting Model for IPTV Service in Korea Using Bootstrap Ridge Regression Analysis

    Science.gov (United States)

    Lee, Byoung Chul; Kee, Seho; Kim, Jae Bum; Kim, Yun Bae

    The telecom firms in Korea are taking new steps to prepare for the next generation of convergence services, IPTV. In this paper we describe our analysis of effective methods for forecasting demand for IPTV broadcasting. We examined three scenarios based on different aspects of the potential IPTV market and compared the results. The forecasting method used in this paper is a multi-generation substitution model with bootstrap ridge regression analysis.

  4. Daily rainfall forecasting for one year in a single run using Singular Spectrum Analysis

    Science.gov (United States)

    Unnikrishnan, Poornima; Jothiprakash, V.

    2018-06-01

    Effective modelling and prediction of rainfall at smaller time steps is reported to be very difficult owing to its highly erratic nature. An accurate forecast of daily rainfall for a longer duration (multiple time steps) can be exceptionally helpful in the efficient planning and management of water resources systems. Identification of the inherent patterns in a rainfall time series is also important for an effective water resources planning and management system. In the present study, Singular Spectrum Analysis (SSA) is utilized to forecast the daily rainfall time series pertaining to the Koyna watershed in Maharashtra, India, for 365 days, after extracting the various components of the rainfall time series such as trend, periodic component, noise and cyclic component. In order to forecast the time series for a longer time step (365 days, one window length), the signal and noise components of the time series are forecasted separately and then added together. The results of the study show that the method of SSA could extract the various components of the time series effectively and could also forecast the daily rainfall time series for a longer duration, such as one year, in a single run with reasonable accuracy.
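    The decomposition step of SSA embeds the series in a Hankel trajectory matrix, takes its SVD, and reconstructs components by anti-diagonal averaging. A minimal sketch of that step on a synthetic trend-plus-seasonal series, assuming numpy is available (a real application would group components into trend/periodic/noise and forecast each part, as the study does):

```python
# Minimal SSA reconstruction sketch: Hankel embedding -> SVD -> keep the
# leading components -> anti-diagonal averaging back to a series.
import numpy as np

def ssa_reconstruct(series, window, n_components):
    x = np.asarray(series, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Anti-diagonal averaging turns the rank-reduced matrix back into a series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            recon[i + j] += approx[i, j]
            counts[i + j] += 1
    return recon / counts

t = np.arange(100)
clean = 0.05 * t + np.sin(2 * np.pi * t / 12)      # trend + seasonality
noisy = clean + np.random.default_rng(0).normal(0, 0.3, 100)
smooth = ssa_reconstruct(noisy, window=24, n_components=4)
print(float(np.mean((smooth - clean) ** 2)))
```

    Forecasting then fits a model (e.g. a linear recurrence) to the reconstructed signal components and extrapolates them window by window.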

  5. Probabilistic Wind Power Forecasting with Hybrid Artificial Neural Networks

    DEFF Research Database (Denmark)

    Wan, Can; Song, Yonghua; Xu, Zhao

    2016-01-01

    probabilities of prediction errors provide an alternative yet effective solution. This article proposes a hybrid artificial neural network approach to generate prediction intervals of wind power. An extreme learning machine is applied to conduct point prediction of wind power and estimate model uncertainties … via a bootstrap technique. Subsequently, the maximum likelihood estimation method is employed to construct a distinct neural network to estimate the noise variance of forecasting results. The proposed approach has been tested on multi-step forecasting of high-resolution (10-min) wind power using … actual wind power data from Denmark. The numerical results demonstrate that the proposed hybrid artificial neural network approach is effective and efficient for probabilistic forecasting of wind power and has high potential in practical applications.
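    The bootstrap-interval idea in this record can be sketched very simply: make a point forecast, resample historical residuals, and read interval bounds off the resampled forecast distribution. The "model" below is a naive mean predictor on synthetic data, not the paper's extreme learning machine:

```python
# Residual-bootstrap prediction interval sketch: resample past residuals
# around a point forecast and take empirical quantiles as the interval.
import random

random.seed(1)
history = [5.0, 5.4, 4.8, 5.2, 5.6, 4.9, 5.1, 5.3]
point = sum(history) / len(history)                  # naive point forecast
residuals = [x - point for x in history]

samples = []
for _ in range(2000):
    samples.append(point + random.choice(residuals)) # residual bootstrap
samples.sort()

lower = samples[int(0.05 * len(samples))]            # ~90% interval
upper = samples[int(0.95 * len(samples))]
print(point, (lower, upper))
```

    The hybrid approach in the paper additionally models the noise variance with a second network, widening the intervals where the forecast is intrinsically more uncertain.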

  6. Multi-platform operational validation of the Western Mediterranean SOCIB forecasting system

    Science.gov (United States)

    Juza, Mélanie; Mourre, Baptiste; Renault, Lionel; Tintoré, Joaquin

    2014-05-01

    The development of science-based ocean forecasting systems at global, regional, and local scales can support a better management of the marine environment (maritime security, environmental and resources protection, maritime and commercial operations, tourism, ...). In this context, SOCIB (the Balearic Islands Coastal Observing and Forecasting System, www.socib.es) has developed an operational ocean forecasting system in the Western Mediterranean Sea (WMOP). WMOP uses a regional configuration of the Regional Ocean Modelling System (ROMS, Shchepetkin and McWilliams, 2005) nested in the larger scale Mediterranean Forecasting System (MFS) with a spatial resolution of 1.5-2km. WMOP aims at reproducing both the basin-scale ocean circulation and the mesoscale variability which is known to play a crucial role due to its strong interaction with the large scale circulation in this region. An operational validation system has been developed to systematically assess the model outputs at daily, monthly and seasonal time scales. Multi-platform observations are used for this validation, including satellite products (Sea Surface Temperature, Sea Level Anomaly), in situ measurements (from gliders, Argo floats, drifters and fixed moorings) and High-Frequency radar data. The validation procedures make it possible to monitor and certify the general realism of the daily production of the ocean forecasting system before its distribution to users. Additionally, different indicators (Sea Surface Temperature and Salinity, Eddy Kinetic Energy, Mixed Layer Depth, Heat Content, transports in key sections) are computed every day, both at the basin scale and in several sub-regions (Alboran Sea, Balearic Sea, Gulf of Lion). The daily forecasts, validation diagnostics and indicators from the operational model over the last months are available at www.socib.es.

  7. Least square regression based integrated multi-parameteric demand modeling for short term load forecasting

    International Nuclear Information System (INIS)

    Halepoto, I.A.; Uqaili, M.A.

    2014-01-01

    Nowadays, due to the power crisis, electricity demand forecasting is deemed an important area for socioeconomic development, and proper anticipation of the load is considered an essential step towards efficient power system operation, scheduling and planning. In this paper, we present STLF (Short Term Load Forecasting) using multiple regression techniques (i.e. linear, multiple linear, quadratic and exponential) by considering an hour-by-hour load model based on a specific targeted-day approach with temperature as a variant parameter. The proposed work forecasts future load demand in correlation with linear and non-linear parameters (i.e. temperature in our case) through different regression approaches. The overall load forecasting error is 2.98%, which is very much acceptable. Among the proposed regression techniques, the quadratic regression technique performs best because it can optimally fit a broad range of functions and data sets. The work proposed in this paper will pave the way to effectively forecast specific-day load with multiple variance factors in a way that maintains optimal accuracy. (author)
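    Quadratic regression of load on temperature captures the familiar U-shape (heating demand at low temperatures, cooling demand at high). A minimal sketch assuming numpy is available, with illustrative numbers rather than data from the paper:

```python
# Quadratic regression sketch: fit load = c0 + c1*T + c2*T^2 by least
# squares. The U-shaped data mimic heating + cooling demand vs temperature.
import numpy as np

temp = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
load = np.array([60.0, 52.0, 48.0, 50.0, 58.0, 72.0])

c2, c1, c0 = np.polyfit(temp, load, deg=2)   # highest-degree coeff first

def predict(t):
    return c0 + c1 * t + c2 * t ** 2

print(predict(22.0))   # interpolated load near the bottom of the U
```

    The positive curvature (c2 > 0) is what lets one quadratic fit both the heating and the cooling branch, which a single linear model cannot do.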

  8. Oil prices. Brownian motion or mean reversion? A study using a one year ahead density forecast criterion

    International Nuclear Information System (INIS)

    Meade, Nigel

    2010-01-01

    For oil-related investment appraisal, an accurate description of the evolving uncertainty in the oil price is essential. For example, when using real option theory to value an investment, a density function for the future price of oil is central to the option valuation. The literature on oil pricing offers two views. The arbitrage pricing theory literature for oil suggests geometric Brownian motion and mean reversion models. Empirically driven literature suggests ARMA-GARCH models. In addition to reflecting the volatility of the market, the density function of future prices should also incorporate the uncertainty due to price jumps, a common occurrence in the oil market. In this study, the accuracy of density forecasts for up to a year ahead is the major criterion for a comparison of a range of models of oil price behaviour, both those proposed in the literature and those following from data analysis. The Kullback-Leibler information criterion is used to measure the accuracy of density forecasts. Using two crude oil price series, Brent and West Texas Intermediate (WTI) representing the US market, we demonstrate that accurate density forecasts are achievable for up to nearly two years ahead using a mixture-of-two-Gaussians innovation process with GARCH and no mean reversion. (author)

  9. Oil prices. Brownian motion or mean reversion? A study using a one year ahead density forecast criterion

    Energy Technology Data Exchange (ETDEWEB)

    Meade, Nigel [Imperial College, Business School London (United Kingdom)

    2010-11-15

    For oil-related investment appraisal, an accurate description of the evolving uncertainty in the oil price is essential. For example, when using real option theory to value an investment, a density function for the future price of oil is central to the option valuation. The literature on oil pricing offers two views. The arbitrage pricing theory literature for oil suggests geometric Brownian motion and mean reversion models. Empirically driven literature suggests ARMA-GARCH models. In addition to reflecting the volatility of the market, the density function of future prices should also incorporate the uncertainty due to price jumps, a common occurrence in the oil market. In this study, the accuracy of density forecasts for up to a year ahead is the major criterion for a comparison of a range of models of oil price behaviour, both those proposed in the literature and those following from data analysis. The Kullback-Leibler information criterion is used to measure the accuracy of density forecasts. Using two crude oil price series, Brent and West Texas Intermediate (WTI) representing the US market, we demonstrate that accurate density forecasts are achievable for up to nearly two years ahead using a mixture-of-two-Gaussians innovation process with GARCH and no mean reversion. (author)
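    The Kullback-Leibler criterion used in records 8-9 measures how far a forecast density sits from the reference density. For two Gaussians it has a closed form, which makes the idea concrete (values below are illustrative, not from the study):

```python
# KL divergence between two Gaussians:
# KL( N(m1,s1^2) || N(m2,s2^2) ) = ln(s2/s1) + (s1^2 + (m1-m2)^2)/(2*s2^2) - 1/2
import math

def kl_gaussian(m1, s1, m2, s2):
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# A forecast density with the right mean but too-narrow spread, vs. one
# with the right spread but a shifted mean, both measured against N(0,1):
narrow  = kl_gaussian(0.0, 1.0, 0.0, 0.5)
shifted = kl_gaussian(0.0, 1.0, 1.0, 1.0)
print(narrow, shifted)   # both positive; 0 only for a perfect density match
```

    In the study the divergence is estimated from realized prices against each model's forecast density rather than between two known Gaussians, but the interpretation (smaller is better, zero is perfect) is the same.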

  10. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows.

  11. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows speedups in the order of five with respect to single time stepping are obtained.
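    The scheme in records 10-11 can be caricatured with two cells of a stiff decay equation dy/dt = -k*y: the cell with the larger decay rate would be unstable at the global step, so it takes several substeps and both cells meet at each synchronization level. A schematic sketch (explicit Euler, synthetic rates; not the authors' flow solver):

```python
# Multi-time-stepping caricature: the "fast" cell subcycles with DT/8,
# the "slow" cell takes the full DT, and both meet at each sync level.
import math

k_slow, k_fast = 1.0, 25.0   # decay rates; the fast cell limits the step
DT = 0.1                     # synchronization interval
SUBSTEPS = 8                 # fast cell advances with DT/8 internally

# A single big step is unstable for the fast cell: |1 - k_fast*DT| = 1.5 > 1.
y_bad = 1.0
for _ in range(10):
    y_bad += DT * (-k_fast * y_bad)

# Multi-time-stepping: each cell uses its own step size.
y_slow, y_fast = 1.0, 1.0
for _ in range(10):          # 10 synchronization levels, t in [0, 1]
    y_slow += DT * (-k_slow * y_slow)
    for _ in range(SUBSTEPS):
        y_fast += (DT / SUBSTEPS) * (-k_fast * y_fast)
    # here both cells sit at the same time level and could exchange data

print(abs(y_bad), y_slow, y_fast)   # y_bad blows up; the others decay
```

    The speedup the papers report comes from not forcing the whole domain down to the smallest stable step: only the stiff region pays for the subcycling.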

  12. A multi-scale relevance vector regression approach for daily urban water demand forecasting

    Science.gov (United States)

    Bai, Yun; Wang, Pu; Li, Chuan; Xie, Jingjing; Wang, Yin

    2014-09-01

    Water is one of the most important resources for economic and social developments. Daily water demand forecasting is an effective measure for scheduling urban water facilities. This work proposes a multi-scale relevance vector regression (MSRVR) approach to forecast daily urban water demand. The approach uses the stationary wavelet transform to decompose historical time series of daily water supplies into different scales. At each scale, the wavelet coefficients are used to train a machine-learning model using the relevance vector regression (RVR) method. The estimated coefficients of the RVR outputs for all of the scales are employed to reconstruct the forecasting result through the inverse wavelet transform. To better facilitate the MSRVR forecasting, the chaos features of the daily water supply series are analyzed to determine the input variables of the RVR model. In addition, an adaptive chaos particle swarm optimization algorithm is used to find the optimal combination of the RVR model parameters. The MSRVR approach is evaluated using real data collected from two waterworks and is compared with recently reported methods. The results show that the proposed MSRVR method can forecast daily urban water demand much more precisely in terms of the normalized root-mean-square error, correlation coefficient, and mean absolute percentage error criteria.
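    The multi-scale architecture in this record (decompose, model each scale, recombine) can be mirrored with much simpler parts: a centred moving average stands in for the stationary wavelet transform and tiny linear/persistence models stand in for the RVR, so the sketch only shows the structure, not the method's accuracy:

```python
# Multi-scale forecasting skeleton: split the series into smooth + detail,
# forecast each component separately, and sum the forecasts.

def moving_average(x, w):
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

# Synthetic series: linear growth plus a period-2 oscillation.
series = [100 + 2 * t + (3 if t % 2 == 0 else -3) for t in range(20)]

smooth = moving_average(series, 3)                 # scale 1: trend-like part
detail = [s - m for s, m in zip(series, smooth)]   # scale 2: oscillation

# Per-scale "models": linear trend for the smooth part, last same-phase
# value for the alternating detail part; forecasts are summed.
slope = (smooth[-1] - smooth[0]) / (len(smooth) - 1)
smooth_fc = smooth[-1] + slope
detail_fc = detail[-2]                             # detail alternates, period 2
forecast = smooth_fc + detail_fc

print(forecast, 100 + 2 * 20 + 3)                  # vs. the true next value
```

    The real method replaces each of these toy per-scale models with a relevance vector regression whose inputs and hyperparameters are chosen by the chaos analysis and particle swarm optimization described above.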

  13. Impact of a high density GPS network on the operational forecast

    Directory of Open Access Journals (Sweden)

    C. Faccani

    2005-01-01

    Full Text Available Global Positioning System Zenith Total Delay (GPS ZTD) can provide information about the water vapour in the atmosphere. Its assimilation into the analysis used to initialize a model can therefore improve the weather forecast, supplying the right amount of moisture and reducing the model spin-up. Over the last year, a high-density GPS network has been deployed in the Basilicata region (southern Italy) by the Italian Space Agency in the framework of a national project named MAGIC2, the Italian follow-on of the EC project MAGIC. Daily operational data assimilation experiments have been performed since December 2003. The results show that the assimilation of GPS ZTD improves the forecast, especially during the transition from winter to spring, even though a relatively coarse model resolution (9 km) is used.

  14. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    OpenAIRE

    Priska Arindya Purnama

    2017-01-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi-input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs grouped in a noise series (Nt). The multi-input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0)...

  15. Short-Term Wind Electric Power Forecasting Using a Novel Multi-Stage Intelligent Algorithm

    Directory of Open Access Journals (Sweden)

    Haoran Zhao

    2018-03-01

    Full Text Available As the most efficient renewable energy source for generating electricity in a modern electricity network, wind power has the potential to realize sustainable energy supply. However, owing to its random and intermittent nature, a high penetration of wind power into a power network demands accurate and effective wind energy prediction models. This study proposes a multi-stage intelligent algorithm for wind electric power prediction, which combines the Beveridge–Nelson (B-N) decomposition approach, the Least Squares Support Vector Machine (LSSVM), and a newly proposed intelligent optimization approach called the Grasshopper Optimization Algorithm (GOA). For data preprocessing, the B-N decomposition approach was employed to disintegrate the hourly wind electric power data into a deterministic trend, a cyclic term, and a random component. Then, the LSSVM optimized by the GOA (denoted GOA-LSSVM) was applied to forecast the future 168 h of the deterministic trend, the cyclic term, and the stochastic component, respectively. Finally, the future hourly wind electric power values can be obtained by multiplying the forecasted values of these three trends. Through comparing the forecasting performance of this proposed method with the LSSVM, the LSSVM optimized by the Fruit-fly Optimization Algorithm (FOA-LSSVM), and the LSSVM optimized by Particle Swarm Optimization (PSO-LSSVM), it is verified that the established multi-stage approach is superior to the other models and can effectively increase the precision of wind electric power prediction.

  16. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    Science.gov (United States)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a
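The censored predictive distribution described above (a point mass at the censoring threshold plus a continuous part) can be sketched as a CDF. This is an illustration only: the paper fits a truncated normal by CRPS minimization on Box-Cox transformed data, whereas the parameters below are simply assumed.

```python
# Sketch of a censored EMOS predictive CDF: below the censoring threshold all
# probability mass collapses to a point at the threshold; above it the CDF
# follows a normal. Parameters are illustrative, not CRPS-estimated.
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def censored_cdf(x, mu, sigma, threshold):
    """CDF of a normal left-censored at `threshold`."""
    if x < threshold:
        return 0.0                       # no mass below the censoring point
    return normal_cdf(x, mu, sigma)      # jump at the threshold = P(X <= c)

mu, sigma, c = 50.0, 20.0, 10.0          # toy discharge forecast, m^3/s
print(censored_cdf(5.0, mu, sigma, c))
print(round(censored_cdf(10.0, mu, sigma, c), 3))
```

The jump of the CDF at the threshold equals the probability that the (uncensored) normal falls at or below the censoring level, which is exactly the point mass in the mixture.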

  17. An application of ensemble/multi model approach for wind power production forecast.

    Science.gov (United States)

    Alessandrini, S.; Decimi, G.; Hagedorn, R.; Sperati, S.

    2010-09-01

    Wind power forecasts for the 3-day-ahead period are becoming increasingly useful and important for reducing the problems of grid integration and energy price trading caused by increasing wind power penetration. The accuracy of this forecast is therefore one of the most important requirements for a successful application. The wind power forecast is based on mesoscale meteorological models that provide the 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological models. The corrected wind data are then used as input to the wind farm power curve to obtain the power forecast. These computations require historical time series of measured wind data (from an anemometer located in the wind farm or on the nacelle) and power data in order to perform the statistical analysis on the past. For this purpose a Neural Network (NN) is trained on the past data and then applied in the forecast task. Since anemometer measurements are not always available in a wind farm, a different approach has also been adopted: training the NN to link the forecasted meteorological data directly to the power data. The normalized RMSE forecast error seems to be lower in most cases with the second approach. We examined two wind farms, one located in Denmark on flat terrain and one located in a mountainous area in the south of Italy (Sicily). In both cases we compare the performance of a prediction based on meteorological data coming from a single model with that obtained by using two or more models (RAMS, ECMWF deterministic, LAMI, HIRLAM). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error by at least 1% compared to the single-model approach. Moreover, the use of a deterministic global model (e.g. ECMWF deterministic

  18. A Spatiotemporal Multi-View-Based Learning Method for Short-Term Traffic Forecasting

    Directory of Open Access Journals (Sweden)

    Shifen Cheng

    2018-06-01

    Full Text Available Short-term traffic forecasting plays an important part in intelligent transportation systems. Spatiotemporal k-nearest neighbor models (ST-KNNs) have been widely adopted for short-term traffic forecasting, in which spatiotemporal matrices are constructed to describe traffic conditions. The performance of the models is closely related to the spatial dependencies, the temporal dependencies, and the interaction of spatiotemporal dependencies. However, these models use distance functions and correlation coefficients to identify spatial neighbors and measure the temporal interaction by only considering the temporal closeness of traffic, which results in ST-KNNs that cannot fully reflect the essential features of road traffic. This study proposes an improved spatiotemporal k-nearest neighbor model for short-term traffic forecasting by utilizing a multi-view learning algorithm named MVL-STKNN that fully considers the spatiotemporal dependencies of traffic data. First, the spatial neighbors for each road segment are automatically determined using cross-correlation under different temporal dependencies. Three spatiotemporal views are built on the constructed spatiotemporal closeness, periodic, and trend matrices to represent spatially heterogeneous traffic states. Second, a spatiotemporal weighting matrix is introduced into the ST-KNN model to recognize similar traffic patterns in the three spatiotemporal views. Finally, the results of traffic pattern recognition under these three spatiotemporal views are aggregated by using a neural network algorithm to describe the interaction of spatiotemporal dependencies. Extensive experiments were conducted using real vehicular-speed datasets collected on city roads and expressways. 
In comparison with baseline methods, the results show that the MVL-STKNN model greatly improves short-term traffic forecasting by lowering the mean absolute percentage error between 28.24% and 46.86% for the city road dataset and
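The k-nearest-neighbour forecasting core underlying ST-KNN models can be sketched in a few lines: find the k historical windows most similar to the current pattern and average what followed them. The multi-view weighting of MVL-STKNN is omitted here; the data are toy traffic speeds.

```python
# Sketch of the plain temporal-KNN core of ST-KNN-style forecasting: match
# the most recent window against all historical windows and average the
# successors of the k closest matches. Toy data; no spatiotemporal weighting.

def knn_forecast(series, window, k):
    """Forecast the next value from the k most similar historical windows."""
    target = series[-window:]
    candidates = []
    for i in range(len(series) - window):          # windows with a successor
        hist = series[i:i + window]
        dist = sum((a - b) ** 2 for a, b in zip(hist, target))
        candidates.append((dist, series[i + window]))
    candidates.sort(key=lambda t: t[0])
    return sum(nxt for _, nxt in candidates[:k]) / k

speeds = [60, 55, 40, 35, 40, 55, 60, 58, 42, 36, 41, 57, 61, 56, 41]
print(knn_forecast(speeds, window=3, k=2))
```

In the full model, the distance computation would run over weighted spatiotemporal matrices (closeness, periodic, and trend views) rather than a single temporal window.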

  19. Short-term wind power forecasting: probabilistic and space-time aspects

    DEFF Research Database (Denmark)

    Tastu, Julija

    work deals with the proposal and evaluation of new mathematical models and forecasting methods for short-term wind power forecasting, accounting for space-time dynamics based on geographically distributed information. Different forms of power predictions are considered, starting from traditional point...... into the corresponding models are analysed. As a final step, emphasis is placed on generating space-time trajectories: this calls for the prediction of joint multivariate predictive densities describing wind power generation at a number of distributed locations and for a number of successive lead times. In addition......Optimal integration of wind energy into power systems calls for high quality wind power predictions. State-of-the-art forecasting systems typically provide forecasts for every location individually, without taking into account information coming from the neighbouring territories. It is however...

  20. Comparing the Accuracy of Copula-Based Multivariate Density Forecasts in Selected Regions of Support

    NARCIS (Netherlands)

    C.G.H. Diks (Cees); V. Panchenko (Valentyn); O. Sokolinskiy (Oleg); D.J.C. van Dijk (Dick)

    2013-01-01

    textabstractThis paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample)

  1. Comparing the accuracy of copula-based multivariate density forecasts in selected regions of support

    NARCIS (Netherlands)

    Diks, C.; Panchenko, V.; Sokolinskiy, O.; van Dijk, D.

    2013-01-01

    This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample) conditional

  2. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik F. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center

    2017-05-31

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy, as measured by existing and new metrics which were themselves developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.

  3. Wavelet-based multi-resolution analysis and artificial neural networks for forecasting temperature and thermal power consumption

    OpenAIRE

    Eynard , Julien; Grieu , Stéphane; Polit , Monique

    2011-01-01

    15 pages; International audience; As part of the OptiEnR research project, the present paper deals with outdoor temperature and thermal power consumption forecasting. This project focuses on optimizing the functioning of a multi-energy district boiler (La Rochelle, west coast of France), adding to the plant a thermal storage unit and implementing a model-based predictive controller. The proposed short-term forecast method is based on the concept of time series and uses both a wavelet-based mu...

  4. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    Science.gov (United States)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage of discovering relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for Chungju Dam in South Korea were used for modeling and forecasting. In order to evaluate the performance of the models, both one-step-ahead and multi-step-ahead forecasting were applied. The root mean squared error and mean absolute error of the two models were compared.
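The one-step versus multi-step comparison above can be sketched with the simplest possible stand-in: a least-squares AR(1) applied recursively, scored by RMSE and MAE. The data and the AR(1) are illustrative; the study itself used seasonal ARIMA and Random Forest.

```python
# Sketch of recursive multi-step forecasting with a least-squares AR(1),
# plus the RMSE/MAE scores used for model comparison. Toy inflow data.
import math

def fit_ar1(series):
    x, y = series[:-1], series[1:]
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def forecast(series, phi, steps):
    """Recursive multi-step: feed each forecast back in as the next input."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

inflow = [0.8 ** i * 100 for i in range(24)]      # toy decaying monthly inflow
phi = fit_ar1(inflow[:-3])                        # fit on all but the last 3
preds = forecast(inflow[:-3], phi, steps=3)       # 3-step-ahead forecast
print(round(rmse(inflow[-3:], preds), 4), round(mae(inflow[-3:], preds), 4))
```

Because errors compound as each forecast is fed back in, multi-step-ahead scores are generally worse than one-step-ahead scores for the same model, which is exactly the effect the study's evaluation isolates.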

  5. Combining multi-objective optimization and bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Wohling, Thomas [NON LANL

    2008-01-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
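The BMA post-processing step above combines the member forecasts into a single predictive mean and variance; the variance decomposes into between-model spread plus within-model uncertainty, which is why the uncertainty ranges can widen with depth. A minimal sketch with assumed weights and member statistics (the study estimated these from Pareto-optimal calibrations of seven soil hydraulic models):

```python
# Sketch of Bayesian Model Averaging over an ensemble of model forecasts:
# predictive mean = weight-averaged member means; predictive variance =
# between-model spread + weight-averaged within-model variance. Toy numbers.

def bma_mean(weights, means):
    return sum(w * m for w, m in zip(weights, means))

def bma_variance(weights, means, variances):
    mu = bma_mean(weights, means)
    between = sum(w * (m - mu) ** 2 for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    return between + within

w = [0.5, 0.3, 0.2]                  # model weights (sum to 1)
m = [-120.0, -100.0, -90.0]          # member means: pressure head, cm
v = [25.0, 16.0, 36.0]               # member variances
print(bma_mean(w, m), bma_variance(w, m, v))
```

Note that the mixture variance always exceeds the weighted within-model variance whenever the members disagree, matching the paper's observation that ensemble disagreement drives the width of the BMA uncertainty ranges.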

  6. Application of the North American Multi-Model Ensemble to seasonal water supply forecasting in the Great Lakes basin through the use of the Great Lakes Seasonal Climate Forecast Tool

    Science.gov (United States)

    Gronewold, A.; Apps, D.; Fry, L. M.; Bolinger, R.

    2017-12-01

    The U.S. Army Corps of Engineers (USACE) contribution to the internationally coordinated 6-month forecast of Great Lakes water levels relies on several water supply models, including a regression model relating a coming month's water supply to past water supplies, previous months' precipitation and temperature, and forecasted precipitation and temperature. Probabilistic forecasts of precipitation and temperature depicted in the Climate Prediction Center's seasonal outlook maps are considered to be standard for use in operational forecasting for seasonal time horizons, and have provided the basis for computing a coming month's precipitation and temperature for use in the USACE water supply regression models. The CPC outlook maps are a useful forecast product offering insight into interpretation of climate models through the prognostic discussion and graphical forecasts. However, recent evolution of USACE forecast procedures to accommodate automated data transfer and manipulation offers a new opportunity for direct incorporation of ensemble climate forecast data into probabilistic outlooks of water supply using existing models that have previously been implemented in a deterministic fashion. We will present results from a study investigating the potential for applying data from the North American Multi-Model Ensemble to operational water supply forecasts. The use of NMME forecasts is facilitated by a new, publicly available, Great Lakes Seasonal Climate Forecast Tool that provides operational forecasts of monthly average temperatures and monthly total precipitation summarized for each lake basin.

  7. In core monitor having multi-step seals

    International Nuclear Information System (INIS)

    Kasai, Makoto; Ono, Susumu.

    1976-01-01

    Purpose: To completely prevent the sensor gas sealed in a pipe from leaking in an in-core neutron detector for use with a BWR-type reactor. Constitution: In an in-core monitor fabricated by disposing inner and outer electrodes in a housing, forming a layer of neutron conversion material on the outer electrode, filling an ionizing gas within the space between the layer and the inner electrode and, thereafter, attaching an insulation cable and an exhaust pipe respectively by way of insulators to both ends of the housing, the exhaust pipe is sealed in two steps through pressure bonding using a multi-stepped pincher tool having two pressure-bonding bits of a step shape, and the outer sealing portion is further welded. The sensor gas sealed in the pipe can thus be prevented from leaking during pressure bonding and welding. (Horiuchi, T.)

  8. In core monitor having multi-step seals

    Energy Technology Data Exchange (ETDEWEB)

    Kasai, M; Ono, S

    1976-12-09

    A method to completely prevent the sensor gas sealed in a pipe from leaking in an in-core neutron detector for use with a BWR-type reactor is described. In an in-core monitor fabricated by disposing inner and outer electrodes in a housing, forming a layer of neutron conversion material on the outer electrode, filling an ionizing gas within the space between the layer and the inner electrode and, thereafter, attaching an insulation cable and an exhaust pipe respectively by way of insulators to both ends of the housing, the exhaust pipe is sealed in two steps through pressure bonding using a multi-stepped pincher tool having two pressure-bonding bits of a step shape, and the outer sealing portion is further welded. The sensor gas sealed in the pipe can thus be prevented from leaking during pressure bonding and welding.

  9. Initial assessment of a multi-model approach to spring flood forecasting in Sweden

    Science.gov (United States)

    Olsson, J.; Uvo, C. B.; Foster, K.; Yang, W.

    2015-06-01

    Hydropower is a major energy source in Sweden, and proper reservoir management prior to the spring flood onset is crucial for optimal production. This requires useful forecasts of the accumulated discharge in the spring flood period (i.e. the spring-flood volume, SFV). Today's SFV forecasts are generated using a model-based climatological ensemble approach, where time series of precipitation and temperature from historical years are used to force a calibrated and initialised set-up of the HBV model. In this study, a number of new approaches to spring flood forecasting, reflecting the latest developments in analysis and modelling on seasonal time scales, are presented and evaluated. Three main approaches, represented by specific methods, are evaluated in SFV hindcasts for three main Swedish rivers over a 10-year period with lead times between 0 and 4 months. In the first approach, historically analogous years with respect to the climate in the period preceding the spring flood are identified and used to compose a reduced ensemble. In the second, seasonal meteorological ensemble forecasts are used to drive the HBV model over the spring flood period. In the third approach, statistical relationships between the SFV and the large-scale atmospheric circulation are used to build forecast models. None of the new approaches consistently outperforms the climatological ensemble approach, but for specific locations and lead times improvements of 20-30% are found. When combining all forecasts in a weighted multi-model approach, a mean improvement over all locations and lead times of nearly 10% was indicated. This demonstrates the potential of the approach, and further development and optimisation into an operational system is ongoing.
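The weighted multi-model combination in the last step can be sketched with an inverse-error weighting scheme. The weighting rule and numbers below are assumptions for illustration; the paper's actual weights are its own.

```python
# Sketch of weighted multi-model combination: weight each forecasting
# approach by the inverse of its historical error so that better methods
# count more. Illustrative weighting scheme and toy numbers.

def combine(forecasts, past_errors):
    weights = [1.0 / e for e in past_errors]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

sfv = [820.0, 760.0, 900.0]      # toy spring-flood volume forecasts, Mm^3
err = [40.0, 80.0, 160.0]        # toy historical mean absolute errors
print(round(combine(sfv, err), 1))
```

The combined value is pulled toward the historically most accurate method, which is how a multi-model mean can beat every individual member on average.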

  10. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    Full Text Available In this paper, an NDVI time series forecasting model has been developed based on a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space, which undergoes transitions from one state to another with certain probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques in building a Markov chain forecast model at each step. The continuous-state Markov chain model is described analytically. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
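A first-order Markov chain forecast can be sketched as: estimate a transition matrix from observed state-to-state counts, then predict the most probable next state. Note the paper uses a continuous-state chain; the binning below is a simplification for illustration, and the data are toy NDVI values.

```python
# Sketch of a first-order Markov chain forecast on a discretised NDVI series:
# count transitions, normalise rows into probabilities, forecast the most
# probable next state. Discretisation is a simplification of the paper's
# continuous-state chain; data are toy values.

def discretise(series, edges):
    return [sum(x > e for e in edges) for x in series]  # bin index per value

def transition_matrix(states, n_states):
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / max(1, sum(row)) for c in row] for row in counts]

ndvi = [0.2, 0.4, 0.6, 0.5, 0.3, 0.4, 0.6, 0.7, 0.5, 0.4]
states = discretise(ndvi, edges=[0.35, 0.55])   # 3 states: low / mid / high
P = transition_matrix(states, 3)
current = states[-1]
forecast_state = max(range(3), key=lambda j: P[current][j])
print(states, forecast_state)
```

Each row of P is a conditional distribution over the next state, so the same matrix also supports probabilistic forecasts rather than only the argmax shown here.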

  11. State updating of a distributed hydrological model with Ensemble Kalman Filtering: Effects of updating frequency and observation network density on forecast accuracy

    Science.gov (United States)

    Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.

    2012-12-01

    This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of
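The core EnKF update used in the study can be sketched for a single scalar state (the full system updates grid-based distributed HBV-96 states through a routing model). Everything below is a toy illustration of the analysis step only, with assumed numbers.

```python
# Sketch of one Ensemble Kalman Filter analysis step for a scalar state
# observed directly (think: one storage variable nudged by a discharge
# observation). Perturbed-observation variant; toy ensemble and observation.
import random

random.seed(42)
ens = [random.gauss(50.0, 5.0) for _ in range(100)]   # forecast ensemble
obs, obs_var = 58.0, 4.0                              # observation and its error

mean = sum(ens) / len(ens)
var = sum((x - mean) ** 2 for x in ens) / (len(ens) - 1)
K = var / (var + obs_var)                             # Kalman gain

# each member assimilates a noisy copy of the observation
analysis = [x + K * (obs + random.gauss(0.0, obs_var ** 0.5) - x) for x in ens]
a_mean = sum(analysis) / len(analysis)
print(round(mean, 2), round(a_mean, 2), round(K, 3))
```

The gain K balances ensemble spread against observation error, so assimilating more interior gauges (a larger observation vector) tightens the analysis more than simply updating more often, which is the effect the paper quantifies.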

  12. 48 CFR 15.202 - Advisory multi-step process.

    Science.gov (United States)

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Solicitation and Receipt of Proposals and Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204) that provides a general description of the scope or purpose of the acquisition and invites potential...

  13. A Multi-step and Multi-level approach for Computer Aided Molecular Design

    DEFF Research Database (Denmark)

    . The problem formulation step incorporates a knowledge base for the identification and setup of the design criteria. Candidate compounds are identified using a multi-level generate and test CAMD solution algorithm capable of designing molecules having a high level of molecular detail. A post solution step...... using an Integrated Computer Aided System (ICAS) for result analysis and verification is included in the methodology. Keywords: CAMD, separation processes, knowledge base, molecular design, solvent selection, substitution, group contribution, property prediction, ICAS Introduction The use of Computer...... Aided Molecular Design (CAMD) for the identification of compounds having specific physic...

  14. Multi-step rearrangement mechanism for acetyl cedrene to the hydrocarbon follower

    DEFF Research Database (Denmark)

    Paknikar, Shashikumar Keshav; Kamounah, Fadhil S.; Hansen, Poul Erik

    2017-01-01

    Conversion of acetyl cedrene (2) to its follower (3) using acetic anhydride and polyphosphoric acid involves a multi-step cationic molecular rearrangement, which is consistent with deuteriation and 1-13C labeling studies of acetyl cedrene. The key step involves cyclopropylcarbinyl cation-cyclopro...

  15. Angular momentum in multi-step photoionization

    International Nuclear Information System (INIS)

    Yoshida, Tadashi; Adachi, Hajime; Kuwako, Akira; Nittoh, Koichi; Araki, Yoshio; Watanabe, Takashi; Yoguchi, Itaru.

    1995-01-01

    The effect of the angular momenta on the multi-step laser-ionization efficiency was investigated numerically for cases with and without the hyperfine interactions. In either case the ionization efficiency proved to depend appreciably on the values of J in the excitation ladder. In this respect, we elaborated a simple and efficient method of determining J, based on the laser polarization dependence of the excitation rate. Application of this method to a couple of real excitation ladders proved its usefulness and reliability. (author)

  16. A novel hybrid ensemble learning paradigm for nuclear energy consumption forecasting

    International Nuclear Information System (INIS)

    Tang, Ling; Yu, Lean; Wang, Shuai; Li, Jianping; Wang, Shouyang

    2012-01-01

    Highlights: ► A hybrid ensemble learning paradigm integrating EEMD and LSSVR is proposed. ► The hybrid ensemble method is useful to predict time series with high volatility. ► The ensemble method can be used for both one-step and multi-step ahead forecasting. - Abstract: In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EEMD) and least squares support vector regression (LSSVR) is proposed for nuclear energy consumption forecasting, based on the principle of “decomposition and ensemble”. This hybrid ensemble learning paradigm is formulated specifically to address difficulties in modeling nuclear energy consumption, which has inherently high volatility, complexity and irregularity. In the proposed hybrid ensemble learning paradigm, EEMD, as a competitive decomposition method, is first applied to decompose original data of nuclear energy consumption (i.e. a difficult task) into a number of independent intrinsic mode functions (IMFs) of original data (i.e. some relatively easy subtasks). Then LSSVR, as a powerful forecasting tool, is implemented to predict all extracted IMFs independently. Finally, these predicted IMFs are aggregated into an ensemble result as final prediction, using another LSSVR. For illustration and verification purposes, the proposed learning paradigm is used to predict nuclear energy consumption in China. Empirical results demonstrate that the novel hybrid ensemble learning paradigm can outperform some other popular forecasting models in both level prediction and directional forecasting, indicating that it is a promising tool to predict complex time series with high volatility and irregularity.
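The distinctive final stage of the paradigm above is that the component forecasts are aggregated by a second learner rather than simply summed. A minimal sketch of that ensemble stage, with a least-squares linear combiner standing in for the second LSSVR and toy component forecasts:

```python
# Sketch of the "ensemble" stage of decomposition-and-ensemble forecasting:
# a second learner maps component forecasts to the final prediction. Here a
# least-squares linear combiner (2x2 normal equations) stands in for the
# paper's second LSSVR; components and data are illustrative.

def fit_weights(c1, c2, y):
    """Solve min ||w1*c1 + w2*c2 - y||^2 via the 2x2 normal equations."""
    s11 = sum(a * a for a in c1); s22 = sum(b * b for b in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    r1 = sum(a * t for a, t in zip(c1, y))
    r2 = sum(b * t for b, t in zip(c2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

trend = [10.0 + 0.5 * t for t in range(12)]           # component forecast 1
cycle = [3.0 * (-1) ** t for t in range(12)]          # component forecast 2
actual = [a + 0.9 * b for a, b in zip(trend, cycle)]  # truth discounts the cycle
w1, w2 = fit_weights(trend, cycle, actual)
final = w1 * trend[-1] + w2 * cycle[-1]
print(round(w1, 3), round(w2, 3), round(final, 3))
```

Learning the combination weights lets the ensemble stage down-weight components that the decomposition over- or under-estimates, instead of trusting the plain sum of sub-forecasts.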

  17. Impact of user influence on information multi-step communication in a micro-blog

    International Nuclear Information System (INIS)

    Wu Yue; Hu Yong; He Xiao-Hai; Deng Ken

    2014-01-01

    User influence is generally considered as one of the most critical factors that affect information cascading spreading. Based on this common assumption, this paper proposes a theoretical model to examine user influence on the information multi-step communication in a micro-blog. The multi-steps of information communication are divided into first-step and non-first-step, and user influence is classified into five dimensions. Actual data from the Sina micro-blog is collected to construct the model by means of an approach based on structural equations that uses the Partial Least Squares (PLS) technique. Our experimental results indicate that the dimensions of the number of fans and their authority significantly impact the information of first-step communication. Leader rank has a positive impact on both first-step and non-first-step communication. Moreover, global centrality and weight of friends are positively related to the information non-first-step communication, but authority is found to have much less relation to it

  18. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    erated recursively up to any step greater than one. For a nonlinear time series model, a point forecast for step one can be made easily, as in the linear case, but forecasts for steps greater than or equal to ..... London. Franses, P. H. (1998). Time Series Models for Business and Economic Forecasting. Cambridge University Press.

  19. Method of making stepped photographic density standards of radiographic photographs

    International Nuclear Information System (INIS)

    Borovin, I.V.; Kondina, M.A.

    1987-01-01

    In industrial radiography practice the need often arises for a prompt evaluation of the photographic density of an x-ray film. A method of making stepped photographic density standards for industrial radiography by contact printing from a negative is described. The method is intended for industrial radiation flaw detection laboratories that do not have specialized sensitometric equipment.

  20. Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model

    Science.gov (United States)

    Mukhopadhyay, S.; Arumugam, S.

    2017-12-01

    Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as they enable water managers to decide short-term releases (15-30 days), while holding water for seasonal needs (e.g., irrigation and municipal supply) and meeting the end-of-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at the S2S time scale. The hidden Markov model also captures both the spatial and temporal hierarchy in predictors that operate at the S2S time scale, with model parameters estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely a single-site model and a multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single-site model. For the multi-site model, we consider reservoirs in the upper Tennessee Valley. Streamflow forecasts are issued and updated continuously every day at the S2S time scale. We considered precipitation forecasts obtained from the NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts, along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of the reservoirs is also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. The skill of the proposed approach is tested using split-sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors. From the posterior distribution of the inflow forecasts, we also highlight different system behavior.

  1. Ultralow-density SiO2 aerogels prepared by a two-step sol-gel process

    International Nuclear Information System (INIS)

    Wang Jue; Li Qing; Shen Jun; Zhou Bin; Chen Lingyan; Jiang Weiyang

    1996-01-01

    Low-density SiO2 gels are prepared by a two-step sol-gel process from TEOS. The influence of various solution ratios on the gelation process is investigated. A comparative characterization of gels using different solvents, such as ethanol, acetone and methyl cyanide, is also given. Ultralow-density SiO2 aerogels with densities less than 10 kg/m3 are prepared by the CO2 supercritical drying technique. The structural difference between SiO2 aerogels prepared by the conventional single-step process and the two-step process is also presented.

  2. Simulation of self-focusing of laser beam through medium with multi-step photo-ionization

    International Nuclear Information System (INIS)

    Akaoka, Katsuaki; Wakaida, Ikuo; Arisawa, Takashi

    1995-01-01

    We built a computation code for the coupled nonlinear Maxwell-Density Matrix equations of multi-level atomic systems, including transverse and time-dependent variations. Numerical solutions for two-level atomic systems, shown as a function of laser detuning in Na and U, are in good agreement with experimental results. Applying this code to laser beam propagation through a medium with two-step photo-ionization, it is concluded that the group velocity at the spatial edge of a laser pulse is slower than that at the center, and that the self-focusing and temporal reshaping are more pronounced for the laser pulse used for the first excitation step than for the one used for ionization. (author)

  3. Real-time data processing and inflow forecasting

    International Nuclear Information System (INIS)

    Olason, T.; Lafreniere, M.

    1998-01-01

    One of the key inputs into the short-term scheduling of hydroelectric generation is inflow forecasting, which is needed for natural or unregulated inflows into various lakes, reservoirs and river sections. The forecast time step and time horizon are determined by the scheduling time step and horizon. Acres International Ltd. has developed the Vista Decision Support System (DSS), in which the time step is one hour and scheduling can be done up to two weeks into the future. This paper presents the basis of the operational flow-forecasting module of the Vista DSS software and its application to flow forecasting for 16 basins within Nova Scotia Power's hydroelectric system. Among the tasks performed by the software are the collection and treatment, in real time, of meteorological forecast data; the review and monitoring of hydro-meteorological data; the updating of the state variables in the module; and the review and adjustment of sub-watershed forecasts.

  4. Forecasting with Option-Implied Information

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Jacobs, Kris; Chang, Bo Young

    2013-01-01

    This chapter surveys the methods available for extracting information from option prices that can be used in forecasting. We consider option-implied volatilities, skewness, kurtosis, and densities. More generally, we discuss how any forecasting object that is a twice differentiable function of the future realization of the underlying risky asset price can utilize option-implied information in a well-defined manner. Going beyond the univariate option-implied density, we also consider results on option-implied covariance, correlation and beta forecasting, as well as the use of option-implied information in cross-sectional forecasting of equity returns. We discuss how option-implied information can be adjusted for risk premia to remove biases in forecasting regressions.
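
A standard way to extract the option-implied density mentioned above is the Breeden-Litzenberger relation, f(K) = e^{rT} ∂²C/∂K². The sketch below applies it, via finite differences, to Black-Scholes call prices with illustrative parameters; with market quotes one would difference observed prices instead.

```python
# Recover the risk-neutral density from call prices via the
# Breeden-Litzenberger relation: f(K) = exp(rT) * d2C/dK2.
# Call prices here come from the Black-Scholes formula (illustrative inputs).
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, vol, tau, rate=0.0):
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * tau) \
         / (vol * math.sqrt(tau))
    d2 = d1 - vol * math.sqrt(tau)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * tau) * norm_cdf(d2)

def implied_density(spot, vol, tau, strikes, rate=0.0):
    """Second finite difference of the call price curve in strike."""
    h = strikes[1] - strikes[0]
    dens = []
    for k in strikes[1:-1]:
        c_minus = bs_call(spot, k - h, vol, tau, rate)
        c_mid = bs_call(spot, k, vol, tau, rate)
        c_plus = bs_call(spot, k + h, vol, tau, rate)
        dens.append(math.exp(rate * tau) * (c_plus - 2 * c_mid + c_minus) / (h * h))
    return dens

strikes = [40 + 0.5 * i for i in range(421)]   # strikes 40 .. 250
density = implied_density(spot=100.0, vol=0.2, tau=1.0, strikes=strikes)
mass = sum(d * 0.5 for d in density)
print(round(mass, 4))   # close to 1: the recovered density integrates to unity
```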

  5. Efficient multi-scenario Model Predictive Control for water resources management with ensemble streamflow forecasts

    Science.gov (United States)

    Tian, Xin; Negenborn, Rudy R.; van Overloop, Peter-Jules; María Maestre, José; Sadowska, Anna; van de Giesen, Nick

    2017-11-01

    Model Predictive Control (MPC) is one of the most advanced real-time control techniques and has been widely applied to Water Resources Management (WRM). MPC can manage a water system in a holistic manner and has a flexible structure for incorporating specific elements, such as setpoints and constraints. Therefore, MPC has shown versatile performance in many branches of WRM. Nonetheless, with the deepening understanding of stochastic hydrology in recent studies, MPC also faces the challenge of how to cope with hydrological uncertainty in its decision-making process. A possible way to embed the uncertainty is to generate an Ensemble Forecast (EF) of hydrological variables, rather than a deterministic one. The combination of MPC and EF results in a more comprehensive approach: Multi-scenario MPC (MS-MPC). In this study, we first assess the model performance of MS-MPC, considering an ensemble streamflow forecast. Notably, computational inefficiency may be a critical obstacle that hinders the applicability of MS-MPC: as more scenarios are taken into account, the computational burden of solving the optimization problem in MS-MPC increases accordingly. To deal with this challenge, we propose the Adaptive Control Resolution (ACR) approach as a computationally efficient scheme that practically reduces the number of control variables in MS-MPC. In brief, the ACR approach uses a mixed-resolution control time step from the near future to the distant future. The ACR-MPC approach is tested on a real-world case study: an integrated flood control and navigation problem in the North Sea Canal of the Netherlands. The approach reduces the computation time by 18% or more in our case study, while its model performance remains close to that of conventional MPC.
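
The mixed-resolution control time step behind ACR can be illustrated with a small schedule generator. The doubling pattern and parameter names below are assumptions for illustration, not the paper's exact coarsening scheme.

```python
# Sketch of an Adaptive Control Resolution (ACR) step schedule:
# fine control time steps in the near future, coarser steps further out.
# The doubling pattern below is illustrative, not the paper's exact scheme.

def acr_schedule(horizon, fine_step=1, fine_span=12, growth=2, max_step=8):
    """Return step sizes that exactly cover `horizon` time units."""
    steps, t, step = [], 0, fine_step
    while t < horizon:
        if t >= fine_span and step < max_step:
            step = min(step * growth, max_step)   # coarsen beyond the near term
        step = min(step, horizon - t)             # do not overshoot the horizon
        steps.append(step)
        t += step
    return steps

uniform = [1] * 48                 # conventional MPC: one variable per step
adaptive = acr_schedule(horizon=48)
print(adaptive, len(adaptive), "control steps vs", len(uniform), "uniform")
```

Fewer control steps means fewer decision variables in each MS-MPC optimization, which is where the reported speed-up comes from.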

  6. Forecasting Hurricane Tracks Using a Complex Adaptive System

    National Research Council Canada - National Science Library

    Lear, Matthew R

    2005-01-01

    Forecasting hurricane tracks using a multi-model ensemble, formed by linearly combining the individual model forecasts, has greatly reduced the average forecast errors when compared to individual...

  7. Use of MLCM3 Software for Flash Flood Modeling and Forecasting

    Directory of Open Access Journals (Sweden)

    Inna Pivovarova

    2018-01-01

    Full Text Available Accurate and timely flash flood forecasting, especially in ungauged and poorly gauged basins, is one of the most important and challenging problems facing the international hydrological community. Under a changing climate and variable anthropogenic impact on river basins, and given the low density of the surface hydrometeorological network, flash flood forecasting based on "traditional" physically based, conceptual, or statistical hydrological models often becomes inefficient. Unfortunately, most river basins in Russia are poorly gauged or ungauged; besides, a lack of hydrogeological data is quite typical. However, the developing economy and population safety necessitate issuing warnings based on reliable forecasts. For this purpose, a new hydrological model, MLCM3 (Multi-Layer Conceptual Model, 3rd generation), has been developed at the Russian State Hydrometeorological University. The model showed good results in more than 50 tested basins.

  8. Multi-Annual Climate Predictions for Fisheries: An Assessment of Skill of Sea Surface Temperature Forecasts for Large Marine Ecosystems

    Directory of Open Access Journals (Sweden)

    Desiree Tommasi

    2017-06-01

    Full Text Available Decisions made by fishers and fisheries managers are informed by climate and fisheries observations that now often span more than 50 years. Multi-annual climate forecasts could further inform such decisions if they were skillful in predicting future conditions relative to the 50-year scope of past variability. We demonstrate that an existing multi-annual prediction system skillfully forecasts the probability of the next year, the next 1–3 years, and the next 1–10 years being warmer or cooler than the 50-year average at the surface in coastal ecosystems. Probabilistic forecasts of upper and lower sea surface temperature (SST) terciles over the next 3 or 10 years from the GFDL CM 2.1 10-member ensemble global prediction system showed significant improvements in skill over the use of a 50-year climatology for most Large Marine Ecosystems (LMEs) in the North Atlantic, the western Pacific, and Indian oceans. Through a comparison of the forecast skill of initialized and uninitialized hindcasts, we demonstrate that this skill is largely due to the predictable signature of radiative forcing changes over the 50-year timescale rather than prediction of evolving modes of climate variability. North Atlantic LMEs stood out as the only coastal regions where initialization significantly contributed to SST prediction skill at the 1 to 10 year scale.

  9. Photon Production through Multi-step Processes Important in Nuclear Fluorescence Experiments

    International Nuclear Information System (INIS)

    Hagmann, C; Pruet, J

    2006-01-01

    The authors present calculations describing the production of photons through multi-step processes occurring when a beam of gamma rays interacts with a macroscopic material. These processes involve the creation of energetic electrons through Compton scattering, photo-absorption and pair production, the subsequent scattering of these electrons, and the creation of energetic photons as these electrons are slowed through Bremsstrahlung emission. Unlike single Compton collisions, in which an energetic photon scattered through a large angle loses most of its energy, these multi-step processes result in a sizable flux of energetic photons traveling at large angles relative to the incident photon beam. These multi-step processes are also a key background in experiments that measure nuclear resonance fluorescence by shining photons on a thin foil and observing the spectrum of back-scattered photons. Effective cross sections describing the production of backscattered photons are presented in tabular form, allowing simple estimates of the backgrounds expected in a variety of experiments. Incident photons with energies between 0.5 MeV and 8 MeV are considered. These calculations of effective cross sections may be useful for those designing NRF experiments or systems that detect specific isotopes in well-shielded environments through observation of resonance fluorescence.

  10. Multivariate statistical analysis of a multi-step industrial processes

    DEFF Research Database (Denmark)

    Reinikainen, S.P.; Høskuldsson, Agnar

    2007-01-01

    Monitoring and quality control of industrial processes often produce information on how the data have been obtained. In batch processes, for instance, the process is carried out in stages; some process or control parameters are set at each stage. However, the obtained data might not be utilized efficiently, even if this information may reveal significant knowledge about process dynamics or ongoing phenomena. When studying the process data, it may be important to analyse the data in the light of the physical or time-wise development of each process step. In this paper, a unified approach to analyse multivariate multi-step processes, where results from each step are used to evaluate future results, is presented. The methods presented are based on Priority PLS Regression. The basic idea is to compute the weights in the regression analysis for given steps, but adjust all data by the resulting score vectors.

  11. A Complex Adaptive System Approach to Forecasting Hurricane Tracks

    National Research Council Canada - National Science Library

    Lear, Matthew R

    2005-01-01

    Forecasting hurricane tracks using a multi-model ensemble, formed by linearly combining the individual model forecasts, has greatly reduced the average forecast errors when compared to individual...

  12. Predictive Uncertainty Estimation in Water Demand Forecasting Using the Model Conditional Processor

    Directory of Open Access Journals (Sweden)

    Amos O. Anele

    2018-04-01

    Full Text Available In a previous paper, a number of potential models for short-term water demand (STWD) prediction were analysed to find the ones with the best fit. The results obtained in Anele et al. (2017) showed that hybrid models may be considered accurate and appropriate forecasting models for STWD prediction. However, such a best single-valued forecast does not guarantee reliable and robust decisions, which can be properly obtained via model uncertainty processors (MUPs). MUPs provide an estimate of the full predictive densities and not only the single-valued expected prediction. Amongst other MUPs, the purpose of this paper is to use the multi-variate version of the model conditional processor (MCP), proposed by Todini (2008), to demonstrate how the estimation of the predictive probability conditional on a number of relatively good predictive models may improve our knowledge, thus reducing the predictive uncertainty (PU) when forecasting into the unknown future. Through the MCP approach, the probability distribution of the future water demand can be assessed depending on the forecast provided by one or more deterministic forecasting models. Based on average weekly data of 168 h, the probability density of the future demand is built conditional on three models' predictions, namely the autoregressive-moving average (ARMA), feed-forward back propagation neural network (FFBP-NN) and a hybrid model (i.e., the combined forecast from ARMA and FFBP-NN). The results obtained show that MCP may be effectively used for real-time STWD prediction since it brings out the PU connected to its forecast, and such information could help water utilities estimate the risk connected to a decision.
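
In the normalised (Gaussian) space used by processors such as the MCP, conditioning the future observation on a model forecast reduces to the conditional of a multivariate normal. A bivariate sketch with illustrative moments (as if estimated from past forecast/observation pairs):

```python
# Bivariate sketch of the Model Conditional Processor idea: in the
# normalised space, (observation y, model forecast x) are jointly Gaussian,
# so the predictive density of y given x is the conditional normal.
# The moments below are illustrative, not estimated from real demand data.

def conditional_normal(mu_y, sd_y, mu_x, sd_x, rho, x_obs):
    """Mean and standard deviation of y | x for a bivariate normal."""
    mean = mu_y + rho * (sd_y / sd_x) * (x_obs - mu_x)
    sd = sd_y * (1.0 - rho * rho) ** 0.5   # spread shrinks by sqrt(1 - rho^2)
    return mean, sd

# Climatological spread (sd_y = 1) vs the sharpened predictive spread
# after conditioning on a well-correlated model forecast (rho = 0.9).
mean, sd = conditional_normal(mu_y=0.0, sd_y=1.0, mu_x=0.0, sd_x=1.0,
                              rho=0.9, x_obs=1.5)
print(mean, round(sd, 4))
```

The reduction of the predictive standard deviation below the climatological one is exactly the "reduced predictive uncertainty" the abstract refers to; the multi-model MCP generalizes this to conditioning on several forecasts at once.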

  13. Probabilistic forecasting of wind power at the minute time-scale with Markov-switching autoregressive models

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    2008-01-01

    Better modelling and forecasting of very short-term power fluctuations at large offshore wind farms may significantly enhance control and management strategies of their power output. The paper introduces a new methodology for modelling and forecasting such very short-term fluctuations. The proposed methodology is based on a Markov-switching autoregressive model with time-varying coefficients. An advantage of the method is that one can easily derive full predictive densities. The quality of this methodology is demonstrated on the test case of 2 large offshore wind farms in Denmark. The exercise consists of one-step-ahead forecasting of wind generation time series with a time resolution of 10 minutes. The quality of the introduced forecasting methodology and its interest for better understanding power fluctuations are finally discussed.
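
The one-step predictive density of a Markov-switching autoregressive model is a mixture over regimes, with weights tracked by the Hamilton filter. A minimal two-regime sketch with invented parameters (the paper's model additionally has time-varying coefficients, omitted here):

```python
# One-step predictive density for a two-regime Markov-switching AR(1),
# via the Hamilton filter. All parameters are illustrative, not estimated.
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Regime parameters: (AR coefficient, intercept, noise std dev)
REGIMES = [(0.9, 0.0, 0.1), (0.3, 0.5, 0.4)]
P = [[0.95, 0.05], [0.10, 0.90]]        # regime transition matrix

def hamilton_filter(series, prob0=(0.5, 0.5)):
    """Track filtered regime probabilities through the series."""
    probs = list(prob0)
    for t in range(1, len(series)):
        # predict the regime, then weight by each regime's likelihood of y[t]
        pred = [sum(probs[i] * P[i][j] for i in range(2)) for j in range(2)]
        lik = [normal_pdf(series[t], a * series[t - 1] + b, sd)
               for a, b, sd in REGIMES]
        joint = [pred[j] * lik[j] for j in range(2)]
        total = sum(joint)
        probs = [j / total for j in joint]
    return probs

def predictive_mean(series):
    """Mean of the one-step-ahead predictive mixture density."""
    probs = hamilton_filter(series)
    pred = [sum(probs[i] * P[i][j] for i in range(2)) for j in range(2)]
    return sum(pred[j] * (a * series[-1] + b)
               for j, (a, b, _sd) in enumerate(REGIMES))

series = [0.0, 0.05, 0.02, 0.8, 1.1, 1.3]
print(predictive_mean(series))
```

The full predictive density is the same mixture with each component a normal centred at `a * series[-1] + b`, which is why such models deliver densities and not just point forecasts.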

  14. Carbide induced reconstruction of monatomic steps on Ni(111) - A density functional study

    DEFF Research Database (Denmark)

    Andersson, Martin; Abild-Pedersen, Frank

    2007-01-01

    We present density functional calculations for carbon adsorption at the two types of monatomic steps on a Ni(111) surface. We show that it is thermodynamically favourable to make a carbon induced clock-type reconstruction at the close-packed step with a [111] step geometry, which creates fourfold...

  15. Skill of real-time operational forecasts with the APCC multi-model ensemble prediction system during the period 2008-2015

    Science.gov (United States)

    Min, Young-Mi; Kryjov, Vladimir N.; Oh, Sang Myeong; Lee, Hyun-Ju

    2017-12-01

    This paper assesses the real-time 1-month lead forecasts of 3-month (seasonal) mean temperature and precipitation, issued on a monthly basis by the Asia-Pacific Economic Cooperation Climate Center (APCC) for 2008-2015 (8 years, 96 forecasts), and thereby shows the current performance level of the APCC operational multi-model prediction system. The skill of the APCC forecasts depends strongly on season and region: it is higher for the tropics and boreal winter than for the extratropics and boreal summer, due to direct effects of, and remote teleconnections from, boundary forcings. There is a negative relationship between forecast skill and its interseasonal variability for both variables, and the forecast skill for precipitation is more seasonally and regionally dependent than that for temperature. The APCC operational probabilistic forecasts during this period show a cold bias (underforecasting of above-normal temperature and overforecasting of below-normal temperature), underestimating the long-term warming trend. A wet bias is evident for precipitation, particularly in the extratropical regions. The skill of both temperature and precipitation forecasts depends strongly upon the ENSO strength. In particular, the highest forecast skill, noted in the 2015/2016 boreal winter, is associated with the strong forcing of an extreme El Niño event, while the relatively low skill is associated with the transition and/or continuous ENSO-neutral phases of 2012-2014. As a result, the skill of the real-time forecasts for the boreal winter season is higher than that of the hindcast. On average, however, the level of forecast skill during the period 2008-2015 is similar to that of the hindcast.

  16. PEGASUS: a preequilibrium and multi-step evaporation code for neutron cross section calculation

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Tsuneo; Sugi, Teruo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Iijima, Shungo; Nishigori, Takeo

    1999-06-01

    The computer code PEGASUS was developed to calculate neutron-induced reaction cross sections on the basis of the closed-form exciton model of preequilibrium theory and the multi-step evaporation theory. The cross sections and emitted-particle spectra are calculated for the compound elastic scattering, (n,γ), (n,n′), (n,p), (n,α), (n,d), (n,t), (n,³He), (n,2n), (n,n′p), (n,n′α), (n,n′d), (n,n′t), (n,2p) and (n,3n) reactions. The double differential cross sections of emitted particles are also calculated. The calculated results are written to a magnetic disk in the ENDF format. Parameter files and/or systematics formulas are provided for level densities, mass excesses, radiation widths and inverse cross sections, so that the input data required by the code are minimal. (author)

  17. High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.

    Science.gov (United States)

    Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent

    2016-08-01

    Spiking neural networks (SNN) are the latest generation of neural networks, which try to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is more efficiently exploited. Liquid state machines (LSM) have arisen as a strategic technique for implementing recurrent SNN designs with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSM by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.
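
The probabilistic-computing trick the abstract relies on can be shown in miniature: encode a value p in [0, 1] as a random bitstream with P(bit = 1) = p, and a single AND gate then multiplies two such values. A sketch (stream length and seed are arbitrary choices):

```python
# Probabilistic (stochastic) computing sketch: a value p in [0, 1] is encoded
# as a random bitstream with P(bit = 1) = p, so one AND gate per bit pair
# multiplies two values -- the kind of trick that lets Boolean gates replace
# costly arithmetic hardware in high-density designs.
import random

def bitstream(p, length, rng):
    """Encode probability p as a stream of `length` random bits."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def stochastic_multiply(p1, p2, length=100_000, seed=42):
    rng = random.Random(seed)
    s1 = bitstream(p1, length, rng)
    s2 = bitstream(p2, length, rng)
    anded = [a & b for a, b in zip(s1, s2)]   # one AND gate per bit pair
    return sum(anded) / length                # decode: fraction of ones

print(stochastic_multiply(0.8, 0.5))   # close to 0.8 * 0.5 = 0.4
```

Accuracy scales with stream length, which is the hardware-cost vs precision trade-off such designs exploit.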

  18. A Hybrid Multi-Step Rolling Forecasting Model Based on SSA and Simulated Annealing—Adaptive Particle Swarm Optimization for Wind Speed

    Directory of Open Access Journals (Sweden)

    Pei Du

    2016-08-01

    Full Text Available With the limitations of conventional energy becoming increasingly distinct, wind energy is emerging as a promising renewable energy source that plays a critical role in the modern electric and economic fields. However, how to select optimization algorithms to forecast wind speed series and improve prediction performance is still a highly challenging problem. Traditional single algorithms are widely utilized to select and optimize parameters of neural network algorithms, but they usually ignore the significance of parameter optimization, precise searching, and the application of accurate data, which results in poor forecasting performance. With the aim of overcoming the weaknesses of individual algorithms, a novel hybrid algorithm was created, which can not only easily obtain the real and effective wind speed series by using singular spectrum analysis, but also possesses stronger adaptive search and optimization capabilities than the other algorithms: it is faster, has fewer parameters, and is less expensive. To estimate the forecasting ability of the proposed combined model, 10-min wind speed series from three wind farms in Shandong Province, eastern China, are employed as a case study. The experimental results show that the presented algorithm predicts considerably more accurately than the comparison algorithms.

  19. Operational specification and forecasting advances for Dst, LEO thermospheric densities, and aviation radiation dose and dose rate

    Science.gov (United States)

    Tobiska, W. Kent

    Space weather’s effects upon the near-Earth environment are due to dynamic changes in the energy transfer processes from the Sun’s photons, particles, and fields. Of the space environment domains that are affected by space weather, the magnetosphere, thermosphere, and even troposphere are key regions that are affected. Space Environment Technologies (SET) has developed and is producing innovative space weather applications. Key operational systems for providing timely information about the effects of space weather on these domains are SET’s Magnetosphere Alert and Prediction System (MAPS), LEO Alert and Prediction System (LAPS), and Automated Radiation Measurements for Aviation Safety (ARMAS) system. MAPS provides a forecast Dst index out to 6 days through the data-driven, redundant data stream Anemomilos algorithm. Anemomilos uses observational proxies for the magnitude, location, and velocity of solar ejecta events. This forecast index is used by satellite operations to characterize upcoming geomagnetic storms, for example. In addition, an ENLIL/Rice Dst prediction out to several days has also been developed and will be described. LAPS is the SET fully redundant operational system providing recent history, current epoch, and forecast solar and geomagnetic indices for use in operational versions of the JB2008 thermospheric density model. The thermospheric densities produced by that system, driven by the LAPS data, are forecast to 72-hours to provide the global mass densities for satellite operators. ARMAS is a project that has successfully demonstrated the operation of a micro dosimeter on aircraft to capture the real-time radiation environment due to Galactic Cosmic Rays and Solar Energetic Particles. The dose and dose-rates are captured on aircraft, downlinked in real-time via the Iridium satellites, processed on the ground, incorporated into the most recent NAIRAS global radiation climatology data runs, and made available to end users via the web and

  20. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Every year, power outages affect millions of people in the United States, harming the economy and disrupting everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities restore power quickly and to limit the adverse consequences of outages on the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. A new methodology, developed to improve the outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS), will be presented in this study. First, we will present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalyses over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests are performed to investigate the most efficient variable combination for outage prediction purposes. Although the final variable combination is drawn from different weather models, this multi-weather-forcing, multi-statistical-model ensemble outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is the closest to the actual atmospheric state.

  1. Forecasting the future of biodiversity

    DEFF Research Database (Denmark)

    Fitzpatrick, M. C.; Sanders, Nate; Ferrier, Simon

    2011-01-01

    , but their application to forecasting climate change impacts on biodiversity has been limited. Here we compare forecasts of changes in patterns of ant biodiversity in North America derived from ensembles of single-species models to those from a multi-species modeling approach, Generalized Dissimilarity Modeling (GDM) ... climate change impacts on biodiversity.

  2. Multi-step process for concentrating magnetic particles in waste sludges

    Science.gov (United States)

    Watson, John L.

    1990-01-01

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed.

  3. Flood forecasting and warning systems in Pakistan

    International Nuclear Information System (INIS)

    Ali Awan, Shaukat

    2004-01-01

    Meteorologically, there are two situations which may cause three types of floods in the Indus Basin in Pakistan: (i) Meteorological Situation for Category-I Floods, when the seasonal low, a semi-permanent weather system situated over south-eastern Balochistan, south-western Punjab and adjoining parts of Sindh, gets intensified and causes moisture from the Arabian Sea to be brought up to the upper catchments of the Chenab and Jhelum rivers; (ii) Meteorological Situation for Category-II and Category-III Floods, which is linked with a monsoon low/depression. Such monsoon systems originate in the Bay of Bengal region and then move across India in a generally west/north-westerly direction to arrive over Rajasthan or any of the adjoining states of India. Flood management in Pakistan is a multi-functional process involving a number of different organizations. The first step in the process is the issuance of a flood forecast/warning, which is performed by the Pakistan Meteorological Department (PMD) utilizing satellite cloud pictures and quantitative precipitation measurement radar data, in addition to conventional weather forecasting facilities. For quantitative flood forecasting, hydrological data is obtained through the Provincial Irrigation Departments and WAPDA. Furthermore, improved rainfall/runoff and flood routing models have been developed to provide more reliable and explicit flood information to the flood-prone population. (Author)

  4. Urban air quality forecasting based on multi-dimensional collaborative Support Vector Regression (SVR): A case study of Beijing-Tianjin-Shijiazhuang.

    Science.gov (United States)

    Liu, Bing-Chun; Binaykia, Arihant; Chang, Pei-Chann; Tiwari, Manoj Kumar; Tsao, Cheng-Chin

    2017-01-01

Today, China is facing a very serious air pollution problem because of its dreadful impact on human health as well as the environment. The urban cities in China are the most affected due to their rapid industrial and economic growth. Therefore, it is of extreme importance to come up with new, better and more reliable forecasting models to accurately predict air quality. This paper selected Beijing, Tianjin and Shijiazhuang, three cities from the Jingjinji Region, for a study developing a new collaborative forecasting model using Support Vector Regression (SVR) for Urban Air Quality Index (AQI) prediction in China. The present study aims to improve the forecasting results by minimizing the prediction error of present machine learning algorithms by taking multi-city, multi-dimensional air quality information and weather conditions into account as input. The results show a decrease in MAPE in the case of multi-city, multi-dimensional regression when there is a strong interaction and correlation of the air quality characteristic attributes with AQI. The geographical location is also found to play a significant role in Beijing, Tianjin and Shijiazhuang AQI prediction.
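The collaborative setup above can be sketched in a few lines. As a hedged illustration (the study's SVR is replaced here by a plain least-squares regression, and all data are synthetic), the key point is simply that features from several cities are stacked into one multi-dimensional input vector:

```python
import numpy as np

# Hypothetical sketch: predict one city's AQI from the AQI and weather
# features of three cities (multi-city, multi-dimensional input), as in
# the paper's collaborative setup. A least-squares linear model stands
# in for the SVR used in the study; all data are synthetic.
rng = np.random.default_rng(0)
n_days, n_cities, n_features = 200, 3, 4      # AQI + 3 weather variables per city

X = rng.normal(size=(n_days, n_cities * n_features))  # stacked city features
true_w = rng.normal(size=n_cities * n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_days)        # synthetic target AQI

# Fit on the first 150 days, evaluate mean absolute error on the rest.
w, *_ = np.linalg.lstsq(X[:150], y[:150], rcond=None)
pred = X[150:] @ w
mae = float(np.mean(np.abs(pred - y[150:])))
```

The stacking of several cities' attributes into one design matrix is the "multi-dimensional collaborative" element; swapping the linear fit for an SVR (e.g. with an RBF kernel) recovers the paper's actual model class.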

  5. The importance of the reference populations for coherent mortality forecasting models

    DEFF Research Database (Denmark)

    Kjærgaard, Søren; Canudas-Romo, Vladimir; Vaupel, James W.

Coherent forecasting models that take into consideration mortality changes observed in different countries are today among the essential tools for demographers, actuaries and other researchers interested in forecasts. Medium- and long-term life expectancy forecasts are compared for two multi-population mortality models, aiming to find the optimal set of countries to use as the reference population, and the importance of the selection of countries is analysed. The two multi-population mortality models used are the Li-Lee model and the Double-Gap life expectancy forecasting model. The reference population is calculated taking into account all the possible combinations of a set of 20 industrialized countries. The different reference-population possibilities are compared by their forecast performance. The results show that the selection of countries for multi-population mortality models has a significant effect...

  6. Integrating a Storage Factor into R-NARX Neural Networks for Flood Forecasts

    Science.gov (United States)

    Chou, Po-Kai; Chang, Li-Chiu; Chang, Fi-John; Shih, Ban-Jwu

    2017-04-01

Because mountainous terrain and steep landforms rapidly accelerate the speed of flood flow on the island of Taiwan, accurate multi-step-ahead inflow forecasts during typhoon events, which provide reliable information benefiting decision-making on reservoir pre-storm release and flood-control operation, are considered crucial and challenging. Various types of artificial neural networks (ANNs) have been successfully applied in hydrological fields. This study proposes a recurrent configuration of the nonlinear autoregressive with exogenous inputs (NARX) network, called R-NARX, with various effective inputs to forecast the inflows of the Feitsui Reservoir, a pivotal reservoir for water supply to the Taipei metropolitan area in Taiwan, during typhoon periods. The proposed R-NARX is constructed based on the recurrent neural network (RNN), which is commonly used for modelling nonlinear dynamical systems. A large number of hourly rainfall and inflow data sets collected from 95 historical typhoon events over the last thirty years are used to train, validate and test the models. The potential input variables, including rainfall in previous time steps (one to six hours), cumulative rainfall, the storage factor and the storage function, are assessed, and various models are constructed with their reliability and accuracy being tested. We find that the previous (t-2) rainfall and cumulative rainfall are crucial inputs, and that the storage factor and the storage function also improve the forecast accuracy of the models. We demonstrate that the R-NARX model not only accurately forecasts the inflows but also effectively catches the peak flow without adopting observed inflow data during the entire typhoon period. Moreover, the model with the storage factor is superior to the model with the storage function, with an improvement of up to 24%. This approach can well model the rainfall-runoff process for the entire flood forecasting period without the use of observed inflow data and can provide
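A minimal, purely illustrative stand-in for the R-NARX idea: a linear autoregressive model with exogenous rainfall input, run recursively so the forecast horizon needs no observed inflow (the model feeds its own outputs back). The data, lags and coefficients below are synthetic, not the Feitsui Reservoir values:

```python
import numpy as np

# Linear ARX sketch of the R-NARX concept. Inflow depends on its own
# previous value and on rainfall two steps earlier (the lag the study
# found crucial). Everything here is synthetic/hypothetical.
rng = np.random.default_rng(1)
T = 300
rain = rng.gamma(2.0, 1.0, T)                 # toy hourly rainfall
inflow = np.zeros(T)
for t in range(2, T):                         # simple rainfall-runoff recursion
    inflow[t] = 0.6 * inflow[t - 1] + 0.8 * rain[t - 2] + 0.05 * rng.normal()

# Design matrix: inflow at lag 1 and rainfall at lag 2.
X = np.column_stack([inflow[1:-1], rain[:-2]])
y = inflow[2:]
w, *_ = np.linalg.lstsq(X[:250], y[:250], rcond=None)

# Recursive 3-step-ahead forecast from t0: predictions are fed back in
# place of observed inflow, as in the recurrent (R-) configuration.
t0 = 260
state = inflow[t0]
preds = []
for h in range(3):
    state = w[0] * state + w[1] * rain[t0 + h - 1]
    preds.append(float(state))
```

Replacing the linear map with a trained neural network (and adding cumulative rainfall and the storage factor as further inputs) gives the structure described in the abstract.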

  7. Towards single step production of multi-layer inorganic hollow fibers

    NARCIS (Netherlands)

    de Jong, J.; Benes, Nieck Edwin; Koops, G.H.; Wessling, Matthias

    2004-01-01

    In this work we propose a generic synthesis route for the single step production of multi-layer inorganic hollow fibers, based on polymer wet spinning combined with a heat treatment. With this new method, membranes with a high surface area per unit volume ratio can be produced, while production time

  8. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    Science.gov (United States)

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  9. GPS Estimates of Integrated Precipitable Water Aid Weather Forecasters

    Science.gov (United States)

    Moore, Angelyn W.; Gutman, Seth I.; Holub, Kirk; Bock, Yehuda; Danielson, David; Laber, Jayme; Small, Ivory

    2013-01-01

Global Positioning System (GPS) meteorology provides enhanced density, low-latency (30-min resolution), integrated precipitable water (IPW) estimates to NOAA NWS (National Oceanic and Atmospheric Administration National Weather Service) Weather Forecast Offices (WFOs) to provide improved model and satellite data verification capability and more accurate forecasts of extreme weather such as flooding. An early activity of this project was to increase the number of stations contributing to the NOAA Earth System Research Laboratory (ESRL) GPS meteorology observing network in Southern California by about 27 stations. Following this, the Los Angeles/Oxnard and San Diego WFOs began using the enhanced GPS-based IPW measurements provided by ESRL in the 2012 and 2013 monsoon seasons. Forecasters found GPS IPW to be an effective tool in evaluating model performance, and in monitoring monsoon development between weather model runs for improved flood forecasting. GPS stations are multi-purpose, and routine processing for position solutions also yields estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV (precipitable water vapor) using in situ pressure and temperature measurements, the basis for GPS meteorology. NOAA ESRL has implemented this concept with a nationwide distribution of more than 300 "GPSMet" stations providing IPW estimates at sub-hourly resolution currently used in operational weather models in the U.S.

  10. Low-wave number analysis of observations and ensemble forecasts to develop metrics for the selection of most realistic members to study multi-scale interactions between the environment and the convective organization of hurricanes: Focus on Rapid Intensification

    Science.gov (United States)

    Hristova-Veleva, S. M.; Chen, H.; Gopalakrishnan, S.; Haddad, Z. S.

    2017-12-01

Tropical cyclones (TCs) are the product of complex multi-scale processes and interactions. The role of the environment has long been recognized. However, recent research has shown that convective-scale processes in the hurricane core might also play a crucial role in determining TC intensity and size. Several studies have linked Rapid Intensification (RI) to the characteristics of the convective clouds (shallow versus deep), their organization (isolated versus widespread) and their location with respect to dynamical controls (the vertical shear, the radius of maximum wind). Yet a third set of controls signifies the interaction between the storm-scale and large-scale processes. Our goal is to use observations and models to advance the still-lacking understanding of these processes. Recently, hurricane models have improved significantly. However, deterministic forecasts have limitations due to the uncertainty in the representation of the physical processes and initial conditions. A crucial step forward is the use of high-resolution ensembles. We adopt the following approach: i) generate a high-resolution ensemble forecast using HWRF; ii) produce synthetic data (e.g. brightness temperature) from the model fields for direct comparison to satellite observations; iii) develop metrics that allow us to sub-select the realistic members of the ensemble, based on objective measures of the similarity between observed and forecasted structures; iv) for these most-realistic members, determine the skill in forecasting TCs to provide "guidance on guidance"; v) use the members with the best predictive skill to untangle the complex multi-scale interactions. We will report on the first three goals of our research, using forecasts and observations of hurricane Edouard (2014), focusing on RI. We will focus on describing the metrics for the selection of the most appropriate ensemble members, based on applying low-wave number analysis (WNA - Hristova-Veleva et al., 2016) to the observed and

  11. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min−1, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with a surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.
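The reported single-step etch rates imply a simple trade-off between etch time and lateral undercut, which a short worked example makes concrete (the 100 µm target depth is hypothetical, not a value from the abstract):

```python
# Worked example using the reported mean single-step etch rates: how long
# a 100 µm-deep etch takes and how much lateral bias (undercut) accrues.
depth_rate = 1.50    # µm/min, depth etch rate from the abstract
bias_rate = 0.77     # µm/min, lateral (bias) etch rate from the abstract

target_depth = 100.0                 # µm, hypothetical foil thickness
t = target_depth / depth_rate        # etch time in minutes
lateral_bias = bias_rate * t         # per-side undercut in µm

print(round(t, 1), round(lateral_bias, 1))   # → 66.7 51.3
```

The roughly 0.5 µm of undercut per µm of depth is why the multi-step sequence, which re-masks between steps, is needed to keep the lateral fabrication error of the probe tips small.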

  12. Improving the effectiveness of real-time flood forecasting through Predictive Uncertainty estimation: the multi-temporal approach

    Science.gov (United States)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Todini, Ezio

    2015-04-01

The negative effects of severe flood events are usually contrasted through structural measures that, however, do not fully eliminate flood risk. Non-structural measures, such as real-time flood forecasting and warning, are also required. Accurate stage/discharge predictions with an appropriate forecast lead time are sought by decision-makers for implementing strategies to mitigate the adverse effects of floods. Traditionally, flood forecasting has been approached by using rainfall-runoff and/or flood routing modelling. Indeed, neither type of forecast can be considered a perfect representation of future outcomes, because of the lack of complete knowledge of the processes involved (Todini, 2004). Nonetheless, although aware that model forecasts do not perfectly represent future outcomes, decision makers de facto implicitly treat the forecast of water level/discharge/volume, etc. as "deterministic" and coinciding with what is going to occur. Recently the concept of Predictive Uncertainty (PU) was introduced in hydrology (Krzysztofowicz, 1999), and several uncertainty processors were developed (Todini, 2008). PU is defined as the probability of occurrence of the future realization of a predictand (water level/discharge/volume) conditional on: i) prior observations and knowledge, and ii) the available information on the future value, typically provided by one or more forecast models. Unfortunately, PU has frequently been interpreted as a measure of lack of accuracy rather than as the appropriate tool for taking the most appropriate decisions given one or several models' forecasts. With the aim of shedding light on the benefits of appropriately using PU, a multi-temporal approach based on the MCP approach (Todini, 2008; Coccia and Todini, 2011) is here applied to stage forecasts at sites along the Upper Tiber River. Specifically, the STAge Forecasting-Rating Curve Model Muskingum-based (STAFOM-RCM) (Barbetta et al., 2014) along with the Rating

  13. Bayesian emulation for optimization in multi-step portfolio decisions

    OpenAIRE

    Irie, Kaoru; West, Mike

    2016-01-01

    We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...

  14. Time-dependent density functional theory for multi-component systems

    International Nuclear Information System (INIS)

    Tiecheng Li; Peiqing Tong

    1985-10-01

    The Runge-Gross version of Hohenberg-Kohn-Sham's density functional theory is generalized to multi-component systems, both for arbitrary time-dependent pure states and for arbitrary time-dependent ensembles. (author)

  15. Incremental Learning of Medical Data for Multi-Step Patient Health Classification

    DEFF Research Database (Denmark)

    Kranen, Philipp; Müller, Emmanuel; Assent, Ira

    2010-01-01

    of textile sensors, body sensors and preprocessing techniques as well as the integration and merging of sensor data in electronic health record systems. Emergency detection on multiple levels will show the benefits of multi-step classification and further enhance the scalability of emergency detection...

  16. Multi-step direct reactions at low energies

    International Nuclear Information System (INIS)

    Marcinkowski, A.; Marianski, B.

    2001-01-01

Full text: The theory of the multistep direct (MSD) reactions of Feshbach, Kerman and Koonin has for quite some time been a subject of controversy due to the biorthogonal distorted waves involved in the transition amplitudes describing the MSD cross sections. The biorthogonal wave functions result in non-normal DWBA matrix elements, which can be expressed in terms of normal DWBA matrix elements multiplied by the inverse elastic scattering S-matrix. It has been argued that the enhancing inverse S-factors are washed out by averaging over energy in the continuum. As a result, normal DWBA matrix elements are commonly used in practical calculations. Almost all analyses of inelastic scattering and charge-exchange reactions using the DWBA matrix elements have concluded that nucleon emission at low energies can be described mainly as a one-step reaction. On the other hand, it has been shown that the limits imposed by the energy-weighted sum rules (EWSRs) on transitions of given angular momentum transfer lead to a significant reduction of the one-step cross section, which can be compensated by the enhanced MSD cross sections obtained with the use of the non-normal DWBA matrix elements. Very recently the MSD theory of FKK was modified to include collective excitations and the non-normal DWBA matrix elements, and a prescription for calculation of the cross sections for the MSD reactions was given. In the present paper we present the results of the modified theory used for describing the 93Nb(n,xn)93Nb reaction at an incident energy of 20 MeV and the 65Cu(p,xn)65Zn reaction at 27 MeV. The results show enhanced contributions from two-, three- and four-step reactions. We investigate the importance of the multi-phonon, multi-particle-hole and mixed particle-hole-phonon excitations in neutron scattering to the continuum. We also show the importance of the different sequences of collisions of the leading continuum nucleon that contribute to the MSD (p,n) reaction. When all

  17. Using Temperature Forecasts to Improve Seasonal Streamflow Forecasts in the Colorado and Rio Grande Basins

    Science.gov (United States)

    Lehner, F.; Wood, A.; Llewellyn, D.; Blatchford, D. B.; Goodbody, A. G.; Pappenberger, F.

    2017-12-01

Recent studies have documented the influence of increasing temperature on streamflow across the American West, including snowmelt-driven rivers such as the Colorado or Rio Grande. At the same time, some basins are reporting decreasing skill in seasonal streamflow forecasts, termed water supply forecasts (WSFs), over the recent decade. While the skill of seasonal precipitation forecasts from dynamical models remains low, their skill in predicting seasonal temperature variations could potentially be harvested for WSFs to account for non-stationarity in regional temperatures. Here, we investigate whether WSF skill can be improved by incorporating seasonal temperature forecasts from dynamical forecasting models (from the North American Multi-Model Ensemble and the European Centre for Medium-Range Weather Forecasts System 4) into traditional statistical forecast models. We find improved streamflow forecast skill relative to traditional WSF approaches in a majority of headwater locations in the Colorado and Rio Grande basins. Incorporation of temperature into WSFs thus provides a promising avenue to increase the robustness of current forecasting techniques in the face of continued regional warming.

  18. High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong

    2008-02-01

Stock investors usually make their short-term investment decisions according to recent stock information such as the latest market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock prices, this paper proposes a comprehensive fuzzy time-series model, which factors linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time-series into the forecasting process. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen's (1996), Yu's (2005), Cheng's (2006) and Chen's (2007), are used as comparison models. In addition, for comparison with a conventional statistical method, the method of least squares is utilized to estimate auto-regressive models over the testing periods within the datasets. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models, which factor only fuzzy logical relationships into the forecasting process. From the empirical study, both the traditional statistical method and the proposed model reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.
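The baseline mechanism that the multi-period adaptation model extends, a Chen (1996)-style fuzzy time-series forecast, can be sketched in a few steps: partition the price range into intervals (fuzzy sets), fuzzify the series, collect fuzzy logical relationship groups, and defuzzify via interval midpoints. The price series and interval count below are illustrative only:

```python
import numpy as np

# Minimal Chen (1996)-style fuzzy time-series forecast (hypothetical data).
prices = np.array([105., 108., 112., 110., 115., 118., 116., 120., 123., 121.])
n_intervals = 5
lo, hi = prices.min() - 1, prices.max() + 1
edges = np.linspace(lo, hi, n_intervals + 1)
mids = (edges[:-1] + edges[1:]) / 2

# Fuzzify: map each price to the index of its interval (its fuzzy set).
states = np.clip(np.searchsorted(edges, prices, side="right") - 1, 0, n_intervals - 1)

# Fuzzy logical relationship groups: state i -> set of observed successor states.
groups = {}
for a, b in zip(states[:-1], states[1:]):
    groups.setdefault(int(a), set()).add(int(b))

# Defuzzify: the forecast is the mean midpoint of the last state's successor group.
last = int(states[-1])
forecast = float(np.mean([mids[s] for s in sorted(groups.get(last, {last}))]))
```

The paper's model additionally weights recent periods (the "multi-period adaptation") and blends in a linear term, rather than using the plain group-midpoint defuzzification shown here.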

  19. Data-based control of a multi-step forming process

    Science.gov (United States)

    Schulte, R.; Frey, P.; Hildenbrand, P.; Vogel, M.; Betz, C.; Lechner, M.; Merklein, M.

    2017-09-01

The fourth industrial revolution represents a new stage in the organization and management of the entire value chain. However, in the field of forming technology, the fourth industrial revolution has so far arrived only gradually. In order to make a valuable contribution to the digital factory, the control of a multi-stage forming process was investigated. Within the framework of the investigation, an abstracted and transferable model is used to outline which data have to be collected, how a practical interface between the different forming machines can be designed, and which control tasks must be fulfilled. The goal of this investigation was to control the subsequent process step based on the data recorded in the first step. The investigated process chain links various metal forming processes, which are typical elements of a multi-step forming process. Data recorded in the first step of the process chain are analyzed and processed for improved process control of the subsequent process. On the basis of the gained scientific knowledge, it is possible to make forming operations more robust and at the same time more flexible, and thus create the foundation for linking various production processes in an efficient way.

  20. Multi-configuration time-dependent density-functional theory based on range separation

    DEFF Research Database (Denmark)

    Fromager, E.; Knecht, S.; Jensen, Hans Jørgen Aagaard

    2013-01-01

Multi-configuration range-separated density-functional theory is extended to the time-dependent regime. An exact variational formulation is derived. The approximation consists in combining a long-range Multi-Configuration Self-Consistent Field (MCSCF) treatment with an adiabatic short-range generalized-gradient (srGGA) approximation. As expected, when modeling long-range interactions with the MCSCF model instead of the adiabatic Buijse-Baerends density-matrix functional, as recently proposed by Pernal [J. Chem. Phys. 136, 184105 (2012); doi:10.1063/1.4712019], the description of both the 1D doubly-excited state...

  1. Wind and load forecast error model for multiple geographically distributed forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Makarov, Yuri V.; Reyes-Spindola, Jorge F.; Samaan, Nader; Diao, Ruisheng; Hafen, Ryan P. [Pacific Northwest National Laboratory, Richland, WA (United States)

    2010-07-01

The impact of wind and load forecast errors on power grid operations is frequently evaluated by conducting multi-variant studies, where these errors are simulated repeatedly as random processes based on their known statistical characteristics. To simulate these errors correctly, we need to reflect their distributions (which do not necessarily follow a known distribution law), standard deviations, and auto- and cross-correlations. For instance, load and wind forecast errors can be closely correlated in different zones of the system. This paper introduces a new methodology for generating multiple cross-correlated random processes to produce forecast error time-domain curves based on a transition probability matrix computed from an empirical error distribution function. The matrix is used to generate new error time series with statistical features similar to those of observed errors. We present the derivation of the method and some experimental results obtained by generating new error forecasts together with their statistics. (orig.)
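The core of the methodology can be sketched as: bin an empirical error sample, estimate a transition probability matrix from bin-to-bin counts, and walk the resulting Markov chain to generate a new error series with a similar distribution and autocorrelation. This is a simplified single-series illustration with synthetic errors (the paper additionally handles cross-correlated multi-site series):

```python
import numpy as np

# Sketch: Markov-chain resampling of a forecast-error series.
rng = np.random.default_rng(2)
errors = rng.normal(0, 1, 2000).cumsum() * 0.05      # correlated toy errors
errors = errors - errors.mean()

n_bins = 10
edges = np.linspace(errors.min(), errors.max(), n_bins + 1)
states = np.clip(np.digitize(errors, edges) - 1, 0, n_bins - 1)

# Row-normalized transition counts -> transition probability matrix.
P = np.zeros((n_bins, n_bins))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
row_sums = P.sum(axis=1, keepdims=True)
# Unvisited bins get a uniform row so the chain never stalls.
P = np.divide(P, row_sums, out=np.full_like(P, 1.0 / n_bins), where=row_sums > 0)

# Generate a new error series by walking the chain and emitting bin midpoints.
mids = (edges[:-1] + edges[1:]) / 2
s = states[0]
sim = []
for _ in range(500):
    s = rng.choice(n_bins, p=P[s])
    sim.append(mids[s])
sim = np.array(sim)
```

Drawing within-bin values from the empirical distribution instead of emitting midpoints, and coupling several chains through joint states, would move this sketch closer to the paper's full method.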

  2. Evaluation of accuracy in implant site preparation performed in single- or multi-step drilling procedures.

    Science.gov (United States)

    Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus

    2018-06-01

Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgical process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple-drilling sequence. The accuracy of the drilling holes was precisely analyzed, and the influence of different levels of operator expertise and of additional drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehand drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied through the drilling actions of three persons without surgical knowledge as well as three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase drilling accuracy using a guiding template, especially when multi-step preparations were performed. Improved accuracy without template guidance was observed when experienced operators executed the single-step versus the multi-step technique. Single-step drilling protocols were shown to produce more accurate results than multi-step procedures. The outcome of either protocol can be further improved by the use of guiding templates. Operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby enables a reduction of hands-on time and side effects during surgery and leads to a more predictable clinical diameter.

  3. Multi-type Step-wise group screening designs with unequal A-priori ...

    African Journals Online (AJOL)

... design with unequal group sizes and obtain values of the group sizes that minimize the expected number of runs. Keywords: Group Screening, Group factors, multi-type step-wise group screening, expected number of runs, Optimum group screening designs. East African Journal of Statistics Vol. 1 (1) 2005: pp. 49-67 ...

  4. Using constructed analogs to improve the skill of National Multi-Model Ensemble March–April–May precipitation forecasts in equatorial East Africa

    International Nuclear Information System (INIS)

    Shukla, Shraddhanand; Funk, Christopher; Hoell, Andrew

    2014-01-01

In this study we implement and evaluate a simple 'hybrid' forecast approach that uses constructed analogs (CA) to improve the National Multi-Model Ensemble's (NMME) March–April–May (MAM) precipitation forecasts over equatorial eastern Africa (hereafter referred to as EA; 2°S to 8°N and 36°E to 46°E). Due to recent declines in MAM rainfall, increases in population, land degradation, and limited technological advances, this region has become a recent epicenter of food insecurity. Timely and skillful precipitation forecasts for EA could help decision makers better manage their limited resources, mitigate socio-economic losses, and potentially save human lives. The 'hybrid' approach described in this study uses the CA method to translate dynamical precipitation and sea surface temperature (SST) forecasts over the Indian and Pacific Oceans (specifically 30°S to 30°N and 30°E to 270°E) into terrestrial MAM precipitation forecasts over the EA region. In doing so, this approach benefits from the post-1999 teleconnection between precipitation and SSTs over the Indian and tropical Pacific Oceans (Indo-Pacific) and EA MAM rainfall. The coupled atmosphere-ocean dynamical forecasts used in this study were drawn from the NMME. We demonstrate that while the skill of the NMME models' own MAM precipitation forecasts (initialized in February) over the EA region is negligible, the ranked probability skill score of hybrid CA forecasts based on Indo-Pacific NMME precipitation and SST forecasts reaches up to 0.45. (letter)
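The constructed-analog translation step can be illustrated with synthetic fields: find the least-squares linear combination of historical Indo-Pacific anomaly fields that reproduces the forecast field, then apply the same weights to the matching EA rainfall record. All dimensions and data below are toy values, not the study's:

```python
import numpy as np

# Toy constructed-analog (CA) translation (all fields synthetic).
rng = np.random.default_rng(3)
n_years, n_grid = 15, 400             # historical library size and field size

library = rng.normal(size=(n_years, n_grid))   # past Indo-Pacific anomaly fields
ea_rain = rng.normal(size=n_years)             # matching EA MAM rainfall anomalies

# This year's dynamically forecast field, built here (for illustration)
# as an exact blend of two library years.
target = 0.5 * library[3] + 0.3 * library[7]

# CA weights: the best linear combination of past fields matching the target.
w, *_ = np.linalg.lstsq(library.T, target, rcond=None)

# Apply the same weights to historical rainfall -> hybrid terrestrial forecast.
ca_forecast = float(w @ ea_rain)
```

Because the synthetic target lies exactly in the span of the library, the recovered weights pick out the two contributing years; with real fields the fit is approximate and is typically regularized or truncated.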

  5. Uncertainty Reduction in Power Generation Forecast Using Coupled Wavelet-ARIMA

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Zhangshuan; Etingov, Pavel V.; Makarov, Yuri V.; Samaan, Nader A.

    2014-10-27

In this paper, we introduce a new approach that does not assume normal distributions or stationarity of power generation forecast errors. In addition, it is desirable to more accurately quantify forecast uncertainty by reducing the prediction intervals of forecasts. We use automatically coupled wavelet transform and autoregressive integrated moving-average (ARIMA) forecasting to reflect the multi-scale variability of forecast errors. The proposed analysis reveals slow-changing "quasi-deterministic" components of forecast errors. This helps improve forecasts produced by other means, e.g., using weather-based models, and reduces forecast error prediction intervals.
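A heavily simplified sketch of the wavelet-ARIMA idea: a one-level Haar split stands in for the wavelet transform, separating a slow "quasi-deterministic" component from a fast residual, and an AR(1) fit stands in for the ARIMA model on the slow part. The error series is synthetic:

```python
import numpy as np

# Synthetic forecast-error series: a slow oscillation plus noise.
rng = np.random.default_rng(4)
t = np.arange(256)
err = np.sin(2 * np.pi * t / 64) + 0.3 * rng.normal(size=256)

# One-level Haar analysis: pairwise averages (slow approximation)
# and pairwise differences (fast detail).
approx = (err[0::2] + err[1::2]) / 2
detail = (err[0::2] - err[1::2]) / 2

# Fit AR(1) to the slow component: a_{k+1} ≈ phi * a_k.
phi = float(approx[:-1] @ approx[1:] / (approx[:-1] @ approx[:-1]))
one_step = phi * approx[-1]           # forecast of the next slow value
```

In the full method each wavelet scale gets its own ARIMA model and the component forecasts are recombined; the persistent slow component is what allows the prediction intervals to tighten relative to treating the errors as white noise.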

  6. Added economic value of limited area multi-EPS weather forecasting applications

    Directory of Open Access Journals (Sweden)

    Alex Deckmyn

    2012-07-01

We compare the GLAMEPS system, a pan-European limited area ensemble prediction system, with ECMWF's EPS over Belgium for an extended period from March 2010 until the end of December 2010. In agreement with a previous study, we find GLAMEPS scores considerably better than ECMWF's EPS. To compute the economic value, we introduce a new relative economic value score for continuous forecasts. The added value of combining the GLAMEPS system with the LAEF system over Belgium is studied. We conclude that adding LAEF to GLAMEPS increases the value, although the increase is small compared to the improvement of GLAMEPS over ECMWF's EPS. As an added benefit, we find that the combined GLAMEPS-LAEF multi-EPS system is more robust, that is, it is less vulnerable to the (accidental) removal of one of its components.

  7. KAPSIES: A program for the calculation of multi-step direct reaction cross sections

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1994-09-01

We present a program for the calculation of continuum cross sections, spectra, angular distributions and analyzing powers according to various quantum-mechanical theories for statistical multi-step direct nuclear reactions. (orig.)

  8. Evaluating the spatio-temporal performance of sky-imager-based solar irradiance analysis and forecasts

    Science.gov (United States)

    Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev

    2016-03-01

    Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
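The forecast-skill measure implied above (positive skill means beating the persistence reference) is commonly computed as skill = 1 − RMSE(forecast)/RMSE(persistence). A toy example with synthetic GHI series, not the campaign data:

```python
import numpy as np

# Toy skill-score computation against a persistence reference.
rng = np.random.default_rng(5)
obs = 600 + 150 * rng.standard_normal(200)       # "observed" GHI, W/m^2 (synthetic)
persistence = np.roll(obs, 1)[1:]                # "no change" reference forecast
forecast = obs[1:] + 50 * rng.standard_normal(199)   # hypothetical imager forecast
obs = obs[1:]

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

skill = 1 - rmse(forecast, obs) / rmse(persistence, obs)
```

Under low-variability (clear or overcast) conditions persistence is hard to beat and this score drops toward zero or below, which is exactly the cloud-regime dependence the abstract reports.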

  9. Evaluating the spatio-temporal performance of sky-imager-based solar irradiance analysis and forecasts

    Directory of Open Access Journals (Sweden)

    T. Schmidt

    2016-03-01

    Full Text Available Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1–2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.

  10. Evaluating the spatio-temporal performance of sky imager based solar irradiance analysis and forecasts

    Science.gov (United States)

    Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.

    2015-10-01

    Clouds are the dominant source of variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1–2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.

  11. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model building evolutionary algorithm that constructs at each generation a mixture of factorized probability

  12. Evaluation for Long Term PM10 Concentration Forecasting using Multi Linear Regression (MLR) and Principal Component Regression (PCR) Models

    Directory of Open Access Journals (Sweden)

    Samsuri Abdullah

    2016-07-01

    Full Text Available Air pollution in Peninsular Malaysia is dominated by particulate matter, which is demonstrated by having the highest Air Pollution Index (API) value compared to the other pollutants in most parts of the country. Particulate Matter (PM10) forecasting model development is crucial because it allows the authorities and citizens of a community to take necessary actions to limit their exposure to harmful levels of particulate pollution and to implement protection measures that significantly improve air quality in designated locations. This study aims at improving the ability of MLR using PC inputs for PM10 concentration forecasting. Daily observations of PM10 in Kuala Terengganu, Malaysia, from January 2003 till December 2011 were utilized to forecast PM10 concentration levels. MLR and PCR (using PC inputs) models were developed and their performance was evaluated using RMSE, NAE and IA. Results revealed that PCR performed better than MLR due to the implementation of PCA, which reduces complexity and eliminates multi-collinearity in the data.
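The PCR idea described here — regress on principal components instead of the raw, collinear predictors — can be sketched minimally. Everything below is illustrative: for two standardized, positively correlated predictors, the leading principal component is proportional to their sum, which is what the sketch exploits.

```python
import math

def standardize(xs):
    """Center and scale a series to zero mean, unit standard deviation."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / s for x in xs]

def simple_ols(x, y):
    """Univariate least squares: returns (slope, intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def pcr_fit_predict(x1, x2, y):
    """One-component PCR for two collinear predictors.

    For two standardized, positively correlated predictors the leading
    principal component is proportional to their sum, so regressing on it
    avoids the unstable coefficients MLR produces under multicollinearity.
    """
    z1, z2 = standardize(x1), standardize(x2)
    pc = [(a + b) / math.sqrt(2.0) for a, b in zip(z1, z2)]
    slope, intercept = simple_ols(pc, y)
    return [slope * p + intercept for p in pc]

# PM10 stand-in: two predictors that carry the same information.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 4.0, 6.0, 8.0, 10.0]   # perfectly collinear with x1
y = [3.0, 6.0, 9.0, 12.0, 15.0]
print(pcr_fit_predict(x1, x2, y))  # ≈ [3, 6, 9, 12, 15]: recovers y up to float rounding
```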

  13. Adaptive modelling and forecasting of offshore wind power fluctuations with Markov-switching autoregressive models

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    Wind power production data at temporal resolutions of a few minutes exhibits successive periods with fluctuations of various dynamic nature and magnitude, which cannot be explained (so far) by the evolution of some explanatory variable. Our proposal is to capture this regime-switching behaviour … The criterion optimized is based on penalized maximum-likelihood, with exponential forgetting of past observations. MSAR models are then employed for 1-step-ahead point forecasting of 10-minute resolution time-series of wind power at two large offshore wind farms. They are favourably compared against persistence and AutoRegressive (AR) models. It is finally shown that the main interest of MSAR models lies in their ability to generate interval/density forecasts of significantly higher skill.

  14. Adaptive modelling and forecasting of offshore wind power fluctuations with Markov-switching autoregressive models

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    2012-01-01

    Wind power production data at temporal resolutions of a few minutes exhibit successive periods with fluctuations of various dynamic nature and magnitude, which cannot be explained (so far) by the evolution of some explanatory variable. Our proposal is to capture this regime-switching behaviour … The criterion optimized is based on penalized maximum likelihood, with exponential forgetting of past observations. MSAR models are then employed for one-step-ahead point forecasting of 10 min resolution time series of wind power at two large offshore wind farms. They are favourably compared against persistence and autoregressive models. It is finally shown that the main interest of MSAR models lies in their ability to generate interval/density forecasts of significantly higher skill.

  15. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    International Nuclear Information System (INIS)

    Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.

    2017-01-01

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.

  16. Weather Forecasts are for Wimps. Why Water Resource Managers Do Not Use Climate Forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Rayner, S. [James Martin Institute of Science and Civilization, Said Business School, University of Oxford, OX1 1HP (United Kingdom); Lach, D. [Oregon State University, Corvallis, OR, 97331-4501 (United States); Ingram, H. [School of Social Ecology, University of California Irvine, Irvine, CA, 92697-7075 (United States)

    2005-04-15

    Short-term climate forecasting offers the promise of improved hydrologic management strategies. However, water resource managers in the United States have proven reluctant to incorporate these forecasts in decision making. While managers usually cite poor reliability of the forecasts as the reason for this, they are seldom able to demonstrate knowledge of the actual performance of forecasts or to consistently articulate the level of reliability that they would require. Analysis of three case studies in California, the Pacific Northwest, and metro Washington DC identifies institutional reasons that appear to lie behind managers' reluctance to use the forecasts. These include traditional reliance on large built infrastructure, organizational conservatism and complexity, mismatch of temporal and spatial scales of forecasts to management needs, political disincentives to innovation, and regulatory constraints. The paper concludes that wider acceptance of the forecasts will depend on their being incorporated in existing organizational routines and industrial codes and practices, as well as changes in management incentives to innovation. Finer spatial resolution of forecasts and the regional integration of multi-agency functions would also enhance their usability. The title of this article is taken from an advertising slogan for the Oldsmobile Bravura SUV.

  17. State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy

    Directory of Open Access Journals (Sweden)

    O. Rakovec

    2012-09-01

    Full Text Available This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model. The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of an hourly, spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property).

    Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km²), a relatively quickly responding catchment in the Belgian Ardennes. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty.
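The EnKF analysis step at the heart of such updating schemes reduces, for a single directly observed scalar state, to a short sketch. This is a stochastic EnKF with perturbed observations; the paper updates grid-based distributed states, but the gain computation is the same idea:

```python
import random

def enkf_update(ensemble, obs, obs_std, rng):
    """One stochastic EnKF analysis step for a scalar, directly observed state.

    Gain K = var(x) / (var(x) + obs_std**2); each member is nudged toward an
    independently perturbed observation, which keeps the analysis ensemble
    spread consistent with the observation uncertainty.
    """
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_std ** 2)
    return [x + gain * (obs + rng.gauss(0.0, obs_std) - x) for x in ensemble]

rng = random.Random(42)
# Prior (background) ensemble of a discharge-like state around 5.
background = [rng.gauss(5.0, 2.0) for _ in range(500)]
analysis = enkf_update(background, obs=8.0, obs_std=0.5, rng=rng)
# The analysis mean moves from ~5 toward the observed value 8.
```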

  18. Effects of Cu and Ag additions on age-hardening behavior during multi-step aging in Al--Mg--Si alloys

    International Nuclear Information System (INIS)

    Kim, JaeHwang; Daniel Marioara, Calin; Holmestad, Randi; Kobayashi, Equo; Sato, Tatsuo

    2013-01-01

    Low Cu and Ag additions (≤0.10 at%) were found to strongly affect the age-hardening behavior in Al–Mg–Si alloys with Mg+Si>1.5 at%. The hardness increased during aging at 170 °C and the formation of β″ precipitates was kinetically accelerated. The activation energy of the formation of the β″ phase was calculated to be 127, 105, 108 and 99 kJ mol⁻¹ in the base, Cu-added, Ag-added and Cu–Ag-added alloys, respectively, using the Kissinger method. The negative effect of two-step aging caused by the formation of Cluster (1) during natural aging was not overcome by the addition of microalloying elements. However, it was suppressed by the formation of Cluster (2) through pre-aging at 100 °C. Quantitative analysis of the precipitate microstructure was performed using a transmission electron microscope equipped with a parallel electron energy loss spectrometer for the determination of specimen thickness. The formation of Cluster (2) was found to increase the number density of β″ precipitates, whereas the formation of Cluster (1) decreased the number density and increased the needle length. The effects of low Cu and Ag additions in combination with multi-step aging are discussed based on microstructure observations and hardness and resistivity measurements.

  19. Preparing for an Uncertain Forecast

    Science.gov (United States)

    Karolak, Eric

    2011-01-01

    Navigating the world of government relations and public policy can be a little like predicting the weather. One can't always be sure what's in store or how it will affect him/her down the road. But there are common patterns and a few basic steps that can help one best prepare for a change in the forecast. Though the forecast is uncertain, early…

  20. Paradigm change in ocean studies: multi-platform observing and forecasting integrated approach in response to science and society needs

    Science.gov (United States)

    Tintoré, Joaquín

    2017-04-01

    -platform approach in ocean observation. Three examples from the integration capabilities of SOCIB facilities will be presented and discussed. First, the quasi-continuous high-frequency glider monitoring of the Ibiza Channel since 2011, an important biodiversity hot spot and a 'choke' point in the Western Mediterranean circulation, has allowed us to reveal a high-frequency variability in the North-South exchanges, with very significant changes (0.8 - 0.9 Sv) occurring over periods of days to weeks, of the same order as the previously known seasonal cycle. HF radar data and model results have also contributed more recently to better describe and understand the variability at small scales. Second, the Alborex/Perseus project multi-platform experiment (e.g., RV catamaran, 2 gliders, 25 drifters, 3 Argo type profilers & satellite data) that focused on submesoscale processes and ecosystem response and was carried out in the Alborán Sea in May 2014. Glider results showed significant chlorophyll subduction in areas adjacent to the steep density front, with patterns related to vertical motion. Initial dynamical interpretations will be presented. Third and finally, I will discuss the key relevance of the data centre to guarantee data interoperability, quality control, availability and distribution, for this new approach to ocean observation and forecasting to be really efficient in responding to key scientific state-of-the-art priorities, enhancing technology development and responding to society needs.

  1. Multi-Step Deep Reactive Ion Etching Fabrication Process for Silicon-Based Terahertz Components

    Science.gov (United States)

    Jung-Kubiak, Cecile (Inventor); Reck, Theodore (Inventor); Chattopadhyay, Goutam (Inventor); Perez, Jose Vicente Siles (Inventor); Lin, Robert H. (Inventor); Mehdi, Imran (Inventor); Lee, Choonsup (Inventor); Cooper, Ken B. (Inventor); Peralta, Alejandro (Inventor)

    2016-01-01

    A multi-step silicon etching process has been developed to fabricate silicon-based terahertz (THz) waveguide components. This technique provides precise dimensional control across multiple etch depths with batch processing capabilities. Nonlinear and passive components such as mixers and multipliers, waveguides, hybrids, OMTs and twists have been fabricated and integrated into a small silicon package. This fabrication technique enables a wafer-stacking architecture to provide ultra-compact multi-pixel receiver front-ends in the THz range.

  2. Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel

    International Nuclear Information System (INIS)

    Zhang, Yao; Wang, Jianxue; Luo, Xu

    2015-01-01

    Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations with the increasing penetration of wind power generation. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on the forecast accuracy, namely, the heavily skewed and double-bounded nature of wind power density. Logarithmic transformation is used to reduce the skewness of wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation partially relieves the boundary effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that there are still serious problems of density leakage after the transformation. In order to solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on data from an actual wind farm. Then, a detailed comparison is carried out between the proposed method and some existing probabilistic forecasting methods.
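The boundary-leakage problem described in the highlights can be illustrated with a reflection-based kernel estimator — a simple stand-in for the paper's boundary kernel, not its exact method; the sample values and bandwidth below are ours:

```python
import math

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def bounded_kde(samples, x, bw, lo=0.0, hi=1.0):
    """Kernel density estimate on a bounded support using reflection.

    Probability mass a plain Gaussian kernel would place outside [lo, hi]
    is folded back by mirroring each sample at both bounds, so no density
    leaks past the physical limits of normalized wind power.
    """
    total = 0.0
    for s in samples:
        for m in (s, 2.0 * lo - s, 2.0 * hi - s):  # sample + its two mirror images
            total += gaussian_kernel((x - m) / bw)
    return total / (len(samples) * bw)

# Normalized wind power samples piled up near the lower bound (skewed data).
power = [0.02, 0.05, 0.08, 0.10, 0.30, 0.55]
grid = [i / 1000.0 for i in range(1001)]
mass = sum(bounded_kde(power, x, bw=0.04) for x in grid) / 1000.0
# `mass` integrates to ~1 over [0, 1]: the boundary correction removes leakage.
```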

  3. Fabrication of different pore shapes by multi-step etching technique in ion-irradiated PET membranes

    Science.gov (United States)

    Mo, D.; Liu, J. D.; Duan, J. L.; Yao, H. J.; Latif, H.; Cao, D. L.; Chen, Y. H.; Zhang, S. X.; Zhai, P. F.; Liu, J.

    2014-08-01

    A method for the fabrication of different pore shapes in polyethylene terephthalate (PET)-based track etched membranes (TEMs) is reported. A multi-step etching technique involving etchant variation and track annealing was applied to fabricate different pore shapes in PET membranes. PET foils of 12-μm thickness were irradiated with Bi ions (kinetic energy 9.5 MeV/u, fluence 10⁶ ions/cm²) at the Heavy Ion Research Facility (HIRFL, Lanzhou). The cross-sections of fundamental pore shapes (cylinder, cone, and double cone) were analyzed. Funnel-shaped and pencil-shaped pores were obtained using a two-step etching process. Track annealing was carried out in air at 180 °C for 120 min. After track annealing, the selectivity of the etching process decreased, which resulted in isotropic etching in subsequent etching steps. Rounded cylinder and rounded cone shapes were obtained by introducing a track-annealing step in the etching process. Cup and spherical funnel-shaped pores were fabricated using a three- and four-step etching process, respectively. The described multi-step etching technique provides a controllable method to fabricate new pore shapes in TEMs. Introduction of a variety of pore shapes may improve the separation properties of TEMs and enrich the series of TEM products.

  4. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step-prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of recurrent neural networks by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method can dynamically combine the embedding method with the capability of recurrent neural networks to incorporate past experience due to internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step-prediction of chaotic time series.
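Iterated multi-step prediction — rolling a one-step predictor forward by feeding its outputs back into the delay vector — can be sketched generically; the toy linear model below merely stands in for the trained recurrent network:

```python
def iterated_forecast(one_step_model, history, steps, embed_dim):
    """Iterated multi-step prediction from a one-step predictor.

    The delay vector (phase-space reconstruction) is rolled forward by
    feeding each prediction back in as the newest coordinate — the generic
    scheme behind multi-step prediction of chaotic series.
    """
    window = list(history[-embed_dim:])
    forecasts = []
    for _ in range(steps):
        nxt = one_step_model(window)
        forecasts.append(nxt)
        window = window[1:] + [nxt]   # slide the embedding window forward
    return forecasts

# Toy stand-in model: next value = 2*x_t - x_{t-1} (linear extrapolation).
model = lambda w: 2.0 * w[-1] - w[-2]
print(iterated_forecast(model, [1.0, 2.0, 3.0], steps=3, embed_dim=2))  # → [4.0, 5.0, 6.0]
```

Note how errors compound: each forecast becomes an input for the next step, which is why multi-step prediction of chaotic series is so much harder than one-step prediction.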

  5. A Practical Model for Forecasting New Freshman Enrollment during the Application Period.

    Science.gov (United States)

    Paulsen, Michael B.

    1989-01-01

    A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)

  6. Gas demand forecasting by a new artificial intelligent algorithm

    Science.gov (United States)

    Khatibi. B, Vahid; Khatibi, Elham

    2012-01-01

    Energy demand forecasting is a key issue for consumers and generators in all energy markets in the world. This paper presents a new forecasting algorithm for daily gas demand prediction. This algorithm combines a wavelet transform and forecasting models such as multi-layer perceptron (MLP), linear regression or GARCH. The proposed method is applied to real data from the UK gas markets to evaluate their performance. The results show that the forecasting accuracy is improved significantly by using the proposed method.
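A one-level Haar transform illustrates the wavelet decomposition step of such hybrid schemes; the paper does not specify its mother wavelet, so Haar is an assumption here:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Splits a series into approximation (smooth) and detail (fluctuation)
    coefficients — the kind of decomposition a hybrid forecaster applies
    before fitting MLP/regression/GARCH models to each sub-series.
    """
    assert len(signal) % 2 == 0, "pad to even length first"
    inv_sqrt2 = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * inv_sqrt2
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * inv_sqrt2
              for i in range(0, len(signal), 2)]
    return approx, detail

demand = [10.0, 12.0, 11.0, 13.0]   # toy daily gas demand
a, d = haar_dwt(demand)
# `a` carries the trend, `d` the high-frequency part; each sub-series is
# forecast separately and the transform is inverted to recombine them.
```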

  7. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    NARCIS (Netherlands)

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  8. Load forecasting method considering temperature effect for distribution network

    Directory of Open Access Journals (Sweden)

    Meng Xiao Fang

    2016-01-01

    Full Text Available To improve the accuracy of load forecasting, the temperature factor is introduced into load forecasting in this paper. The characteristics of power load variation are analyzed, and the relationship between load and temperature change is investigated. Based on linear regression analysis, a mathematical model of load forecasting considering the temperature effect is presented, and the steps of load forecasting are given. Using MATLAB, the temperature regression coefficient was calculated. With the load forecasting model, full-day load forecasting and time-sharing load forecasting were carried out. Comparison and analysis of the forecast errors showed that the error of the time-sharing load forecasting method was small. The forecasting method is an effective way to improve the accuracy of load forecasting.
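A minimal sketch of temperature-dependent, time-sharing load forecasting, assuming one load-vs-temperature regression per hour of day — our reading of the time-sharing scheme, with an illustrative data layout:

```python
def fit_ols(x, y):
    """Univariate least squares: returns (slope, intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def time_sharing_forecast(observations, queries):
    """Time-sharing load forecast: one load-vs-temperature regression
    per hour of day, evaluated at forecast temperatures.

    observations: iterable of (hour, temperature, load) tuples
    queries: iterable of (hour, forecast_temperature) tuples
    """
    by_hour = {}
    for hour, temp, load in observations:
        ts, ls = by_hour.setdefault(hour, ([], []))
        ts.append(temp)
        ls.append(load)
    coeffs = {h: fit_ols(ts, ls) for h, (ts, ls) in by_hour.items()}
    return [coeffs[h][0] * t + coeffs[h][1] for h, t in queries]

# Toy data: at noon, load rises 2 MW per degree above a 5 MW base.
history = [(12, t, 2.0 * t + 5.0) for t in (20.0, 25.0, 30.0)]
print(time_sharing_forecast(history, [(12, 28.0)]))  # → [61.0]
```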

  9. SPENT NUCLEAR FUEL NUMBER DENSITIES FOR MULTI-PURPOSE CANISTER CRITICALITY CALCULATIONS

    International Nuclear Information System (INIS)

    D. A. Thomas

    1996-01-01

    The purpose of this analysis is to calculate the number densities for spent nuclear fuel (SNF) to be used in criticality evaluations of the Multi-Purpose Canister (MPC) waste packages. The objective of this analysis is to provide material number density information which will be referenced by future MPC criticality design analyses, such as for those supporting the Conceptual Design Report
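The underlying number-density formula is simple to state: N = ρ·N_A/A, which criticality codes usually take in atoms/(barn·cm). The uranium-metal example below is illustrative, not a value from the report:

```python
AVOGADRO = 6.02214076e23   # atoms/mol

def number_density(density_g_cm3, atomic_mass_g_mol):
    """Atom number density in atoms/(barn*cm): N = rho * N_A / A,
    with the factor 1e-24 cm^2/barn folding in the unit conversion."""
    return density_g_cm3 * AVOGADRO / atomic_mass_g_mol * 1.0e-24

# Uranium metal at 19.0 g/cm^3 (illustrative, not an MPC design value):
print(round(number_density(19.0, 238.05), 5))  # ≈ 0.04807 atoms/(barn·cm)
```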

  10. Evaluation and Quality Control for the Copernicus Seasonal Forecast Systems

    Science.gov (United States)

    Manubens, N.; Hunter, A.; Bedia, J.; Bretonnière, P. A.; Bhend, J.; Doblas-Reyes, F. J.

    2017-12-01

    The EU funded Copernicus Climate Change Service (C3S) will provide authoritative information about past, current and future climate for a wide range of users, from climate scientists to stakeholders from a wide range of sectors including insurance, energy or transport. It has been recognized that providing information about the products' quality and provenance is paramount to establish trust in the service and allow users to make best use of the available information. This presentation outlines the work being conducted within the Quality Assurance for Multi-model Seasonal Forecast Products project (QA4Seas). The aim of QA4Seas is to develop a strategy for the evaluation and quality control (EQC) of the multi-model seasonal forecasts provided by C3S. First, we present the set of guidelines the data providers must comply with, ensuring the data is fully traceable and harmonized across data sets. Second, we discuss the ongoing work on defining a provenance and metadata model that is able to encode such information, and that can be extended to describe the steps followed to obtain the final verification products such as maps and time series of forecast quality measures. The metadata model is based on the Resource Description Framework W3C standard, being thus extensible and reusable. It benefits from widely adopted vocabularies to describe data provenance and workflows, as well as from expert consensus and community-support for the development of the verification and downscaling specific ontologies. Third, we describe the open source software being developed to generate fully reproducible and certifiable seasonal forecast products, which also attaches provenance and metadata information to the verification measures and enables the user to visually inspect the quality of the C3S products. 
QA4Seas is seeking collaboration with similar initiatives, as well as extending the discussion to interested parties outside the C3S community to share experiences and establish global

  11. Short-Term Wind Speed Forecasting Using Support Vector Regression Optimized by Cuckoo Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2015-01-01

    Full Text Available This paper develops an effective intelligent model to forecast short-term wind speed series. A hybrid forecasting technique is proposed based on recurrence plot (RP) and optimized support vector regression (SVR). Wind, caused by the interaction of meteorological systems, is extremely unsteady and difficult to forecast. To understand the wind system, the wind speed series is analyzed using RP. Then, the SVR model is employed to forecast wind speed, in which the input variables are selected by RP, and two crucial parameters, the penalty factor and the gamma of the RBF kernel function, are optimized by various optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), and the cuckoo optimization algorithm (COA). Finally, the optimized SVR models, COA-SVR, PSO-SVR, and GA-SVR, are evaluated based on several criteria and a hypothesis test. The experimental results show that (1) analysis of RP reveals that wind speed has predictability on a short-term time scale, (2) the performance of the COA-SVR model is superior to that of the PSO-SVR and GA-SVR methods, especially for the jumping samplings, and (3) the COA-SVR method is statistically robust in multi-step-ahead prediction and can be applied to practical wind farm applications.
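The cuckoo-search loop used to tune the SVR's penalty factor and RBF gamma can be sketched independently of SVR itself. Here the objective is a stand-in for a validation-error surface, and the Lévy flights are simplified to Gaussian steps — an assumption of this sketch, not the paper's exact operator:

```python
import random

def cuckoo_search(objective, bounds, n_nests=15, iters=300, pa=0.25, seed=1):
    """Minimal cuckoo-search sketch for hyperparameter tuning.

    In the paper's setting `objective` would be SVR validation error over
    (penalty C, RBF gamma); here it is any function over box bounds. Each
    iteration lays one new cuckoo egg via a random walk and abandons a
    fraction `pa` of the worst nests.
    """
    rng = random.Random(seed)
    rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    clip = lambda p: [min(max(v, lo), hi) for v, (lo, hi) in zip(p, bounds)]
    nests = [rand_point() for _ in range(n_nests)]
    fitness = [objective(p) for p in nests]
    for _ in range(iters):
        # New candidate by a Gaussian random walk from a random nest.
        i = rng.randrange(n_nests)
        cand = clip([v + rng.gauss(0.0, 0.1 * (hi - lo))
                     for v, (lo, hi) in zip(nests[i], bounds)])
        fc = objective(cand)
        j = rng.randrange(n_nests)       # compare against a random nest
        if fc < fitness[j]:
            nests[j], fitness[j] = cand, fc
        # Abandon and resample the worst pa-fraction of nests.
        worst = sorted(range(n_nests), key=fitness.__getitem__, reverse=True)
        for k in worst[:int(pa * n_nests)]:
            nests[k] = rand_point()
            fitness[k] = objective(nests[k])
    best = min(range(n_nests), key=fitness.__getitem__)
    return nests[best], fitness[best]

# Stand-in objective with optimum at (1, -2), e.g. a validation-error surface.
point, value = cuckoo_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                             bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```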

  12. Metaphase II oocytes from human unilaminar follicles grown in a multi-step culture system.

    Science.gov (United States)

    McLaughlin, M; Albertini, D F; Wallace, W H B; Anderson, R A; Telfer, E E

    2018-03-01

    Can complete oocyte development be achieved from human ovarian tissue containing primordial/unilaminar follicles and grown in vitro in a multi-step culture to meiotic maturation demonstrated by the formation of polar bodies and a Metaphase II spindle? Development of human oocytes from primordial/unilaminar stages to resumption of meiosis (Metaphase II) and emission of a polar body was achieved within a serum free multi-step culture system. Complete development of oocytes in vitro has been achieved in mouse, where in vitro grown (IVG) oocytes from primordial follicles have resulted in the production of live offspring. Human oocytes have been grown in vitro from the secondary/multi-laminar stage to obtain fully grown oocytes capable of meiotic maturation. However, there are no reports of a culture system supporting complete growth from the earliest stages of human follicle development through to Metaphase II. Ovarian cortical biopsies were obtained with informed consent from women undergoing elective caesarean section (mean age: 30.7 ± 1.7; range: 25-39 years, n = 10). Laboratory setting. Ovarian biopsies were dissected into thin strips, and after removal of growing follicles were cultured in serum free medium for 8 days (Step 1). At the end of this period secondary/multi-laminar follicles were dissected from the strips and intact follicles 100-150 μm in diameter were selected for further culture. Isolated follicles were cultured individually in serum free medium in the presence of 100 ng/ml of human recombinant Activin A (Step 2). Individual follicles were monitored and after 8 days, cumulus oocyte complexes (COCs) were retrieved by gentle pressure on the cultured follicles. Complexes with complete cumulus and adherent mural granulosa cells were selected and cultured in the presence of Activin A and FSH on membranes for a further 4 days (Step 3). At the end of Step 3, complexes containing oocytes >100 μm diameter were selected for IVM in SAGE medium (Step 4) then

  13. Case Study: A Real-Time Flood Forecasting System with Predictive Uncertainty Estimation for the Godavari River, India

    Directory of Open Access Journals (Sweden)

    Silvia Barbetta

    2016-10-01

    Full Text Available This work presents the application of the multi-temporal approach of the Model Conditional Processor (MCP-MT) for predictive uncertainty (PU) estimation in the Godavari River basin, India. MCP-MT is developed for probabilistic Bayesian decision-making. It is the most appropriate approach if the uncertainty of future outcomes is to be considered. It yields the best predictive density of future events and allows determining the probability that a critical warning threshold may be exceeded within a given forecast time. In Bayesian decision-making, the predictive density represents the best available knowledge on a future event to address a rational decision-making process. MCP-MT has already been tested on case studies selected in Italian river basins, showing evidence of improvement in the effectiveness of operational real-time flood forecasting systems. The application of MCP-MT to two river reaches selected in the Godavari River basin, India, is here presented and discussed by considering the stage forecasts provided by a deterministic model, STAFOM-RCM, and an hourly dataset based on seven monsoon seasons in the period 2001–2010. The results show that the PU estimate is useful for finding the exceedance probability for a given hydrometric threshold as a function of the forecast time up to 24 h, demonstrating its potential usefulness for supporting real-time decision-making. Moreover, the expected value provided by MCP-MT yields better results than the deterministic model predictions, with higher Nash–Sutcliffe coefficients and lower errors on stage forecasts, both in terms of mean error and standard deviation and in root mean square error.
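Reading an exceedance probability off a predictive density is a one-liner once the density is fixed; a Gaussian is assumed below purely for illustration (MCP-MT produces its own, generally non-Gaussian, predictive density, and the numbers are ours):

```python
import math

def exceedance_probability(mean, std, threshold):
    """P[stage > threshold] under a Gaussian predictive density.

    This is the kind of warning-threshold quantity a predictive-uncertainty
    processor supplies to decision makers; the Gaussian form is purely
    illustrative.
    """
    z = (threshold - mean) / std
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Forecast stage 4.2 m with 0.3 m spread against a 4.5 m warning threshold:
print(round(exceedance_probability(4.2, 0.3, 4.5), 3))  # → 0.159
```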

  14. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model...... applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...... and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently...

  15. Roles of multi-step transfer in fusion process induced by heavy-ion reactions

    International Nuclear Information System (INIS)

    Imanishi, B.; Oertzen, W. von.

    1993-06-01

    In nucleus-nucleus collisions of the systems 12C + 13C and 13C + 16O – 12C + 17O, the effects of multi-step transfers and inelastic excitations on the fusion cross sections are investigated in the framework of the coupled-reaction-channel (CRC) method. Strong CRC effects of the multi-step processes are observed: the valence neutron in 13C or 17O plays an important role in the enhancement of fusion. The potential barrier is effectively lowered by the formation of a covalent molecule with the configuration 12C + n + 12C or 12C + n + 16O. In the analysis of the 12C + 13C system, however, it is still necessary to introduce a core-core optical potential with a lower barrier height in the positive-total-parity state. This could be due to neck formation involving the nucleons contained in the two core nuclei. (author)

  16. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    Science.gov (United States)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, at marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints from future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
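
The Gaussianization step at the heart of this approach can be sketched in a few lines. The snippet below is an illustration, not the authors' code: it fits the one-parameter Box-Cox transform to skewed samples (hypothetical lognormal draws standing in for a non-Gaussian posterior) by grid-searching the profile log-likelihood, yielding the approximately Gaussian variable on which a Fisher analysis would then be run.

```python
import numpy as np

def box_cox(x, lam):
    """One-parameter Box-Cox transform (x must be positive)."""
    if abs(lam) < 1e-8:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def box_cox_loglik(x, lam):
    """Profile log-likelihood of a Gaussian fit to the transformed data."""
    y = box_cox(x, lam)
    n = len(x)
    return -0.5 * n * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=5000)  # skewed "posterior" samples

# Grid search for the maximum-likelihood Box-Cox parameter
lams = np.linspace(-2, 2, 401)
best_lam = lams[np.argmax([box_cox_loglik(x, l) for l in lams])]
y = box_cox(x, best_lam)  # approximately Gaussian; Fisher analysis is done on y
```

For lognormal data the optimal lambda is close to zero (the log transform), and the skewness of the transformed samples drops accordingly.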

  17. Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty

    Science.gov (United States)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In Bavaria, operational flood forecasting has been established since the disastrous flood of 1999. Nowadays, forecasts based on rainfall information from about 700 raingauges and 600 rivergauges are calculated and issued for nearly 100 rivergauges. With the added experience of the 2002 and 2005 floods, awareness grew that the standard deterministic forecast, which neglects the uncertainty associated with each forecast, is misleading and creates a false sense of unambiguousness. As a consequence, a system to identify, quantify and communicate the sources and magnitude of forecast uncertainty has been developed, which is presented in part I of this study. In this system, the use of ensemble meteorological forecasts plays a key role, which is presented in part II. In developing the system, several constraints stemming from the range of hydrological regimes and from operational requirements had to be met. Firstly, operational time constraints preclude varying all components of the modeling chain as would be done in a full Monte Carlo simulation. Therefore, an approach was chosen in which only the most relevant sources of uncertainty are dynamically considered, while the others are jointly accounted for by static error distributions from offline analysis. Secondly, the dominant sources of uncertainty vary over the wide range of forecasted catchments: in alpine headwater catchments, typically a few hundred square kilometers in size, rainfall forecast uncertainty is the key factor, with a magnitude that changes dynamically with the prevailing predictability of the atmosphere. In lowland catchments encompassing several thousand square kilometers, forecast uncertainty over the desired lead time (usually up to two days) depends mainly on upstream gauge observation quality, routing and unpredictable human impacts such as reservoir operation. The determination of forecast uncertainty comprised the following steps: a) From comparison of gauge

  18. Forecasting production in Liquid Rich Shale plays

    Science.gov (United States)

    Nikfarman, Hanieh

    Production from Liquid Rich Shale (LRS) reservoirs is taking center stage in the exploration and production of unconventional reservoirs. Production from low and ultra-low permeability LRS plays is possible only through multi-fractured horizontal wells (MFHWs). No existing workflow is applicable to forecasting multi-phase production from MFHWs in LRS plays. This project presents a practical and rigorous workflow for forecasting multiphase production from MFHWs in LRS reservoirs. There has been much effort in recent years to develop workflows and methodologies for forecasting in tight/shale plays. The existing workflows, however, are applicable only to single-phase flow and are used primarily in shale gas plays. These methodologies do not apply to the multi-phase flow that is inevitable in LRS plays. To account for the complexities of multiphase flow in MFHWs, the only available technique is dynamic modeling in compositional numerical simulators, which is time consuming and impractical when forecasting production and estimating reserves for a large number of producers. A workflow was developed and validated by compositional numerical simulation. The workflow honors the physics of flow and is sufficiently accurate yet practical, so that an analyst can readily apply it to forecast production and estimate reserves for a large number of producers in a short period of time. To simplify the complex multiphase flow in MFHWs, the workflow divides production into an initial period, where large production and pressure declines are expected, and a subsequent period, where production decline may converge to a common trend for a number of producers across an area of interest in the field. The initial period assumes production is dominated by single-phase flow of oil and uses the tri-linear flow model of Erdal Ozkan to estimate the production history. Commercial software readily available can simulate flow and forecast production in this

  19. Development of an Experimental African Drought Monitoring and Seasonal Forecasting System: A First Step towards a Global Drought Information System

    Science.gov (United States)

    Wood, E. F.; Chaney, N.; Sheffield, J.; Yuan, X.

    2012-12-01

    Extreme hydrologic events in the form of droughts are a significant source of social and economic damage. Internationally, organizations such as UNESCO, the Group on Earth Observations (GEO), and the World Climate Research Programme (WCRP) have recognized the need for drought monitoring, especially for the developing world where drought has had devastating impacts on local populations through food insecurity and famine. Having the capacity to monitor droughts in real-time, and to provide drought forecasts with sufficient warning will help developing countries and international programs move from the management of drought crises to the management of drought risk. While observation-based assessments, such as those produced by the US Drought Monitor, are effective for monitoring in countries with extensive observation networks (of precipitation in particular), their utility is lessened in areas (e.g., Africa) where observing networks are sparse. For countries with sparse networks and weak reporting systems, remote sensing observations can provide the real-time data for the monitoring of drought. More importantly, these datasets are now available for at least a decade, which allows for the construction of a climatology against which current conditions can be compared. In this presentation we discuss the development of our multi-lingual experimental African Drought Monitor (ADM) (see http://hydrology.princeton.edu/~nchaney/ADM_ML). At the request of UNESCO, the ADM system has been installed at AGRHYMET, a regional climate and agricultural center in Niamey, Niger and at the ICPAC climate center in Nairobi, Kenya. The ADM system leverages off our U.S. drought monitoring and forecasting system (http://hydrology.princeton.edu/forecasting) that uses the NLDAS data to force the VIC land surface model (LSM) at 1/8th degree spatial resolution for the estimation of our soil moisture drought index (Sheffield et al., 2004). For the seasonal forecast of drought, CFSv2 climate

  20. Four methodologies to improve healthcare demand forecasting.

    Science.gov (United States)

    Côté, M J; Tucker, S L

    2001-05-01

    Forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. This task, which often is assumed by financial managers, first requires the compilation and examination of historical information. Although many quantitative forecasting methods exist, four common methods of forecasting are percent adjustment, 12-month moving average, trendline, and seasonalized forecast. These four methods are all based upon the organization's recent historical demand. Healthcare financial managers who want to project demand for healthcare services in their facility should understand the advantages and disadvantages of each method and then select the method that will best meet the organization's needs.
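
Three of the four methods named above are simple enough to sketch directly. The snippet below is an illustration with made-up monthly demand figures, not code from the article; the 5% growth factor in the percent-adjustment forecast is likewise an assumed value.

```python
import numpy as np

# Hypothetical monthly visit counts for two years
demand = np.array([210, 198, 225, 240, 238, 252, 260, 255, 247, 238, 226, 219,
                   221, 208, 236, 252, 250, 265, 273, 268, 259, 250, 237, 230],
                  float)

# 12-month moving average: forecast = mean of the last 12 observations
ma_forecast = demand[-12:].mean()

# Trendline: least-squares line through the history, extrapolated one month ahead
t = np.arange(len(demand))
slope, intercept = np.polyfit(t, demand, 1)
trend_forecast = slope * len(demand) + intercept

# Percent adjustment: last year's same-month value scaled by an assumed growth rate
pct_forecast = demand[-12] * 1.05
```

Each method trades simplicity against responsiveness: the moving average smooths noise but lags a trend, the trendline captures growth but ignores seasonality, and percent adjustment is only as good as the assumed growth rate.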

  1. Can we use Earth Observations to improve monthly water level forecasts?

    Science.gov (United States)

    Slater, L. J.; Villarini, G.

    2017-12-01

    Dynamical-statistical hydrologic forecasting approaches benefit from different strengths in comparison with traditional hydrologic forecasting systems: they are computationally efficient, can integrate and `learn' from a broad selection of input data (e.g., General Circulation Model (GCM) forecasts, Earth Observation time series, teleconnection patterns), and can take advantage of recent progress in machine learning (e.g. multi-model blending, post-processing and ensembling techniques). Recent efforts to develop a dynamical-statistical ensemble approach for forecasting seasonal streamflow using both GCM forecasts and changing land cover have shown promising results over the U.S. Midwest. Here, we use climate forecasts from several GCMs of the North American Multi Model Ensemble (NMME) alongside 15-minute stage time series from the National River Flow Archive (NRFA) and land cover classes extracted from the European Space Agency's Climate Change Initiative 300 m annual Global Land Cover time series. With these data, we conduct systematic long-range probabilistic forecasting of monthly water levels in UK catchments over timescales ranging from one to twelve months ahead. We evaluate the improvement in model fit and model forecasting skill that comes from using land cover classes as predictors in the models. This work opens up new possibilities for combining Earth Observation time series with GCM forecasts to predict a variety of hazards from space using data science techniques.

  2. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
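
The "externally applied classical seasonal decomposition" idea can be sketched on a synthetic monthly series. This is an illustration, not the study's code: additive decomposition with a centred 12-month moving average gives the seasonal indices, simple exponential smoothing is run on the deseasonalized series, and the indices are added back to produce a 12-step-ahead forecast.

```python
import numpy as np

def classical_seasonal_indices(x, period=12):
    """Additive classical decomposition: seasonal index of each month is the
    mean deviation from a centred (2x12) moving-average trend."""
    n = len(x)
    trend = np.full(n, np.nan)
    half = period // 2
    for i in range(half, n - half):
        window = x[i - half:i + half + 1].copy()
        window[0] *= 0.5            # half weight on the two endpoints
        window[-1] *= 0.5
        trend[i] = window.sum() / period
    detrended = x - trend
    seasonal = np.array([np.nanmean(detrended[m::period]) for m in range(period)])
    return seasonal - seasonal.mean()   # indices sum to zero

def ses(x, alpha=0.3):
    """Simple exponential smoothing; returns the final level."""
    level = x[0]
    for v in x[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

rng = np.random.default_rng(1)
months = np.arange(120)
series = 15.0 + 8.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, 120)

seasonal = classical_seasonal_indices(series)
level = ses(series - np.tile(seasonal, 10))          # smooth the deseasonalized series
forecast = level + seasonal[np.arange(120, 132) % 12]  # 12 steps ahead
```

On this synthetic series the indices recover the imposed amplitude-8 seasonal cycle, illustrating why decomposing externally before applying a simple smoother can work well for strongly seasonal hydrometeorological series.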

  3. Device for forecasting reactor power-up routes

    International Nuclear Information System (INIS)

    Fukuzaki, Takaharu.

    1980-01-01

    Purpose: To improve the reliability and forecasting accuracy of a device that forecasts state changes on line in BWR type reactors. Constitution: The present state of the reactor is estimated in a present-state judging section based on measurement signals for thermal power, core flow rate, control rod density and the like from the reactor, and the estimated results are accumulated in an operation result collecting section. Meanwhile, a forecasting section forecasts the future state of the reactor based on signals from the forecasting condition setting section. The actual values from the collecting section and the forecast results are compared with each other; if they are not equal, new setting signals are output from the setting section and the forecast is performed again. These procedures are repeated until the difference between the forecast results and the actual values is minimized, making accurate forecasting of the reactor state possible. (Furukawa, Y.)

  4. The multi-step prompt particle emission from fission fragments

    International Nuclear Information System (INIS)

    Zhivopistsev, A.; Oprea, C.; Oprea, I.

    2003-01-01

    The purpose of this work is the study of non-equilibrium high-energy gamma emission from 252Cf within the framework of the formalism of statistical multi-step compound processes in nuclear reactions. A relation was found between the shape of the high-energy part of the gamma spectrum and the different mechanisms of excitation of the fission fragments. Agreement with experimental data for different groups of fission fragments was obtained. Analysis of the experimental high-energy part of the gamma spectra yields information about the mechanism of excitation of the fission fragments. The influence of dissipation of the deformation excess on the intrinsic excitation of the fission fragments was also studied. (authors)

  5. The average angular distribution of emitted particles in multi-step compound processes

    International Nuclear Information System (INIS)

    Bonetti, R.; Carlson, B.V.; Hussein, M.S.; Toledo, A.S. de

    1983-05-01

    A simple model is constructed for the differential cross-section that describes the angular distribution of particles emitted in heavy-ion induced multi-step compound reactions. It is suggested that a careful analysis of the deviations of the experimental data from pure Hauser-Feshbach behaviour may shed light on the physical nature of the pre-compound heavy-ion configuration. (Author) [pt

  6. Diffusion coefficients for periodically induced multi-step persistent walks on regular lattices

    International Nuclear Information System (INIS)

    Gilbert, Thomas; Sanders, David P

    2012-01-01

    We present a generalization of our formalism for the computation of diffusion coefficients of multi-step persistent random walks on regular lattices to walks which include zero-displacement states. This situation is especially relevant to systems where tracer particles move across potential barriers as a result of the action of a periodic forcing whose period sets the timescale between transitions. (paper)
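
The quantity being computed can be illustrated numerically. The sketch below is a Monte Carlo estimate, not the paper's analytic formalism: a 1-D persistent walk that includes a zero-displacement state, with the diffusion coefficient read off from the mean-squared displacement; all transition probabilities are made-up illustrative values.

```python
import numpy as np

def simulate_persistent_walk(n_steps, p_persist=0.6, p_stay=0.2, seed=0):
    """1-D persistent walk with a zero-displacement state: at each step the
    walker stays put with probability p_stay; otherwise it repeats its previous
    direction with probability p_persist, else it reverses."""
    rng = np.random.default_rng(seed)
    x, direction = 0.0, 1
    for _ in range(n_steps):
        if rng.random() < p_stay:
            continue                      # zero-displacement state; direction kept
        if rng.random() >= p_persist:
            direction = -direction
        x += direction
    return x

# Estimate D from the mean-squared displacement <x^2> ~ 2 D t
n_walkers, n_steps = 2000, 500
msd = np.mean([simulate_persistent_walk(n_steps, seed=s) ** 2
               for s in range(n_walkers)])
D = msd / (2 * n_steps)
```

With these parameters a back-of-envelope velocity-autocorrelation argument gives D around 0.75, which the Monte Carlo estimate reproduces to within sampling error.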

  7. Tailored ramp-loading via shock release of stepped-density reservoirs

    International Nuclear Information System (INIS)

    Prisbrey, Shon T.; Park, Hye-Sook; Remington, Bruce A.; Cavallo, Robert; May, Mark; Pollaine, Stephen M.; Rudd, Robert; Maddox, Brian; Comley, Andrew; Fried, Larry; Blobaum, Kerri; Wallace, Russ; Wilson, Mike; Swift, David; Satcher, Joe; Kalantar, Dan; Perry, Ted; Giraldez, Emilio; Farrell, Michael; Nikroo, Abbas

    2012-01-01

    The concept of a gradient piston drive has been extended from a single-component reservoir, such as a high explosive, to a multi-component reservoir that utilizes low-density foams and large shocks to achieve high pressures (∼3.5 Mbar) and controlled pressure-versus-time profiles on a driven sample. Simulated and experimental drives shaped through the use of multiple-component reservoirs (including carbonized resorcinol formaldehyde and SiO2 foam) are compared. Individual density layers in a multiple-component reservoir are shown to correlate with velocity features in the measured drive, which enables a pressure drive to be tuned by adjusting the components of the reservoir. Pre-shot simulations are shown to be in rough agreement with the data, but post-shot simulations using simulated plasma drives were needed to achieve an exact match. Results from a multiple-component reservoir shot (∼3.5 Mbar) at the National Ignition Facility are shown.

  8. Assimilating the Future for Better Forecasts and Earlier Warnings

    Science.gov (United States)

    Du, H.; Wheatcroft, E.; Smith, L. A.

    2016-12-01

    Multi-model ensembles have become popular tools for accounting for some of the uncertainty, due to model inadequacy, in weather and climate simulation-based predictions. Current multi-model forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. As each model is developed independently, or with different primary target variables, each is likely to have different dynamical strengths and weaknesses. With statistical post-processing, such information is carried only by the simulations under a single model's ensemble: no advantage is taken to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme with the aim of operationally integrating the dynamical information about the future from each individual model. The proposed approach generates model states in time by applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches were originally designed to improve state estimation from the past up to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future times (not to argue for any one particular data assimilation scheme). An illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.
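
The 40-dimensional Lorenz96 flow used as the test bed is easy to reproduce. The sketch below is a standard implementation (forcing F = 8, RK4 time stepping), not the authors' code; it shows the sensitive dependence on initial conditions that makes Lorenz96 a useful forecasting benchmark.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Lorenz96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.05):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2)
    k4 = lorenz96_rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Spin up a 40-variable state onto the attractor, then perturb it slightly
x = 8.0 * np.ones(40)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x)
x_pert = x.copy()
x_pert[0] += 1e-4
for _ in range(100):                 # 5 model time units
    x = rk4_step(x)
    x_pert = rk4_step(x_pert)
error = np.linalg.norm(x - x_pert)   # grows by orders of magnitude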

  9. Multi-step ahead nonlinear identification of Lorenz's chaotic system using radial basis neural network with learning by clustering and particle swarm optimization

    International Nuclear Information System (INIS)

    Guerra, Fabio A.; Coelho, Leandro dos S.

    2008-01-01

    An important problem in engineering is the identification of nonlinear systems; among the available models, radial basis function neural networks (RBF-NN) with Gaussian activation functions have received particular attention due to their potential to approximate nonlinear behavior. Several design methods have been proposed for choosing the centers and spreads of the Gaussian functions and for training the RBF-NN. The selection of RBF-NN parameters such as centers, spreads, and weights can be understood as a system identification problem. This paper presents a hybrid training approach based on clustering methods (k-means and c-means) to tune the centers of the Gaussian functions used in the hidden layer of RBF-NNs. The design also uses particle swarm optimization (PSO) for center tuning (a local clustering search method) and spread tuning, and the Moore-Penrose pseudoinverse for the adjustment of the RBF-NN output weights. Simulations using this RBF-NN design to identify Lorenz's chaotic system indicate that the performance of the proposed method is superior to that of a conventional RBF-NN trained with k-means and the Moore-Penrose pseudoinverse for multi-step ahead forecasting.
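
A stripped-down version of this design can be sketched as follows. This is an illustration, not the paper's implementation: it uses k-means for the centers, a simple inter-center distance heuristic for the spread in place of PSO tuning, and the Moore-Penrose pseudoinverse for the output weights; the identified system here is a toy noisy sine, not Lorenz's system.

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain k-means on 1-D data to place the RBF centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

def rbf_design(x, centers, spread):
    """Gaussian hidden-layer activations for each input/center pair."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * spread ** 2))

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(-3, 3, 200))
y = np.sin(x) + 0.05 * rng.normal(size=200)     # toy system to identify

centers = kmeans_1d(x, k=10)
spread = np.diff(centers).mean()                # heuristic in place of PSO
H = rbf_design(x, centers, spread)
w = np.linalg.pinv(H) @ y                       # Moore-Penrose output weights
rmse = np.sqrt(np.mean((H @ w - y) ** 2))
```

The pseudoinverse step is what makes the output layer a linear least-squares problem once the centers and spreads are fixed, which is why the hybrid clustering-plus-PSO search only has to optimize the hidden layer.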

  10. On probabilistic forecasting of wind power time-series

    DEFF Research Database (Denmark)

    Pinson, Pierre

    power dynamics. In both cases, the model parameters are adaptively and recursively estimated, time-adaptivity being the result of exponential forgetting of past observations. The probabilistic forecasting methodology is applied at the Horns Rev wind farm in Denmark, for 10-minute-ahead probabilistic...... forecasting of wind power generation. Probabilistic forecasts generated from the proposed methodology clearly have higher skill than those obtained from a classical Gaussian assumption about wind power predictive densities. Corresponding point forecasts also exhibit significantly lower error criteria....

  11. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  12. Multi-Scale Enviro-HIRLAM Forecasting of Weather and Atmospheric Composition over China and its Megacities

    Science.gov (United States)

    Mahura, Alexander; Amstrup, Bjarne; Nuterman, Roman; Yang, Xiaohua; Baklanov, Alexander

    2017-04-01

    Air pollution is a serious problem in different regions of China and in its continuously growing megacities. Information on air quality, especially in urbanized areas, is important for decision making, emergency response and the population. In particular, the metropolitan areas of Shanghai, Beijing, and the Pearl River Delta are well known as regions with serious air pollution problems. The on-line integrated meteorology-chemistry-aerosol model Enviro-HIRLAM (Environment - HIgh Resolution Limited Area Model), adapted for China and selected megacities, is applied to forecast weather and atmospheric composition (with a focus on aerosols). The model system runs in a downscaling chain from regional to urban scales at subsequent horizontal resolutions of 15-5-2.5 km. The model setup also includes the urban Building Effects Parameterization module, describing different types of urban districts (industrial, commercial, city center, high-density and residential), each with its own morphological and aerodynamical characteristics. The effects of urbanization are important for atmospheric transport, dispersion, deposition, and chemical transformations, as are better-quality emission inventories for China and the selected urban areas. The Enviro-HIRLAM system provides meteorology and air quality forecasts at regional-subregional-urban scales (China - East China - selected megacities). Such forecasting is particularly important for metropolitan areas, where the formation and development of meteorological and chemical/aerosol patterns are especially complex. It also provides information for evaluating impacts on selected megacities of China as well as for investigating the relationship between air pollution and meteorology.

  13. Using Analog Ensemble to generate spatially downscaled probabilistic wind power forecasts

    Science.gov (United States)

    Delle Monache, L.; Shahriari, M.; Cervone, G.

    2017-12-01

    We use the Analog Ensemble (AnEn) method to generate probabilistic 80-m wind power forecasts. We use forecast data from the NCEP GFS (∼28 km resolution) and NCEP NAM (12 km resolution), and analysis data from NAM, which enables us to: 1) use a lower-resolution model to create higher-resolution forecasts, and 2) use a higher-resolution model to create higher-resolution forecasts. The former essentially increases computing speed and the latter increases forecast accuracy. An aggregated model of the former can be compared against the latter to measure the accuracy of the AnEn spatial downscaling. The AnEn works by taking a deterministic future forecast and comparing it with past forecasts. The model searches for the best matching estimates within the past forecasts and selects the predictand values corresponding to these past forecasts as the ensemble prediction for the future forecast. Our study is based on predicting wind speed and air density at more than 13,000 grid points in the continental US. We run the AnEn model twice: 1) estimating 80-m wind speed using predictor variables such as temperature, pressure, geopotential height, and the U- and V-components of wind; 2) estimating air density using predictors such as temperature, pressure, and relative humidity. We use the air density values to correct the standard wind power curves for different values of air density. The standard deviation of the ensemble members (i.e., the ensemble spread) is used as the degree of difficulty of predicting wind power at different locations. The value of the correlation coefficient between the ensemble spread and the forecast error determines the appropriateness of this measure. This measure is prominent for wind farm developers, as building wind farms in regions with higher predictability will reduce the real-time risks of operating in the electricity markets.
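
The analog search itself is simple to sketch. The code below is an illustration with synthetic data, not the study's implementation: past forecasts are standardized, the past forecasts most similar to a new deterministic forecast are located by Euclidean distance, and the corresponding observations form the ensemble. The predictor names and the link between observations and one predictor are assumed for the demonstration.

```python
import numpy as np

def analog_ensemble(past_forecasts, past_obs, new_forecast, n_members=20):
    """Return the observations matching the past forecasts most similar to
    the new forecast (distances computed on standardized predictors)."""
    mu = past_forecasts.mean(axis=0)
    sd = past_forecasts.std(axis=0)
    d = np.linalg.norm((past_forecasts - mu) / sd - (new_forecast - mu) / sd,
                       axis=1)
    best = np.argsort(d)[:n_members]
    return past_obs[best]

rng = np.random.default_rng(3)
# Hypothetical archive: predictors = (temperature, pressure anomaly, 80-m wind forecast)
past_fc = rng.normal(size=(5000, 3))
# Synthetic observed wind, tied to the third predictor plus noise
obs_wind = 5.0 + 2.0 * past_fc[:, 2] + rng.normal(0, 0.5, 5000)

new_fc = np.array([0.1, -0.3, 1.0])
ens = analog_ensemble(past_fc, obs_wind, new_fc)
mean, spread = ens.mean(), ens.std()
```

Because the ensemble members are real past observations, the spread naturally reflects how reproducible similar forecast situations were, which is the property used above as a predictability measure.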

  14. Development of a multi-sensor based urban discharge forecasting system using remotely sensed data: A case study of extreme rainfall in South Korea

    Science.gov (United States)

    Yoon, Sunkwon; Jang, Sangmin; Park, Kyungwon

    2017-04-01

    Extreme weather due to a changing climate is a main source of water-related disasters such as flooding and inundation, and the resulting damage is expected to accelerate in many parts of the world. To prevent water-related disasters and mitigate their damage in urban areas, we developed a multi-sensor based real-time discharge forecasting system using remotely sensed data from radar and satellite. We used the Communication, Ocean and Meteorological Satellite (COMS) and the Korea Meteorological Administration (KMA) weather radar for quantitative precipitation estimation. The Automatic Weather System (AWS) and the McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation (MAPLE) were used for verification of rainfall accuracy. The tropical Z-R relationship (Z = 32R^1.65) was applied as the optimal Z-R relation, and accuracy was confirmed to improve for extreme rainfall events. In addition, the performance of the blended multi-sensor rainfall was improved for heavy rainfall events of 60 mm/h and stronger. Moreover, urban discharge was forecast using the Storm Water Management Model (SWMM). Several statistical measures were used to assess the model simulation against observed discharge; in terms of the correlation coefficient and r-squared, observed and forecasted discharge were highly correlated. Based on this study, we demonstrated the possibility of a real-time urban discharge forecasting system using remotely sensed data and its utilization for real-time flood warning. Acknowledgement: This research was supported by a grant (13AWMP-B066744-01) from the Advanced Water Management Research Program (AWMP) funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government.
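
Inverting the tropical Z-R relationship to obtain rain rate from radar reflectivity is a one-line calculation; the sketch below shows only that dBZ-to-rain-rate conversion, not the full forecasting system.

```python
import math

def rain_rate_from_reflectivity(dbz, a=32.0, b=1.65):
    """Invert Z = a * R**b, where Z is the linear reflectivity factor
    (mm^6/m^3) recovered from its logarithmic dBZ form."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

rate_50dbz = rain_rate_from_reflectivity(50.0)  # heavy convective echo, mm/h
```

Relative to the classic Marshall-Palmer relation (Z = 200R^1.6), the tropical coefficients assign a much higher rain rate to the same strong echo, which is consistent with the improved accuracy reported for extreme rainfall events.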

  15. Application of the largest Lyapunov exponent and non-linear fractal extrapolation algorithm to short-term load forecasting

    International Nuclear Information System (INIS)

    Wang Jianzhou; Jia Ruiling; Zhao Weigang; Wu Jie; Dong Yao

    2012-01-01

    Highlights: ► The maximal predictive step size is determined by the largest Lyapunov exponent. ► A proper forecasting step size is applied to load demand forecasting. ► The improved approach is validated by the actual load demand data. ► Non-linear fractal extrapolation method is compared with three forecasting models. ► Performance of the models is evaluated by three different error measures. - Abstract: Precise short-term load forecasting (STLF) plays a key role in unit commitment, maintenance and economic dispatch problems. Employing a subjective and arbitrary predictive step size is one of the most important factors causing low forecasting accuracy. To solve this problem, the largest Lyapunov exponent is adopted to estimate the maximal predictive step size so that the step size in the forecasting is no more than this maximal one. In addition, in this paper a seldom used forecasting model, which is based on the non-linear fractal extrapolation (NLFE) algorithm, is considered to improve the accuracy of predictions. The suitability and superiority of the two solutions are illustrated through an application to real load forecasting using New South Wales electricity load data from the Australian National Electricity Market. Meanwhile, three forecasting models: the gray model, the seasonal autoregressive integrated moving average approach and the support vector machine method, which received high approval in STLF, are selected to compare with the NLFE algorithm. Comparison results also show that the NLFE model is outstanding, effective, practical and feasible.
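
    The way the largest Lyapunov exponent bounds the predictive step size can be sketched with the usual rule of thumb that a small initial error grows like exp(λt); this is a generic estimate, not the paper's exact procedure:

    ```python
    import math

    def max_prediction_steps(lyap, initial_err, tol_err, dt=1.0):
        """Largest usable forecast horizon implied by the largest Lyapunov
        exponent `lyap` (per unit time): an initial error delta0 grows roughly
        as delta0 * exp(lyap * t), so forecasts stay below the tolerance for
        about t = ln(tol_err / initial_err) / lyap. Returned in steps of dt."""
        if lyap <= 0:
            return float("inf")  # non-chaotic series: no exponential error growth
        t_max = math.log(tol_err / initial_err) / lyap
        return int(t_max / dt)
    ```

    E.g. with λ = 0.5 per step, a 1% initial error and a 100% tolerance give a horizon of 9 steps; forecasting beyond such a bound is what the paper identifies as a subjective, arbitrary step size.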

  16. A multi-scale ensemble-based framework for forecasting compound coastal-riverine flooding: The Hackensack-Passaic watershed and Newark Bay

    Science.gov (United States)

    Saleh, F.; Ramaswamy, V.; Wang, Y.; Georgas, N.; Blumberg, A.; Pullen, J.

    2017-12-01

    Estuarine regions can experience compound impacts from coastal storm surge and riverine flooding. The challenges in forecasting flooding in such areas are multi-faceted due to uncertainties associated with meteorological drivers and interactions between hydrological and coastal processes. The objective of this work is to evaluate how uncertainties from meteorological predictions propagate through an ensemble-based flood prediction framework and translate into uncertainties in simulated inundation extents. A multi-scale framework, consisting of hydrologic, coastal and hydrodynamic models, was used to simulate two extreme flood events at the confluence of the Passaic and Hackensack rivers and Newark Bay. The events were Hurricane Irene (2011), a combination of inland flooding and coastal storm surge, and Hurricane Sandy (2012) where coastal storm surge was the dominant component. The hydrodynamic component of the framework was first forced with measured streamflow and ocean water level data to establish baseline inundation extents with the best available forcing data. The coastal and hydrologic models were then forced with meteorological predictions from 21 ensemble members of the Global Ensemble Forecast System (GEFS) to retrospectively represent potential future conditions up to 96 hours prior to the events. Inundation extents produced by the hydrodynamic model, forced with the 95th percentile of the ensemble-based coastal and hydrologic boundary conditions, were in good agreement with baseline conditions for both events. The USGS reanalysis of Hurricane Sandy inundation extents was encapsulated between the 50th and 95th percentile of the forecasted inundation extents, and that of Hurricane Irene was similar but with caveats associated with data availability and reliability. This work highlights the importance of accounting for meteorological uncertainty to represent a range of possible future inundation extents at high resolution (∼m).
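
    A minimal sketch of the ensemble reduction described above, e.g. taking the 95th percentile across the 21 GEFS members at each time step (the linear interpolation between sorted members is an assumption; the abstract does not specify the scheme):

    ```python
    def ensemble_percentile(members, q):
        """Percentile q (0-100) across ensemble member values at one time step,
        with linear interpolation between sorted members."""
        xs = sorted(members)
        if len(xs) == 1:
            return xs[0]
        pos = (q / 100.0) * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    # e.g. reduce member streamflows to 50th and 95th percentile bounds
    members = [120.0, 95.0, 140.0, 110.0, 100.0]  # illustrative values, m^3/s
    bounds = (ensemble_percentile(members, 50), ensemble_percentile(members, 95))
    ```

    Applying this per time step to the member hydrographs yields the percentile boundary conditions fed to the hydrodynamic model.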

  17. Multi-column step-gradient chromatography system for automated ion exchange separations

    International Nuclear Information System (INIS)

    Rucker, T.L.

    1985-01-01

    A multi-column step-gradient chromatography system has been designed to perform automated sequential separations of radionuclides by ion exchange chromatography. The system consists of a digital programmer with automatic stream selection valve, two peristaltic pumps, ten columns, and a fraction collector. The automation allows complicated separations of radionuclides to be made with minimal analyst attention and allows for increased productivity and reduced cost of analyses. Results are reported for test separations on mixtures of radionuclides by the system.

  18. Improved Orbit Determination and Forecasts with an Assimilative Tool for Atmospheric Density and Satellite Drag Specification

    Science.gov (United States)

    Crowley, G.; Pilinski, M.; Sutton, E. K.; Codrescu, M.; Fuller-Rowell, T. J.; Matsuo, T.; Fedrizzi, M.; Solomon, S. C.; Qian, L.; Thayer, J. P.

    2016-12-01

    Much as aircraft are affected by the prevailing winds and weather conditions in which they fly, satellites are affected by the variability in density and motion of the near earth space environment. Drastic changes in the neutral density of the thermosphere, caused by geomagnetic storms or other phenomena, result in perturbations of LEO satellite motions through drag on the satellite surfaces. This can lead to difficulties in locating important satellites, temporarily losing track of satellites, and errors when predicting collisions in space. We describe ongoing work to build a comprehensive nowcast and forecast system for specifying the neutral atmospheric state related to orbital drag conditions. The system outputs include neutral density, winds, temperature, composition, and the satellite drag derived from these parameters. This modeling tool is based on several state-of-the-art coupled models of the thermosphere-ionosphere as well as several empirical models running in real-time and uses assimilative techniques to produce a thermospheric nowcast. This software will also produce 72 hour predictions of the global thermosphere-ionosphere system using the nowcast as the initial condition and using near real-time and predicted space weather data and indices as the inputs. Features of this technique include: • Satellite drag specifications with errors lower than current models • Altitude coverage up to 1000km • Background state representation using both first principles and empirical models • Assimilation of satellite drag and other datatypes • Real time capability • Ability to produce 72-hour forecasts of the atmospheric state In this paper, we will summarize the model design and assimilative architecture, and present preliminary validation results. Validation results will be presented in the context of satellite orbit errors and compared with several leading atmospheric models including the High Accuracy Satellite Drag Model, which is currently used
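
    The connection between the forecast neutral density and the drag felt by a satellite can be sketched with the standard drag-deceleration formula (the satellite parameters below are illustrative placeholders, not values from the system described):

    ```python
    def drag_acceleration(rho, v, cd=2.2, area=1.0, mass=100.0):
        """Magnitude of atmospheric drag deceleration (m/s^2) on a satellite.

        rho            : local neutral density (kg/m^3), supplied by the nowcast
        v              : speed relative to the atmosphere (m/s)
        cd, area, mass : drag coefficient, cross-section (m^2) and mass (kg);
                         illustrative values for a small LEO spacecraft.
        """
        return 0.5 * cd * (area / mass) * rho * v * v
    ```

    A storm-time density enhancement feeds linearly into this deceleration, which is why errors in rho map directly into along-track orbit errors.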

  19. Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach

    Science.gov (United States)

    2014-01-01

    Background Cross-cultural adaptation is a necessary process to effectively use existing instruments in other cultural and language settings. The process of cross-cultural adaptation, including translation, of existing instruments is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice in achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted utilizing a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized followed by the committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population

  20. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    Directory of Open Access Journals (Sweden)

    Huanhuan Li

    2017-08-01

    Full Text Available The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely-used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% cumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center-selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our

  1. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.

    Science.gov (United States)

    Li, Huanhuan; Liu, Jingxian; Liu, Ryan Wen; Xiong, Naixue; Wu, Kefeng; Kim, Tai-Hoon

    2017-08-04

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely-used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% cumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center-selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. 
Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with
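
    The first steps of the pipeline (a DTW distance matrix, then PCA with the 95% cumulative-contribution rule) can be sketched as follows; the paper's improved center-selection and center-clustering algorithms are omitted, so this is only a generic illustration:

    ```python
    import numpy as np

    def dtw(a, b):
        """Dynamic Time Warping distance between two 1-D trajectories."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def distance_matrix(trajs):
        """Pairwise DTW distances (step two of the method)."""
        k = len(trajs)
        M = np.zeros((k, k))
        for i in range(k):
            for j in range(i + 1, k):
                M[i, j] = M[j, i] = dtw(trajs[i], trajs[j])
        return M

    def n_components_95(M):
        """Number of principal components reaching a 95% cumulative
        contribution rate, the rule used above to choose k."""
        X = M - M.mean(axis=0)                  # column-centre before PCA
        s = np.linalg.svd(X, compute_uv=False)
        ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
        return int(np.searchsorted(ratio, 0.95) + 1)
    ```

    In the full method the retained components then feed the center-selection and clustering stages described in the abstract.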

  2. Bayesian flood forecasting methods: A review

    Science.gov (United States)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and largely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of uncertainty associated with the hydrologic forecast is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretic framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and a reasonable analytic-numerical computation method, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting, then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way for flood estimation; it considers all sources of uncertainties and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome limitations of a single model or fixed model weights and effectively reduce predictive uncertainty. 
In recent years, various Bayesian flood forecasting approaches have been

  3. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  4. Uncertainties in Forecasting Streamflow using Entropy Theory

    Science.gov (United States)

    Cui, H.; Singh, V. P.

    2017-12-01

    Streamflow forecasting is essential in river restoration, reservoir operation, power generation, irrigation, navigation, and water management. However, uncertainties always accompany forecasts; they may affect the forecasting results and lead to large variations. Therefore, uncertainties must be considered and assessed properly when forecasting streamflow for water management. The aim of our work is to quantify the uncertainties involved in forecasting streamflow and to provide reliable streamflow forecasts. Although streamflow time series are stochastic, they exhibit seasonal and periodic patterns. Therefore, streamflow forecasting entails modeling seasonality, periodicity, and the correlation structure, and assessing uncertainties. This study applies entropy theory to forecast streamflow and to measure uncertainties during the forecasting process. To apply entropy theory for streamflow forecasting, spectral analysis is combined with time series analysis, as spectral analysis can be employed to characterize patterns of streamflow variation and identify the periodicity of streamflow. That is, it permits the extraction of significant information for understanding the streamflow process and its prediction. Application of entropy theory for streamflow forecasting involves determination of the spectral density, determination of parameters, and extension of the autocorrelation function. The uncertainties introduced by the precipitation input, the forecasting model and the forecasted results are measured separately using entropy. With information theory, how these uncertainties are transported and aggregated during these processes is described.
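
    The spectral step, identifying the dominant periodicity of a streamflow series from its periodogram, might look like this generic sketch (an assumption about the implementation, not the paper's own code):

    ```python
    import numpy as np

    def dominant_period(series, dt=1.0):
        """Period of the strongest spectral peak in a series.

        The periodogram is the squared magnitude of the FFT; the non-zero
        frequency with the largest power identifies the dominant periodicity
        (e.g. the annual cycle in monthly streamflow)."""
        x = np.asarray(series, dtype=float)
        x = x - x.mean()                      # drop the zero-frequency mean term
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=dt)
        k = 1 + int(np.argmax(power[1:]))     # skip the DC bin
        return 1.0 / freqs[k]
    ```

    For a monthly series with a strong annual cycle this returns a period of about 12, which then fixes the periodic terms retained when extending the autocorrelation function.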

  5. Using forecast information for storm ride-through control

    DEFF Research Database (Denmark)

    Barahona Garzón, Braulio; Trombe, Pierre-Julien; Vincent, Claire Louise

    2013-01-01

    Using probabilistic forecast information in control algorithms can improve the performance of wind farms during periods of extreme winds. This work presents a wind farm supervisor control concept that uses probabilistic forecast information to ride-through a storm with softer ramps of power. Wind...... speed forecasts are generated with a statistical approach (i.e. time series models). The supervisor control is based on a set of logical rules that consider point forecasts and predictive densities to ramp-down the power of the wind farm before the storm hits. The potential of this supervisor control...

  6. On the role of density and attenuation in 3D multi-parameter visco-acoustic VTI frequency-domain FWI: an OBC case study from the North Sea

    Science.gov (United States)

    Operto, S.; Miniussi, A.

    2018-03-01

    Three-dimensional frequency-domain full waveform inversion (FWI) is applied on North Sea wide-azimuth ocean-bottom cable data at low frequencies (≤ 10 Hz) to jointly update vertical wavespeed, density and quality factor Q in the visco-acoustic VTI approximation. We assess whether density and Q should be viewed as proxies to absorb artefacts resulting from approximate wave physics or are valuable for interpretation in the presence of saturated sediments and gas. FWI is performed in the frequency domain to account for attenuation easily. Multi-parameter frequency-domain FWI is efficiently performed with a few discrete frequencies following a multi-scale frequency continuation. However, grouping a few frequencies during each multi-scale step is necessary to mitigate acquisition footprint and match dispersive shallow guided waves. Q and density absorb a significant part of the acquisition footprint, thereby cleaning the velocity model of this pollution. Low Q perturbations correlate with low velocity zones associated with soft sediments and the gas cloud. However, the amplitudes of the Q perturbations show significant variations when the inversion tuning is modified. This dispersion in the Q reconstructions is, however, not passed on to the velocity parameter, suggesting that cross-talk between first-order kinematic and second-order dynamic parameters is limited. The density model shows a good match with a well log at shallow depths. Moreover, the impedance built a posteriori from the FWI velocity and density models shows a well-focused image, albeit with local differences from the velocity model near the sea bed, where density might have absorbed elastic effects. The FWI models are finally assessed against time-domain synthetic seismogram modelling performed with the same frequency-domain modelling engine used for FWI.

  7. A successful forecast of an El Nino winter

    International Nuclear Information System (INIS)

    Kerr, R.A.

    1992-01-01

    This year, for the first time, weather forecasters used signs of a warming in the tropical Pacific as the basis for a long-range prediction of winter weather patterns across the United States. Now forecasters are talking about the next step: stretching the lead time for such forecasts by a year or more. That seems feasible because although this Pacific warming was unmistakable by the time forecasters at the National Weather Service's Climate Analysis Center (CAC) in Camp Springs, Maryland, issued their winter forecast, the El Nino itself had been predicted almost 2 years in advance by a computer model. Next time around, the CAC may well be listening to the modelers and predicting El Nino-related patterns of warmth and flooding seasons in advance

  8. Hydro-economic assessment of hydrological forecasting systems

    Science.gov (United States)

    Boucher, M.-A.; Tremblay, D.; Delorme, L.; Perreault, L.; Anctil, F.

    2012-01-01

    An increasing number of publications show that ensemble hydrological forecasts exhibit good performance when compared to observed streamflow. Many studies also conclude that ensemble forecasts lead to a better performance than deterministic ones. This investigation takes one step further by not only comparing ensemble and deterministic forecasts to observed values, but by employing the forecasts in a stochastic decision-making assistance tool for hydroelectricity production, during a flood event on the Gatineau River in Canada. This allows the comparison between different types of forecasts according to their value in terms of energy, spillage and storage in a reservoir. The motivation for this is to adopt the point of view of an end-user, here a hydroelectricity production society. We show that ensemble forecasts exhibit excellent performances when compared to observations and are also satisfying when involved in operation management for electricity production. Further improvement in terms of productivity can be reached through the use of a simple post-processing method.

  9. Electricity demand and spot price forecasting using evolutionary computation combined with chaotic nonlinear dynamic model

    International Nuclear Information System (INIS)

    Unsihuay-Vila, C.; Zambroni de Souza, A.C.; Marangon-Lima, J.W.; Balestrassi, P.P.

    2010-01-01

    This paper proposes a new hybrid approach based on nonlinear chaotic dynamics and evolutionary strategy to forecast electricity loads and prices. The main idea is to develop a new training or identification stage in a nonlinear chaotic dynamic based predictor. In the training stage, five optimal parameters for a chaos-based predictor are searched through an optimization model based on evolutionary strategy. The objective function of the optimization model is the minimization of the mismatch between the multi-step-ahead forecasts of the predictor and the observed data, as is done in identification problems. The first contribution of this paper is that the proposed approach is capable of capturing the complex dynamics of the demand and price time series considered, resulting in more accurate forecasting. The second contribution is that the proposed approach runs in an on-line manner, i.e., the optimal set of parameters and the prediction are obtained automatically, which allows prediction in real time; this is an advantage in comparison with other models, where the choice of input parameters is carried out off-line, following qualitative/experience-based recipes. A case study of load and price forecasting is presented using data from New England, Alberta, and Spain. A comparison with other methods such as the autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) is shown. The results show that the proposed approach provides more accurate and effective forecasting than the ARIMA and ANN methods. (author)
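
    The training stage, an evolutionary strategy minimizing multi-step-ahead mismatch, can be illustrated with a toy (1+1)-ES fitting a single AR(1) coefficient; the paper tunes five parameters of a chaotic predictor, so this stand-in only shows the mechanism:

    ```python
    import random

    def es_fit_ar1(series, horizon=3, iters=200, sigma=0.5, seed=1):
        """(1+1)-evolution strategy: mutate the AR(1) coefficient phi with
        Gaussian noise and keep the mutant whenever it lowers the sum of
        squared `horizon`-step-ahead forecast errors."""
        rng = random.Random(seed)

        def sse(phi):
            total = 0.0
            for t in range(len(series) - horizon):
                pred = series[t]
                for _ in range(horizon):          # iterate the one-step map
                    pred = phi * pred
                total += (series[t + horizon] - pred) ** 2
            return total

        phi, best = 0.0, sse(0.0)
        for _ in range(iters):
            cand = phi + rng.gauss(0.0, sigma)
            e = sse(cand)
            if e < best:                          # selection: keep improvements
                phi, best = cand, e
        return phi
    ```

    On a noise-free series generated with phi = 0.8 the accept-if-better loop converges toward 0.8; the same loop generalizes to the five-parameter search described above.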

  10. Seasonal prediction for Southern Africa: Maximising the skill from forecast systems

    CSIR Research Space (South Africa)

    Landman, WA

    2012-06-01

    Full Text Available Forecast system development in South Africa started in the early 1990s with statistical forecast systems at SAWS, UCT, UP and Wits, followed by the South African Long-Lead Forecast Forum; SARCOF started in 1997, producing consensus forecasts through discussions; in the late 1990s AGCMs and post-processing came into use at SAWS. [Slide/figure residue: ROC areas per region (Reg1-Reg8) for below-normal, near-normal and above-normal categories; operational forecast skill from consensus discussions; verification over 7 years of consensus forecast production; new objective multi...]

  11. Multi-step contrast sensitivity gauge

    Science.gov (United States)

    Quintana, Enrico C; Thompson, Kyle R; Moore, David G; Heister, Jack D; Poland, Richard W; Ellegood, John P; Hodges, George K; Prindville, James E

    2014-10-14

    An X-ray contrast sensitivity gauge is described herein. The contrast sensitivity gauge comprises a plurality of steps of varying thicknesses. Each step in the gauge includes a plurality of recesses of differing depths, wherein the depths are a function of the thickness of their respective step. An X-ray image of the gauge is analyzed to determine a contrast-to-noise ratio of a detector employed to generate the image.
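
    The contrast-to-noise ratio mentioned above can be computed in the common |mu_s - mu_b| / sigma_b form (the patent may define it slightly differently, so treat this as a generic sketch):

    ```python
    import statistics

    def contrast_to_noise(signal_pixels, background_pixels):
        """Contrast-to-noise ratio of a recess region against its step:
        absolute mean difference divided by the background noise (sample
        standard deviation of the background pixels)."""
        mu_s = statistics.fmean(signal_pixels)
        mu_b = statistics.fmean(background_pixels)
        sigma_b = statistics.stdev(background_pixels)
        return abs(mu_s - mu_b) / sigma_b
    ```

    A recess of a given depth counts as detected when this ratio exceeds a chosen threshold, which is what makes the gauge a sensitivity measure for the detector.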

  12. Probing the Milky Way electron density using multi-messenger astronomy

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane

    2015-04-01

    Multi-messenger observations of ultra-compact binaries in both gravitational waves and electromagnetic radiation supply highly complementary information, providing new ways of characterizing the internal dynamics of these systems, as well as new probes of the galaxy itself. Electron density models, used in pulsar distance measurements via the electron dispersion measure, are currently not well constrained. Simultaneous radio and gravitational wave observations of pulsars in binaries provide a method of measuring the average electron density along the line of sight to the pulsar, thus giving a new method for constraining current electron density models. We present this method and assess its viability with simulations of the compact binary component of the Milky Way using the public domain binary evolution code, BSE. This work is supported by NASA Award NNX13AM10G.
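
    The quantity being constrained follows from the definition of the dispersion measure, DM = ∫ n_e dl, so an independent distance to the same binary immediately gives the mean electron density along that line of sight (a sketch of the relation only):

    ```python
    def mean_electron_density(dm, distance_pc):
        """Average electron density (cm^-3) along the line of sight.

        dm          : dispersion measure from radio timing (pc cm^-3)
        distance_pc : distance to the pulsar (pc), here imagined to come
                      from a gravitational-wave measurement of the binary
        """
        return dm / distance_pc
    ```

    Comparing such sight-line averages against model predictions is what would constrain the Galactic electron density models discussed above.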

  13. Multiple-Decrement Compositional Forecasting with the Lee-Carter Model

    OpenAIRE

    Guan, Tianyu

    2014-01-01

    Changes in cause of death patterns have a great impact on health and social care costs paid by government and insurance companies. Unfortunately an overwhelming majority of methods for mortality projections is based on overall mortality with only very few studies focusing on forecasting cause-specific mortality. In this project, our aim is to forecast cause-specific death density with a coherent model. Since cause-specific death density obeys a unit sum constraint, it can be considered as com...
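
    The classical Lee-Carter fit that the project builds on can be sketched via SVD; this shows only the standard model log m(x,t) = a_x + b_x k_t, while the compositional, unit-sum variant developed in the project adds a transformation on top:

    ```python
    import numpy as np

    def lee_carter(log_m):
        """Fit log m(x,t) = a_x + b_x * k_t by a rank-1 SVD of the centred
        (ages x years) matrix, with the usual identifiability constraints
        sum(b) = 1 and sum(k) = 0. Forecasting then reduces to extrapolating
        the single time index k_t, commonly as a random walk with drift."""
        a = log_m.mean(axis=1)                          # average age pattern
        U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
        scale = U[:, 0].sum()
        b = U[:, 0] / scale                             # normalise: sum(b) = 1
        k = s[0] * Vt[0] * scale                        # keeps b_x * k_t intact
        return a, b, k
    ```

    Row-centring makes each k_t sum to zero automatically, and the scale factor moves freely between b and k without changing the fitted rates.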

  14. Sensitivity of monthly streamflow forecasts to the quality of rainfall forcing: When do dynamical climate forecasts outperform the Ensemble Streamflow Prediction (ESP) method?

    Science.gov (United States)

    Tanguy, M.; Prudhomme, C.; Harrigan, S.; Smith, K. A.; Parry, S.

    2017-12-01

    Forecasting hydrological extremes is challenging, especially at lead times over 1 month for catchments with limited hydrological memory and variable climates. One simple way to derive monthly or seasonal hydrological forecasts is to use historical climate data to drive hydrological models using the Ensemble Streamflow Prediction (ESP) method. This gives a range of possible future streamflow given known initial hydrologic conditions alone. The degree of skill of ESP depends highly on the forecast initialisation month and catchment type. Using dynamic rainfall forecasts as driving data instead of historical data could potentially improve streamflow predictions. A lot of effort is being invested within the meteorological community to improve these forecasts. However, while recent progress shows promise (e.g. NAO in winter), the skill of these forecasts at monthly to seasonal timescales is generally still limited, and the extent to which they might lead to improved hydrological forecasts is an area of active research. Additionally, these meteorological forecasts are currently being produced at 1 month or seasonal time-steps in the UK, whereas hydrological models require forcings at daily or sub-daily time-steps. Keeping in mind these limitations of available rainfall forecasts, the objectives of this study are to find out (i) how accurate monthly dynamical rainfall forecasts need to be to outperform ESP, and (ii) how the method used to disaggregate monthly rainfall forecasts into daily rainfall time series affects results. For the first objective, synthetic rainfall time series were created by increasingly degrading observed data (a proxy for a 'perfect forecast') from 0 % to +/-50 % error. For the second objective, three different methods were used to disaggregate monthly rainfall data into daily time series. These were used to force a simple lumped hydrological model (GR4J) to generate streamflow predictions at a one-month lead time for over 300 catchments
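
    The degradation experiment in objective (i) can be sketched as below; the uniform multiplicative error model is an assumption, since the abstract does not state how the synthetic errors were drawn:

    ```python
    import random

    def degrade_rainfall(monthly_obs, max_error, seed=0):
        """Turn observed monthly rainfall into a synthetic imperfect forecast
        by perturbing each month with a multiplicative error drawn uniformly
        from [-max_error, +max_error] (e.g. max_error=0.5 for +/-50 %).
        Negative totals are clipped to zero."""
        rng = random.Random(seed)
        out = []
        for r in monthly_obs:
            err = rng.uniform(-max_error, max_error)
            out.append(max(0.0, r * (1.0 + err)))
        return out
    ```

    Sweeping max_error from 0.0 (the perfect-forecast proxy) to 0.5 and re-running the hydrological model with each series shows at what error level a dynamical forecast would stop outperforming ESP.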

  15. Aggregated wind power generation probabilistic forecasting based on particle filter

    International Nuclear Information System (INIS)

    Li, Pai; Guan, Xiaohong; Wu, Jiang

    2015-01-01

    Highlights: • A new method for probabilistic forecasting of aggregated wind power generation. • A dynamic system is established based on a numerical weather prediction model. • The new method handles the non-Gaussian and time-varying wind power uncertainties. • Particle filter is applied to forecast predictive densities of wind generation. - Abstract: The probability distribution of aggregated wind power generation in a region is one of the important issues for power system daily operation. This paper presents a novel method to forecast the predictive densities of the aggregated wind power generation from several geographically distributed wind farms, considering the non-Gaussian and non-stationary characteristics of wind power uncertainties. Based on a mesoscale numerical weather prediction model, a dynamic system is established to formulate the relationship between the atmospheric and near-surface wind fields of geographically distributed wind farms. A recursively backtracking framework based on the particle filter is applied to estimate the atmospheric state with the near-surface wind power generation measurements, and to forecast possible samples of the aggregated wind power generation. The predictive densities of the aggregated wind power generation are then estimated from these predicted samples by a kernel density estimator. In case studies, the new method is tested on a system of 9 wind farms in the Midwestern United States. The testing results show that the new method provides competitive interval forecasts for the aggregated wind power generation compared with conventional statistics-based models, which validates the effectiveness of the new method
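
    The final step, turning the predicted ensemble samples into a predictive density, can be sketched with a Gaussian kernel density estimator (Silverman's bandwidth rule is an assumption; the paper's exact estimator is not specified):

    ```python
    import math

    def kde_density(samples, x, bandwidth=None):
        """Gaussian kernel density estimate at x from predicted samples of
        aggregated wind generation. If no bandwidth is given, Silverman's
        rule of thumb 1.06 * sd * n^(-1/5) is used."""
        n = len(samples)
        if bandwidth is None:
            mean = sum(samples) / n
            sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
            bandwidth = 1.06 * sd * n ** (-0.2)
        norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    ```

    Evaluating this on a grid of generation levels yields the predictive density from which interval forecasts are read off.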

  16. Wind power forecasting accuracy and uncertainty in Finland

    Energy Technology Data Exchange (ETDEWEB)

    Holttinen, H.; Miettinen, J.; Sillanpaeae, S.

    2013-04-15

    forecasts for short horizons like the following hour, more advanced combining techniques than the simple average, such as Kalman filtering or recursive least squares, provided better results. Two different uncertainty quantification methods, based on the empirical cumulative distribution function and on kernel densities, were analysed for 3 sites. Aggregation of wind power production not only decreases relative prediction errors, but also decreases the variation and uncertainty of prediction errors. (orig.)

  17. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    International Nuclear Information System (INIS)

    Ernst, Floris; Schweikard, Achim

    2008-01-01

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
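Of the baseline predictors listed, the normalized LMS predictor is compact enough to sketch. This is a generic textbook nLMS one-step-ahead predictor, not the authors' MULIN implementation; tap count and step size are illustrative:

```python
def nlms_predict(signal, taps=4, mu=0.5, eps=1e-6):
    """One-step-ahead prediction of a motion signal with normalized LMS.
    Returns the predictions aligned with signal[taps:]."""
    w = [0.0] * taps
    preds = []
    for t in range(taps, len(signal)):
        x = signal[t - taps:t]                 # most recent history window
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = signal[t] - y_hat                  # prediction error
        norm = sum(xi * xi for xi in x) + eps  # normalization term
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return preds
```

On a constant signal the prediction error halves at every step with these settings, so the predictor converges quickly; real respiratory traces are quasi-periodic and noisier, which is where the paper's comparison becomes interesting.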

  18. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

    2008-06-15

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

  19. A Software Module for High-Accuracy Calibration of Rings and Cylinders on CMM using Multi-Orientation Techniques (Multi-Step and Reversal methods)

    DEFF Research Database (Denmark)

    Tosello, Guido; De Chiffre, Leonardo

    The Centre for Geometrical Metrology (CGM) at the Technical University of Denmark takes care of free form measurements, in collaboration with DIMEG, University of Padova, Italy. The present report describes a software module, ROUNDCAL, to be used for high-accuracy calibration of rings and cylinders. The purpose of the software is to calculate the form error and the least-squares circle of rings and cylinders by averaging pointwise measuring results obtained from so-called multi-orientation techniques (both reversal and multi-step methods) in order to eliminate systematic errors of the CMM.
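The averaging idea behind the multi-step method can be sketched in a few lines. In the model below, each measurement is the part profile rotated by k steps plus a fixed machine error; back-rotating and averaging over a full set of orientations cancels the machine error. The demo profiles are illustrative and the model is a simplification of real CMM error separation:

```python
def multi_step_separate(measurements):
    """Error separation by the multi-step method (sketch).
    Assumes measurements[k][i] = P((i - k) % n) + M(i), where P is the
    part form error and M the systematic machine error. Back-rotating
    measurement k by k positions and averaging over all n orientations
    cancels M, recovering P about its mean."""
    n_steps = len(measurements)
    n = len(measurements[0])
    part = []
    for i in range(n):
        s = sum(measurements[k][(i + k) % n] for k in range(n_steps))
        part.append(s / n_steps)
    mean = sum(part) / n
    return [p - mean for p in part]   # form error about its mean

# Demo: zero-mean part form P and machine error M sampled at 8 points
P = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
M = [2.0, 0.0, -2.0, 0.0, 2.0, 0.0, -2.0, 0.0]
measurements = [[P[(i - k) % 8] + M[i] for i in range(8)] for k in range(8)]
recovered = multi_step_separate(measurements)
```

With a full set of orientations the recovery is exact; with fewer steps, harmonics at multiples of the step count remain confounded, which is the classic limitation of the method.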

  20. Volatility Forecast in Crises and Expansions

    Directory of Open Access Journals (Sweden)

    Sergii Pypko

    2015-08-01

    Full Text Available We build a discrete-time non-linear model for volatility forecasting. This model belongs to the class of threshold-autoregressive models, in which regime changes are governed by past returns. The ability to capture changes in volatility regimes, combined with the use of more accurate volatility measures, allows the model to outperform benchmark models such as the linear heterogeneous autoregressive model and GARCH specifications. Finally, we show how to derive a closed-form expression for multiple-step-ahead forecasts by exploiting information about the conditional distribution of returns.

  1. Regional air-quality forecasting for the Pacific Northwest using MOPITT/TERRA assimilated carbon monoxide MOZART-4 forecasts as a near real-time boundary condition

    Directory of Open Access Journals (Sweden)

    F. L. Herron-Thorpe

    2012-06-01

    Full Text Available Results from a regional air quality forecast model, AIRPACT-3, were compared to AIRS carbon monoxide column densities for the spring of 2010 over the Pacific Northwest. AIRPACT-3 column densities showed high correlation (R > 0.9 but were significantly biased (~25% with consistent under-predictions for spring months when there is significant transport from Asia. The AIRPACT-3 CO bias relative to AIRS was eliminated by incorporating dynamic boundary conditions derived from NCAR's MOZART forecasts with assimilated MOPITT carbon monoxide. Changes in ozone-related boundary conditions derived from MOZART forecasts are also discussed and found to affect background levels by ± 10 ppb but not found to significantly affect peak ozone surface concentrations.

  2. Mechanical properties of molybdenum-titanium alloys micro-structurally controlled by multi-step internal nitriding

    International Nuclear Information System (INIS)

    Nagae, M.; Yoshio, T.; Takemoto, Y.; Takada, J.; Hiraoka, Y.

    2001-01-01

    Internally nitrided dilute Mo-Ti alloys having a heavily deformed microstructure near the specimen surface were prepared by a novel two-step nitriding process at 1173 to 1773 K in N2 gas. Three-point bend tests were performed on the nitrided specimens at temperatures from 77 to 298 K in order to investigate the effect of microstructure control by internal nitriding on the ductile-to-brittle transition temperature (DBTT) of the alloy. The yield strength at 243 K of the specimen that maintained the deformed microstructure through the two-step nitriding was about 1.7 times that of the recrystallized specimen. The specimen subjected to the two-step nitriding was bent more than 90 degrees at 243 K, whereas the recrystallized specimen fractured after showing only slight ductility at 243 K. The DBTT of the two-step nitrided specimen and of the recrystallized specimen was about 153 K and 203 K, respectively. These results indicate that multi-step internal nitriding is very effective in mitigating recrystallization embrittlement of molybdenum alloys. (author)

  3. Quantummechanical multi-step direct models for nuclear data applications

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-10-01

    Various multi-step direct models have been derived and compared on a theoretical level. Subsequently, these models have been implemented in the computer code system KAPSIES, enabling a consistent comparison on the basis of the same set of nuclear parameters and same set of numerical techniques. Continuum cross sections in the energy region between 10 and several hundreds of MeV have successfully been analysed. Both angular distributions and energy spectra can be predicted in an essentially parameter-free manner. It is demonstrated that the quantum-mechanical MSD models (in particular the FKK model) give an improved prediction of pre-equilibrium angular distributions as compared to the experiment-based systematics of Kalbach. This makes KAPSIES a reliable tool for nuclear data applications in the afore-mentioned energy region. (author). 10 refs., 2 figs

  4. Single-Step Fabrication of High-Density Microdroplet Arrays of Low-Surface-Tension Liquids.

    Science.gov (United States)

    Feng, Wenqian; Li, Linxian; Du, Xin; Welle, Alexander; Levkin, Pavel A

    2016-04-01

    A facile approach for surface patterning that enables single-step fabrication of high-density arrays of low-surface-tension organic-liquid microdroplets is described. This approach enables miniaturized and parallel high-throughput screenings in organic solvents, formation of homogeneous arrays of hydrophobic nanoparticles, polymer micropads of specific shapes, and polymer microlens arrays. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. A novel linear physical model for remote sensing of snow wetness and snow density using the visible and infrared bands

    Science.gov (United States)

    Varade, D. M.; Dikshit, O.

    2017-12-01

    Modeling and forecasting of snowmelt runoff are significant for understanding hydrological processes in the cryosphere, and require timely information on snow physical properties such as the liquid water content and density of the topmost layer of the snowpack. Both seasonal runoff and avalanche forecasting depend strongly on the inherent physical characteristics of the snowpack, which are conventionally measured by field surveys in difficult terrain at considerable cost and manpower. With advances in remote sensing technology and the increasing availability of satellite data, the frequency and extent of these surveys could decline in the future. In this study, we present a novel approach for estimating snow wetness and snow density using visible and infrared bands that are available on most multi-spectral sensors. We define a trapezoidal feature space based on the spectral reflectance in the near-infrared band and the Normalized Difference Snow Index (NDSI), referred to as the NIR-NDSI space, in which dry snow and wet snow are observed in the upper left and lower right corners, respectively. The corresponding pixels are extracted by approximating the dry and wet edges, which are used to develop a linear physical model to estimate snow wetness. Snow density is then estimated using the modeled snow wetness. Although the proposed approach used Sentinel-2 data, it can be extended to incorporate data from other multi-spectral sensors. The estimated values of snow wetness and snow density show a high correlation with in-situ measurements. The proposed model opens a new avenue for remote sensing of snow physical properties using multi-spectral data, which has been limited in the literature.
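The NDSI axis of the feature space is a standard normalized band ratio. A minimal sketch follows, assuming green and shortwave-infrared reflectances as inputs (the usual convention, e.g. Sentinel-2 bands B3 and B11; the specific bands and threshold are not taken from the paper):

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index from green and shortwave-infrared
    surface reflectances. Snow reflects strongly in the green band and
    absorbs in the SWIR, so snow pixels yield high positive NDSI."""
    return (green - swir) / (green + swir)

# Illustrative reflectances: bright snow vs. bare ground
snow_pixel = ndsi(0.80, 0.10)    # high NDSI, likely snow
soil_pixel = ndsi(0.20, 0.25)    # near zero or negative, not snow
```

A threshold around 0.4 is often used to flag snow before further analysis such as the NIR-NDSI trapezoid described above.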

  6. Short-term forecasting of individual household electricity loads with investigating impact of data resolution and forecast horizon

    Directory of Open Access Journals (Sweden)

    Yildiz Baran

    2018-01-01

    households vary significantly across different days; as a result, providing a single model for the entire period may result in limited performance. By the use of a pre-clustering step, similar daily load profiles are grouped together according to their standard deviation, and instead of applying one SMBM to the entire data-set of a particular household, separate SMBMs are applied to each of the clusters. This preliminary clustering step increases the complexity of the analysis; however, it results in significant improvements in forecast performance.
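The pre-clustering step described can be sketched as binning daily profiles by the standard deviation of their readings; the boundary values and profiles below are illustrative assumptions, not taken from the study:

```python
import statistics

def cluster_by_std(daily_profiles, boundaries):
    """Group daily load profiles into bands by the population standard
    deviation of each day's readings. boundaries is a sorted list of
    band edges; profiles falling in the same band share one model."""
    clusters = {i: [] for i in range(len(boundaries) + 1)}
    for day in daily_profiles:
        sd = statistics.pstdev(day)
        idx = sum(sd > b for b in boundaries)   # which band sd falls in
        clusters[idx].append(day)
    return clusters

profiles = [
    [1.0, 1.0, 1.0, 1.0],     # flat day, sd = 0
    [0.0, 4.0, 0.0, 4.0],     # moderately variable day, sd = 2
    [0.0, 10.0, 0.0, 10.0],   # volatile day, sd = 5
]
clusters = cluster_by_std(profiles, boundaries=[1.0, 3.0])
```

A separate forecasting model would then be fitted to each cluster rather than one model to the whole household history.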

  7. Short-term forecasting of individual household electricity loads with investigating impact of data resolution and forecast horizon

    Science.gov (United States)

    Yildiz, Baran; Bilbao, Jose I.; Dore, Jonathon; Sproul, Alistair B.

    2018-05-01

    significantly across different days; as a result, providing a single model for the entire period may result in limited performance. By the use of a pre-clustering step, similar daily load profiles are grouped together according to their standard deviation, and instead of applying one SMBM to the entire data-set of a particular household, separate SMBMs are applied to each of the clusters. This preliminary clustering step increases the complexity of the analysis; however, it results in significant improvements in forecast performance.

  8. Generalization of information-based concepts in forecast verification

    Science.gov (United States)

    Tödter, J.; Ahrens, B.

    2012-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed, then the generalization to continuous forecasts is shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN is illustrated, together with an algorithm to evaluate its reliability, resolution, and uncertainty components for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
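For binary events, both scores the abstract contrasts are one-liners. A minimal sketch of each (the sample probabilities are hypothetical):

```python
import math

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and the
    binary outcome (0 or 1)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def ignorance_score(probs, outcomes):
    """Mean negative log2 of the probability assigned to the outcome
    that occurred; the information-based score the BS approximates."""
    return sum(-math.log2(p if o == 1 else 1.0 - p)
               for p, o in zip(probs, outcomes)) / len(probs)
```

Both reward sharp, well-calibrated forecasts, but the Ignorance Score penalises confident misses much more heavily, reflecting its logarithmic (information-theoretic) character.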

  9. Effect of density step on stirring properties of a strain flow

    International Nuclear Information System (INIS)

    Gonzalez, M; Paranthoen, P

    2009-01-01

    The influence of steep density gradient on stirring properties of a strain flow is addressed by considering the problem in which an interface separating two regions with different constant densities is stabilized within a stagnation-point flow. The existence of an analytic solution for the two-dimensional incompressible flow field allows the exact derivation of the velocity gradient tensor and of parameters describing the local flow topology. Stirring properties are affected not only by vorticity production and jump of strain intensity at the interface, but also by rotation of strain principal axes resulting from anisotropy of pressure Hessian. The strain persistence parameter, which measures the respective effects of strain and effective rotation (vorticity plus rotation rate of strain basis), reveals a complex structure. In particular, for large values of the density ratio, it indicates dominating effective rotation in a restricted area past the interface. Information on flow structure derived from the Okubo-Weiss parameter, by contrast, is less detailed. The influence of the density step on stirring properties is assessed by the Lagrangian evolution of the gradient of a passive scalar. Even for a moderate density ratio, alignment of the scalar gradient and growth rate of its norm are deeply altered. Past the interface effective rotation indeed drives the scalar gradient to align with a direction determined by the local strain persistence parameter, away from the compressional strain direction. The jump of strain intensity at the interface, however, opposes the lessening effect of the latter mechanism on the growth rate of the scalar gradient norm and promotes the rise of the gradient.

  10. Two-Step Multi-Physics Analysis of an Annular Linear Induction Pump for Fission Power Systems

    Science.gov (United States)

    Geng, Steven M.; Reid, Terry V.

    2016-01-01

    One of the key technologies associated with fission power systems (FPS) is the annular linear induction pump (ALIP). ALIPs are used to circulate liquid-metal fluid for transporting thermal energy from the nuclear reactor to the power conversion device. ALIPs designed and built to date for FPS project applications have not performed up to expectations. A unique, two-step approach was taken toward the multi-physics examination of an ALIP using ANSYS Maxwell 3D and Fluent. This multi-physics approach was developed so that engineers could investigate design variations that might improve pump performance. Of interest was to determine if simple geometric modifications could be made to the ALIP components with the goal of increasing the Lorentz forces acting on the liquid-metal fluid, which in turn would increase pumping capacity. The multi-physics model first calculates the Lorentz forces acting on the liquid metal fluid in the ALIP annulus. These forces are then used in a computational fluid dynamics simulation as (a) internal boundary conditions and (b) source functions in the momentum equations within the Navier-Stokes equations. The end result of the two-step analysis is a predicted pump pressure rise that can be compared with experimental data.

  11. Optimization of a Multi-Step Procedure for Isolation of Chicken Bone Collagen

    OpenAIRE

    Cansu, İmran; Boran, Gökhan

    2015-01-01

    Chicken bone is not adequately utilized despite its high nutritional value and protein content. Although not a common raw material, chicken bone can be used in many different ways besides manufacturing of collagen products. In this study, a multi-step procedure was optimized to isolate chicken bone collagen for higher yield and quality for manufacture of collagen products. The chemical composition of chicken bone was 2.9% nitrogen corresponding to about 15.6% protein, 9.5% fat, 14.7% mineral ...

  12. Variation of nanopore diameter along porous anodic alumina channels by multi-step anodization.

    Science.gov (United States)

    Lee, Kwang Hong; Lim, Xin Yuan; Wai, Kah Wing; Romanato, Filippo; Wong, Chee Cheong

    2011-02-01

    In order to form tapered nanocapillaries, we investigated a method to vary the nanopore diameter along the porous anodic alumina (PAA) channels using multi-step anodization. By anodizing the aluminum in either single acid (H3PO4) or multi-acid (H2SO4, oxalic acid and H3PO4) with increasing or decreasing voltage, the diameter of the nanopore along the PAA channel can be varied systematically corresponding to the applied voltages. The pore size along the channel can be enlarged or shrunken in the range of 20 nm to 200 nm. Structural engineering of the template along the film growth direction can be achieved by deliberately designing a suitable voltage and electrolyte together with anodization time.

  13. Estimating reliability of degraded system based on the probability density evolution with multi-parameter

    Directory of Open Access Journals (Sweden)

    Jiang Ge

    2017-01-01

    Full Text Available System degradation is usually caused by the degradation of multiple parameters. Assessing the reliability of such a system with the universal generating function gives low accuracy compared with Monte Carlo simulation, and the probability density function of the system output performance cannot be obtained. A reliability assessment method based on probability density evolution with multiple parameters is therefore presented for complex degraded systems. First, the system output function is formulated according to the relation between component parameters and the system output performance. Then, the probability density evolution equation is established based on the probability conservation principle and the system output function. Furthermore, the probability distribution characteristics of the system output performance are obtained by solving the differential equation. Finally, the reliability of the degraded system is estimated. This method does not require discretizing the performance parameters and can establish a continuous probability density function of the system output performance with high calculation efficiency and low cost. A numerical example shows that the method is applicable to evaluating the reliability of multi-parameter degraded systems.

  14. High speed quantitative digital beta autoradiography using a multi-step avalanche detector and an Apple-II microcomputer

    International Nuclear Information System (INIS)

    Bateman, J.E.; Connolly, J.F.; Stephenson, R.

    1985-04-01

    The development of an electronic, digital beta autoradiography system is described. Using a Multi-Step Avalanche/Multi-Wire Proportional Counter (MSA/MWPC) detector system fitted with delay line readout, high speed digital imaging is demonstrated with sub-millimeter spatial resolution. Good proportionality of observed counting rate relative to the known tritium activity is demonstrated. The application of the system to autoradiography in immunoelectrophoresis, histopathology and DNA sequencing is described. (author)

  15. Status of mineral resources evaluation and forecast

    International Nuclear Information System (INIS)

    Ma Hanfeng; Li Ziying; Luo Yi; Li Shengxiang; Sun Wenpeng

    2007-01-01

    The work of resources evaluation and forecast is a focus for governments worldwide, as it underpins the establishment of national strategic policy on mineral resources. In order to quantitatively evaluate the overall potential of uranium resources in China and better forecast uranium deposits, this paper briefly introduces methods for evaluating the total amount of mineral resources, especially the six commonly used prospecting methods recommended in international geological comparison programs, as well as the principles and steps of common quantitative mineral resource prediction. The history of mineral resources evaluation and forecast is reviewed concisely. The advantages and disadvantages of each method, and their fields and conditions of application, are also explained briefly. Finally, the history and status of uranium resources evaluation and forecast in China are concisely outlined. (authors)

  16. Communicating uncertainty in hydrological forecasts: mission impossible?

    Science.gov (United States)

    Ramos, Maria-Helena; Mathevet, Thibault; Thielen, Jutta; Pappenberger, Florian

    2010-05-01

    Cascading uncertainty in meteo-hydrological modelling chains for forecasting and integrated flood risk assessment is an essential step to improve the quality of hydrological forecasts. Although the best methodology to quantify the total predictive uncertainty in hydrology is still debated, there is a common agreement that one must avoid uncertainty misrepresentation and miscommunication, as well as misinterpretation of information by users. Several recent studies point out that uncertainty, when properly explained and defined, is no longer unwelcome among emergency response organizations, users of flood risk information and the general public. However, efficient communication of uncertain hydro-meteorological forecasts is far from being a resolved issue. This study focuses on the interpretation and communication of uncertain hydrological forecasts based on (uncertain) meteorological forecasts and (uncertain) rainfall-runoff modelling approaches to decision-makers such as operational hydrologists and water managers in charge of flood warning and scenario-based reservoir operation. An overview of the typical flow of uncertainties and risk-based decisions in hydrological forecasting systems is presented. The challenges related to the extraction of meaningful information from probabilistic forecasts and the test of its usefulness in assisting operational flood forecasting are illustrated with the help of two case studies: 1) a study on the use and communication of probabilistic flood forecasting within the European Flood Alert System; 2) a case study on the use of probabilistic forecasts by operational forecasters from the hydroelectricity company EDF in France. These examples show that attention must be paid to initiatives that promote or reinforce the active participation of expert forecasters in the forecasting chain. The practice of face-to-face forecast briefings, focusing on sharing how forecasters interpret, describe and perceive the model output forecasted

  17. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    Science.gov (United States)

    Bon, A. T.; Ng, T. K.

    2017-01-01

    The healthcare industry has become an important field nowadays as it concerns people's health. Forecasting demand for health services is accordingly an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in a University Health Centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. A quantitative (time series) forecasting model was used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; here the data exhibit a trend, and ten forecasting techniques were applied using the Risk Simulator software. The best forecasting technique is then identified as the one with the least forecasting error. The ten techniques are single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
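One of the ten techniques listed, single exponential smoothing, plus a simple accuracy measure for ranking candidates, can be sketched as follows; the demand figures are illustrative, not the study's data:

```python
def ses_forecast(series, alpha):
    """Single exponential smoothing: each fitted value is the forecast
    for the next period. Returns (fitted_values, next_period_forecast);
    fitted_values align with series[1:]."""
    level = series[0]
    fitted = []
    for y in series[1:]:
        fitted.append(level)                  # forecast made before seeing y
        level = alpha * y + (1 - alpha) * level
    return fitted, level

def mape(actuals, forecasts):
    """Mean absolute percentage error, a common measure for comparing
    forecasting techniques."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical monthly demand for a stocked item
demand = [120.0, 128.0, 125.0, 140.0, 138.0]
fitted, next_forecast = ses_forecast(demand, alpha=0.3)
error_pct = mape(demand[1:], fitted)
```

In a study like this, each candidate technique would be fitted to the history and the one with the lowest error measure (here MAPE) selected for future ordering decisions.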

  18. Performance assessment of laboratory and field-scale multi-step passive treatment of iron-rich acid mine drainage for design improvement.

    Science.gov (United States)

    Rakotonimaro, Tsiverihasina V; Neculita, Carmen Mihaela; Bussière, Bruno; Genty, Thomas; Zagury, Gérald J

    2018-04-17

    Multi-step passive systems for the treatment of iron-rich acid mine drainage (Fe-rich AMD) perform satisfactorily at the laboratory scale. However, their field-scale application has revealed dissimilarities in performance, particularly with respect to hydraulic parameters. In this study, the assessment of factors potentially responsible for the variations in performance of laboratory and field-scale multi-step systems was undertaken. Three laboratory multi-step treatment scenarios, involving a combination of dispersed alkaline substrate (DAS) units, anoxic dolomitic drains, and passive biochemical reactors (PBRs), were set up in 10.7-L columns. The field-scale treatment consisted of two PBRs separated by a wood ash (WA) reactor. The parameters identified as possibly influencing the performance of the laboratory and field-scale experiments were the following: AMD chemistry (electrical conductivity and Fe and SO4(2-) concentrations), flow rate (Q), and saturated hydraulic conductivity (ksat). Based on these findings, the design of an efficient passive multi-step treatment system is suggested to consider the following: (1) Fe pretreatment, using materials with high ksat and low HRT. If a PBR is to be used, the Fe load should be PBR/DAS filled with a mixture with at least 20% of neutralizing agent; (3) include Q and ksat (> 10^-3 cm/s) in the long-term prediction. Finally, mesocosm testing is strongly recommended prior to construction of full-scale systems for the treatment of Fe-rich AMD.

  19. A New Multi-Step Iterative Algorithm for Approximating Common Fixed Points of a Finite Family of Multi-Valued Bregman Relatively Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Wiyada Kumam

    2016-05-01

    Full Text Available In this article, we introduce a new multi-step iteration for approximating a common fixed point of a finite family of multi-valued Bregman relatively nonexpansive mappings in the setting of reflexive Banach spaces. We prove a strong convergence theorem for the proposed iterative algorithm under certain hypotheses. Additionally, we apply our results to the solution of variational inequality problems and to finding the zero points of maximal monotone operators. The theorems furnished in this work are new and generalize many well-known recent results in this field.

  20. Forecasting Electricity Spot Prices Accounting for Wind Power Predictions

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre; Nielsen, Henrik Aalborg

    2013-01-01

    A two-step methodology for forecasting of electricity spot prices is introduced, with focus on the impact of predicted system load and wind power generation. The nonlinear and nonstationary influence of these explanatory variables is accommodated in a first step based on a nonparametric and time...

  1. Feasibility of a three-step magnetic resonance imaging approach for the assessment of hepatic steatosis in an asymptomatic study population

    Energy Technology Data Exchange (ETDEWEB)

    Hetterich, Holger; Bayerl, Christian; Auweter, Sigrid; Ertl-Wagner, Birgit [Ludwig-Maximilian University Hospital, Institute of Clinical Radiology, Munich (Germany); Peters, Annette; Linkohr, Birgit [German Research Center for Environmental Health, Institute of Epidemiology II, Helmholtz Zentrum Muenchen, Neuherberg (Germany); Heier, Margit; Meisinger, Christa [German Research Center for Environmental Health, Institute of Epidemiology II, Helmholtz Zentrum Muenchen, Neuherberg (Germany); Central Hospital of Augsburg, KORA Myocardial Infarction Registry, Augsburg (Germany); Kannengiesser, Stephan A.R. [Siemens Healthcare, Erlangen (Germany); Kramer, Harald [Ludwig-Maximilian University Hospital, Institute of Clinical Radiology, Munich (Germany); University of Wisconsin - Madison, Department of Radiology, Madison, WI (United States); Bamberg, Fabian [Ludwig-Maximilian University Hospital, Institute of Clinical Radiology, Munich (Germany); University Hospital Tuebingen, Department of Radiology, Tuebingen (Germany)

    2016-06-15

    To determine the feasibility of a multi-step magnetic resonance imaging (MRI) approach for comprehensive assessment of hepatic steatosis defined as liver fat content of ≥5 % in an asymptomatic population. The study was approved by the institutional review board and written informed consent of all participants was obtained. Participants of a population-based study cohort underwent a three-step 3-T MRI-based assessment of liver fat. A dual-echo Dixon sequence was performed to identify subjects with hepatic steatosis, followed by a multi-echo Dixon sequence with proton density fat fraction estimation. Finally, single-voxel T2-corrected multi-echo spectroscopy was performed. A total of 215 participants completed the MRI protocol (56.3 % male, average age 57.2 ± 9.4 years). The prevalence of hepatic steatosis was 55 %. Mean liver proton density fat fraction was 9.2 ± 8.5 % by multi-echo Dixon and 9.3 ± 8.6 % by multi-echo spectroscopy (p = 0.51). Dual-echo Dixon overestimated liver fat fraction by 1.4 ± 2.0 % (p < 0.0001). All measurements showed excellent correlations (r ≥ 0.9, p < 0.001). Dual-echo Dixon was highly sensitive for the detection of hepatic steatosis (sensitivity 0.97, NPV 0.96) with good specificity and PPV (0.75 and 0.81, respectively). A multi-step MRI approach may enable rapid and accurate identification of subjects with hepatic steatosis in an asymptomatic population. (orig.)

  2. Feasibility of a three-step magnetic resonance imaging approach for the assessment of hepatic steatosis in an asymptomatic study population

    International Nuclear Information System (INIS)

    Hetterich, Holger; Bayerl, Christian; Auweter, Sigrid; Ertl-Wagner, Birgit; Peters, Annette; Linkohr, Birgit; Heier, Margit; Meisinger, Christa; Kannengiesser, Stephan A.R.; Kramer, Harald; Bamberg, Fabian

    2016-01-01

    To determine the feasibility of a multi-step magnetic resonance imaging (MRI) approach for comprehensive assessment of hepatic steatosis defined as liver fat content of ≥5 % in an asymptomatic population. The study was approved by the institutional review board and written informed consent of all participants was obtained. Participants of a population-based study cohort underwent a three-step 3-T MRI-based assessment of liver fat. A dual-echo Dixon sequence was performed to identify subjects with hepatic steatosis, followed by a multi-echo Dixon sequence with proton density fat fraction estimation. Finally, single-voxel T2-corrected multi-echo spectroscopy was performed. A total of 215 participants completed the MRI protocol (56.3 % male, average age 57.2 ± 9.4 years). The prevalence of hepatic steatosis was 55 %. Mean liver proton density fat fraction was 9.2 ± 8.5 % by multi-echo Dixon and 9.3 ± 8.6 % by multi-echo spectroscopy (p = 0.51). Dual-echo Dixon overestimated liver fat fraction by 1.4 ± 2.0 % (p < 0.0001). All measurements showed excellent correlations (r ≥ 0.9, p < 0.001). Dual-echo Dixon was highly sensitive for the detection of hepatic steatosis (sensitivity 0.97, NPV 0.96) with good specificity and PPV (0.75 and 0.81, respectively). A multi-step MRI approach may enable rapid and accurate identification of subjects with hepatic steatosis in an asymptomatic population. (orig.)

  3. Coupling Recruitment Forecasts with Economics in the Gulf of Maine's American Lobster Fishery

    Science.gov (United States)

    Wahle, R.; Oppenheim, N.; Brady, D. C.; Dayton, A.; Sun, C. H. J.

    2016-02-01

    Accurate predictions of fishery recruitment and landings represent an important goal of fisheries science and management, but linking environmental drivers of fish population dynamics to financial markets remains a challenge. A fundamental step in that process is understanding the environmental drivers of fishery recruitment. American lobster (Homarus americanus) populations of the northwest Atlantic have been undergoing a dramatic surge, mostly driven by increases in the Gulf of Maine. Settler-recruit models that track cohorts after larvae settle to the sea bed are proving useful in predicting subsequent fishery recruitment some 5-7 years later. Here we describe new recruitment forecasting models for the lobster fishery at 11 management areas from Southern New England to Atlantic Canada. We use an annual survey of juvenile year-class strength and environmental indicators to parameterize growth and mortality terms in the model. As a consequence of a recent widespread multi-year downturn in larval settlement, our models suggest that the peak in lobster abundance in the Gulf of Maine will be passed in the near future. We also present initial steps in the coupling of forecast data with economic models for the fishery. We anticipate that these models will give stakeholders and policy makers time to consider their management choices for this most valuable of the region's fisheries. Our vision is to couple our forecast model outputs to an economic model that captures the dynamics of market forces in the New England and Canadian Maritime lobster fisheries. It will then be possible to estimate the financial status of the fishery several years in advance. This early warning system could mitigate the adverse effects of a fluctuating fishery on the coastal communities that are perilously dependent upon it.

  4. Coherent modeling and forecasting of mortality patterns for subpopulations using multi-way analysis of compositions: an application to Canadian Provinces and Territories

    DEFF Research Database (Denmark)

    Bergeron Boucher, Marie-Pier; Simonacci, Violetta; Oeppen, James

    2018-01-01

    Mortality levels for subpopulations, such as countries in a region or Provinces within a country, generally change in a similar fashion over time, as a result of common historical experiences in terms of health, culture and economics. Forecasting mortality for such populations should consider...... to Compositional Data Analysis (CoDa) methodology. Compositional data are strictly positive values summing to a constant and represent part of a whole. Life table deaths are compositional by definition as they provide the age composition of deaths per year and sum to the life table radix. In bilinear models...... the use of life table deaths treated as compositions generally leads to less biased forecasts than other commonly used models by not assuming a constant rate of mortality improvement. As a consequence, an extension of this approach to multi-way data is here presented. Specifically, a CoDa adaptation...
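
    The compositional operations the abstract relies on (closure to a constant sum, and a log-ratio transform to move compositions into unconstrained space) can be sketched as follows; the function names are illustrative, not from a specific CoDa library.

```python
import math

def closure(parts, radix=100000.0):
    """Rescale strictly positive parts (e.g. life table deaths by age)
    so they sum to a constant, here the life table radix."""
    total = sum(parts)
    return [radix * p / total for p in parts]

def clr(composition):
    """Centered log-ratio transform: maps a composition into
    unconstrained real space, where bilinear models can be fitted."""
    logs = [math.log(p) for p in composition]
    mean_log = sum(logs) / len(logs)
    return [v - mean_log for v in logs]

deaths = closure([120.0, 80.0, 300.0, 500.0])  # toy age-at-death composition
z = clr(deaths)  # clr coordinates sum to zero by construction
```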

  5. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-25

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
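
    The two-step hybrid search can be sketched with a toy objective standing in for the SVR cross-validation error; the grid-traverse and PSO details below are an illustrative reconstruction, not the paper's exact algorithm.

```python
import random

def grid_traverse(objective, bounds, steps=10):
    """Step 1 (GTA): coarse grid search that narrows the parameter
    space from a global box to a local one around the best grid point."""
    (clo, chi), (glo, ghi) = bounds
    best, best_val = None, float("inf")
    for i in range(steps):
        for j in range(steps):
            c = clo + (chi - clo) * i / (steps - 1)
            g = glo + (ghi - glo) * j / (steps - 1)
            v = objective(c, g)
            if v < best_val:
                best, best_val = (c, g), v
    wc, wg = (chi - clo) / steps, (ghi - glo) / steps
    return [(best[0] - wc, best[0] + wc), (best[1] - wg, best[1] + wg)]

def pso(objective, bounds, n=12, iters=60, seed=1):
    """Step 2: particle swarm optimization inside the narrowed box."""
    rng = random.Random(seed)
    dims = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dims for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [objective(*p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # standard inertia and attraction weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dims):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = objective(*pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

def val_error(C, gamma):
    """Toy stand-in for the SVR cross-validation error surface."""
    return (C - 10.0) ** 2 + (gamma - 0.5) ** 2

local_box = grid_traverse(val_error, [(1.0, 100.0), (0.01, 2.0)])
(best_C, best_gamma), err = pso(val_error, local_box)
```

    The coarse grid keeps the expensive objective from being evaluated densely over the whole space; the swarm then refines only inside the promising region.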

  6. Forecasting Cryptocurrencies Financial Time Series

    DEFF Research Database (Denmark)

    Catania, Leopoldo; Grassi, Stefano; Ravazzolo, Francesco

    2018-01-01

    This paper studies the predictability of cryptocurrencies time series. We compare several alternative univariate and multivariate models in point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto–predictors and rely...

  7. Comparison of microbial community shifts in two parallel multi-step drinking water treatment processes.

    Science.gov (United States)

    Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong

    2017-07-01

    Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power towards the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, the nitrite-oxidizing bacteria, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of step-to-step microbial evolvement in multi-step drinking water treatment systems, and the results provide insight to control and manipulation of the drinking water microbiome via optimization of DWTP design and operation.

  8. Efficient Resources Provisioning Based on Load Forecasting in Cloud

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2014-01-01

    Cloud providers should ensure QoS while maximizing resources utilization. One optimal strategy is to timely allocate resources in a fine-grained mode according to an application's actual resource demand. The necessary precondition of this strategy is obtaining future load information in advance. We propose a multi-step-ahead load forecasting method, KSwSVR, based on statistical learning theory, which is suitable for the complex and dynamic characteristics of the cloud computing environment. It integrates an improved support vector regression algorithm and a Kalman smoother. Public trace data taken from multiple types of resources were used to verify its prediction accuracy, stability, and adaptability, in comparison with AR, BPNN, and standard SVR. Subsequently, based on the predicted results, a simple and efficient strategy is proposed for resource provisioning. A CPU allocation experiment indicated it can effectively reduce resource consumption while meeting service-level agreement requirements.
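
    KSwSVR's internals (an improved SVR plus a Kalman smoother) are not reproduced here; the sketch below shows only the generic recursive multi-step-ahead strategy that such a one-step forecaster plugs into. The linear extrapolator is a hypothetical stand-in for the trained model.

```python
def multi_step_forecast(one_step_model, history, horizon, lags):
    """Recursive multi-step-ahead forecasting: each one-step prediction
    is fed back in as an input for the following step."""
    window = list(history[-lags:])
    forecasts = []
    for _ in range(horizon):
        y_hat = one_step_model(window)
        forecasts.append(y_hat)
        window = window[1:] + [y_hat]  # slide the input window forward
    return forecasts

# hypothetical stand-in for the trained forecaster: linear extrapolation
linear_extrapolator = lambda w: 2.0 * w[-1] - w[-2]

load_history = [0.0, 2.0, 4.0, 6.0, 8.0]  # toy resource-load series
ahead = multi_step_forecast(linear_extrapolator, load_history,
                            horizon=3, lags=2)
# continues the linear trend: [10.0, 12.0, 14.0]
```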

  9. The method of separation for evolutionary spectral density estimation of multi-variate and multi-dimensional non-stationary stochastic processes

    KAUST Repository

    Schillinger, Dominik

    2013-07-01

    The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.

  10. PyForecastTools

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-22

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
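
    Several of these metrics are compact enough to restate from their textbook definitions; the functions below are independent re-implementations for illustration and do not mirror PyForecastTools' actual API.

```python
import math
from statistics import median

def mase(obs, pred):
    """Mean absolute scaled error: MAE scaled by the in-sample MAE of
    the one-step naive (persistence) forecast."""
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)
    naive_mae = sum(abs(b - a) for a, b in zip(obs, obs[1:])) / (len(obs) - 1)
    return mae / naive_mae

def median_symmetric_accuracy(obs, pred):
    """100 * (exp(median|log(pred/obs)|) - 1): a robust, percentage-like
    accuracy measure for strictly positive data."""
    q = [abs(math.log(p / o)) for o, p in zip(obs, pred)]
    return 100.0 * (math.exp(median(q)) - 1.0)

def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio from the counts
    of a 2x2 contingency table."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far
```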

  11. Towards uncertainty estimation for operational forecast products - a multi-model-ensemble approach for the North Sea and the Baltic Sea

    Science.gov (United States)

    Golbeck, Inga; Li, Xin; Janssen, Frank

    2014-05-01

    Several independent operational ocean models provide forecasts of the ocean state (e.g. sea level, temperature, salinity and ice cover) in the North Sea and the Baltic Sea on a daily basis. These forecasts are the primary source of information for a variety of information and emergency response systems used e.g. to issue sea level warnings or carry out oil drift forecasts. The forecasts are of course highly valuable as such, but often suffer from a lack of information on their uncertainty. With the aim of augmenting the existing operational ocean forecasts in the North Sea and the Baltic Sea by a measure of uncertainty, a multi-model-ensemble (MME) system for sea surface temperature (SST), sea surface salinity (SSS) and water transports has been set up in the framework of the MyOcean-2 project. Members of the MyOcean-2, NOOS and HIROMB/BOOS communities provide 48-h forecasts serving as inputs. Different variables are processed separately due to their different physical characteristics. Based on the daily MME products of SST and SSS collected so far, a statistical method, Empirical Orthogonal Function (EOF) analysis, is applied to assess their spatial and temporal variability. For sea surface currents, progressive vector diagrams at specific points are consulted to estimate the performance of the circulation models, especially in hydrodynamically important areas, e.g. the inflow/outflow of the Baltic Sea, the Norwegian trench and the English Channel. For further versions of the MME system, it is planned to extend the MME to other variables like e.g. sea level, ocean currents or ice cover based on the needs of the model providers and their customers. It is also planned to include in-situ data to augment the uncertainty information and for validation purposes. Additionally, weighting methods will be implemented into the MME system to develop more complex uncertainty measures. The methodology used to create the MME will be outlined and different ensemble products will be presented.

  12. Determination of the structures of small gold clusters on stepped magnesia by density functional calculations.

    Science.gov (United States)

    Damianos, Konstantina; Ferrando, Riccardo

    2012-02-21

    The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms, and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate for smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface in which vertical leaflets prevail. With increasing cluster size pyramidal hollow cages begin to compete against leaflet structures. Cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found as the most stable isomer. This tetrahedron is however quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) show the same qualitative trends. This journal is © The Royal Society of Chemistry 2012

  13. Multi-step processes in the (d, t) and (d, 3He) reactions on 116Sn and 208Pb targets at Ed = 200 MeV

    International Nuclear Information System (INIS)

    Langevin-Joliot, H.; Van de Wiele, J.; Guillot, J.; Koning, A.J.

    2000-01-01

    The role of multi-step processes in the reactions 116Sn(d,t), 208Pb(d,t) and 116Sn(d,3He), previously studied at Ed = 200 MeV at forward angles and for relatively low energy transfers, has been investigated. We have performed for the first time multi-step calculations taking into account systematically collective excitations in the second and higher order step inelastic transitions. A calculation code based on the Feshbach, Kerman and Koonin model has been modified to handle explicitly these collective excitations, most important in the forward angle domain. One-step double differential pick-up cross sections were built from finite-range distorted wave results spread in energy using known or estimated hole state characteristics. It is shown that two-step cross sections calculated using the above method compare rather well with those deduced via coupled channel calculations for the same collective excitations. The multi-step calculations performed up to 6 steps reproduce reasonably well the 115Sn, 207Pb and 115In experimental spectra measured up to Ex ≈ 40 MeV and angles up to 15 deg. The relative contributions of steps of increasing order to pick-up cross sections at Ed = 200 MeV and 150 MeV are discussed. (authors)

  14. Forecasting metal prices: Do forecasters herd?

    DEFF Research Database (Denmark)

    Pierdzioch, C.; Rulke, J. C.; Stadtmann, G.

    2013-01-01

    We analyze more than 20,000 forecasts of nine metal prices at four different forecast horizons. We document that forecasts are heterogeneous and report that anti-herding appears to be a source of this heterogeneity. Forecaster anti-herding reflects strategic interactions among forecasters...

  15. A regime-switching stochastic volatility model for forecasting electricity prices

    DEFF Research Database (Denmark)

    Exterkate, Peter; Knapik, Oskar

    In a recent review paper, Weron (2014) pinpoints several crucial challenges outstanding in the area of electricity price forecasting. This research attempts to address all of them by i) showing the importance of considering fundamental price drivers in modeling, ii) developing new techniques for ...... on explanatory variables. Bayesian inference is explored in order to obtain predictive densities. The main focus of the paper is on short-term density forecasting in the Nord Pool intraday market. We show that the proposed model outperforms several benchmark models at this task....

  16. Forecasting seeing and parameters of long-exposure images by means of ARIMA

    Science.gov (United States)

    Kornilov, Matwey V.

    2016-02-01

    Atmospheric turbulence is one of the major limiting factors for ground-based astronomical observations. In this paper, the problem of short-term forecasting of seeing is discussed. The real data that were obtained by atmospheric optical turbulence (OT) measurements above Mount Shatdzhatmaz in 2007-2013 have been analysed. Linear auto-regressive integrated moving average (ARIMA) models are used for the forecasting. A new procedure for forecasting the image characteristics of direct astronomical observations (central image intensity, full width at half maximum, radius encircling 80 % of the energy) has been proposed. Probability density functions of the forecast of these quantities are 1.5-2 times thinner than the respective unconditional probability density functions. Overall, this study found that the described technique could adequately describe temporal stochastic variations of the OT power.
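
    The narrowing of the conditional forecast density relative to the unconditional one can be illustrated with the simplest member of the ARIMA family: an AR(1) fitted by least squares, standing in here for the paper's full ARIMA models.

```python
import math
import random

def ar1_fit(x):
    """Least-squares fit of x_t = c + phi * x_{t-1} + eps_t."""
    xs, ys = x[:-1], x[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
           / sum((a - mx) ** 2 for a in xs))
    c = my - phi * mx
    sigma2 = sum((b - (c + phi * a)) ** 2 for a, b in zip(xs, ys)) / n
    return c, phi, sigma2

def ar1_forecast(x_last, c, phi, sigma2, h):
    """h-step-ahead point forecast and forecast standard deviation."""
    mu = x_last
    for _ in range(h):
        mu = c + phi * mu
    var = sigma2 * (1 - phi ** (2 * h)) / (1 - phi ** 2)
    return mu, math.sqrt(var)

# synthetic persistent series standing in for seeing measurements
rng = random.Random(42)
x = [0.0]
for _ in range(2000):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))

c, phi, sigma2 = ar1_fit(x)
mu1, sd1 = ar1_forecast(x[-1], c, phi, sigma2, h=1)
sd_uncond = math.sqrt(sigma2 / (1 - phi ** 2))
# the conditional (forecast) density is about 1/sqrt(1 - phi^2) times
# thinner than the unconditional one, consistent with the 1.5-2x figure
```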

  17. Forecasting Volatility of USD/MUR Exchange Rate using a GARCH ...

    African Journals Online (AJOL)

    that both distributions may forecast quite well with a slight advantage to the GARCH(1 ... Financial time series tend to exhibit certain characteristic features such as volatility ... Heteroscedasticity-adjusted MAE to evaluate the forecasts. Chuanga et .... the Student's-t distribution or the GED with the following probability density.

  18. Monthly streamflow forecasting using continuous wavelet and multi-gene genetic programming combination

    Science.gov (United States)

    Hadi, Sinan Jasim; Tombul, Mustafa

    2018-06-01

    Streamflow is an essential component of the hydrologic cycle at the regional and global scale and the main source of fresh water supply. It is highly associated with natural disasters, such as droughts and floods. Therefore, accurate streamflow forecasting is essential. Forecasting streamflow in general and monthly streamflow in particular is a complex process that cannot be handled by data-driven models (DDMs) only and requires pre-processing. Wavelet transformation is a pre-processing technique; however, application of continuous wavelet transformation (CWT) produces many scales that cause deterioration in the performance of any DDM because of the high number of redundant variables. This study proposes multigene genetic programming (MGGP) as a selection tool. After the CWT analysis, it selects important scales to be fed into the artificial neural network (ANN). A basin located in the southeast of Turkey is selected as a case study to prove the forecasting ability of the proposed model. One-month-ahead downstream flow is used as output, and downstream flow, upstream flow, rainfall, temperature, and potential evapotranspiration with associated lags are used as inputs. Before modeling, wavelet coherence transformation (WCT) analysis was conducted to analyze the relationship between the variables in the time-frequency domain. Several combinations were developed to investigate the effect of the variables on streamflow forecasting. The results indicated a high localized correlation between the streamflow and the other variables, especially the upstream flow. In the models of the standalone layout, where the data were entered into the ANN and MGGP without CWT, the performance was found to be poor. In the best-scale layout, where the best scale of the CWT, identified as the highest correlated scale, is chosen and entered into the ANN and MGGP, the performance increased slightly. 
    Using the proposed model, the performance improved dramatically, particularly in forecasting the peak values, because of the inclusion
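
    The best-scale selection step, i.e. choosing the CWT scale most correlated with the forecast target, reduces to an argmax over Pearson correlations. The sketch below shows only that selection logic on plain lists; the CWT itself is not reproduced.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_scale(candidate_scales, target):
    """Index of the CWT scale series most correlated (in absolute
    value) with the forecast target."""
    return max(range(len(candidate_scales)),
               key=lambda i: abs(pearson(candidate_scales[i], target)))

streamflow_next = [1.0, 2.0, 3.0, 4.0, 5.0]  # toy one-month-ahead target
scales = [
    [5.0, 1.0, 4.0, 2.0, 3.0],   # weakly related scale
    [2.0, 4.0, 6.0, 8.0, 10.0],  # strongly related scale
    [3.0, 1.0, 2.0, 5.0, 4.0],
]
chosen = best_scale(scales, streamflow_next)
```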

  19. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    Science.gov (United States)

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.

  20. New Aspects of Probabilistic Forecast Verification Using Information Theory

    Science.gov (United States)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed; then a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
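
    For ensemble-generated forecasts both scores can indeed be evaluated exactly. The sketch below uses the standard empirical CRPS formula for a finite ensemble and the definition of the Ignorance Score; it is a minimal illustration, not the decomposition algorithm the abstract describes.

```python
import math

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast:
    mean|x_i - y| - 0.5 * mean|x_i - x_j|, exact for the ensemble."""
    m = len(members)
    to_obs = sum(abs(x - obs) for x in members) / m
    internal = sum(abs(a - b) for a in members for b in members) / (m * m)
    return to_obs - 0.5 * internal

def ignorance(prob_of_outcome):
    """IGN = -log2 p(y): the surprise, in bits, at the verifying outcome."""
    return -math.log2(prob_of_outcome)
```

    A sharp, correct ensemble scores zero: crps_ensemble([0.5, 0.5], 0.5) is 0.0, while the spread-out two-member ensemble [0.0, 1.0] verifying at 0.5 scores 0.25.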

  1. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU)

    2012-12-01

    In this study some alternative forecasts for the unemployment rate of the USA made by four institutions (International Monetary Fund (IMF), Organization for Economic Co-operation and Development (OECD), Congressional Budget Office (CBO) and Blue Chips (BC)) are evaluated regarding accuracy and bias. The most accurate predictions on the forecasting horizon 201-2011 were provided by the IMF, followed by the OECD, CBO and BC. These results were obtained using Theil's U1 statistic and a new method that has not been used before in the literature in this context. Multi-criteria ranking was applied to build a hierarchy of the institutions regarding accuracy, and five important accuracy measures were taken into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combining the institutions' predictions is a suitable strategy to improve the accuracy of the IMF and OECD forecasts under all combination schemes, but the INV scheme is the best. Filtering and smoothing the original predictions with the Hodrick-Prescott filter and the Holt-Winters technique, respectively, is a good strategy for improving only the BC expectations. The proposed strategies to improve the accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy contribute significantly to the quality of the decision-making process.
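
    Theil's U1 statistic and an inverse-MSE combination can be sketched as follows; the U1 formula is the standard one, while the inverse-MSE weighting is an assumption about how the INV scheme is usually defined.

```python
import math

def theil_u1(actual, forecast):
    """Theil's U1: RMSE normalized into [0, 1]; 0 means a perfect forecast."""
    n = len(actual)
    rmse = math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)
    denom = (math.sqrt(sum(a * a for a in actual) / n)
             + math.sqrt(sum(f * f for f in forecast) / n))
    return rmse / denom

def combine_inv(actual, f1, f2):
    """Assumed INV scheme: weight each forecast by the inverse of its
    historical mean squared error."""
    mse = lambda f: sum((a - x) ** 2 for a, x in zip(actual, f)) / len(actual)
    w1 = (1.0 / mse(f1)) / (1.0 / mse(f1) + 1.0 / mse(f2))
    return [w1 * a + (1.0 - w1) * b for a, b in zip(f1, f2)]
```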

  2. Online probabilistic learning with an ensemble of forecasts

    Science.gov (United States)

    Thorey, Jean; Mallet, Vivien; Chaussin, Christophe

    2016-04-01

    Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS on weighted empirical distribution functions (deriving from the weighted ensemble) may introduce a bias because of which minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated to the members in the forecasted empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, both for the application and for the theoretical guarantee to hold. As an application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
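
    The online reweighting loop can be sketched with the classic exponential-weights update, one of the standard online learning methods. The paper's unbiased CRPS loss is not reproduced here; a simple absolute error stands in for the per-step loss.

```python
import math

def exp_weights(cum_losses, eta=2.0):
    """Exponential-weights update: weight each member in proportion to
    exp(-eta * its cumulative past loss)."""
    raw = [math.exp(-eta * loss) for loss in cum_losses]
    total = sum(raw)
    return [r / total for r in raw]

# two hypothetical ensemble members: A tracks the observations, B is biased
observations = [1.0, 1.1, 0.9, 1.0, 1.05]
member_a = [1.0, 1.0, 1.0, 1.0, 1.0]
member_b = [2.0, 2.0, 2.0, 2.0, 2.0]

cum = [0.0, 0.0]
weights_history = []
for y, a, b in zip(observations, member_a, member_b):
    weights_history.append(exp_weights(cum))  # uses only past information
    cum[0] += abs(a - y)  # absolute error stands in for the per-step CRPS
    cum[1] += abs(b - y)
final_weights = exp_weights(cum)  # mass concentrates on the better member
```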

  3. Overcoming the hurdles of multi-step targeting (MST) for effective radioimmunotherapy of solid tumors

    International Nuclear Information System (INIS)

    Larson, Steven M.; Cheung, Nai-Kong

    2009-01-01

    The 4 specific aims of this project are: (1) Optimization of MST to increase tumor uptake; (2) Antigen heterogeneity; (3) Characterization and reduction of renal uptake; and (4) Validation in vivo of optimized MST targeted therapy. This proposal focussed upon optimizing multistep immune targeting strategies for the treatment of cancer. Two multi-step targeting constructs were explored during this funding period: (1) anti-Tag-72 and (2) anti-GD2.

  4. Fabrication of multi-emitter array of CNT for enhancement of current density

    Energy Technology Data Exchange (ETDEWEB)

    Chouhan, Vijay, E-mail: vchouhan@post.kek.jp [Department of Accelerator Science, Graduate University for Advanced Studies, 1-1 Oho, Tsukuba, Ibaraki (Japan); Noguchi, Tsuneyuki [High Energy Accelerator Research Organization-KEK, 1-1 Oho, Tsukuba, Ibaraki (Japan); Kato, Shigeki [Department of Accelerator Science, Graduate University for Advanced Studies, 1-1 Oho, Tsukuba, Ibaraki (Japan); High Energy Accelerator Research Organization-KEK, 1-1 Oho, Tsukuba, Ibaraki (Japan)

    2011-11-11

    We studied and compared field emission properties of two kinds of emitters of randomly oriented multi-wall carbon nanotubes (MWNTs), viz. continuous film emitter (CFE) and multi-emitter array (MEA). The CFE has a continuous film of MWNTs while the MEA consists of many equidistant small circular emitters. Both types of emitters were prepared by dispersing MWNTs over a titanium (Ti) film (for CFEs) or Ti circular islands (for MEAs) deposited on tantalum (Ta) followed by rooting of MWNTs into the Ti film or the Ti islands at high temperature. Emission properties of both types of emitters were analyzed with changing their emission areas. In case of the CFEs, current density decreased with an increase in emission area whereas consistent current densities were achieved from MEAs with different emission areas. In other words, the total emission current was achieved in proportion to the emission area in the case of MEAs. Additionally a high current density of 22 A/cm² was achieved at an electric field of 8 V/µm from MEAs, which was far better than that obtained from CFEs. The high current density in MEAs was attributed to edge effect, in which higher emission current is achieved from the edge of film emitter. The results indicate that the field emission characteristics can be greatly improved if a cathode contains many small equidistant circular emitters instead of a continuous film. The outstanding stability of the CFE and the MEA has been demonstrated for 2100 and 1007 h, respectively.

  5. Forecasting Lightning Threat using Cloud-resolving Model Simulations

    Science.gov (United States)

    McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.

    2009-01-01

As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating

  6. Collision density approach of radiation damage in a multi-species medium

    International Nuclear Information System (INIS)

    Lux, I.; Pazsit, I.

    1981-05-01

Space-energy dependent forward-type equations for the collision densities of energetic atoms in a multi-species semi-infinite homogeneous medium are formulated. The introduction of the one-dimensional isotropic forward-backward model of Fermi for the scattering, and application of the Laplace transformation with respect to the lethargy variable, leads to a linear differential equation system with constant coefficients. This equation system is solved for an arbitrary number of species, and relations between the collision densities and defect distributions of the different species are given in the Kinchin-Pease model. The case of an alien particle incident on a two-component target is examined in some detail and the sputtering spectra are given numerically. (author)

  7. A hybrid wavelet transform based short-term wind speed forecasting approach.

    Science.gov (United States)

    Wang, Jujie

    2014-01-01

It is important to improve the accuracy of wind speed forecasting for wind park management and wind power utilization. In this paper, a novel hybrid approach known as WTT-TNN is proposed for wind speed forecasting. In the first step of the approach, a wavelet transform technique (WTT) is used to decompose the wind speed into an approximate scale and several detailed scales. In the second step, a two-hidden-layer neural network (TNN) is used to predict the approximate scale and each of the detailed scales. In order to find the optimal network architecture, the partial autocorrelation function is adopted to determine the number of neurons in the input layer, and an experimental simulation is made to determine the number of neurons within each hidden layer in the modeling process of the TNN. Afterwards, the final prediction is obtained as the sum of these per-scale prediction results. In this study, the WTT is employed to extract the different patterns of the wind speed and make forecasting easier. To evaluate the performance of the proposed approach, it is applied to forecast wind speed in the Hexi Corridor of China. Simulation results in four different cases show that the proposed method increases wind speed forecasting accuracy.
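The decompose-forecast-recombine scheme described above can be sketched in a few lines. As an illustrative assumption, a one-level Haar split stands in for the wavelet transform and a least-squares AR(1) fit stands in for the paper's two-hidden-layer neural network; the `speeds` series is hypothetical, not the Hexi Corridor data.

```python
# Sketch of the decompose-forecast-recombine idea behind WTT-TNN.
# Assumptions: a one-level Haar transform replaces the wavelet transform
# technique and a naive AR(1) replaces the two-hidden-layer neural network.

def haar_decompose(x):
    """Split an even-length series into approximation and detail scales."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def ar1_forecast(series):
    """One-step forecast from a least-squares AR(1) fit (no intercept)."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1]) or 1.0
    return (num / den) * series[-1]

def hybrid_forecast(x):
    """Forecast each scale separately, then sum the scale forecasts."""
    approx, detail = haar_decompose(x)
    return ar1_forecast(approx) + ar1_forecast(detail)

speeds = [5.0, 5.5, 6.0, 6.2, 6.1, 6.4, 6.8, 7.0]  # hypothetical wind speeds (m/s)
print(round(hybrid_forecast(speeds), 3))
```

In the paper the recombination is done after forecasting every detailed scale; this toy keeps one detail scale only to stay short.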

  8. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Ren, S [Stanford University, Stanford, CA (United States); Tianjin University, Tianjin (China); Hara, W; Le, Q; Wang, L; Xing, L; Li, R [Stanford University, Stanford, CA (United States)

    2016-06-15

Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
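Under a simplifying assumption that the intensity-based and location-based conditionals are independent Gaussians, the Bayesian fusion step this record describes reduces to a precision-weighted average. The sketch below shows that arithmetic only; the HU values and variances are hypothetical, not from the study.

```python
# Illustrative Bayesian fusion of two per-voxel density estimates.
# Assumption: both conditionals are Gaussian, so the posterior of their
# product is Gaussian with precision-weighted mean.

def fuse_gaussians(mu_intensity, var_intensity, mu_atlas, var_atlas):
    """Posterior mean and variance of the product of two Gaussian densities."""
    w_i = 1.0 / var_intensity        # precision of the intensity estimate
    w_a = 1.0 / var_atlas            # precision of the atlas/location estimate
    mean = (w_i * mu_intensity + w_a * mu_atlas) / (w_i + w_a)
    var = 1.0 / (w_i + w_a)
    return mean, var

# A voxel whose MR intensity suggests ~300 HU (uncertain) but whose atlas
# location suggests ~500 HU (more confident): the posterior leans to the atlas.
post_mean, post_var = fuse_gaussians(300.0, 100.0**2, 500.0, 50.0**2)
print(round(post_mean, 1), round(post_var, 1))
```

The same weighting explains why the combined estimate beats either source alone: wherever one conditional is diffuse, the other dominates.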

  9. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    International Nuclear Information System (INIS)

    Ren, S; Hara, W; Le, Q; Wang, L; Xing, L; Li, R

    2016-01-01

Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.

  10. Hydro-meteorological evaluation of downscaled global ensemble rainfall forecasts

    Science.gov (United States)

    Gaborit, Étienne; Anctil, François; Fortin, Vincent; Pelletier, Geneviève

    2013-04-01

Ensemble rainfall forecasts are of high interest for decision making, as they provide an explicit and dynamic assessment of the uncertainty in the forecast (Ruiz et al. 2009). However, for hydrological forecasting, their low resolution currently limits their use to large watersheds (Maraun et al. 2010). In order to bridge this gap, various implementations of the statistic-stochastic multi-fractal downscaling technique presented by Perica and Foufoula-Georgiou (1996) were compared, bringing Environment Canada's global ensemble rainfall forecasts from a 100 by 70-km resolution down to 6 by 4-km, while increasing each pixel's rainfall variance and preserving its original mean. For comparison purposes, simpler methods were also implemented, such as bi-linear interpolation, which disaggregates global forecasts without modifying their variance. The downscaled meteorological products were evaluated using different scores and diagrams, from both meteorological and hydrological viewpoints. The meteorological evaluation was conducted by comparing the forecasted rainfall depths against nine days of observed values taken from the Québec City rain gauge database. These nine days featured strong precipitation events occurring during the summer of 2009. For the hydrologic evaluation, the hydrological models SWMM5 and (a modified version of) GR4J were implemented on a small 6 km² urban catchment located in the Québec City region. Ensemble hydrologic forecasts with a time step of 3 hours were then performed over a 3-month period of the summer of 2010 using the original and downscaled ensemble rainfall forecasts. The most important conclusions of this work are that the overall quality of the forecasts was preserved during the disaggregation procedure and that the disaggregated products using this variance-enhancing method were of similar quality to the bi-linear interpolation products. However, variance and dispersion of the different members were, of course, much improved for the
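The key constraint of the disaggregation described above, preserving each coarse pixel's mean while adding sub-pixel variance, can be illustrated with a toy multiplicative cascade. This is not the Perica and Foufoula-Georgiou scheme itself; the uniform weights and spread parameter are illustrative assumptions.

```python
# Toy mean-preserving, variance-enhancing disaggregation of one coarse
# rainfall pixel into sub-pixels. Random multiplicative weights are
# normalized so the sub-pixel mean equals the coarse value exactly.
import random

def disaggregate(coarse_value, n_subpixels, spread=0.5, rng=random):
    weights = [1.0 + rng.uniform(-spread, spread) for _ in range(n_subpixels)]
    total = sum(weights)
    # Scale so that mean(sub-pixels) == coarse_value.
    return [coarse_value * w * n_subpixels / total for w in weights]

random.seed(1)
sub = disaggregate(12.0, 4)   # 12 mm over a coarse pixel -> four sub-pixels
print([round(v, 2) for v in sub], round(sum(sub) / len(sub), 6))
```

Bi-linear interpolation, the baseline in the record, would instead leave the sub-pixel variance essentially unchanged.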

  11. Usefulness of multi-plane dynamic subtraction CT (MPDS-CT) for intracranial high density lesions

    Energy Technology Data Exchange (ETDEWEB)

    Takagi, Ryo; Kumazaki, Tatsuo [Nippon Medical School, Tokyo (Japan)

    1996-02-01

We present a new CT technique using a high speed CT scanner for the detection and evaluation of temporal and spatial contrast enhancement of intracranial high density lesions. Multi-plane dynamic subtraction CT (MPDS-CT) was performed in 21 patients with intracranial high density lesions. These lesions consisted of 10 brain tumors, 7 intracerebral hemorrhages and 4 vascular malformations (2 untreated, 2 post-embolization). A baseline study was first performed, and 5 sequential planes covering the entire high density lesion were selected. After obtaining the 5 sequential CT images as mask images, three series of multi-plane dynamic CT were performed for the same 5 planes with an intravenous bolus injection of contrast medium. MPDS-CT images were reconstructed by subtracting the mask images from the dynamic CT images. MPDS-CT was compared with conventional contrast-enhanced CT. MPDS-CT images showed definite contrast enhancement of high density brain tumors and vascular malformations which were not clearly identified on conventional contrast-enhanced CT images because of calcified or hemorrhagic lesions and embolic materials, enabling us to discriminate enhanced abnormalities from non-enhanced areas such as unusual intracerebral hemorrhages. MPDS-CT will provide more accurate and objective information and will be greatly helpful in interpreting pathophysiologic conditions. (author).

  12. Wind speed forecasting in the central California wind resource area

    Energy Technology Data Exchange (ETDEWEB)

    McCarthy, E.F. [Wind Economics & Technology, Inc., Martinez, CA (United States)

    1997-12-31

A wind speed forecasting program was implemented in the summer seasons of 1985-87 in the Central California Wind Resource Area (WRA). The forecasting program is designed to use either meteorological observations from the WRA together with local upper air observations, or upper air observations alone, to predict the daily average wind speed at two locations. Forecasts are made each morning at 6 AM and are valid for a 24 hour period. Ease of use is a hallmark of the program, as the daily forecast can be made using data entered into a programmable HP calculator. The forecasting program was the first step in a process to examine whether the electrical energy output of an entire wind power generation facility, or defined subsections of the same facility, could be predicted up to 24 hours in advance. Analysis of the results of the summer season program using standard forecast verification techniques shows that the program has skill over persistence and climatology.

  13. Evaluating Forecasts, Narratives and Policy Using a Test of Invariance

    Directory of Open Access Journals (Sweden)

    Jennifer L. Castle

    2017-09-01

    Full Text Available Economic policy agencies produce forecasts with accompanying narratives, and base policy changes on the resulting anticipated developments in the target variables. Systematic forecast failure, defined as large, persistent deviations of the outturns from the numerical forecasts, can make the associated narrative false, which would in turn question the validity of the entailed policy implementation. We establish when systematic forecast failure entails failure of the accompanying narrative, which we call forediction failure, and when that in turn implies policy invalidity. Most policy regime changes involve location shifts, which can induce forediction failure unless the policy variable is super exogenous in the policy model. We propose a step-indicator saturation test to check in advance for invariance to policy changes. Systematic forecast failure, or a lack of invariance, previously justified by narratives reveals such stories to be economic fiction.

  14. Calibration and combination of dynamical seasonal forecasts to enhance the value of predicted probabilities for managing risk

    Science.gov (United States)

    Dutton, John A.; James, Richard P.; Ross, Jeremy D.

    2013-06-01

Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service and the European Centre for Medium-Range Weather Forecasts, and to a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multi-model in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures for leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.

  15. Step Density Profiles in Localized Chains

    Science.gov (United States)

    De Roeck, Wojciech; Dhar, Abhishek; Huveneers, François; Schütz, Marius

    2017-06-01

We consider two types of strongly disordered one-dimensional Hamiltonian systems coupled to baths (energy or particle reservoirs) at the boundaries: strongly disordered quantum spin chains and disordered classical harmonic oscillators. These systems are believed to exhibit localization, implying in particular that the conductivity decays exponentially in the chain length L. We ask however for the profile of the (very slowly) transported quantity in the steady state. We find that this profile is a step-function, jumping in the middle of the chain from the value set by the left bath to the value set by the right bath. This is confirmed by numerics on a disordered quantum spin chain of 9 spins and on much longer chains of harmonic oscillators. From theoretical arguments, we find that the width of the step grows not faster than √L, and we confirm this numerically for harmonic oscillators. In this case, we also observe a drastic breakdown of local equilibrium at the step, resulting in a heavily oscillating temperature profile.

  16. The Contribution of Soil Moisture Information to Forecast Skill: Two Studies

    Science.gov (United States)

    Koster, Randal

    2010-01-01

This talk briefly describes two recent studies on the impact of soil moisture information on hydrological and meteorological prediction. While the studies utilize soil moisture derived from the integration of large-scale land surface models with observations-based meteorological data, the results directly illustrate the potential usefulness of satellite-derived soil moisture information (e.g., from SMOS and SMAP) for applications in prediction. The first study, the GEWEX- and CLIVAR-sponsored GLACE-2 project, quantifies the contribution of realistic soil moisture initialization to skill in subseasonal forecasts of precipitation and air temperature (out to two months). The multi-model study shows that soil moisture information does indeed contribute skill to the forecasts, particularly for air temperature, and particularly when the initial local soil moisture anomaly is large. Furthermore, the skill contributions tend to be larger where the soil moisture initialization is more accurate, as measured by the density of the observational network contributing to the initialization. The second study focuses on streamflow prediction. The relative contributions of snow and soil moisture initialization to skill in streamflow prediction at seasonal lead, in the absence of knowledge of meteorological anomalies during the forecast period, were quantified with several land surface models using uniquely designed numerical experiments and naturalized streamflow data covering multiple decades over the western United States. In several basins, accurate soil moisture initialization is found to contribute significant levels of predictive skill. Depending on the date of forecast issue, the contributions can be significant out to leads of six months. Both studies suggest that improvements in soil moisture initialization would lead to increases in predictive skill. The relevance of SMOS and SMAP satellite-based soil moisture information to prediction is discussed in the context of these

  17. Linear combination of forecasts with numerical adjustment via MINIMAX non-linear programming

    Directory of Open Access Journals (Sweden)

    Jairo Marlon Corrêa

    2016-03-01

Full Text Available This paper proposes a linear combination of forecasts obtained from three forecasting methods (namely, ARIMA, Exponential Smoothing and Artificial Neural Networks), whose adaptive weights are determined via a multi-objective non-linear programming problem which seeks to minimize, simultaneously, the statistics MAE, MAPE and MSE. The results achieved by the proposed combination are compared with the traditional approach of linear combinations of forecasts, where the optimum adaptive weights are determined only by minimizing the MSE; with the combination method by arithmetic mean; and with the individual methods.
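The multi-objective weight selection described above can be sketched with a MINIMAX-style criterion: choose the weight vector whose worst (maximum) error statistic is smallest. A coarse grid search stands in for the paper's non-linear programming solver, and the forecast series are hypothetical; in practice the three statistics would also be normalized to comparable scales before taking the maximum.

```python
# Sketch of a minimax combination of three forecasts: pick simplex weights
# minimizing the maximum of MAE, MAPE and MSE. Grid search replaces the
# non-linear programming solver used in the paper (an assumption).

def errors(actual, forecast):
    n = len(actual)
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / n
    mape = sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n
    mse = sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n
    return mae, mape, mse

def minimax_weights(actual, f1, f2, f3, step=0.05):
    best_score, best_w = float("inf"), None
    w1 = 0.0
    while w1 <= 1.0 + 1e-9:
        w2 = 0.0
        while w1 + w2 <= 1.0 + 1e-9:
            w3 = 1.0 - w1 - w2          # weights sum to one by construction
            combo = [w1 * a + w2 * b + w3 * c for a, b, c in zip(f1, f2, f3)]
            score = max(errors(actual, combo))   # worst statistic
            if score < best_score:
                best_score, best_w = score, (w1, w2, w3)
            w2 += step
        w1 += step
    return best_w

actual = [10.0, 12.0, 11.0, 13.0, 14.0]          # hypothetical series
arima  = [9.5, 12.4, 11.2, 12.5, 13.8]
es     = [10.4, 11.5, 10.6, 13.4, 14.5]
ann    = [10.1, 12.2, 11.4, 13.2, 13.9]
print(minimax_weights(actual, arima, es, ann))
```

Minimizing only the MSE, the traditional baseline mentioned in the abstract, corresponds to replacing `max(errors(...))` with the third component alone.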

  18. Multi-step magnetization of the Ising model on a Shastry-Sutherland lattice: a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Huang, W C; Huo, L; Tian, G; Qian, H R; Gao, X S; Qin, M H; Liu, J-M

    2012-01-01

The magnetization behaviors and spin configurations of the classical Ising model on a Shastry-Sutherland lattice are investigated using Monte Carlo simulations, in order to understand the fascinating magnetization plateaus observed in TmB4 and other rare-earth tetraborides. The simulations reproduce the 1/2 magnetization plateau by taking into account the dipole-dipole interaction. In addition, a narrow 2/3 magnetization step at low temperature is predicted in our simulation. The multi-step magnetization can be understood as the consequence of the competitions among the spin-exchange interaction, the dipole-dipole interaction, and the static magnetic energy.
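The Monte Carlo machinery behind such simulations is the standard Metropolis update. As a deliberately simplified stand-in, the sketch below uses a plain square lattice with nearest-neighbour coupling and an external field rather than the Shastry-Sutherland geometry with dipolar terms; all parameter values are illustrative.

```python
# Toy Metropolis sketch of field-driven magnetization in an Ising model.
# Assumption: a square lattice with nearest-neighbour coupling J and field h
# stands in for the Shastry-Sutherland lattice with dipole-dipole terms.
import math, random

def sweep(spins, L, J, h, T, rng):
    """One Monte Carlo sweep: L*L attempted single-spin Metropolis flips."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        s = spins[i][j]
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * s * (J * nn + h)          # energy cost of flipping s
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -s

def magnetization(L=8, J=1.0, h=3.0, T=0.5, n_sweeps=200, seed=0):
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(n_sweeps):
        sweep(spins, L, J, h, T, rng)
    return sum(sum(row) for row in spins) / (L * L)

print(magnetization())
```

Sweeping `h` and recording the magnetization is how plateau structure such as the 1/2 and 2/3 steps is mapped out; on this toy lattice no plateaus appear, only saturation at strong field.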

  19. Multi-data reservoir history matching for enhanced reservoir forecasting and uncertainty quantification

    KAUST Repository

    Katterbauer, Klemens

    2015-04-01

    Reservoir simulations and history matching are critical for fine-tuning reservoir production strategies, improving understanding of the subsurface formation, and forecasting remaining reserves. Production data have long been incorporated for adjusting reservoir parameters. However, the sparse spatial sampling of this data set has posed a significant challenge for efficiently reducing uncertainty of reservoir parameters. Seismic, electromagnetic, gravity and InSAR techniques have found widespread applications in enhancing exploration for oil and gas and monitoring reservoirs. These data have however been interpreted and analyzed mostly separately, rarely exploiting the synergy effects that could result from combining them. We present a multi-data ensemble Kalman filter-based history matching framework for the simultaneous incorporation of various reservoir data such as seismic, electromagnetics, gravimetry and InSAR for best possible characterization of the reservoir formation. We apply an ensemble-based sensitivity method to evaluate the impact of each observation on the estimated reservoir parameters. Numerical experiments for different test cases demonstrate considerable matching enhancements when integrating all data sets in the history matching process. Results from the sensitivity analysis further suggest that electromagnetic data exhibit the strongest impact on the matching enhancements due to their strong differentiation between water fronts and hydrocarbons in the test cases.

  20. Linking pedestrian flow characteristics with stepping locomotion

    Science.gov (United States)

    Wang, Jiayue; Boltes, Maik; Seyfried, Armin; Zhang, Jun; Ziemer, Verena; Weng, Wenguo

    2018-06-01

While properties of human traffic flow are described by speed, density and flow, the locomotion of pedestrians is based on steps. To relate characteristics of the human locomotor system with properties of human traffic flow, this paper aims to connect gait characteristics like step length, step frequency, swaying amplitude and synchronization with speed and density, and thus to build a ground for advanced pedestrian models. For this aim, an observational and experimental study on the single-file movement of pedestrians at different densities is conducted. Methods to measure step length, step frequency, swaying amplitude and step synchronization are proposed by means of trajectories of the head. Mathematical models for the relations of step length or frequency and speed are evaluated. The problem of how step length and step duration are influenced by factors like body height and density is investigated. It is shown that the effect of body height on step length and step duration changes with density. Furthermore, two different types of step in-phase synchronization between two successive pedestrians are observed and the influence of step synchronization on step length is examined.

  1. Load forecasting of supermarket refrigeration

    DEFF Research Database (Denmark)

    Rasmussen, Lisa Buth; Bacher, Peder; Madsen, Henrik

    2016-01-01

… in Denmark. Every hour the hourly electrical load for refrigeration is forecasted for the following 42 h. The forecast models are adaptive linear time series models. The model has two regimes, one for opening hours and one for closing hours; this is modeled by a regime-switching model, and two different methods for predicting the regimes are tested. The dynamic relation between the weather and the load is modeled by simple transfer functions and the non-linearities are described using spline functions. The results are thoroughly evaluated and it is shown that the spline functions are suitable for handling the non-linear relations and that, after applying an auto-regressive noise model, the one-step ahead residuals do not contain further significant information.

  2. A General Probabilistic Forecasting Framework for Offshore Wind Power Fluctuations

    Directory of Open Access Journals (Sweden)

    Henrik Madsen

    2012-03-01

Full Text Available Accurate wind power forecasts highly contribute to the integration of wind power into power systems. The focus of the present study is on large-scale offshore wind farms and the complexity of generating accurate probabilistic forecasts of wind power fluctuations at time-scales of a few minutes. Such complexity is addressed from three perspectives: (i) the modeling of a nonlinear and non-stationary stochastic process; (ii) the practical implementation of the model we proposed; (iii) the gap between working on synthetic data and real world observations. At time-scales of a few minutes, offshore fluctuations are characterized by highly volatile dynamics which are difficult to capture and predict. Due to the lack of adequate on-site meteorological observations to relate these dynamics to meteorological phenomena, we propose a general model formulation based on a statistical approach and historical wind power measurements only. We introduce an advanced Markov Chain Monte Carlo (MCMC) estimation method to account for the different features observed in an empirical time series of wind power: autocorrelation, heteroscedasticity and regime-switching. The model we propose is an extension of Markov-Switching Autoregressive (MSAR) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors in each regime to cope with the heteroscedasticity. Then, we analyze the predictive power of our model on a one-step ahead exercise of time series sampled over 10 min intervals. Its performances are compared to state-of-the-art models and highlight the interest of including a GARCH specification for density forecasts.
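The generative structure of the MSAR-GARCH model described above, a regime-dependent autoregression whose innovations have GARCH(1,1) conditional variance, can be sketched as a simulator. The two-state parameter values below are illustrative assumptions, not the paper's estimates.

```python
# Minimal generative sketch of the Markov-Switching AR + GARCH idea:
# an AR(1) coefficient switches with a two-state Markov regime, and the
# innovation variance follows a GARCH(1,1) recursion.
import random

def simulate(n=500, seed=42):
    rng = random.Random(seed)
    phi = {0: 0.95, 1: 0.4}                 # AR coefficient per regime (illustrative)
    p_stay = 0.98                           # probability of staying in a regime
    omega, alpha, beta = 0.01, 0.1, 0.85    # GARCH(1,1) parameters
    x, h, eps, regime = 0.0, 0.1, 0.0, 0
    path = []
    for _ in range(n):
        if rng.random() > p_stay:           # Markov regime transition
            regime = 1 - regime
        h = omega + alpha * eps**2 + beta * h   # conditional variance update
        eps = rng.gauss(0.0, h**0.5)
        x = phi[regime] * x + eps
        path.append(x)
    return path

path = simulate()
print(len(path), round(path[-1], 3))
```

The one-step density forecast in such a model is a regime-probability-weighted mixture of Gaussians with means `phi[k] * x` and the GARCH variance, which is what makes the GARCH term matter for density (rather than point) forecasts.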

  3. Baking Powder Actuated Centrifugo-Pneumatic Valving for Automation of Multi-Step Bioassays

    Directory of Open Access Journals (Sweden)

    David J. Kinahan

    2016-10-01

Full Text Available We report a new flow control method for centrifugal microfluidic systems; CO2 is released from on-board stored baking powder upon contact with an ancillary liquid. The elevated pressure generated drives the sample into a dead-end pneumatic chamber sealed by a dissolvable film (DF). This liquid incursion wets and dissolves the DF, thus opening the valve. The activation pressure of the DF valve can be tuned by the geometry of the channel upstream of the DF membrane. Through pneumatic coupling with properly dimensioned disc architecture, we established serial cascading of valves, even at a constant spin rate. Similarly, we demonstrate sequential actuation of valves by dividing the disc into a number of distinct pneumatic chambers (separated by DF membranes). Opening these DFs, typically through arrival of a liquid to that location on a disc, permits pressurization of these chambers. This barrier-based scheme provides robust and strictly ordered valve actuation, which is demonstrated by the automation of a multi-step/multi-reagent DNA-based hybridization assay.

  4. Satellite image analysis and a hybrid ESSS/ANN model to forecast solar irradiance in the tropics

    International Nuclear Information System (INIS)

    Dong, Zibo; Yang, Dazhi; Reindl, Thomas; Walsh, Wilfred M.

    2014-01-01

Highlights: • Satellite image analysis is performed and cloud cover index is classified using self-organizing maps (SOM). • The ESSS model is used to forecast cloud cover index. • Solar irradiance is estimated using multi-layer perceptron (MLP). • The proposed model shows better accuracy than other investigated models. - Abstract: We forecast hourly solar irradiance time series using satellite image analysis and a hybrid exponential smoothing state space (ESSS) model together with artificial neural networks (ANN). Since cloud cover is the major factor affecting solar irradiance, cloud detection and classification are crucial to forecast solar irradiance. Geostationary satellite images provide cloud information, allowing a cloud cover index to be derived and analysed using self-organizing maps (SOM). Owing to the stochastic nature of cloud generation in tropical regions, the ESSS model is used to forecast cloud cover index. Among different models applied in ANN, we favour the multi-layer perceptron (MLP) to derive solar irradiance based on the cloud cover index. This hybrid model has been used to forecast hourly solar irradiance in Singapore and the technique is found to outperform traditional forecasting models.
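The exponential smoothing step applied to the cloud cover index can be illustrated in miniature. As a simplifying assumption, simple (single) exponential smoothing with a hand-picked smoothing constant stands in for the full ESSS state space model of the record; the `cloud_index` values are hypothetical.

```python
# Sketch of one-step-ahead forecasting of a cloud cover index with simple
# exponential smoothing, a reduced stand-in for the ESSS state space model.

def ses_forecast(series, alpha=0.3):
    """Return the one-step-ahead simple exponential smoothing forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level   # update the smoothed level
    return level

cloud_index = [0.2, 0.35, 0.3, 0.5, 0.45, 0.6]    # hypothetical index in [0, 1]
print(round(ses_forecast(cloud_index), 4))
```

In the hybrid model, this forecasted index (not the irradiance itself) is then fed to the MLP, which maps cloud cover to solar irradiance.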

  5. Application of multi-step direct reaction theory to 14 MeV neutron reaction, 3 (n,α)

    Energy Technology Data Exchange (ETDEWEB)

    Kumabe, I.; Matoba, M.; Fukuda, K. [Kyushu Univ., Fukuoka (Japan). Faculty of Engineering; Ikegami, H.; Muraoka, M [eds.

    1980-01-01

Multi-step direct-reaction theory proposed by Tamura et al. has been applied, with some modifications, to continuous spectra of the 14 MeV (n,α) reaction. Calculated results reproduce well the experimental energy and angular distributions of the 14 MeV (n,α) reactions.

  6. Multi-time scale Climate Informed Stochastic Hybrid Simulation-Optimization Model (McISH model) for Multi-Purpose Reservoir System

    Science.gov (United States)

    Lu, M.; Lall, U.

    2013-12-01

decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments at sub-seasonal and even weather time scales can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time scale approach allows adaptive management of water supplies acknowledging the changing risks, meeting the objectives over the decade in expected value and controlling the near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a copula density fit to the monthly flow and the flood volume. This is used to guide dynamic allocation of the flood control volume given the forecasts.

  7. Operational aspects of asynchronous filtering for flood forecasting

    Science.gov (United States)

    Rakovec, O.; Weerts, A. H.; Sumihar, J.; Uijlenhoet, R.

    2015-06-01

    This study investigates the suitability of the asynchronous ensemble Kalman filter (AEnKF) and a partitioned updating scheme for hydrological forecasting. The AEnKF requires forward integration of the model for the analysis and enables assimilation of current and past observations simultaneously at a single analysis step. The results of discharge assimilation into a grid-based hydrological model (using a soil moisture error model) for the Upper Ourthe catchment in the Belgian Ardennes show that including past predictions and observations in the data assimilation method improves the model forecasts. Additionally, we show that elimination of the strongly non-linear relation between the soil moisture storage and assimilated discharge observations from the model update becomes beneficial for improved operational forecasting, which is evaluated using several validation measures.

  8. Operational aspects of asynchronous filtering for hydrological forecasting

    Science.gov (United States)

    Rakovec, O.; Weerts, A. H.; Sumihar, J.; Uijlenhoet, R.

    2015-03-01

    This study investigates the suitability of the Asynchronous Ensemble Kalman Filter (AEnKF) and a partitioned updating scheme for hydrological forecasting. The AEnKF requires forward integration of the model for the analysis and enables assimilation of current and past observations simultaneously at a single analysis step. The results of discharge assimilation into a grid-based hydrological model for the Upper Ourthe catchment in the Belgian Ardennes show that including past predictions and observations in the data assimilation method improves the model forecasts. Additionally, we show that elimination of the strongly non-linear relation between the soil moisture storage and assimilated discharge observations from the model update becomes beneficial for improved operational forecasting, which is evaluated using several validation measures.

  9. Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Hou, Zhangshuan; Meng, Da; Samaan, Nader A.; Makarov, Yuri V.; Huang, Zhenyu

    2016-07-17

    In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools, including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
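A minimal sketch of the core idea, preserving the cross-area error covariance when drawing realizations. The Cholesky factorization here is a simple stand-in for the sequential Gaussian simulation used in the study, and all function names are illustrative:

```python
import math
import random

def cholesky(cov):
    """Lower-triangular Cholesky factor of a small covariance matrix."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def correlated_error_realizations(cov, n_draws, seed=42):
    """Draw forecast-error vectors for several areas whose sample
    covariance reproduces the cross-area error covariance `cov`."""
    rng = random.Random(seed)
    L = cholesky(cov)
    n = len(cov)
    draws = []
    for _ in range(n_draws):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]
        # correlate the independent normals via the Cholesky factor
        draws.append([sum(L[i][k] * z[k] for k in range(i + 1))
                      for i in range(n)])
    return draws
```

Each draw is one joint error realization across all areas; adding it to the area-level point forecasts yields one member of the correlated prediction ensemble.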

  10. Monthly forecasting of agricultural pests in Switzerland

    Science.gov (United States)

    Hirschi, M.; Dubrovsky, M.; Spirig, C.; Samietz, J.; Calanca, P.; Weigel, A. P.; Fischer, A. M.; Rotach, M. W.

    2012-04-01

    Given the repercussions of pests and diseases on agricultural production, detailed forecasting tools have been developed to simulate the degree of infestation depending on actual weather conditions. The life cycle of pests is most successfully predicted if the micro-climate of the immediate environment (habitat) of the causative organisms can be simulated. Sub-seasonal pest forecasts therefore require weather information for the relevant habitats and the appropriate time scale. The pest forecasting system SOPRA (www.sopra.info) currently in operation in Switzerland relies on such detailed weather information, using hourly weather observations up to the day the forecast is issued, but only a climatology for the forecasting period. Here, we aim at improving the skill of SOPRA forecasts by transforming the weekly information provided by ECMWF monthly forecasts (MOFCs) into hourly weather series as required for the prediction of upcoming life phases of the codling moth, the major insect pest in apple orchards worldwide. Due to the probabilistic nature of operational monthly forecasts and the limited spatial and temporal resolution, their information needs to be post-processed for use in a pest model. In this study, we developed a statistical downscaling approach for MOFCs that includes the following steps: (i) application of a stochastic weather generator to generate a large pool of daily weather series consistent with the climate at a specific location, (ii) a subsequent re-sampling of weather series from this pool to optimally represent the evolution of the weekly MOFC anomalies, and (iii) a final extension to hourly weather series suitable for the pest forecasting model. Results show a clear improvement in the forecast skill of occurrences of upcoming codling moth life phases when incorporating MOFCs as compared to the operational pest forecasting system. This is true both in terms of root mean squared errors and of the continuous rank probability scores of the
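Step (ii) of the downscaling above can be sketched as a nearest-match resampling from the weather-generator pool; the squared-error distance and all names are illustrative assumptions, not the authors' exact scheme:

```python
def resample_from_pool(pool_weekly_means, mofc_weekly_anoms, climatology):
    """Pick from a weather-generator pool the series whose weekly means
    best match the MOFC weekly anomalies added to climatology.

    pool_weekly_means : list of candidate series, each a list of weekly means
    mofc_weekly_anoms : weekly anomalies from the monthly forecast
    climatology       : weekly climatological means at the location
    Returns the index of the best-matching pool member.
    """
    target = [c + a for c, a in zip(climatology, mofc_weekly_anoms)]
    best_i, best_d = None, float("inf")
    for i, weekly in enumerate(pool_weekly_means):
        d = sum((w - t) ** 2 for w, t in zip(weekly, target))
        if d < best_d:
            best_i, best_d = i, d
    return best_i
```

The selected daily series would then be disaggregated to hourly values (step iii) before being fed to the pest model.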

  11. Impact of wind power uncertainty forecasting on the market integration of wind energy in Spain

    International Nuclear Information System (INIS)

    González-Aparicio, I.; Zucker, A.

    2015-01-01

    Highlights: • Reduction wind power forecasting uncertainty for day ahead and intraday markets. • Statistical relationship between total load and wind power generation. • Accurately forecast expected revenues from wind producer’s perspective. - Abstract: The growing share of electricity production from variable renewable energy sources increases the stochastic nature of the power system. This has repercussions on the markets for electricity. Deviations from forecasted production schedules require balancing of a generator’s position within a day. Short term products that are traded on power and/or reserve markets have been developed for this purpose, providing opportunities to actors who can offer flexibility in the short term. The value of flexibility is typically modelled using stochastic scenario extensions of dispatch models which requires, as a first step, understanding the nature of forecast uncertainties. This study provides a new approach for determining the forecast errors of wind power generation in the time period between the closure of the day ahead and the opening of the first intraday session using Spain as an example. The methodology has been developed using time series analysis for the years 2010–2013 to find the explanatory variables of the wind error variability by applying clustering techniques to reduce the range of uncertainty, and regressive techniques to forecast the probability density functions of the intra-day price. This methodology has been tested considering different system actions showing its suitability for developing intra-day bidding strategies and also for the generation of electricity generated from Renewable Energy Sources scenarios. This methodology could help a wind power producer to optimally bid into the intraday market based on more accurate scenarios, increasing their revenues and the system value of wind.

  12. Performance Optimization of a Solar-Driven Multi-Step Irreversible Brayton Cycle Based on a Multi-Objective Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmadi Mohammad Hosein

    2016-01-01

    An applicable approach for a multi-step regenerative irreversible Brayton cycle on the basis of thermodynamics and optimization of thermal efficiency and normalized output power is presented in this work. In the present study, thermodynamic analysis and an NSGA-II algorithm are coupled to determine the optimum values of thermal efficiency and normalized power output for a Brayton cycle system. Moreover, three well-known decision-making methods are employed to indicate definite answers from the outputs gained from the aforementioned approach. Finally, with the aim of error analysis, the values of the average and maximum error of the results are also calculated.

  13. The development and evaluation of a hydrological seasonal forecast system prototype for predicting spring flood volumes in Swedish rivers

    Science.gov (United States)

    Foster, Kean; Bertacchi Uvo, Cintia; Olsson, Jonas

    2018-05-01

    Hydropower makes up nearly half of Sweden's electrical energy production. However, the distribution of the water resources is not aligned with demand, as most of the inflows to the reservoirs occur during the spring flood period. This means that carefully planned reservoir management is required to help redistribute water resources to ensure optimal production, and accurate forecasts of the spring flood volume (SFV) are essential for this. The current operational SFV forecasts use a historical ensemble approach where the HBV model is forced with historical observations of precipitation and temperature. In this work we develop and test a multi-model prototype, building on previous work, and evaluate its ability to forecast the SFV in 84 sub-basins in northern Sweden. The hypothesis explored in this work is that a multi-model seasonal forecast system incorporating different modelling approaches is generally more skilful at forecasting the SFV in snow-dominated regions than a forecast system that utilises only one approach. The testing is done using cross-validated hindcasts for the period 1981-2015 and the results are evaluated against both climatology and the current system to determine skill. Both multi-model methods considered showed skill over the reference forecasts. The version that combined the historical modelling chain, dynamical modelling chain, and statistical modelling chain performed better than the other and was chosen for the prototype. The prototype was able to outperform the current operational system 57 % of the time on average and reduce the error in the SFV by ~6 % across all sub-basins and forecast dates.

  14. A one-step separation of human serum high density lipoproteins 2 and 3 by rate-zonal density gradient ultracentrifugation in a swinging bucket rotor

    NARCIS (Netherlands)

    Groot, P.H.E.; Scheek, L.M.; Havekes, L.; Noort, W.L. van; Hooft, F.M. van 't

    1982-01-01

    A method was developed for the separation of the high density lipoprotein subclasses HDL2 and HDL3 from human serum. Six serum samples are fractionated in a single-step ultracentrifugal procedure using the Beckman (SW-40) swinging bucket rotor. The method is based on a difference in flotation rate

  15. Density prediction and dimensionality reduction of mid-term electricity demand in China: A new semiparametric-based additive model

    International Nuclear Information System (INIS)

    Shao, Zhen; Yang, Shan-Lin; Gao, Fei

    2014-01-01

    Highlights: • A new stationary time series smoothing-based semiparametric model is established. • A novel semiparametric additive model based on piecewise smoothing is proposed. • We model the uncertainty of data distribution for mid-term electricity forecasting. • We provide efficient long horizon simulation and extraction for external variables. • We provide stable and accurate density predictions for mid-term electricity demand. - Abstract: Accurate mid-term electricity demand forecasting is critical for efficient electric planning, budgeting and operating decisions. Mid-term electricity demand forecasting is notoriously complicated, since the demand is subject to a range of external drivers, such as climate change and economic development, and exhibits complex monthly, seasonal, and annual variations. Conventional models are based on the assumption that the original data are stationary and normally distributed, which is generally insufficient to explain the actual demand pattern. This paper proposes a new semiparametric additive model that, in addition to considering the uncertainty of the data distribution, includes practical discussions covering the applications of the external variables. To effectively detach the multi-dimensional volatility of mid-term demand, a novel piecewise smoothing method which allows reduction of the data dimensionality is developed. Besides, a semiparametric procedure that makes use of a bootstrap algorithm for density forecasting and model estimation is presented. Two typical cases in China are presented to verify the effectiveness of the proposed methodology. The results suggest that both meteorological and economic variables play a critical role in mid-term electricity consumption prediction in China, while the extracted economic factor is adequate to reveal the potentially complex relationship between electricity consumption and economic fluctuation. Overall, the proposed model can be easily applied to mid-term demand forecasting, and

  16. Forecasting Lightning Threat Using WRF Proxy Fields

    Science.gov (United States)

    McCaul, E. W., Jr.

    2010-01-01

    Objectives: Given that high-resolution WRF forecasts can capture the character of convective outbreaks, we seek to: 1. Create WRF forecasts of LTG threat (1-24 h), based on 2 proxy fields from explicitly simulated convection: - graupel flux near -15 C (captures LTG time variability) - vertically integrated ice (captures LTG threat area). 2. Calibrate each threat to yield accurate quantitative peak flash rate densities. 3. Also evaluate threats for areal coverage, time variability. 4. Blend threats to optimize results. 5. Examine sensitivity to model mesh, microphysics. Methods: 1. Use high-resolution 2-km WRF simulations to prognose convection for a diverse series of selected case studies. 2. Evaluate graupel fluxes; vertically integrated ice (VII). 3. Calibrate WRF LTG proxies using peak total LTG flash rate densities from NALMA; relationships look linear, with regression line passing through origin. 4. Truncate low threat values to make threat areal coverage match NALMA flash extent density obs. 5. Blend proxies to achieve optimal performance 6. Study CAPS 4-km ensembles to evaluate sensitivities.
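The calibration and truncation steps (methods 3 and 4) can be sketched as follows, treating each proxy field as a flat list of gridpoint values; function names are illustrative:

```python
def calibrate_through_origin(proxy, observed_rate):
    """Fit flash-rate density = slope * proxy with the regression line
    forced through the origin, as in the calibration against NALMA
    peak flash rate densities."""
    return (sum(p * o for p, o in zip(proxy, observed_rate))
            / sum(p * p for p in proxy))

def threat_field(proxy_field, slope, threshold):
    """Convert a proxy field to a calibrated threat, truncating low
    values so the threat areal coverage matches observed flash extent."""
    return [slope * p if slope * p >= threshold else 0.0
            for p in proxy_field]
```

With slopes fitted separately for the graupel-flux and integrated-ice proxies, the two resulting threat fields can then be blended.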

  17. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization.

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-28

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  18. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-01

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  19. Simultaneous day-ahead forecasting of electricity price and load in smart grids

    International Nuclear Information System (INIS)

    Shayeghi, H.; Ghasemi, A.; Moradzadeh, M.; Nooshyar, M.

    2015-01-01

    Highlights: • This paper presents a novel MIMO-based support vector machine for forecasting. • Considered uncertainties for better simulation for filtering in input data. • Used LSSVM technique for learning. • Proposed a new modification for standard artificial bee colony algorithm to optimize LSSVM engine. - Abstract: In smart grids, customers are encouraged to change their energy consumption patterns by electricity prices. In fact, in this environment, the electricity price and load consumption are highly correlated, so that market participants must use complex models in their decisions to maximize profit. Although the available forecasting methodologies perform well in electricity markets with little or no load-price interdependency, they cannot capture load and price dynamics where such interdependencies exist. To overcome this shortcoming, a Multi-Input Multi-Output (MIMO) model is presented which can consider the correlation between electricity price and load. The proposed model consists of three components: a Wavelet Packet Transform (WPT) to produce valuable subsets, Generalized Mutual Information (GMI) to select the best input candidates, and a Least Squares Support Vector Machine based on the MIMO model, called LSSVM-MIMO, to make simultaneous load and price forecasts. Moreover, the LSSVM-MIMO parameters are optimized by a novel Quasi-Oppositional Artificial Bee Colony (QOABC) algorithm. Several error-based forecasting indices are considered to evaluate the forecasting accuracy. Simulations carried out on New York Independent System Operator, New South Wales (NSW) and PJM electricity market data show that the proposed hybrid algorithm has good potential for simultaneous forecasting of electricity price and load.

  20. Use of Vertically Integrated Ice in WRF-Based Forecasts of Lightning Threat

    Science.gov (United States)

    McCaul, E. W., jr.; Goodman, S. J.

    2008-01-01

    Previously reported methods of forecasting lightning threat using fields of graupel flux from WRF simulations are extended to include the simulated field of vertically integrated ice within storms. Although the ice integral shows less temporal variability than graupel flux, it provides more areal coverage, and can thus be used to create a lightning forecast that better matches the areal coverage of the lightning threat found in observations of flash extent density. A blended lightning forecast threat can be constructed that retains much of the desirable temporal sensitivity of the graupel flux method, while also incorporating the coverage benefits of the ice integral method. The graupel flux and ice integral fields contributing to the blended forecast are calibrated against observed lightning flash origin density data, based on Lightning Mapping Array observations from a series of case studies chosen to cover a wide range of flash rate conditions. Linear curve fits that pass through the origin are found to be statistically robust for the calibration procedures.

  1. On the relation between forecast precision and trading profitability of financial analysts

    DEFF Research Database (Denmark)

    Marinelli, Carlo; Weissensteiner, Alex

    2014-01-01

    We analyze the relation between earnings forecast accuracy and the expected profitability of financial analysts. Modeling forecast errors with a multivariate normal distribution, a complete characterization of the payoff of each analyst is provided. In particular, closed-form expressions for the probability density function, for the expectation, and, more generally, for moments of all orders are obtained. Our analysis shows that the relationship between forecast precision and trading profitability need not be monotonic, and that the impact of the correlation between the forecasts on the expected...

  2. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    Science.gov (United States)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM (1, 1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on relative error series; 4) modify the relative errors caused in step 2, and then obtain the relative errors of the second order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
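Steps 1) and 2) of the procedure, the GM(1,1) trend estimate, can be sketched as follows; the Markov correction of relative errors (steps 3 and 4) is omitted for brevity, and all names are illustrative:

```python
import math

def gm11_forecast(x0, steps=1):
    """Sketch of a GM(1,1) grey-model trend forecast.

    x0    : observed series (e.g. annual maximum water levels)
    steps : number of future values to forecast
    """
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # background values z(k): mean of consecutive AGO terms
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for x0(k) = -a*z(k) + b  (2x2 normal equations)
    m = n - 1
    s_zz = sum(v * v for v in z)
    s_z = sum(z)
    s_zy = sum(v * w for v, w in zip(z, y))
    s_y = sum(y)
    det = s_zz * m - s_z * s_z
    a = -(m * s_zy - s_z * s_y) / det
    b = (s_zz * s_y - s_z * s_zy) / det
    # continuous-time solution of the grey differential equation
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # trend forecasts by first differencing of the AGO prediction
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

In the full GMM, the relative errors of these trend values would be classified into Markov states and the transition probabilities used to correct each forecast.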

  3. Self-Regulated Strategy Development Instruction for Teaching Multi-Step Equations to Middle School Students Struggling in Math

    Science.gov (United States)

    Cuenca-Carlino, Yojanna; Freeman-Green, Shaqwana; Stephenson, Grant W.; Hauth, Clara

    2016-01-01

    Six middle school students identified as having a specific learning disability or at risk for mathematical difficulties were taught how to solve multi-step equations by using the self-regulated strategy development (SRSD) model of instruction. A multiple-probe-across-pairs design was used to evaluate instructional effects. Instruction was provided…

  4. Urban Saturated Power Load Analysis Based on a Novel Combined Forecasting Model

    Directory of Open Access Journals (Sweden)

    Huiru Zhao

    2015-03-01

    Analysis of urban saturated power loads is helpful to coordinate urban power grid construction and economic social development. There are two different kinds of forecasting models: the logistic curve model focuses on the growth law of the data itself, while the multi-dimensional forecasting model considers several influencing factors as the input variables. To improve forecasting performance, a novel combined forecasting model for saturated power load analysis was proposed in this paper, which combined the above two models. Meanwhile, the weights of these two models in the combined forecasting model were optimized by employing a fruit fly optimization algorithm. Using Hubei Province as the example, the effectiveness of the proposed combined forecasting model was verified, demonstrating a higher forecasting accuracy. The analysis result shows that the power load of Hubei Province will reach saturation in 2039, and the annual maximum power load will reach about 78,630 MW. The results obtained from this proposed hybrid urban saturated power load analysis model can serve as a reference for sustainable development for urban power grids, regional economies, and society at large.
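The weighted model combination at the heart of such an approach can be sketched as follows; a simple grid search stands in for the fruit fly optimization algorithm used in the paper, and all names are illustrative:

```python
def combine_forecasts(f_logistic, f_multidim, actual, step=0.01):
    """Find the weight w in [0, 1] minimizing the squared error of the
    combined forecast w*f_logistic + (1-w)*f_multidim on a
    calibration period, via grid search."""
    best_w, best_err = 0.0, float("inf")
    w = 0.0
    while w <= 1.0 + 1e-9:
        err = sum((w * a + (1 - w) * b - y) ** 2
                  for a, b, y in zip(f_logistic, f_multidim, actual))
        if err < best_err:
            best_w, best_err = w, err
        w += step
    return best_w
```

The fitted weight is then applied to the two component models' out-of-sample forecasts; a metaheuristic optimizer merely replaces the grid search when the weight space is larger.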

  5. Formation of an Integrated Stock Price Forecast Model in Lithuania

    Directory of Open Access Journals (Sweden)

    Audrius Dzikevičius

    2016-12-01

    Technical and fundamental analyses are widely used to forecast stock prices due to lack of knowledge of other modern models and methods such as the Residual Income Model, ANN-APGARCH, Support Vector Machine, Probabilistic Neural Network and Genetic Fuzzy Systems. Although stock price forecast models integrating both technical and fundamental analyses are currently widely used, their integration is not justified comprehensively enough. This paper discusses theoretical one-factor and multi-factor stock price forecast models already applied by investors at a global level and determines the possibility of creating and applying in practice a stock price forecast model which integrates fundamental and technical analysis with reference to the Lithuanian stock market. The research aims to determine the relationship between the stock prices of the 14 Lithuanian companies listed in the Main List of Nasdaq OMX Baltic and various fundamental variables. Based on correlation and regression analysis results and application of the Chi-squared test and the ANOVA method, a general stock price forecast model is generated. This paper discusses practical implications of how the developed model can be used to forecast stock prices by individual investors and suggests additional check measures.

  6. Cyclone track forecasting based on satellite images using artificial neural networks

    OpenAIRE

    Kovordanyi, Rita; Roy, Chandan

    2009-01-01

    Many places around the world are exposed to tropical cyclones and associated storm surges. In spite of massive efforts, a great number of people die each year as a result of cyclone events. To mitigate this damage, improved forecasting techniques must be developed. The technique presented here uses artificial neural networks to interpret NOAA-AVHRR satellite images. A multi-layer neural network, resembling the human visual system, was trained to forecast the movement of cyclones based on satellite images.

  7. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series with the aim to suggest an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed by 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of RF is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect to achieve higher predictive accuracy.
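The lagged-predictor setup the study evaluates can be sketched as follows; the function name is illustrative, and the resulting matrix could then be passed to, e.g., scikit-learn's RandomForestRegressor:

```python
def lagged_matrix(series, n_lags):
    """Build (X, y) pairs for one-step-ahead forecasting: each row of X
    holds the n_lags most recent values (the small set of recent lags
    the study found to work best), and y holds the next value."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return X, y
```

Varying `n_lags` and scoring the fitted model out of sample is exactly the kind of predictor-set comparison the paper performs.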

  8. Development of Parallel Code for the Alaska Tsunami Forecast Model

    Science.gov (United States)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  9. Forecast of Piezoelectric Properties of Crystalline Materials from First Principles Calculation

    International Nuclear Information System (INIS)

    Zheng Yanqing; Shi Erwei; Chen Jianjun; Zhang Tao; Song Lixin

    2006-01-01

    In this paper, forecasts of piezoelectric tensors are presented. Piezoelectric crystals, including quartz, quartz-like crystals, and known and novel crystals of the langasite-type structure, are treated with density-functional perturbation theory (DFPT) using the plane-wave pseudopotential method, within the local density approximation (LDA) to the exchange-correlation functional. Compared with experimental results, the ab initio calculation results have quantitative or semi-quantitative accuracy. It is shown that first principles calculation opens a door to the search for and design of new piezoelectric materials. Further application of first principles calculations to forecast the full set of piezoelectric properties is also discussed

  10. Study of ocean red tide multi-parameter monitoring technology based on double-wavelength airborne lidar system

    Science.gov (United States)

    Lin, Hong; Wang, Xinming; Liang, Kun

    2010-10-01

    For real-time monitoring and forecasting of ocean red tides, a marine environment monitoring technology based on a double-wavelength airborne lidar system is proposed. An airborne lidar is far more efficient than traditional boat-based measurement. At the same time, this technology can detect multiple parameters of the ocean red tide by using the double-wavelength lidar. It can not only use the infrared laser to detect the scattering signal under the water and obtain information about the red tide's density and size, but also use the blue-green laser to detect the Brillouin scattering signal and deduce the temperature and salinity of the seawater. The red tide density detection model is first established by introducing the concept of the red tide scattering coefficient based on Mie scattering theory. From Brillouin scattering theory, the relationship of the blue-green laser's Brillouin scattering frequency shift and power with seawater temperature and salinity is found; the detection model for seawater temperature and salinity can then be established. The value of the red tide infrared scattering signal is evaluated by simulation, from which the density of red tide particles can be obtained. Likewise, the blue-green laser's Brillouin scattering frequency shift and power are evaluated by simulation, yielding the temperature and salinity of the seawater. Based on these multiple parameters, the growth of an ocean red tide can be monitored and forecast.

  11. House Price Forecasts, Forecaster Herding, and the Recent Crisis

    DEFF Research Database (Denmark)

    Stadtmann, Georg; Pierdzioch; Ruelke

    2013-01-01

    We used the Wall Street Journal survey data for the period 2006–2012 to analyze whether forecasts of house prices and housing starts provide evidence of (anti-)herding of forecasters. Forecasts are consistent with herding (anti-herding) of forecasters if forecasts are biased towards (away from) the consensus forecast. We found that anti-herding is prevalent among forecasters of house prices. We also report that, following the recent crisis, the prevalence of forecaster anti-herding seems to have changed over time.

  12. Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting

    Science.gov (United States)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty were introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The 'model error' can, especially in upstream catchments where forecast uncertainty depends strongly on the current predictability of the atmosphere, be superimposed on the spread of a hydrometeorological ensemble forecast. In Bavaria, two meteorological ensemble prediction systems are currently tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics: 1. Upstream catchment with high influence of weather forecast: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on hydrological case and lead time, is added to each forecast timestep of each ensemble member. d) For each forecast timestep, the overall error distribution (i.e., over all 'model error' distributions of all ensemble members) is calculated. e) From this distribution, the uncertainty range on a desired level (here: the 10% and 90% percentiles) is extracted and drawn as a forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed 'lead forecast' is chosen and shown in addition to the uncertainty bounds.
This can be
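Steps c) through e) of the upstream-catchment procedure above can be sketched in code. This is an illustrative simplification: the Gaussian form of the 'model error', the array shapes, and all names are assumptions, since the operational system derives the error distribution empirically per hydrological case and lead time.

```python
import numpy as np

def forecast_envelope(members, error_std, lower=10, upper=90, n_draws=2000, seed=0):
    """Pool ensemble members with an additive 'model error' distribution
    and return percentile bounds per forecast timestep.

    members:   array (n_members, n_timesteps) of forecast hydrographs
    error_std: array (n_timesteps,), std of the model error
               (assumed Gaussian here purely for illustration)
    """
    rng = np.random.default_rng(seed)
    n_members, n_steps = members.shape
    # step c): draw model-error realizations around every member, every timestep
    draws = members[:, None, :] + rng.normal(
        0.0, error_std, size=(n_members, n_draws, n_steps))
    # step d): the overall error distribution pools all members' draws
    pooled = draws.reshape(-1, n_steps)
    # step e): extract the desired percentile range as the forecast envelope
    return np.percentile(pooled, [lower, upper], axis=0)

# toy example: 5 members over 4 timesteps
members = np.array([[10.0, 12, 15, 14]]) + np.arange(5)[:, None]
lo, hi = forecast_envelope(members, error_std=np.full(4, 1.0))
```

In an operational setting `error_std` would be replaced by the full empirical error distribution, but the pooling and percentile-extraction steps are the same.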

  13. A short-term ensemble wind speed forecasting system for wind power applications

    Science.gov (United States)

    Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.

    2011-12-01

    This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
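The calibration idea in the abstract above can be sketched with a toy two-member ensemble. The EM update below assumes Gaussian kernels with a single common spread, which is a simplification of full BMA (the paper calibrates 21 members against two months of observations); all names and data here are illustrative.

```python
import numpy as np

def bma_fit(forecasts, obs, n_iter=200):
    """Fit BMA member weights and a common Gaussian spread by EM.
    forecasts: (n_times, n_members); obs: (n_times,)."""
    n, m = forecasts.shape
    w = np.full(m, 1.0 / m)
    sigma = np.std(obs - forecasts.mean(axis=1))
    for _ in range(n_iter):
        # E-step: responsibility of each member for each observation
        dens = np.exp(-0.5 * ((obs[:, None] - forecasts) / sigma) ** 2) / sigma
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and spread
        w = z.mean(axis=0)
        sigma = np.sqrt((z * (obs[:, None] - forecasts) ** 2).sum() / n)
    return w, sigma

def bma_mean(forecasts, w):
    """Calibrated point forecast: weighted combination of the members."""
    return forecasts @ w

# toy demo: member 0 tracks the observations closely, member 1 is noisy
rng = np.random.default_rng(1)
obs = rng.normal(size=300)
forecasts = np.column_stack([obs + 0.1 * rng.normal(size=300),
                             obs + 2.0 * rng.normal(size=300)])
w, sigma = bma_fit(forecasts, obs)   # w should strongly favor member 0
```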

  14. Time series forecasting based on deep extreme learning machine

    NARCIS (Netherlands)

    Guo, Xuqi; Pang, Y.; Yan, Gaowei; Qiao, Tiezhu; Yang, Guang-Hong; Yang, Dan

    2017-01-01

    Multi-layer Artificial Neural Networks (ANNs) have attracted widespread attention as a new method for time series forecasting due to their ability to approximate any nonlinear function. In this paper, a new local time series prediction model is established with the nearest neighbor domain theory, in

  15. House Price Forecasts, Forecaster Herding, and the Recent Crisis

    Directory of Open Access Journals (Sweden)

    Christian Pierdzioch

    2012-11-01

    Full Text Available We used the Wall Street Journal survey data for the period 2006–2012 to analyze whether forecasts of house prices and housing starts provide evidence of (anti-)herding of forecasters. Forecasts are consistent with herding (anti-herding) of forecasters if forecasts are biased towards (away from) the consensus forecast. We found that anti-herding is prevalent among forecasters of house prices. We also report that, following the recent crisis, the prevalence of forecaster anti-herding seems to have changed over time.

  16. Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre

    Science.gov (United States)

    Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip

    2013-04-01

    Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate Rock Physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, which include data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory.
We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in

  17. A positive and multi-element conserving time stepping scheme for biogeochemical processes in marine ecosystem models

    Science.gov (United States)

    Radtke, H.; Burchard, H.

    2015-01-01

    In this paper, an unconditionally positive and multi-element conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. These systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaption. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff numerical problems, but the accuracy is then restricted to second order. The performance of the new scheme on two test case problems is shown.
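The positivity- and mass-conserving idea can be illustrated with the simplest member of this scheme family, the first-order modified Patankar-Euler step (the paper's scheme is higher order); weighting each destruction term by c_new/c_old turns the update into a linear solve that stays positive and conserves the total for any step size. All names below are illustrative.

```python
import numpy as np

def mp_euler_step(c, flux, dt):
    """One modified Patankar-Euler step for a production-destruction system
    dc_i/dt = sum_j (flux[i, j] - flux[j, i]), where flux[i, j] >= 0 is the
    transfer rate from component j into component i, evaluated at state c.
    Production terms are weighted by c_j_new/c_j_old and destruction terms
    by c_i_new/c_i_old, giving unconditional positivity and exact
    conservation of the total mass for any dt > 0."""
    n = len(c)
    A = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, i] += dt * flux[j, i] / c[i]  # losses of i, weighted implicitly
                A[i, j] -= dt * flux[i, j] / c[j]  # gains of i from j
    return np.linalg.solve(A, c)

# two-box exchange with a deliberately huge time step
c0 = np.array([1.0, 2.0])
flux = np.array([[0.0, 0.5],   # 0.5 flows from box 1 into box 0
                 [0.8, 0.0]])  # 0.8 flows from box 0 into box 1
c1 = mp_euler_step(c0, flux, dt=10.0)
```

Even with dt = 10 the result stays positive and the total c1.sum() equals c0.sum() exactly, which an explicit Euler step of this size would violate.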

  18. Using Collar worn Sensors to Forecast Thermal Strain in Military Working Dogs

    Science.gov (United States)

    2016-04-22

    Using Collar-worn Sensors to Forecast Thermal Strain in Military Working Dogs James R. Williamson, Austin R. Hess, Christopher J. Smalt, Delsey M...these estimates for forecasting and monitoring thermal strain is assessed based on performance in out-of-sample prediction of core temperature (Tc...time step (100 Hz) from the magnitude of the three-dimensional acceleration vector, ai, which is independent of sensor orientation. Next, the

  19. Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors

    International Nuclear Information System (INIS)

    Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa

    2017-01-01

    Highlights: • A new method to match time series is defined to assess energy forecasting accuracy. • This method relies on a new family of step patterns that optimizes the MAE. • A new definition of the Temporal Distortion Index between two series is provided. • A parametric extension controls both the temporal distortion index and the MAE. • Pareto optimal transformations of the forecast series are obtained for both indexes. - Abstract: Recent years have seen a growing trend in wind and solar energy generation globally, and it is expected that an important percentage of total energy production will come from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors play a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular in the timing component of prediction error, which improves previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm to evaluate the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.
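A minimal sketch of the alignment idea: classic dynamic time warping with absolute-error cost, plus a simplified temporal distortion measure (mean time shift along the warping path). The paper's step-pattern family and TDI definition are more refined than this; everything below is an illustrative assumption.

```python
import numpy as np

def dtw_align(ref, pred):
    """Align a forecast series onto a reference series by dynamic
    programming with absolute-error (MAE-style) local cost."""
    n, m = len(ref), len(pred)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - pred[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return D[n, m], path

def temporal_distortion_index(path, n):
    """Simplified TDI: mean absolute time shift along the warping path,
    normalized by the series length."""
    return np.mean([abs(i - j) for i, j in path]) / n

# a forecast that lags the reference by three steps shows temporal distortion
ref = np.sin(np.linspace(0.0, 6.0, 40))
pred = np.roll(ref, 3)
dist, path = dtw_align(ref, pred)
tdi = temporal_distortion_index(path, len(ref))
```

Trading off `dist` (the absolute error of the transformation) against `tdi` for different step-pattern constraints is what produces the Pareto frontiers described in the abstract.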

  20. Detection of Heterogeneous Small Inclusions by a Multi-Step MUSIC Method

    Science.gov (United States)

    Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni

    2014-05-01

    In this contribution the problem of detecting and localizing scatterers with small (in terms of wavelength) cross sections by collecting their scattered field is addressed. The problem is dealt with for a two-dimensional and scalar configuration where the background is given as a two-layered cylindrical medium. More in detail, while scattered field data are taken in the outermost layer, inclusions are embedded within the inner layer. Moreover, the case of heterogeneous inclusions (i.e., having different scattering coefficients) is addressed. As a pertinent applicative context we identify the problem of diagnosing concrete pillars in order to detect and locate rebars, ducts and other small inhomogeneities that can populate the interior of the pillar. The nature of the inclusions influences the scattering coefficients. For example, the field scattered by rebars is stronger than the one due to ducts. Accordingly, it is expected that the more weakly scattering inclusions can be difficult to detect, as their scattered fields tend to be overwhelmed by those of strong scatterers. In order to circumvent this problem, in this contribution a multi-step MUltiple SIgnal Classification (MUSIC) detection algorithm is adopted [1]. In particular, the first stage aims at detecting rebars. Once rebars have been detected, their positions are exploited to update the Green's function and to subtract the scattered field due to their presence. The procedure is repeated until all the inclusions are detected. The analysis is conducted by numerical experiments for a multi-view/multi-static single-frequency configuration, and the synthetic data are generated by an FDTD forward solver. Acknowledgement This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] R. Solimene, A. Dell'Aversano and G. Leone, "MUSIC algorithms for rebar detection," J. of Geophysics and Engineering, vol. 10, pp. 1

  1. Multi-step resistive switching behavior of Li-doped ZnO resistance random access memory device controlled by compliance current

    International Nuclear Information System (INIS)

    Lin, Chun-Cheng; Tang, Jian-Fu; Su, Hsiu-Hsien; Hong, Cheng-Shong; Huang, Chih-Yu; Chu, Sheng-Yuan

    2016-01-01

    The multi-step resistive switching (RS) behavior of a unipolar Pt/Li0.06Zn0.94O/Pt resistive random access memory (RRAM) device is investigated. It is found that the RRAM device exhibits normal, 2-, 3-, and 4-step RESET behaviors under different compliance currents. The transport mechanism within the device is investigated by means of current-voltage curves, in-situ transmission electron microscopy, and electrochemical impedance spectroscopy. It is shown that the ion transport mechanism is dominated by Ohmic behavior under low electric fields and by the Poole-Frenkel emission effect (normal RS behavior) or Li+ ion diffusion (2-, 3-, and 4-step RESET behaviors) under high electric fields.

  2. Rayleigh-Taylor instability in multi-structured spherical targets

    International Nuclear Information System (INIS)

    Gupta, N.K.; Lawande, S.V.

    1986-01-01

    An eigenvalue equation for the exponential growth rate of the Rayleigh-Taylor instability is derived in spherical geometry. The free surface and jump boundary conditions are obtained from the eigenvalue equation. The eigenvalue equation is solved in the cases where the initial fluid density profile has a step function or exponential variation in space and analytical formulae for growth rate of the instability are obtained. The solutions for the step function are generalized for any number N of spherical zones forming an arbitrary fluid density profile. The results of the numerical calculations for N spherical zones are compared with the exact analytical results for exponential fluid density profile with N=10 and a good agreement is observed. The formalism is further used to study the effects of density gradients on Rayleigh-Taylor instability in spherical geometry. Also analytical formulae are presented for a particular case of N=3 and shell targets. The formalism developed here can be used to study the growth of the instability in present day multi-structured shell targets. (author)

  3. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    Science.gov (United States)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

    Thanks to multi-spacecraft missions, it has recently been possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the 'curlometer', permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separation. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separation up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in 'typical' solar wind conditions, while the latter to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to multi-spacecraft missions in the current density evaluation.
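The curlometer estimate itself is compact: under the linear-field assumption, curl B is obtained from the four measurement points and J = (curl B)/mu0. The sketch below uses the standard reciprocal-vector formulation for a tetrahedron; this particular implementation is an assumption of ours, not spelled out in the abstract.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, SI units

def reciprocal_vectors(r):
    """Reciprocal vectors k[a] of a spacecraft tetrahedron r: (4, 3).
    They satisfy sum_a k[a] = 0 and sum_a outer(k[a], r[a]) = identity,
    so sum_a k[a] f_a is the exact gradient of any linear field f."""
    k = np.zeros((4, 3))
    for a in range(4):
        b, c, d = [i for i in range(4) if i != a]
        normal = np.cross(r[c] - r[b], r[d] - r[b])
        k[a] = normal / np.dot(r[a] - r[b], normal)
    return k

def curlometer(r, B):
    """Estimate J = (curl B)/mu0 from four-point magnetic field
    measurements, assuming B varies linearly across the tetrahedron."""
    k = reciprocal_vectors(r)
    curl_B = sum(np.cross(k[a], B[a]) for a in range(4))
    return curl_B / MU0

# linear test field B = (0, x, 0) has curl B = (0, 0, 1) exactly
r = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
B = np.column_stack([np.zeros(4), r[:, 0], np.zeros(4)])
J = curlometer(r, B)
```

Because the estimator is exact only for linearly varying fields, sub-proton-scale structure between the spacecraft is invisible to it, which is the underestimation the abstract investigates.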

  4. A Diagnostics Tool to detect ensemble forecast system anomaly and guide operational decisions

    Science.gov (United States)

    Park, G. H.; Srivastava, A.; Shrestha, E.; Thiemann, M.; Day, G. N.; Draijer, S.

    2017-12-01

    The hydrologic community is moving toward using ensemble forecasts to take uncertainty into account during the decision-making process. The New York City Department of Environmental Protection (DEP) implements several types of ensemble forecasts in its decision-making process: ensemble products for a statistical model (Hirsch and enhanced Hirsch); the National Weather Service (NWS) Advanced Hydrologic Prediction Service (AHPS) forecasts based on the classical Ensemble Streamflow Prediction (ESP) technique; and the new NWS Hydrologic Ensemble Forecasting Service (HEFS) forecasts. To remove structural error and apply the forecasts to additional forecast points, the DEP post-processes both the AHPS and the HEFS forecasts. These ensemble forecasts produce large quantities of complex data, and drawing conclusions from them is time-consuming and difficult. The complexity of these forecasts also makes it difficult to identify system failures resulting from poor data, missing forecasts, and server breakdowns. To address these issues, we developed a diagnostic tool that summarizes ensemble forecasts and provides additional information such as historical forecast statistics, forecast skill, and model forcing statistics. This additional information highlights the key information that enables operators to evaluate the forecast in real time, dynamically interact with the data, and review additional statistics, if needed, to make better decisions. We used Bokeh, a Python interactive visualization library, and a multi-database management system to create this interactive tool. The tool compiles and stores data into HTML pages that allow operators to readily analyze the data with built-in user interaction features. This paper will present a brief description of the ensemble forecasts, forecast verification results, and the intended applications for the diagnostic tool.

  5. A new spinning reserve requirement forecast method for deregulated electricity markets

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) to conduct a reliable and economic operation of the power system. However, the SR signal has complex, non-stationary and volatile behavior along the time domain and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined by means of real data of the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)

  6. A new spinning reserve requirement forecast method for deregulated electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Amjady, Nima; Keynia, Farshid [Department of Electrical Engineering, Semnan University, Semnan (Iran)

    2010-06-15

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) to conduct a reliable and economic operation of the power system. However, the SR signal has complex, non-stationary and volatile behavior along the time domain and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined by means of real data of the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)

  7. Fennec dust forecast intercomparison over the Sahara in June 2011

    Directory of Open Access Journals (Sweden)

    J.-P. Chaboureau

    2016-06-01

    Full Text Available In the framework of the Fennec international programme, a field campaign was conducted in June 2011 over the western Sahara. It led to the first observational data set ever obtained that documents the dynamics, thermodynamics and composition of the Saharan atmospheric boundary layer (SABL) under the influence of the heat low. In support of the aircraft operations, four dust forecasts were run daily at low and high resolutions with convection-parameterizing and convection-permitting models, respectively. The unique airborne and ground-based data sets allowed the first ever intercomparison of dust forecasts over the western Sahara. At the monthly scale, large aerosol optical depths (AODs) were forecast over the Sahara, a feature observed by satellite retrievals but with different magnitudes. The AOD intensity was correctly predicted by the high-resolution models, while it was underestimated by the low-resolution models. This was partly because of the generation of strong near-surface winds associated with thunderstorm-related density currents that could only be reproduced by models representing convection explicitly. Such models yield emissions mainly in the afternoon that dominate the total emission over the western fringes of the Adrar des Iforas and the Aïr Mountains in the high-resolution forecasts. Over the western Sahara, where the harmattan contributes up to 80 % of dust emission, all the models were successful in forecasting the deep well-mixed SABL. Some of them, however, missed the large near-surface dust concentration generated by density currents and low-level winds. This feature, observed repeatedly by the airborne lidar, was partly forecast by one high-resolution model only.

  8. Flood forecasting and uncertainty of precipitation forecasts

    International Nuclear Information System (INIS)

    Kobold, Mira; Suselj, Kay

    2004-01-01

    Timely and accurate flood forecasting is essential for reliable flood warning. The effectiveness of flood warning depends on the forecast accuracy of certain physical parameters, such as the peak magnitude of the flood, its timing, location and duration. Conceptual rainfall-runoff models enable the estimation of these parameters and lead to useful operational forecasts. Accurate rainfall is the most important input into hydrological models. The rainfall input can be real-time rain-gauge data, weather radar data, or meteorologically forecasted precipitation. The torrential nature of streams and fast runoff are characteristic of most Slovenian rivers. Extensive damage is caused almost every year by rainstorms affecting different regions of Slovenia. The lag time between rainfall and runoff is very short for Slovenian territory, and on-line data are used only for nowcasting. Forecasted precipitation is necessary for hydrological forecasts some days ahead. ECMWF (European Centre for Medium-Range Weather Forecasts) gives a general forecast for several days ahead, while more detailed precipitation data from the limited-area ALADIN/SI model are available for two days ahead. There is a certain degree of uncertainty in using such precipitation forecasts based on meteorological models. The variability of precipitation is very high in Slovenia, and the uncertainty of ECMWF predicted precipitation is very large for Slovenian territory. The ECMWF model can predict precipitation events correctly, but generally underestimates the amount of precipitation. The average underestimation is about 60% for the Slovenian region. The predictions of the limited-area ALADIN/SI model up to 48 hours ahead show greater applicability in hydrological forecasting. The hydrological models are sensitive to the precipitation input. The deviation of runoff is much bigger than the rainfall deviation. The runoff-to-rainfall error fraction is about 1.6.
If spatial and time distribution

  9. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent prediction result while deleting the first value of the formerly used sample data set. This rolling mechanism is an efficient technique owing to its improved forecasting accuracy, applicability in the case of limited and unstable data situations, and low computational cost. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
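The rolling mechanism described above can be sketched as follows. The least-squares AR fit and all names are illustrative assumptions; the paper constructs its own AR equation for nonstationary series, which this simple sketch does not reproduce.

```python
import numpy as np

def fit_ar(x, p):
    """Ordinary least-squares AR(p) coefficients (with intercept)."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def rolling_ar_forecast(x, p, horizon):
    """Multi-step forecast with the rolling mechanism: at each step the
    window drops its oldest value, appends the newest prediction, and the
    AR model is refit on the updated window."""
    window = list(x)
    preds = []
    for _ in range(horizon):
        coef = fit_ar(np.asarray(window), p)
        lags = np.asarray(window[-1:-p - 1:-1])    # [x_{t}, x_{t-1}, ..., x_{t-p+1}]
        nxt = coef[0] + coef[1:] @ lags
        preds.append(nxt)
        window = window[1:] + [nxt]                # roll: delete first, append prediction
    return np.array(preds)

# a linear trend is captured exactly by an AR(2) model, so the rolling
# forecast should simply continue the trend
x = np.arange(20.0)
preds = rolling_ar_forecast(x, p=2, horizon=3)
```

Keeping the window length fixed while feeding predictions back in is what makes the scheme cheap and usable with very small samples.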

  10. In-season retail sales forecasting using survival models

    African Journals Online (AJOL)

    Retail sales forecasting, survival analysis, time series analysis, Holt's smoothing .... where fx(t) is the probability density function of the future lifetime, Tx, of a .... Adjustments were made to the shape of the smoothed mortality rates in light of new.

  11. A Bayesian Method for Short-Term Probabilistic Forecasting of Photovoltaic Generation in Smart Grid Operation and Control

    Directory of Open Access Journals (Sweden)

    Gabriella Ferruzzi

    2013-02-01

    Full Text Available A new short-term probabilistic forecasting method is proposed to predict the probability density function of the hourly active power generated by a photovoltaic system. Firstly, the probability density function of the hourly clearness index is forecasted making use of a Bayesian autoregressive time series model; the model takes into account the dependence of the solar radiation on some meteorological variables, such as cloud cover and humidity. Then, a Monte Carlo simulation procedure is used to evaluate the predictive probability density function of the hourly active power by applying the photovoltaic system model to random samples of the clearness index distribution. A numerical application demonstrates the effectiveness and advantages of the proposed forecasting method.
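The Monte Carlo propagation step can be sketched as follows. The clearness-index density, the simplified PV model, and every parameter value below are illustrative stand-ins: the paper uses a Bayesian autoregressive forecast of the clearness index and a full photovoltaic system model.

```python
import numpy as np

def pv_power_distribution(kt_samples, ghi_clear, pv_rating, efficiency=0.15, area=50.0):
    """Propagate sampled clearness-index values through a (simplified)
    PV model to obtain the predictive distribution of active power.
    efficiency, area and pv_rating are illustrative assumptions."""
    ghi = kt_samples * ghi_clear                            # surface irradiance, W/m^2
    return np.minimum(efficiency * area * ghi, pv_rating)   # clip at the inverter rating

# stand-in for random samples drawn from the forecast clearness-index density
rng = np.random.default_rng(0)
kt = rng.beta(5, 2, size=10_000)

power = pv_power_distribution(kt, ghi_clear=900.0, pv_rating=8000.0)
p10, p50, p90 = np.percentile(power, [10, 50, 90])  # predictive quantiles
```

The percentiles of the propagated sample approximate the predictive density of the hourly active power, which is the quantity the method delivers to grid operators.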

  12. Poisson solvers for self-consistent multi-particle simulations

    International Nuclear Information System (INIS)

    Qiang, J; Paret, S

    2014-01-01

    Self-consistent multi-particle simulation plays an important role in studying beam-beam effects and space charge effects in high-intensity beams. The Poisson equation has to be solved at each time step based on the particle density distribution in the multi-particle simulation. In this paper, we review a number of numerical methods that can be used to solve the Poisson equation efficiently. The computational complexity of these numerical methods is O(N log(N)) or O(N) instead of O(N²), where N is the total number of grid points used to solve the Poisson equation.
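As an illustration of the O(N log(N)) class of solvers mentioned, here is a minimal FFT-based sketch for a periodic 2-D grid. Production beam dynamics codes use more elaborate boundary conditions (e.g. open-space Green's functions), so this is only a sketch of the spectral idea.

```python
import numpy as np

def poisson_fft_periodic(rho, L=1.0):
    """Solve the Poisson equation  laplacian(phi) = -rho  on a periodic
    N x N grid of side L via FFT: in Fourier space phi_hat = rho_hat/|k|^2.
    Cost is O(N log N) in the number of grid points; the k = 0 mode is set
    to zero, which fixes the (otherwise arbitrary) mean of phi at zero."""
    n = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0
    phi_hat[nonzero] = rho_hat[nonzero] / k2[nonzero]
    return np.real(np.fft.ifft2(phi_hat))

# verification: phi = sin(2*pi*x)*sin(2*pi*y) satisfies
# laplacian(phi) = -8*pi^2*phi, i.e. rho = 8*pi^2*phi
n = 32
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
phi_exact = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
phi = poisson_fft_periodic(8 * np.pi**2 * phi_exact)
```

In a particle-in-cell loop, `rho` would be deposited from the particle positions on the grid at every time step before this solve.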

  13. Forecasting Investment Risks in Conditions of Uncertainty

    Directory of Open Access Journals (Sweden)

    Andrenko Elena A.

    2017-04-01

    Full Text Available The article is aimed at studying the topical problem of evaluating and forecasting the risks of investment activity of enterprises in conditions of uncertainty. Generalizing the research on qualitative and quantitative methods for evaluating investment risks has helped to reveal certain shortcomings of the proposed approaches, to note that most publications report no results of practical application, and to identify promising directions. On the basis of the theory of fuzzy sets, a model for forecasting the expected risk has been proposed, making use of the Gauss membership function, which has certain advantages over multi-angular membership functions. Dependences of investment risk on the parameters characterizing the investment project have been obtained. Using the formulas obtained, the total risk of investing in an innovation project depending on the boundary conditions has been defined. The profitability index has been selected as the research target. The model enables potential investors and developers to forecast possible scenarios of the investment process and make informed managerial decisions about the appropriateness of introducing and implementing a project.

  14. Meteoroid Environment Modeling: the Meteoroid Engineering Model and Shower Forecasting

    Science.gov (United States)

    Moorhead, Althea V.

    2017-01-01

    The meteoroid environment is often divided conceptually into meteor showers plus a sporadic background component. The sporadic complex poses the bulk of the risk to spacecraft, but showers can produce significant short-term enhancements of the meteoroid flux. The Meteoroid Environment Office (MEO) has produced two environment models to handle these cases: the Meteoroid Engineering Model (MEM) and an annual meteor shower forecast. Both MEM and the forecast are used by multiple manned spaceflight projects in their meteoroid risk evaluation, and both tools are being revised to incorporate recent meteor velocity, density, and timing measurements. MEM describes the sporadic meteoroid complex and calculates the flux, speed, and directionality of the meteoroid environment relative to a user-supplied spacecraft trajectory, taking the spacecraft's motion into account. MEM is valid in the inner solar system and offers near-Earth and cis-lunar environments. While the current version of MEM offers a nominal meteoroid environment corresponding to a single meteoroid bulk density, the next version, MEM R3, will offer both flux uncertainties and a density distribution in addition to a revised near-Earth environment. We have updated the near-Earth meteor speed distribution and have made the first determination of uncertainty in this distribution. We have also derived a meteor density distribution from the work of Kikwaya et al. (2011). The annual meteor shower forecast takes the form of a report and data tables that can be used in conjunction with an existing MEM assessment. Fluxes are typically quoted to a constant limiting kinetic energy in order to comport with commonly used ballistic limit equations. For the 2017 annual forecast, the MEO substantially revised the list of showers and their characteristics using 14 years of meteor flux measurements from the Canadian Meteor Orbit Radar (CMOR). Defunct or insignificant showers were removed and the temporal profiles of many showers

  15. Effects of the multi-step activation process on the carrier concentration of p-type GaN

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae-Kwan [Department of Materials Science and Metallurgical Engineering, Sunchon National University, Sunchon, Chonnam 540-742 (Korea, Republic of); Jeon, Seong-Ran [LED Research and Business Division, Korea Photonics Technology Institute, Gwanju 500-779 (Korea, Republic of); Lee, Ji-Myon, E-mail: jimlee@sunchon.ac.kr [Department of Printed Electronics Engineering, Sunchon National University, Sunchon, Chonnam 540-742 (Korea, Republic of)

    2014-06-25

    Highlights: • Hole concentration of p-GaN was enhanced by a multi-step activation process. • The O{sub 2} plasma treatment is attributed to the enhanced hole concentration of p-GaN. • PL peak intensity was also enhanced by the MS activation process. - Abstract: A multi-step activation method, which includes an oxygen plasma treatment, chemical treatment, and post annealing in N{sub 2}, was proposed to enhance the hole concentration of a p-type GaN epitaxial layer. This process was found to effectively activate p-GaN by increasing the hole concentration compared to that of the conventionally annealed sample. After the optimal oxygen plasma treatment (10 min at a source and table power of 500 W and 100 W, respectively), followed by a HCl and buffered oxide etchant treatment, and then by a post-RTA process in a N{sub 2} environment, the hole concentration was increased from 4.0 × 10{sup 17} to 2.0 × 10{sup 18} cm{sup −3}. The oxygen plasma was found to effectively remove the remaining H atoms, and the subsequent wet treatment can effectively remove the GaO{sub x} that had formed during the O plasma treatment, resulting in a higher photoluminescence intensity.

  16. Influence of multi-step heat treatments in creep age forming of 7075 aluminum alloy: Optimization for springback, strength and exfoliation corrosion

    Energy Technology Data Exchange (ETDEWEB)

    Arabi Jeshvaghani, R.; Zohdi, H. [Department of Materials Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of); Shahverdi, H.R., E-mail: shahverdi@modares.ac.ir [Department of Materials Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of); Bozorg, M. [Department of Materials Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of); Hadavi, S.M.M. [School of Materials Science and Engineering, MA University of Technology, P.O. Box 16765-3197, Tehran (Iran, Islamic Republic of)

    2012-11-15

    Multi-step heat treatments, comprising high-temperature forming (150 °C/24 h plus 190 °C for several minutes) and subsequent low-temperature forming (120 °C for 24 h), were developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without a reduction in tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at the higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement of the size and inter-particle distance of the grain-boundary precipitates. Furthermore, the achievement of high strength is related to the uniform distribution of ultrafine η′ precipitates within the grains. - Highlights: • Creep age forming developed for manufacturing of aircraft wing panels from aluminum alloy. • A good combination of properties with minimal springback is required in this component. • This requirement can be met through appropriate heat treatments. • Multi-step cycles developed in creep age forming of AA7075 to improve springback and properties. • Results indicate simultaneous enhancement of properties and shape accuracy (lower springback).

  17. Against all odds -- Probabilistic forecasts and decision making

    Science.gov (United States)

    Liechti, Katharina; Zappa, Massimiliano

    2015-04-01

    In the city of Zurich (Switzerland), the damage potential due to flooding of the river Sihl is estimated at about 5 billion US dollars. The flood forecasting system used by the administration for decision making has run continuously since 2007. It has a time horizon of at most five days and operates at hourly time steps. The flood forecasting system includes three different model chains: two driven by the deterministic NWP models COSMO-2 and COSMO-7, and one driven by the probabilistic NWP model COSMO-LEPS. The model chains have been consistent since February 2010, so five full years are available for evaluation of the system. The system was evaluated continuously and is a very good example of the added value that lies in probabilistic forecasts. The forecasts are available to the decision makers on an online platform. Several graphical representations of the forecasts and forecast history are available to support decision making and to rate the current situation. The communication between forecasters and decision makers is quite close. In short, an ideal situation. However, an event, or better put a non-event, in summer 2014 showed that knowledge of the general superiority of probabilistic forecasts does not necessarily mean that decisions taken in a specific situation will be based on that probabilistic forecast. Some years of experience allow confidence in the system to grow, both for the forecasters and for the decision makers. Even if, from a theoretical point of view, the handling during crisis situations is well designed, a first event demonstrated that the dialog with the decision makers still lacks exercise in such situations. We argue that a false alarm is a needed experience to consolidate real-time emergency procedures relying on ensemble predictions. A missed event would probably also serve, but in our case we are very happy not to have to report that option.

  18. iFLOOD: A Real Time Flood Forecast System for Total Water Modeling in the National Capital Region

    Science.gov (United States)

    Sumi, S. J.; Ferreira, C.

    2017-12-01

    Extreme flood events are the costliest natural hazards impacting the US, frequently causing extensive damage to infrastructure, disruption to the economy and loss of lives. In 2016, Hurricane Matthew brought severe damage to South Carolina and demonstrated the importance of accurate flood hazard predictions, which requires the integration of riverine and coastal model forecasts for total water prediction in coastal and tidal areas. The National Weather Service (NWS) and the National Ocean Service (NOS) provide flood forecasts for almost the entire US; still, there are service-gap areas in tidal regions where no official flood forecast is available. The National Capital Region is vulnerable to multi-flood hazards, including high flows from annual inland precipitation events and surge-driven coastal inundation along the tidal Potomac River. Predicting flood levels in such tidal areas in the river-estuarine zone is extremely challenging. The main objective of this study is to develop the next generation of flood forecast systems, capable of providing accurate and timely information to support emergency management and response in areas impacted by multi-flood hazards. This forecast system is capable of simulating flood levels in the Potomac and Anacostia Rivers, incorporating the effects of riverine flooding from the upstream basins, urban storm water and tidal oscillations from the Chesapeake Bay. Flood forecast models developed so far have used riverine data to simulate water levels for the Potomac River. The idea, therefore, is to use forecasted storm surge data from a coastal model as the boundary condition of this system. The final output of this validated model will capture the water behavior in the river-estuary transition zone far better than one driven by riverine data only. The challenge for the iFLOOD forecast system is to understand the complex dynamics of multi-flood hazards caused by storm surges, riverine flow, tidal oscillation and urban storm water. Automated system

  19. Multi-step resistive switching behavior of Li-doped ZnO resistance random access memory device controlled by compliance current

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chun-Cheng [Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan (China); Department of Mathematic and Physical Sciences, R.O.C. Air Force Academy, Kaohsiung 820, Taiwan (China); Tang, Jian-Fu; Su, Hsiu-Hsien [Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan (China); Hong, Cheng-Shong; Huang, Chih-Yu [Department of Electronic Engineering, National Kaohsiung Normal University, Kaohsiung 802, Taiwan (China); Chu, Sheng-Yuan, E-mail: chusy@mail.ncku.edu.tw [Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan (China); Center for Micro/Nano Science and Technology, National Cheng Kung University, Tainan 701, Taiwan (China)

    2016-06-28

    The multi-step resistive switching (RS) behavior of a unipolar Pt/Li{sub 0.06}Zn{sub 0.94}O/Pt resistive random access memory (RRAM) device is investigated. It is found that the RRAM device exhibits normal, 2-, 3-, and 4-step RESET behaviors under different compliance currents. The transport mechanism within the device is investigated by means of current-voltage curves, in-situ transmission electron microscopy, and electrochemical impedance spectroscopy. It is shown that the ion transport mechanism is dominated by Ohmic behavior under low electric fields and the Poole-Frenkel emission effect (normal RS behavior) or Li{sup +} ion diffusion (2-, 3-, and 4-step RESET behaviors) under high electric fields.

  20. THE STUDY OF THE FORECASTING PROCESS INFRASTRUCTURAL SUPPORT BUSINESS

    Directory of Open Access Journals (Sweden)

    E. V. Sibirskaia

    2014-01-01

    Summary. When forecasting the necessary infrastructural support for entrepreneurship, one predicts the rational distribution of potential and expected results based on the development capacity of each component of infrastructural support, the efficient use of resources, the expertise and development of regional economies, the rationalization of administrative decisions, etc. According to the authors, the process of forecasting infrastructural support for business includes the following steps: analysis of the existing infrastructural support for business up to the start of the forecast period, covering the structure of resources, the identification of disparities and their causes, and the identification of positive trends; research on the components of infrastructural support for entrepreneurship, assessing the complex system of social relations, institutions, structures and objects, with findings and conclusions drawn from the study; identification of areas of strategic change, possibilities for eliminating weaknesses and imbalances, and prospects for the development of entrepreneurship; identification of the set of factors and conditions affecting each component of infrastructural support, with calculation of the degree of influence of each of them and the total effect of all factors; and adjustment of the infrastructure forecast indicators. A review of the literature on strategic planning and forecasting shows that methods of strategic planning are usually considered separately from forecasting methods; their combination, as applied to infrastructural support of business activity, is not covered in the literature. Nevertheless, the authors consider that this category should be defined in order to characterize the intrinsic and substantial nature of strategic planning and forecasting of infrastructural support for business activity.

  1. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. It was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean-square-error sense.
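
    The Bayesian-model-averaging combination can be sketched as follows (a simplified illustration, not the authors' exact procedure: model weights are taken proportional to each model's in-sample Gaussian likelihood):

```python
import numpy as np

# Hedged sketch of Bayesian model averaging for two forecasters:
# weight each model by its (Gaussian) in-sample likelihood, then
# combine forecasts as the weighted average.
def bma_weights(errors_a, errors_b):
    def log_lik(e):
        e = np.asarray(e, dtype=float)
        s2 = np.mean(e**2)                       # ML error variance
        return -0.5 * e.size * (np.log(2.0 * np.pi * s2) + 1.0)
    la, lb = log_lik(errors_a), log_lik(errors_b)
    m = max(la, lb)                              # for numerical stability
    wa, wb = np.exp(la - m), np.exp(lb - m)
    return wa / (wa + wb), wb / (wa + wb)

def bma_combine(fa, fb, wa, wb):
    return wa * fa + wb * fb
```

    The model with the smaller in-sample errors receives the larger weight, and the weights sum to one by construction.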

  2. Hourly weather forecasts for gas turbine power generation

    Directory of Open Access Journals (Sweden)

    G. Giunta

    2017-06-01

    An hourly short-term weather forecast can optimize processes in Combined Cycle Gas Turbine (CCGT) plants by helping to reduce imbalance charges on the national power grid. Consequently, a reliable meteorological prediction for a given power plant is crucial for obtaining competitive prices on the electricity market, better planning and stock management, and sales and supplies of energy sources. The paper discusses short-term hourly temperature forecasts, at lead times of day+1 and day+2, over a period of thirteen months in 2012 and 2013 for six Italian CCGT power plants of 390 MW each (260 MW from the gas turbine and 130 MW from the steam turbine). These CCGT plants are located in three different Italian climate areas: the Po Valley, the Adriatic coast, and the North Tyrrhenian coast. The meteorological model applied in this study is the eni-Kassandra Meteo Forecast (e‑kmf™), a multi-model system that provides probabilistic forecasts, with a Kalman filter used to improve the accuracy of local temperature predictions. Performance skill scores, computed from the output of the meteorological model and compared with local observations, are used to evaluate forecast reliability. In the study, the approach has shown good overall scores encompassing more than 50,000 hourly temperature values. Some differences from one site to another, due to local meteorological phenomena, can affect the short-term forecast performance, with consequent impacts on gas-to-power production and related negative imbalances. For operational application of the methodology in CCGT power plants, the benefits and limits have been successfully identified.
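
    Kalman-filter post-processing of NWP temperatures can be illustrated with a scalar filter that tracks the slowly varying forecast bias (a hedged sketch; the noise variances q and r are illustrative and not those of e-kmf™):

```python
# Sketch: a scalar Kalman filter estimating the slowly varying bias of
# an NWP temperature forecast, then subtracting it from each forecast.
def kalman_bias_filter(forecasts, observations, q=0.05, r=1.0):
    bias, p = 0.0, 1.0            # bias estimate and its variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)       # apply current bias estimate
        p += q                           # predict: bias may have drifted
        k = p / (p + r)                  # Kalman gain
        bias += k * ((f - o) - bias)     # update with observed error
        p *= (1.0 - k)
    return corrected
```

    With a persistent warm bias, the estimate converges to the true bias and later corrected forecasts track the observations closely.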

  3. Effects of Stroke on Ipsilesional End-Effector Kinematics in a Multi-Step Activity of Daily Living

    OpenAIRE

    Gulde, Philipp; Hughes, Charmayne Mary Lee; Hermsdörfer, Joachim

    2017-01-01

    Background: Stroke frequently impairs activities of daily living (ADL) and deteriorates the function of the contra- as well as the ipsilesional limbs. In order to analyze alterations of higher motor control unaffected by paresis or sensory loss, the kinematics of ipsilesional upper limb movements in patients with stroke has previously been analyzed during prehensile movements and simple tool use actions. By contrast, motion recording of multi-step ADL is rare and patient-control comparisons f...

  4. Microarc oxidation coating covered Ti implants with micro-scale gouges formed by a multi-step treatment for improving osseointegration.

    Science.gov (United States)

    Bai, Yixin; Zhou, Rui; Cao, Jianyun; Wei, Daqing; Du, Qing; Li, Baoqiang; Wang, Yaming; Jia, Dechang; Zhou, Yu

    2017-07-01

    The sub-microporous microarc oxidation (MAO) coating covered Ti implant with micro-scale gouges has been fabricated via a multi-step MAO process to overcome the compromised bone-implant integration. The as-prepared implant has been further mediated by post-heat treatment to compare the effects of -OH functional group and the nano-scale orange peel-like morphology on osseointegration. The bone regeneration, bone-implant contact interface, and biomechanical push-out force of the modified Ti implant have been discussed thoroughly in this work. The greatly improved push-out force for the MAO coated Ti implants with micro-scale gouges could be attributed to the excellent mechanical interlocking effect between implants and biologically meshed bone tissues. Attributed to the -OH functional group which promotes synostosis between the biologically meshed bone and the gouge surface of implant, the multi-step MAO process could be an effective strategy to improve the osseointegration of Ti implant. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A multi-stage noise adaptive switching filter for extremely corrupted images

    Science.gov (United States)

    Dinh, Hai; Adhami, Reza; Wang, Yi

    2015-07-01

    A multi-stage noise adaptive switching filter (MSNASF) is proposed for the restoration of images extremely corrupted by impulse and impulse-like noise. The filter consists of two steps: noise detection and noise removal. The proposed extrema-based noise detection scheme utilizes the false contouring effect to achieve a better over-detection rate at low noise density. It is adaptive and detects not only impulse but also impulse-like noise. In the noise removal step, a novel multi-stage filtering scheme is proposed. It replaces each corrupted pixel with the nearest uncorrupted median to preserve details. When compared with other methods, MSNASF provides a better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). A subjective evaluation carried out online also demonstrates that MSNASF yields higher fidelity.
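
    The detect-then-replace idea behind switching filters can be sketched as follows (a simplified single-stage version; the actual MSNASF detector and multi-stage replacement scheme are more refined):

```python
import numpy as np

# Simplified switching filter: flag extreme-valued (impulse-like)
# pixels, then replace only those with the median of the uncorrupted
# pixels in their 3x3 neighbourhood, leaving clean pixels untouched.
def switching_median(img, low=0, high=255):
    out = img.astype(float).copy()
    noisy = (img <= low) | (img >= high)    # crude extrema-based detector
    h, w = img.shape
    for i, j in zip(*np.where(noisy)):
        i0, i1 = max(0, i - 1), min(h, i + 2)
        j0, j1 = max(0, j - 1), min(w, j + 2)
        win = img[i0:i1, j0:j1]
        good = win[(win > low) & (win < high)]
        if good.size:                        # only use uncorrupted pixels
            out[i, j] = np.median(good)
    return out
```

    Operating only on detected pixels is what preserves detail relative to a plain median filter, which smooths every pixel indiscriminately.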

  6. Short-term Wind Forecasting at Wind Farms using WRF-LES and Actuator Disk Model

    Science.gov (United States)

    Kirkil, Gokhan

    2017-04-01

    Short-term wind forecasts are obtained for a wind farm on mountainous terrain using WRF-LES. Multi-scale simulations are also performed using different PBL parameterizations, and turbines are parameterized using the Actuator Disc Model. The LES models improved the forecasts. A statistical error analysis is performed and ramp events are analyzed. The complex topography of the study area affects model performance; in particular, the accuracy of the wind forecasts was poor for cross valley-mountain flows. By means of LES, we gain new knowledge about the sources of spatial and temporal variability of wind fluctuations, such as the configuration of wind turbines.

  7. Robust Hydrological Forecasting for High-resolution Distributed Models Using a Unified Data Assimilation Approach

    Science.gov (United States)

    Hernandez, F.; Liang, X.

    2017-12-01

    Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models face challenges in dealing with the uncertainties caused by the large number of parameters and initial-state estimates involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements in parameter and initial-state estimation techniques must be made. In this work we present a unified data assimilation algorithm called OPTIMISTS (Optimized PareTo Inverse Modeling through Inverse STochastic Search) to address the challenge of robust flood forecasting with high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed that attempts to simultaneously minimize differences from the observations and departures from the background states, using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests, streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it is able to outperform those resulting from assimilating the observations using a particle filter or an evolutionary 4D variational

  8. Multi-data reservoir history matching for enhanced reservoir forecasting and uncertainty quantification

    KAUST Repository

    Katterbauer, Klemens; Arango, Santiago; Sun, Shuyu; Hoteit, Ibrahim

    2015-01-01

    Reservoir simulations and history matching are critical for fine-tuning reservoir production strategies, improving understanding of the subsurface formation, and forecasting remaining reserves. Production data have long been incorporated

  9. Use of MLCM3 Software for Flash Flood Modeling and Forecasting

    OpenAIRE

    Inna Pivovarova; Daria Sokolova; Artur Batyrov; Vadim Kuzmin; Ngoc Anh Tran; DinhKha Dang; Kirill V. Shemanaev

    2018-01-01

    Accurate and timely flash floods forecasting, especially, in ungauged and poorly gauged basins, is one of the most important and challenging problems to be solved by the international hydrological community. In changing climate and variable anthropogenic impact on river basins, as well as due to low density of surface hydrometeorological network, flash flood forecasting based on “traditional” physically based, or conceptual, or statistical hydrological models often becomes inefficient. Unfort...

  10. Forecast Combinations

    OpenAIRE

    Timmermann, Allan G

    2005-01-01

    Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this paper we analyse theoretically the factors that determine the advantages from combining forecasts (for example, the d...

  11. Forecaster Behaviour and Bias in Macroeconomic Forecasts

    OpenAIRE

    Roy Batchelor

    2007-01-01

    This paper documents the presence of systematic bias in the real GDP and inflation forecasts of private sector forecasters in the G7 economies in the years 1990–2005. The data come from the monthly Consensus Economics forecasting service, and bias is measured and tested for significance using parametric fixed effect panel regressions and nonparametric tests on accuracy ranks. We examine patterns across countries and forecasters to establish whether the bias reflects the inefficient use of i...

  12. Kinetic Alfven wave with density variation and loss-cone distribution function of multi-ions in PSBL region

    Science.gov (United States)

    Tamrakar, Radha; Varma, P.; Tiwari, M. S.

    2018-05-01

    Kinetic Alfven wave (KAW) generation due to variation of the loss-cone index J and the density of multi-ions (H+, He+ and O+) in the plasma sheet boundary layer (PSBL) region is investigated. A kinetic approach based on the Vlasov equation is used to derive the dispersion relation of the wave. The variation of frequency over a wide range of k⊥ρi (where k⊥ is the wave vector across the magnetic field, ρi is the gyroradius of the ions, and i denotes the H+, He+ and O+ ions) is analyzed. It is found that each ion gyroradius and number density has a different effect on wave generation as the loss-cone width varies. The KAW is generated with multi-ions (H+, He+ and O+) over a wide regime for J=1 and shows a dissimilar effect for J=2. The frequency is reduced with increasing density of the gyrating He+ and O+ ions. The wave frequency is obtained within the reported range, which strongly supports the generation of kinetic Alfven waves. A sudden drop of frequency is also observed for the H+ and He+ ions, which may be due to heavy penetration of these ions through the loss-cone. The parameters of the PSBL region are used for the numerical calculation. These results are applicable to understanding the effect of gyrating multi-ions on the transfer of energy and the Poynting flux losses from the PSBL region towards the ionosphere, and to describing the generation of aurora.

  13. Linear stochastic models for forecasting daily maxima and hourly concentrations of air pollutants

    Energy Technology Data Exchange (ETDEWEB)

    McCollister, G M; Wilson, K R

    1975-04-01

    Two related time series models were developed to forecast concentrations of various air pollutants and tested on carbon monoxide and oxidant data for the Los Angeles basin. One model forecasts daily maximum concentrations of a particular pollutant using only past daily maximum values of that pollutant as input. The other model forecasts 1 hr average concentrations using only the past hourly average values. Both are significantly more accurate than persistence, i.e., forecasting for tomorrow what occurred today (or yesterday). Model forecasts for 1972 of the daily instantaneous maxima for total oxidant made using only past pollutant concentration data are more accurate than those made by the Los Angeles APCD using meteorological input as well as pollutant concentrations. Although none of these models forecast as accurately as might be desired for a health warning system, the relative success of simple time series models, even though based solely on pollutant concentration, suggests that models incorporating meteorological data and using either multi-dimensional time series or pattern recognition techniques should be tested.
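
    The persistence baseline used for comparison above is trivial to state in code (a minimal sketch):

```python
# Persistence forecast: predict for tomorrow what occurred today.
def persistence_forecast(series):
    return series[:-1]          # y_hat[t] = y[t-1]

# Mean absolute error of predictions against the realized values.
def mae(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)
```

    Any candidate model must beat mae(persistence_forecast(y), y[1:]) to demonstrate skill; this is the benchmark both time series models in the study clear.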

  14. Modelling and Forecasting Cruise Tourism Demand to İzmir by Different Artificial Neural Network Architectures

    Directory of Open Access Journals (Sweden)

    Murat Cuhadar

    2014-03-01

    Abstract: Cruise ports have emerged as an important sector for the economy of Turkey, which is bordered on three sides by water. Forecasting cruise tourism demand ensures better planning and efficient preparation at the destination, and is the basis for the elaboration of future plans. In recent years, new techniques such as artificial neural networks have been employed to develop predictive models for estimating tourism demand. This study aims to determine the forecasting method that provides the best performance by comparing the forecast accuracy of Multi-layer Perceptron (MLP), Radial Basis Function (RBF) and Generalized Regression Neural Network (GRNN) models in estimating the monthly inbound cruise tourism demand to İzmir. We used the total number of foreign cruise tourist arrivals as a measure of inbound cruise tourism demand, and monthly cruise tourist arrivals to İzmir Cruise Port in the period January 2005 to December 2013 were utilized to fit the models. Experimental results showed that the RBF neural network outperforms the MLP and the GRNN in terms of forecasting accuracy. By means of the obtained RBF neural network model, the monthly inbound cruise tourism demand to İzmir has been forecast for the year 2014.

  15. Applying Markov Chains for NDVI Time Series Forecasting of Latvian Regions

    Directory of Open Access Journals (Sweden)

    Stepchenko Arthur

    2015-12-01

    Time series of earth-observation-based estimates of vegetation inform about variations in vegetation at the scale of Latvia. A vegetation index is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation. The NDVI index is an important variable for vegetation forecasting and for the management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. In this paper, we make a one-step-ahead prediction of a 7-daily time series of the NDVI index using Markov chains. The choice of a Markov chain is due to the fact that a Markov chain is a sequence of random variables in which each variable occupies some state, and the chain specifies the probabilities of moving from one state to another.
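
    The one-step-ahead Markov-chain prediction can be sketched as follows (an illustrative reduction: the NDVI values are assumed to be already discretised into integer states):

```python
import numpy as np

# Estimate a Markov transition matrix from a discretised series by
# counting observed state-to-state transitions, then forecast the most
# probable next state given the current one.
def transition_matrix(states, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0          # avoid division by zero for unseen states
    return counts / rows           # row i = P(next state | current state i)

def forecast_next(states, n_states):
    P = transition_matrix(states, n_states)
    return int(np.argmax(P[states[-1]]))
```

    Discretisation (e.g. binning NDVI into quantile-based classes) and the choice of the number of states are left out here; they are the main modelling decisions in practice.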

  16. Forecast combinations

    OpenAIRE

    Aiolfi, Marco; Capistrán, Carlos; Timmermann, Allan

    2010-01-01

    We consider combinations of subjective survey forecasts and model-based forecasts from linear and non-linear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based fore...

  17. A stochastic HMM-based forecasting model for fuzzy time series.

    Science.gov (United States)

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.

  18. SOFT project: a new forecasting system based on satellite data

    Science.gov (United States)

    Pascual, Ananda; Orfila, A.; Alvarez, Alberto; Hernandez, E.; Gomis, D.; Barth, Alexander; Tintore, Joaquim

    2002-01-01

    The aim of the SOFT project is to develop a new ocean forecasting system by using a combination of satellite data, evolutionary programming and numerical ocean models. To achieve this objective two steps are proposed: (1) to obtain an accurate ocean forecasting system using genetic algorithms based on satellite data; and (2) to integrate the above new system into existing deterministic numerical models. Evolutionary programming will be employed to build 'intelligent' systems that, learning from the past ocean variability and considering the present ocean state, will be able to infer near-future ocean conditions. Validation of the forecast skill will be carried out by comparing the forecast fields with satellite and in situ observations. Validation with satellite observations will provide the expected errors in the forecasting system. Validation with in situ data will indicate the capabilities of the satellite-based forecast information to improve the performance of the numerical ocean models. This latter validation will be accomplished considering in situ measurements in a specific oceanographic area at two different periods of time. The first set of observations will be employed to feed the hybrid systems while the second set will be used to validate the hybrid and traditional numerical model results.

  19. Analytical total reaction cross-section calculations via Fermi-type functions. I. Fermi-step nuclear densities

    International Nuclear Information System (INIS)

    Abul-Magd, A.Y.; Talib aly al Hinai, M.

    2000-01-01

    In the framework of Glauber's multiple scattering theory we propose a closed form expression for the total nucleus-nucleus reaction cross-section. We adopt the Gaussian and the two-parameter Fermi step radial shapes to describe the nuclear density distributions of the projectile and the target, respectively. The present formula is used to study different systems over a wide energy range including low energy reactions, where the role of the Coulomb repulsion is taken into account. The present predictions reasonably reproduce experiment
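The two-parameter Fermi density used for the target can be normalized numerically. The sketch below assumes textbook-style half-density radius and diffuseness for a heavy nucleus; these are illustrative values, not the paper's fitted parameters:

```python
import math

# rho(r) = rho0 / (1 + exp((r - c)/a)), normalized so the volume integral
# equals the mass number A. Parameters are assumed, Pb-like values (fm).
A, c, a = 208, 6.62, 0.546

def fermi(r, rho0=1.0):
    return rho0 / (1.0 + math.exp((r - c) / a))

def integrate(f, r_max=20.0, n=20_000):
    """Trapezoidal integral of 4*pi*r^2*f(r) from 0 to r_max."""
    h = r_max / n
    s = 0.5 * (0.0 + 4 * math.pi * r_max**2 * f(r_max))  # f(0) term vanishes (r^2 = 0)
    for i in range(1, n):
        r = i * h
        s += 4 * math.pi * r**2 * f(r)
    return s * h

rho0 = A / integrate(fermi)                  # central density, nucleons/fm^3 (~0.16)
total = integrate(lambda r: fermi(r, rho0))  # recovers A after normalization
```

A Gaussian projectile density would be normalized the same way before both enter the Glauber overlap integrals.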

  20. Evaluating sub-seasonal skill in probabilistic forecasts of Atmospheric Rivers and associated extreme events

    Science.gov (United States)

    Subramanian, A. C.; Lavers, D.; Matsueda, M.; Shukla, S.; Cayan, D. R.; Ralph, M.

    2017-12-01

    Atmospheric rivers (ARs) - elongated plumes of intense moisture transport - are a primary source of hydrological extremes, water resources and impactful weather along the West Coast of North America and Europe. There is strong demand in the water management, societal infrastructure and humanitarian sectors for reliable sub-seasonal forecasts, particularly of extreme events, such as floods and droughts so that actions to mitigate disastrous impacts can be taken with sufficient lead-time. Many recent studies have shown that ARs in the Pacific and the Atlantic are modulated by large-scale modes of climate variability. Leveraging the improved understanding of how these large-scale climate modes modulate the ARs in these two basins, we use the state-of-the-art multi-model forecast systems such as the North American Multi-Model Ensemble (NMME) and the Subseasonal-to-Seasonal (S2S) database to help inform and assess the probabilistic prediction of ARs and related extreme weather events over the North American and European West Coasts. We will present results from evaluating probabilistic forecasts of extreme precipitation and AR activity at the sub-seasonal scale. In particular, results from the comparison of two winters (2015-16 and 2016-17) will be shown, winters which defied canonical El Niño teleconnection patterns over North America and Europe. We further extend this study to analyze probabilistic forecast skill of AR events in these two basins and the variability in forecast skill during certain regimes of large-scale climate modes.

  1. Electrically tunable spin polarization in silicene: A multi-terminal spin density matrix approach

    International Nuclear Information System (INIS)

    Chen, Son-Hsien

    2016-01-01

    The recently realized silicene field-effect transistor promises novel electronic applications. Using a multi-terminal spin density matrix approach, this paper presents an analysis of the spin polarizations in a silicene spin field-effect transistor structure, considering the intertwined intrinsic and Rashba spin–orbit couplings, gate voltage, Zeeman splitting, as well as disorder. Coexistence of the staggered potential and the intrinsic spin–orbit coupling results in spin precession, making any in-plane polarization direction reachable by the gate voltage; specifically, the intrinsic coupling allows one to electrically adjust the in-plane components of the polarizations, while the Rashba coupling allows adjustment of the out-of-plane components. Larger electrically tunable ranges of in-plane polarizations are found in oppositely gated silicene than in uniformly gated silicene. Polarizations in different phases behave distinguishably in the weak-disorder regime, while independent of the phase, stronger disorder leads to a saturation value. - Highlights: • Density matrix with spin rotations enables multi-terminal arbitrary spin injections. • Gate-voltage-tunable in-plane polarizations require the intrinsic SO coupling. • Gate-voltage-tunable out-of-plane polarizations require the Rashba SO coupling. • Oppositely gated silicene yields a large tunable range of in-plane polarizations. • Polarizations in different phases behave distinguishably only in weak disorder.

  2. Spatio-temporal behaviour of medium-range ensemble forecasts

    Science.gov (United States)

    Kipling, Zak; Primo, Cristina; Charlton-Perez, Andrew

    2010-05-01

    Using the recently-developed mean-variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, we present an analysis of the spatio-temporal dynamics of their perturbations, and show how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. We also consider the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. We conclude by looking at how the MVL technique might assist in selecting models for inclusion in a multi-model ensemble, and suggest an experiment to test its potential in this context.
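The MVL diagram is built from the mean and the variance of the logarithms of ensemble perturbation amplitudes across a field; a minimal sketch of that diagnostic, with invented amplitudes, is:

```python
import math

# One point of a mean-variance-of-logarithms (MVL) diagram: for a set of
# perturbation amplitudes (e.g. one amplitude per grid point), compute the
# mean and the variance of their logarithms. The diagram traces these two
# quantities as the forecast lead time grows. Amplitudes are invented.
def mvl_point(amplitudes):
    """Return (mean, variance) of log perturbation amplitudes."""
    logs = [math.log(abs(x)) for x in amplitudes if x != 0]
    m = sum(logs) / len(logs)
    v = sum((L - m) ** 2 for L in logs) / len(logs)
    return m, v

perturbations = [0.5, 1.2, 0.8, 2.0, 0.9]
mean_log, var_log = mvl_point(perturbations)
```

Plotting `var_log` against `mean_log` for successive lead times yields the characteristic MVL curve whose shape the abstract relates to the model and perturbation technique.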

  3. Sub-Ensemble Coastal Flood Forecasting: A Case Study of Hurricane Sandy

    Directory of Open Access Journals (Sweden)

    Justin A. Schulte

    2017-12-01

    Full Text Available In this paper, it is proposed that coastal flood ensemble forecasts be partitioned into sub-ensemble forecasts using cluster analysis in order to produce representative statistics and to measure forecast uncertainty arising from the presence of clusters. After clustering the ensemble members, the ability to predict the cluster into which the observation will fall can be measured using a cluster skill score. Additional sub-ensemble and composite skill scores are proposed for assessing the forecast skill of a clustered ensemble forecast. A recently proposed method for statistically increasing the number of ensemble members is used to improve sub-ensemble probabilistic estimates. Through the application of the proposed methodology to Sandy coastal flood reforecasts, it is demonstrated that statistics computed using only ensemble members belonging to a specific cluster are more representative than those computed using all ensemble members simultaneously. A cluster skill-cluster uncertainty index relationship is identified, which is the cluster analog of the documented spread-skill relationship. Two sub-ensemble skill scores are shown to be positively correlated with cluster forecast skill, suggesting that skillfully forecasting the cluster into which the observation will fall is important to overall forecast skill. The identified relationships also suggest that the number of ensemble members within each cluster can be used as guidance for assessing the potential for forecast error. The inevitable existence of ensemble member clusters in tidally dominated total water level prediction systems suggests that clustering is a necessary post-processing step for producing representative and skillful total water level forecasts.
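Partitioning ensemble members into sub-ensembles can be illustrated with a simple one-dimensional k-means on scalar forecasts. The member values are invented, and the paper's actual clustering algorithm may differ:

```python
# Toy sub-ensemble partition: cluster scalar ensemble forecasts (say, peak
# water level in metres) into two groups, then report per-cluster centroids.
def kmeans_1d(values, iters=50):
    lo, hi = min(values), max(values)
    c = [lo, hi]                                   # initial centroids
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # index 1 (True) if v is closer to c[1], else index 0
            groups[abs(v - c[1]) < abs(v - c[0])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return groups, c

members = [1.1, 1.2, 1.0, 2.6, 2.8, 2.7, 1.15]    # two regimes: ~1.1 m and ~2.7 m
groups, centroids = kmeans_1d(members)
```

Statistics computed within each group (here, the centroids 1.11 m and 2.70 m) are more representative than the all-member mean of about 1.8 m, which falls between the two regimes and matches no member.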

  4. Multiscale seismicity analysis and forecasting: examples from the Western Pacific and Iceland

    International Nuclear Information System (INIS)

    Eberhard, A. J.

    2014-01-01

    updated after every earthquake M ≥ 3. Three different models (ETAS, STEP and TripleS) and an ensemble model consisting of three different models are used for this experiment. I found that while there is still room for improvement, the forecast models are capable of producing usable forecasts even with the sparsely processed earthquake catalogue. The ensemble forecast has the highest information gain, but the difference from the best non-ensemble model is not significant. Chapter 5 is about the technical aspects of seismicity analysis and forecasting. In it I introduce a new software framework called MapSeis, an open-source software framework with a focus on statistical seismology and seismicity forecasting and testing. It was written by myself and is used in every single step of this thesis; because of the rather long development time and the diverse tasks for which it has been used, it provides a large variety of handy tools and functions and should address the needs of many scientists working in this field. MapSeis has been built with expandability and maintainability in mind and features a PlugIn system and clearly defined application programming interfaces (APIs). My contribution to the three challenges will help to improve seismicity forecasting, but there is still room for improvement, and especially the application of Operational Earthquake Forecasting needs to be discussed, not only by scientists but also by the public. (author)

  5. Multiscale seismicity analysis and forecasting: examples from the Western Pacific and Iceland

    Energy Technology Data Exchange (ETDEWEB)

    Eberhard, A. J.

    2014-07-01

    updated after every earthquake M ≥ 3. Three different models (ETAS, STEP and TripleS) and an ensemble model consisting of three different models are used for this experiment. I found that while there is still room for improvement, the forecast models are capable of producing usable forecasts even with the sparsely processed earthquake catalogue. The ensemble forecast has the highest information gain, but the difference from the best non-ensemble model is not significant. Chapter 5 is about the technical aspects of seismicity analysis and forecasting. In it I introduce a new software framework called MapSeis, an open-source software framework with a focus on statistical seismology and seismicity forecasting and testing. It was written by myself and is used in every single step of this thesis; because of the rather long development time and the diverse tasks for which it has been used, it provides a large variety of handy tools and functions and should address the needs of many scientists working in this field. MapSeis has been built with expandability and maintainability in mind and features a PlugIn system and clearly defined application programming interfaces (APIs). My contribution to the three challenges will help to improve seismicity forecasting, but there is still room for improvement, and especially the application of Operational Earthquake Forecasting needs to be discussed, not only by scientists but also by the public. (author)

  6. New watershed-based climate forecast products for hydrologists and water managers

    Science.gov (United States)

    Baker, S. A.; Wood, A.; Rajagopalan, B.; Lehner, F.; Peng, P.; Ray, A. J.; Barsugli, J. J.; Werner, K.

    2017-12-01

    Operational sub-seasonal to seasonal (S2S) climate predictions have advanced in skill in recent years but are yet to be broadly utilized by stakeholders in the water management sector. While some of the challenges that relate to fundamental predictability are difficult or impossible to surmount, other hurdles related to forecast product formulation, translation, relevance, and accessibility can be directly addressed. These include products being misaligned with users' space-time needs, products disseminated in formats users cannot easily process, and products based on raw model outputs that are biased relative to user climatologies. In each of these areas, more can be done to bridge the gap by enhancing the usability, quality, and relevance of water-oriented predictions. In addition, water stakeholders can benefit from short-range extremes predictions (such as 2-3 day storms or 1-week heat waves) at S2S time-scales, for which few products exist. We present interim results of a Research to Operations (R2O) effort sponsored by the NOAA MAPP Climate Testbed to (1) formulate climate prediction products so as to reduce hurdles to adoption by water stakeholders, and to (2) explore opportunities for extremes prediction at S2S time scales. The project is currently using CFSv2 and National Multi-Model Ensemble (NMME) reforecasts and forecasts to develop real-time watershed-based climate forecast products, and to train post-processing approaches to enhance the skill and reliability of raw real-time S2S forecasts. Prototype S2S climate data products (forecasts and associated skill analyses) are now being operationally staged at NCAR on a public website to facilitate further product development through interactions with water managers. Initial demonstration products include CFSv2-based bi-weekly climate forecasts (weeks 1-2, 2-3, and 3-4) for sub-regional scale hydrologic units, and NMME-based monthly and seasonal prediction products. 
Raw model mean skill at these time

  7. Development and testing of improved statistical wind power forecasting methods.

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, J.; Bessa, R.J.; Keko, H.; Sumaili, J.; Miranda, V.; Ferreira, C.; Gama, J.; Botterud, A.; Zhou, Z.; Wang, J. (Decision and Information Sciences); (INESC Porto)

    2011-12-06

    Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. 
Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios

  8. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
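The recursive and direct strategies contrasted in the paper can be sketched on a toy AR(1) process. The coefficients are hand-picked stand-ins for an estimation step, not fitted values:

```python
# Two basic multistep-ahead strategies for x_t = phi * x_{t-1} + noise.
phi = 0.8  # assumed one-step AR(1) coefficient

def recursive_forecast(x_last, h):
    """Iterate the one-step model h times: x(t+h|t) = phi^h * x(t)."""
    x = x_last
    for _ in range(h):
        x = phi * x
    return x

def direct_forecast(x_last, h, coeffs):
    """Use a separately estimated model per horizon: x(t+h|t) = c_h * x(t)."""
    return coeffs[h] * x_last

# For a correctly specified AR(1), the optimal direct coefficient at horizon h
# is phi**h, so the two strategies coincide; their bias/variance trade-off
# only appears once the one-step model is misspecified or estimated from data.
direct_coeffs = {h: phi ** h for h in range(1, 4)}
r3 = recursive_forecast(10.0, 3)              # 0.8^3 * 10 = 5.12
d3 = direct_forecast(10.0, 3, direct_coeffs)  # identical in this idealized case
```

In the paper's framing, the recursive strategy propagates one-step model error through the iteration (bias), while the direct strategy estimates a separate, typically higher-variance model per horizon.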

  9. Observations of Heliospheric Faraday Rotation (FR) and Interplanetary Scintillation (IPS) with the LOw Frequency ARray (LOFAR): Steps Towards Improving Space-Weather Forecasting Capabilities

    Science.gov (United States)

    Bisi, M. M.; Fallows, R. A.; Sobey, C.; Eftekhari, T.; Jensen, E. A.; Jackson, B. V.; Yu, H. S.; Hick, P. P.; Odstrcil, D.; Tokumaru, M.

    2015-12-01

    The phenomenon of space weather - analogous to terrestrial weather which describes the changing pressure, temperature, wind, and humidity conditions on Earth - is essentially a description of the changes in velocity, density, magnetic field, high-energy particles, and radiation in the near-Earth space environment including the effects of such changes on the Earth's magnetosphere, radiation belts, ionosphere, and thermosphere. Space weather can be considered to have two main strands: (i) scientific research, and (ii) applications. The former is self-explanatory, but the latter covers operational aspects which includes its forecasting. Understanding and forecasting space weather in the near-Earth environment is vitally important to protecting our modern-day reliance (militarily and commercially) on satellites, global-communication and navigation networks, high-altitude air travel (radiation concerns particularly on polar routes), long-distance power/oil/gas lines and piping, and for any future human exploration of space to list but a few. Two ground-based radio-observing remote-sensing techniques that can aid our understanding and forecasting of heliospheric space weather are those of interplanetary scintillation (IPS) and heliospheric Faraday rotation (FR). The LOw Frequency ARray (LOFAR) is a next-generation 'software' radio telescope centered in The Netherlands with international stations spread across central and northwest Europe. For several years, scientific observations of IPS on LOFAR have been undertaken on a campaign basis and the experiment is now well developed. More recently, LOFAR has been used to attempt scientific heliospheric FR observations aimed at remotely sensing the magnetic field of the plasma traversing the inner heliosphere. 
We present our latest progress using these two radio heliospheric-imaging remote-sensing techniques including the use of three-dimensional (3-D) modeling and reconstruction techniques using other, additional data as input

  10. Combining Unsupervised and Supervised Statistical Learning Methods for Currency Exchange Rate Forecasting

    OpenAIRE

    Vasiljeva, Polina

    2016-01-01

    In this thesis we revisit the challenging problem of forecasting currency exchange rate. We combine machine learning methods such as agglomerative hierarchical clustering and random forest to construct a two-step approach for predicting movements in currency exchange prices of the Swedish krona and the US dollar. We use a data set with over 200 predictors comprised of different financial and macro-economic time series and their transformations. We perform forecasting for one week ahead with d...

  11. Ensemble Assimilation Using Three First-Principles Thermospheric Models as a Tool for 72-hour Density and Satellite Drag Forecasts

    Science.gov (United States)

    Hunton, D.; Pilinski, M.; Crowley, G.; Azeem, I.; Fuller-Rowell, T. J.; Matsuo, T.; Fedrizzi, M.; Solomon, S. C.; Qian, L.; Thayer, J. P.; Codrescu, M.

    2014-12-01

    Much as aircraft are affected by the prevailing winds and weather conditions in which they fly, satellites are affected by variability in the density and motion of the near-Earth space environment. Drastic changes in the neutral density of the thermosphere, caused by geomagnetic storms or other phenomena, result in perturbations of satellite motions through drag on the satellite surfaces. This can lead to difficulties in locating important satellites, temporarily losing track of satellites, and errors when predicting collisions in space. As the population of satellites in Earth orbit grows, higher space-weather prediction accuracy is required for critical missions, such as accurate catalog maintenance, collision avoidance for manned and unmanned space flight, reentry prediction, satellite lifetime prediction, defining on-board fuel requirements, and satellite attitude dynamics. We describe ongoing work to build a comprehensive nowcast and forecast system for neutral density, winds, temperature, composition, and satellite drag. This modeling tool will be called the Atmospheric Density Assimilation Model (ADAM). It will be based on three state-of-the-art coupled models of the thermosphere-ionosphere running in real time, using assimilative techniques to produce a thermospheric nowcast. It will also produce, in real time, 72-hour predictions of the global thermosphere-ionosphere system using the nowcast as the initial condition. We will review the requirements for the ADAM system, the underlying full-physics models, the plethora of input options available to drive the models, a feasibility study showing the performance of first-principles models as it pertains to satellite-drag operational needs, and review challenges in designing an assimilative space-weather prediction model. The performance of the ensemble assimilative model is expected to exceed the performance of current empirical and assimilative density models.

  12. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    Science.gov (United States)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC) covering nearly a 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D location of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
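The field-change calculation behind such a model can be illustrated with the standard image-charge expression for a point charge over a perfectly conducting ground; a leader channel is then approximated by summing many such charges along its path. All charge values and geometry below are invented:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def e_field_ground(q, h, d):
    """Vertical E (V/m) at the ground, horizontal distance d (m) from a point
    charge q (C) at height h (m), including its image charge below ground."""
    return (1.0 / (4 * math.pi * EPS0)) * 2 * q * h / (h**2 + d**2) ** 1.5

# A crude 3-charge "channel": -1 C total spread between 4 and 6 km altitude
# (hypothetical values), observed by a sensor 10 km away.
charges = [(-0.4, 6000.0), (-0.3, 5000.0), (-0.3, 4000.0)]
dE = sum(e_field_ground(q, h, 10_000.0) for q, h in charges)  # tens of V/m, negative
```

A multi-dipole fit inverts this relationship: given measured field changes at several sensors, it solves for the time-dependent charges along the reconstructed 3-D leader path.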

  13. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.

  14. Supervised pre-processing approaches in multiple class variables classification for fish recruitment forecasting

    KAUST Repository

    Fernandes, José Antonio; Lozano, Jose A.; Inza, Iñaki; Irigoien, Xabier; Pérez, Aritz; Rodríguez, Juan Diego

    2013-01-01

    A multi-species approach to fisheries management requires taking into account the interactions between species in order to improve recruitment forecasting of the fish species. Recent advances in Bayesian networks direct the learning of models

  15. A multi-wavelength, high-contrast contact radiography system for the study of low-density aerogel foams

    Energy Technology Data Exchange (ETDEWEB)

    Opachich, Y. P., E-mail: opachiyp@nv.doe.gov; Koch, J. A.; Haugh, M. J.; Romano, E.; Lee, J. J.; Huffman, E.; Weber, F. A. [National Security Technologies, LLC, Livermore, California 94550 (United States); Bowers, J. W. [National Security Technologies, LLC, Livermore, California 94550 (United States); University of California at Berkeley, Berkeley, California 94720 (United States); Benedetti, L. R.; Wilson, M.; Prisbrey, S. T.; Wehrenberg, C. E.; Baumann, T. F.; Lenhardt, J. M.; Cook, A.; Arsenlis, A.; Park, H.-S.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94551 (United States)

    2016-07-15

    A multi-wavelength, high contrast contact radiography system has been developed to characterize density variations in ultra-low density aerogel foams. These foams are used to generate a ramped pressure drive in materials strength experiments at the National Ignition Facility and require precision characterization in order to reduce errors in measurements. The system was used to characterize density variations in carbon and silicon based aerogels to ∼10.3% accuracy with ∼30 μm spatial resolution. The system description, performance, and measurement results collected using a 17.8 mg/cc carbon based JX–6 (C{sub 20}H{sub 30}) aerogel are discussed in this manuscript.

  16. Forecasting Cryptocurrencies Financial Time Series

    OpenAIRE

    Catania, Leopoldo; Grassi, Stefano; Ravazzolo, Francesco

    2018-01-01

    This paper studies the predictability of cryptocurrencies time series. We compare several alternative univariate and multivariate models in point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto–predictors and rely on Dynamic Model Averaging to combine a large set of univariate Dynamic Linear Models and several multivariate Vector Autoregressive models with different forms of time variation. We find statistical si...

  17. Multi-step processes in the (d, t) and (d, {sup 3}He) reactions on {sup 116}Sn and {sup 208}Pb targets at E{sub d} = 200 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Langevin-Joliot, H.; Van de Wiele, J.; Guillot, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Koning, A.J. [Nuclear Research and Consultancy Group NRG, NL (Netherlands)

    2000-07-01

    The role of multi-step processes in the reactions {sup 116}Sn(d,t), {sup 208}Pb(d,t) and {sup 116}Sn(d,{sup 3}He), previously studied at E{sub d} = 200 MeV at forward angles and for relatively low energy transfers, has been investigated. We have performed for the first time multi-step calculations that systematically take into account collective excitations in the second- and higher-order inelastic transition steps. A calculation code based on the Feshbach, Kerman and Koonin model has been modified to handle explicitly these collective excitations, which are most important in the forward-angle domain. One-step double-differential pick-up cross sections were built from finite-range distorted-wave results, spread in energy using known or estimated hole-state characteristics. It is shown that two-step cross sections calculated using the above method compare rather well with those deduced via coupled-channel calculations for the same collective excitations. The multi-step calculations performed up to 6 steps reproduce reasonably well the {sup 115}Sn, {sup 207}Pb and {sup 115}In experimental spectra measured up to E{sub x} {approx} 40 MeV and up to 15 deg. The relative contributions of steps of increasing order to pick-up cross sections at E{sub d} = 200 MeV and 150 MeV are discussed. (authors)

  18. A systematic multi-step screening of numerous salt hydrates for low temperature thermochemical energy storage

    International Nuclear Information System (INIS)

    N’Tsoukpoe, Kokouvi Edem; Schmidt, Thomas; Rammelberg, Holger Urs; Watts, Beatriz Amanda; Ruck, Wolfgang K.L.

    2014-01-01

    Highlights: • We report an evaluation of the potential of salt hydrates for thermochemical storage. • Both theoretical calculations and experimental measurements using TGA/DSC are used. • Salt hydrates offer very low potential for thermochemical heat storage. • The efficiency of classical processes using salt hydrates is very low: typically 25%. • New processes are needed for the use of salt hydrates in thermochemical heat storage. - Abstract: In this paper, the potential energy storage density and the storage efficiency of salt hydrates as thermochemical storage materials for the storage of heat generated by a micro-combined heat and power (micro-CHP) unit have been assessed. Because salt hydrates used in various thermochemical heat storage processes fail to meet expectations, a systematic evaluation of the suitability of 125 salt hydrates has been performed in a three-step approach. In the first step, general issues such as toxicity and risk of explosion have been considered. In the second and third steps, the authors implement a combined approach consisting of theoretical calculations and experimental measurements using Thermogravimetric Analysis (TGA). Thus, application-oriented comparison criteria, among which the net energy storage density of the material and the thermal efficiency, have been used to evaluate the potential of 45 preselected salt hydrates for a low temperature thermochemical heat storage application. For an application that requires a discharging temperature above 60 °C, SrBr₂·6H₂O and LaCl₃·7H₂O appear to be the most promising, from a thermodynamic point of view only. However, the maximum net energy storage densities including the water in the water storage tank that they offer (respectively 133 kW h m⁻³ and 89 kW h m⁻³) for a classical thermochemical heat storage process are not attractive for the intended application. Furthermore, the thermal efficiency that would result from the storage process based on salt hydrates

  19. Statistical Correction of Air Temperature Forecasts for City and Road Weather Applications

    Science.gov (United States)

    Mahura, Alexander; Petersen, Claus; Sass, Bent; Gilet, Nicolas

    2014-05-01

    A method for statistical correction of air and road surface temperature forecasts was developed based on analysis of long-term time series of meteorological observations and forecasts (from the High Resolution Limited Area Model and the Road Conditions Model; 3 km horizontal resolution). It was tested for May-Aug 2012 and Oct 2012 - Mar 2013, respectively. The developed method is based mostly on forecasted meteorological parameters, with a minimal inclusion of observations (covering only a pre-history period). Although the first-iteration correction takes relevant temperature observations into account, the further adjustment of air and road temperature forecasts is based purely on forecasted meteorological parameters. The method is model independent, i.e. it can be applied for temperature correction with other types of models having different horizontal resolutions. It is relatively fast due to application of the singular value decomposition method for solving the matrix system that yields the correction coefficients. Moreover, there is always a possibility for additional improvement through extra tuning of the temperature forecasts for some locations (stations), in particular those where the MAEs are generally higher compared with others (see Gilet et al., 2014). For city weather applications, a new operationalized procedure for statistical correction of air temperature forecasts has been elaborated and implemented for the HIRLAM-SKA model runs at 00, 06, 12, and 18 UTC, covering forecast lengths up to 48 hours. The procedure includes segments for extraction of observations and forecast data, assigning these to forecast lengths, statistical correction of temperature, one- and multi-day statistical evaluation of model performance, decision-making on using corrections by station, interpolation, visualisation and storage/backup. Pre-operational air temperature correction runs have been performed for mainland Denmark since mid-April 2013 and have shown good results.
Tests also showed
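    The abstract mentions solving for correction coefficients via singular value decomposition. A minimal sketch of that idea, with invented synthetic forecasts and observations (the real method's predictors and matrix layout are not given in the record), is a least-squares bias correction solved through the SVD pseudo-inverse:

```python
import numpy as np

# Hypothetical example: correct raw temperature forecasts against past
# observations by solving a small least-squares problem via SVD, standing in
# for the "matrix solution to find coefficients" mentioned in the abstract.
rng = np.random.default_rng(0)
raw = rng.uniform(-5.0, 15.0, size=200)            # raw model forecasts (degC)
obs = 0.9 * raw + 1.2 + rng.normal(0.0, 0.3, 200)  # synthetic "observations"

# Design matrix [forecast, 1] -> fit obs = a*raw + b
A = np.column_stack([raw, np.ones_like(raw)])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
coef = Vt.T @ ((U.T @ obs) / s)                    # pseudo-inverse solution
a, b = coef
corrected = a * raw + b

# The corrected forecasts should track the observations more closely
raw_mae = np.mean(np.abs(raw - obs))
cor_mae = np.mean(np.abs(corrected - obs))
print(round(a, 2), round(b, 2), cor_mae < raw_mae)
```

    In an operational setting the fitted coefficients would be re-estimated over a rolling pre-history window and applied per station and forecast length.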

  20. The Wind Forecast Improvement Project (WFIP): A Public/Private Partnership for Improving Short Term Wind Energy Forecasts and Quantifying the Benefits of Utility Operations. The Southern Study Area, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Jeffrey M. [AWS Truepower, LLC, Albany, NY (United States); Manobianco, John [MESO, Inc., Troy, NY (United States); Schroeder, John [Texas Tech Univ., Lubbock, TX (United States). National Wind Inst.; Ancell, Brian [Texas Tech Univ., Lubbock, TX (United States). Atmospheric Science Group; Brewster, Keith [Univ. of Oklahoma, Norman, OK (United States). Center for Analysis and Prediction of Storms; Basu, Sukanta [North Carolina State Univ., Raleigh, NC (United States). Dept. of Marine, Earth, and Atmospheric Sciences; Banunarayanan, Venkat [ICF International (United States); Hodge, Bri-Mathias [National Renewable Energy Lab. (NREL), Golden, CO (United States); Flores, Isabel [Electricity Reliability Council of Texas (United States)

    2014-04-30

    This Final Report presents a comprehensive description, findings, and conclusions for the Wind Forecast Improvement Project (WFIP) -- Southern Study Area (SSA) work led by AWS Truepower (AWST). This multi-year effort, sponsored by the Department of Energy (DOE) and the National Oceanic and Atmospheric Administration (NOAA), focused on improving short-term (15-minute to 6-hour) wind power production forecasts through the deployment of an enhanced observation network of surface and remote sensing instrumentation and the use of a state-of-the-art forecast modeling system. Key findings from the SSA modeling and forecast effort include: 1. The AWST WFIP modeling system produced an overall 10-20% improvement in wind power production forecasts over the existing Baseline system, especially during the first three forecast hours; 2. Improvements in ramp forecast skill, particularly for larger up and down ramps; 3. The AWST WFIP data denial experiments showed mixed results in the forecasts incorporating the experimental network instrumentation; however, ramp forecasts showed significant benefit from the additional observations, indicating that the enhanced observations were key to the modeling system's ability to capture phenomena responsible for producing large short-term excursions in power production; 4. The OU CAPS ARPS simulations showed that the additional WFIP instrument data had a small impact on their 3-km forecasts that lasted for the first 5-6 hours, and that increasing the vertical model resolution in the boundary layer had a greater impact, also in the first 5 hours; and 5. The TTU simulations were inconclusive as to which assimilation scheme (3DVAR versus EnKF) provided better forecasts, and the additional observations resulted in some improvement to the forecasts in the first 1-3 hours.

  1. Numerical simulation of machining distortions on a forged aerospace component following a one and a multi-step approaches

    Science.gov (United States)

    Prete, Antonio Del; Franchi, Rodolfo; Antermite, Fabrizio; Donatiello, Iolanda

    2018-05-01

    Residual stresses appear in a component as a consequence of thermo-mechanical processes (e.g. ring rolling), casting and heat treatments. When machining such components, distortions arise from the redistribution of the residual stresses left in the material by the foregoing process history. If distortions are excessive, they can lead to a large number of scrap parts. Since dimensional accuracy directly affects engine efficiency, dimensional control of aerospace components is a non-trivial issue. In this paper, the problem of distortions of large thin-walled aero-engine components in nickel superalloys has been addressed. In order to estimate distortions on inner diameters after internal turning operations, a 3D Finite Element Method (FEM) analysis has been developed for a real industrial test case. The entire process history has been taken into account by developing FEM models of the ring rolling process and the heat treatments. Three different strategies of the ring rolling process have been studied, and the combination of related parameters which gives the best dimensional accuracy has been found. Furthermore, grain size evolution and recrystallization phenomena during the manufacturing process have been numerically investigated using a semi-empirical Johnson-Mehl-Avrami-Kolmogorov (JMAK) model. The volume subtractions have been simulated by boolean trimming: both a one-step and a multi-step analysis have been performed. The multi-step procedure made it possible to choose the material removal sequence that minimizes machining distortions.

  2. Comparative Validation of Realtime Solar Wind Forecasting Using the UCSD Heliospheric Tomography Model

    Science.gov (United States)

    MacNeice, Peter; Taktakishvili, Alexandra; Jackson, Bernard; Clover, John; Bisi, Mario; Odstrcil, Dusan

    2011-01-01

    The University of California, San Diego 3D Heliospheric Tomography Model reconstructs the evolution of heliospheric structures, and can make forecasts of solar wind density and velocity up to 72 hours in the future. The latest model version, installed and running in realtime at the Community Coordinated Modeling Center (CCMC), analyzes scintillations of meter-wavelength radio point sources recorded by the Solar-Terrestrial Environment Laboratory (STELab) together with realtime measurements of solar wind speed and density recorded by the Advanced Composition Explorer (ACE) Solar Wind Electron Proton Alpha Monitor (SWEPAM). The solution is reconstructed using tomographic techniques and a simple kinematic wind model. Since installation, the CCMC has been recording the model forecasts and comparing them with ACE measurements, and with forecasts made using other heliospheric models hosted by the CCMC. We report the preliminary results of this validation work and comparison with alternative models.

  3. Fuel cycle forecasting - there are forecasts and there are forecasts

    International Nuclear Information System (INIS)

    Puechl, K.H.

    1975-01-01

    The FORECAST-NUCLEAR computer program described recognizes that forecasts are made to answer a variety of questions and, therefore, that no single forecast is universally appropriate. Also, it recognizes that no two individuals will completely agree as to the input data that are appropriate for obtaining an answer to even a single simple question. Accordingly, the program was written from a utilitarian standpoint: it allows working with multiple projections; data inputting is simple to allow game-playing; computation time is short to minimize the cost of 'what if' assessments; and detail is internally carried to allow meaningful analysis. (author)

  4. Fuel cycle forecasting - there are forecasts and there are forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Puechl, K H

    1975-12-01

    The FORECAST-NUCLEAR computer program described recognizes that forecasts are made to answer a variety of questions and, therefore, that no single forecast is universally appropriate. Also, it recognizes that no two individuals will completely agree as to the input data that are appropriate for obtaining an answer to even a single simple question. Accordingly, the program was written from a utilitarian standpoint: it allows working with multiple projections; data inputting is simple to allow game-playing; computation time is short to minimize the cost of 'what if' assessments; and detail is internally carried to allow meaningful analysis.

  5. Development of a WRF-RTFDDA-based high-resolution hybrid data-assimilation and forecasting system toward to operation in the Middle East

    Science.gov (United States)

    Liu, Y.; Wu, W.; Zhang, Y.; Kucera, P. A.; Liu, Y.; Pan, L.

    2012-12-01

    Weather forecasting in the Middle East is challenging because of the region's complicated geography, including extensive coastal areas and heterogeneous land, and its sparse regional observational network. Strong air-land-sea interactions form multi-scale weather regimes in the area, which require a numerical weather prediction model capable of properly representing multi-scale atmospheric flow with appropriate initial conditions. The WRF-based Real-Time Four Dimensional Data Assimilation (RTFDDA) system is one of the advanced multi-scale weather analysis and forecasting facilities developed at the Research Applications Laboratory (RAL) of NCAR. The forecasting system has been carefully configured for the Middle East. To overcome the limitation of the very sparsely available conventional observations in the region, we develop a hybrid data assimilation algorithm combining RTFDDA and WRF-3DVAR, which ingests remote sensing data from satellites and radar. This hybrid data assimilation blends Newtonian nudging FDDA and 3DVAR technology to effectively assimilate both conventional observations and remote sensing measurements and to provide improved initial conditions for the forecasting system. For brevity, the forecasting system is called RTF3H (RTFDDA-3DVAR Hybrid). In this presentation, we will discuss the hybrid data assimilation algorithm, its implementation, and its application to high-impact weather events in the area. Sensitivity studies are conducted to understand the strengths and limitations of this hybrid data assimilation algorithm.
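    The Newtonian nudging ingredient of the hybrid scheme can be illustrated with a toy scalar example: the model state is relaxed toward an observed value with a relaxation coefficient G. All values below (time scale, time step, temperatures) are invented for illustration and have nothing to do with the actual RTFDDA configuration:

```python
import numpy as np

# Toy sketch of Newtonian relaxation ("nudging"): at each model step, add a
# tendency proportional to the observation-minus-model difference.
G = 1.0 / 3600.0            # relaxation coefficient, one-hour time scale (1/s)
dt = 60.0                   # model time step (s)
obs = 290.0                 # observed temperature (K)

state = 284.0               # initial model temperature (K)
for _ in range(120):        # two hours of integration
    state += dt * G * (obs - state)   # nudging tendency only

gap = abs(obs - state)      # remaining observation-model gap
print(round(state, 2), round(gap, 3))
```

    After two relaxation time scales the initial 6 K gap has decayed by roughly a factor of e², which is the defining behaviour of the relaxation term; in a real system the tendency is added alongside the full model physics.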

  6. Pronosticando la inflación mensual en Colombia un paso hacia delante: una aproximación "de abajo hacia arriba" || Forecasting the Colombian Monthly Inflation One Step Ahead: A "Bottom to Top" Approach

    Directory of Open Access Journals (Sweden)

    Alonso, Julio César

    2017-06-01

    Full Text Available La estructura jerárquica del Índice de Precios al Consumidor (IPC) de Colombia permite calcular la inflación como una combinación lineal de sus subcomponentes. Nuestra aproximación implica emplear modelos SARIMA para pronosticar cada componente del IPC y crear un pronóstico de la inflación como una combinación lineal de los pronósticos individuales; es decir, una aproximación "de abajo hacia arriba". Se evalúa el desempeño fuera de muestra de los pronósticos para el siguiente mes de 12 métodos que emplean una aproximación "de abajo hacia arriba". Estos métodos son comparados con un pronóstico agregado de la inflación empleando un modelo SARIMA para el IPC total. Nuestros resultados muestran que emplear un método "de abajo hacia arriba" para pronosticar la inflación del siguiente mes tiene un mejor comportamiento que emplear un modelo SARIMA agregado. || The hierarchical structure of the Colombian Consumer Price Index (CPI) makes it possible to calculate inflation as a linear combination of its subcomponents. We use SARIMA models to forecast each component of the CPI and construct a forecast of inflation as a linear combination of the forecasts of these components, i.e. a "bottom to top" approach. In this paper, we assess the out-of-sample performance of the one-step-ahead forecasts of 12 "bottom to top" methodologies. These methods are compared with an aggregate forecast using a SARIMA model. Our results show that a "bottom to top" method to forecast inflation outperforms an aggregate approach for the case of monthly inflation in Colombia.
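    The "bottom to top" idea can be sketched as: forecast each subcomponent separately, then combine the component forecasts with the basket weights to obtain the headline forecast. The weights, the synthetic component series, and the AR(1) stand-in for the paper's SARIMA models are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_forecast(series):
    """One-step-ahead forecast from a least-squares AR(1) fit, a simple
    stand-in for the per-component SARIMA models used in the paper."""
    y, x = series[1:], series[:-1]
    A = np.column_stack([x, np.ones_like(x)])
    phi, c = np.linalg.lstsq(A, y, rcond=None)[0]
    return phi * series[-1] + c

# Five invented monthly inflation subcomponents and their basket weights
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])    # shares, sum to 1
components = [0.3 + rng.standard_normal(60).cumsum() * 0.01 for _ in range(5)]

component_fc = np.array([ar1_forecast(s) for s in components])
headline_fc = float(weights @ component_fc)           # "bottom to top" combination
print(round(headline_fc, 3))
```

    The aggregate alternative evaluated in the paper would instead fit one model directly to the weighted headline series.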

  7. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid it is important to predict the solar radiation accurately, since forecast errors can lead to significant costs. Recently, the growing number of statistical approaches to this problem has yielded a prolific literature. In general terms, the main research discussion is centred on selecting the "best" forecasting technique in accuracy terms. However, users of such forecasts require, apart from point forecasts, information about the variability of those forecasts in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
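    The basic construction behind such intervals is to wrap a point forecast with quantiles of the historical forecast-error distribution. The paper estimates that distribution with kernel density and volatility models; the sketch below uses plain empirical quantiles as a simpler stand-in, with invented error data and an invented irradiance forecast:

```python
import numpy as np

# Sketch: build a prediction interval for a point forecast from the
# distribution of past forecast errors.
rng = np.random.default_rng(2)
past_errors = rng.normal(0.0, 20.0, size=1000)   # historical errors (W/m2), invented

point_fc = 450.0                                  # tomorrow's point forecast (W/m2)
alpha = 0.10                                      # 90% nominal coverage
lo, hi = np.quantile(past_errors, [alpha / 2, 1 - alpha / 2])
interval = (point_fc + lo, point_fc + hi)
width = interval[1] - interval[0]
print(round(width, 1))
```

    The paper's contribution is precisely in sharpening this step: a good density or volatility model yields the nominal coverage with a narrower average width than raw empirical quantiles.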

  8. Modelling and Forecasting Stock Price Movements with Serially Dependent Determinants

    Directory of Open Access Journals (Sweden)

    Rasika Yatigammana

    2018-05-01

    Full Text Available The direction of price movements is analysed under an ordered probit framework, recognising the importance of accounting for discreteness in price changes. By extending the work of Hausman et al. (1972) and Yang and Parwada (2012), this paper focuses on improving the forecast performance of the model while infusing a more practical perspective by enhancing flexibility. This is achieved by extending the existing framework to generate short-term multi-period-ahead forecasts for better decision making, whilst considering the serial dependence structure. This approach enhances the flexibility and adaptability of the model to future price changes, particularly targeting risk minimisation. Empirical evidence is provided, based on seven stocks listed on the Australian Securities Exchange (ASX). The prediction success varies between 78 and 91 per cent for in-sample and out-of-sample forecasts for both the short term and the long term.

  9. Forecasting return products in an integrated forward/reverse supply chain utilizing an ANFIS

    Directory of Open Access Journals (Sweden)

    Kumar D. Thresh

    2014-09-01

    Full Text Available Interest in Closed-Loop Supply Chain (CLSC) issues is growing day by day within academia, companies, and customers. Many papers discuss the profitability or cost reduction impacts of remanufacturing, but a very important point is almost missing. Indeed, there is no guarantee about the amounts of return products, even if we know a lot about the demands for first products. This uncertainty is due to reasons such as companies' capabilities in collecting End-of-Life (EOL) products, customers' interest in returning them (and current incentives), and other independent collectors. The aim of this paper is to deal with this important gap concerning the uncertainties of return products. Therefore, we discuss the forecasting method of return products which have their own open-loop supply chain. We develop an integrated two-phase methodology to cope with the closed-loop supply chain design and planning problem. In the first phase, an Adaptive Network Based Fuzzy Inference System (ANFIS) is presented to handle the uncertainties in the amounts of return products and to determine the forecasted return rates. In the second phase, and based on the results of the first one, the proposed multi-echelon, multi-product, multi-period, closed-loop supply chain network is optimized. The second-phase optimization is undertaken using general exact solvers in order to achieve the global optimum. Finally, the performance of the proposed forecasting method is evaluated over 25 periods using a numerical example, which contains a pattern in the returning of products. The results reveal acceptable performance of the proposed two-phase optimization method. Based on them, such forecasting approaches can be applied to real-case CLSC problems in order to achieve more reliable design and planning of the network.

  10. Short-term electricity demand and gas price forecasts using wavelet transforms and adaptive models

    International Nuclear Information System (INIS)

    Nguyen, Hang T.; Nabney, Ian T.

    2010-01-01

    This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches of combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that the forecasting accuracy is significantly improved by using the WT and adaptive models. The best models on the electricity demand/gas price forecast are the adaptive MLP/GARCH with the multicomponent forecast; their NMSEs are 0.02314 and 0.15384 respectively. (author)
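    The "multicomponent forecast" approach described above can be sketched as: decompose the series, forecast each component with its own model, and sum the component forecasts. The paper uses a wavelet transform and MLP/GARCH models; the sketch below substitutes a crude moving-average approximation/detail split and AR(1) fits, on an invented series, purely to show the pipeline shape:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200)
series = 10 + 0.05 * t + np.sin(t / 5.0) + 0.1 * rng.standard_normal(200)

# Crude two-component decomposition (stand-in for the wavelet transform)
k = 8
smooth = np.convolve(series, np.ones(k) / k, mode="valid")  # approximation
detail = series[k - 1:] - smooth                            # residual detail

def ar1_step(x):
    """One-step forecast from a least-squares AR(1) with intercept."""
    A = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    phi, c = np.linalg.lstsq(A, x[1:], rcond=None)[0]
    return phi * x[-1] + c

# Forecast each component separately, then recombine
forecast = float(ar1_step(smooth) + ar1_step(detail))
print(round(forecast, 2))
```

    The alternative "direct forecast" in the paper feeds all components into one model at once; the comparison between the two is one of the paper's empirical questions.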

  11. Flood Forecasting Based on TIGGE Precipitation Ensemble Forecast

    Directory of Open Access Journals (Sweden)

    Jinyin Ye

    2016-01-01

    Full Text Available TIGGE (THORPEX International Grand Global Ensemble) was a major part of THORPEX (The Observing System Research and Predictability Experiment). It integrates ensemble precipitation products from all the major forecast centers in the world and provides systematic evaluation of the multimodel ensemble prediction system. Development of a meteorologic-hydrologic coupled flood forecasting model and early warning model based on the TIGGE precipitation ensemble forecast can provide flood probability forecasts, extend the lead time of the flood forecast, and gain more time for decision-makers to make the right decision. In this study, precipitation ensemble forecast products from ECMWF, NCEP, and CMA are used to drive the distributed hydrologic model TOPX. We focus on the Yi River catchment and aim to build a flood forecast and early warning system. The results show that the meteorologic-hydrologic coupled model can satisfactorily predict the flow process of four flood events. The predicted occurrence times of peak discharges are close to the observations. However, the magnitudes of the peak discharges differ significantly due to the varying performance of the ensemble prediction systems. The coupled forecasting model can accurately predict the peak time and the corresponding risk probability of peak discharge based on the probability distribution of peak time and flood warning, which can provide users a strong theoretical foundation and valuable information as a promising new approach.
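    The elementary step behind such probability forecasts is turning an ensemble of simulated peak discharges into a risk probability: the fraction of members exceeding the warning threshold. The member values and threshold below are invented, not from the Yi River study:

```python
import numpy as np

# Sketch: flood risk probability from an ensemble of peak-discharge forecasts.
members = np.array([820.0, 1150.0, 990.0, 1310.0, 1040.0,
                    870.0, 1220.0, 1580.0, 760.0, 1410.0])  # m3/s, invented
threshold = 1000.0                                           # warning level (m3/s)

p_exceed = float(np.mean(members > threshold))               # fraction above threshold
print(p_exceed)  # 6 of 10 members exceed -> 0.6
```

    In the study this fraction comes from hydrologic simulations driven by each precipitation member, and an analogous distribution over members gives the probability of the peak time.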

  12. Climatological attribution of wind power ramp events in East Japan and their probabilistic forecast based on multi-model ensembles downscaled by analog ensemble using self-organizing maps

    Science.gov (United States)

    Ohba, Masamichi; Nohara, Daisuke; Kadokura, Shinji

    2016-04-01

    Severe storms or other extreme weather events can interrupt the operation of wind turbines at large scale, causing unexpected "wind ramp events". In this study, we present an application of self-organizing maps (SOMs) for climatological attribution of wind ramp events and for their probabilistic prediction. The SOM is an automatic data-mining clustering technique, which allows us to summarize a high-dimensional data space in terms of a set of reference vectors. The SOM is applied to analyze and connect the relationship between atmospheric patterns over Japan and wind power generation. The SOM is employed on sea level pressure derived from the JRA55 reanalysis over the target area (Tohoku region in Japan), whereby a two-dimensional lattice of weather patterns (WPs) classified during the 1977-2013 period is obtained. To compare with the atmospheric data, long-term wind power generation is reconstructed by using the high-resolution surface observation network AMeDAS (Automated Meteorological Data Acquisition System) in Japan. Our analysis extracts seven typical WPs, which are linked to frequent occurrences of wind ramp events. Probabilistic forecasts of wind power generation and ramps are conducted by using the obtained SOM. The probabilities are derived from the multiple SOM lattices based on the matching of output from the TIGGE multi-model global forecasts to the WPs on the lattices. Since this method effectively accounts for the empirical uncertainties in the historical data, wind power generation and ramps are probabilistically forecasted from the forecasts of global models. The forecasts of wind power generation and ramp events show relatively good skill scores under the downscaling technique. It is expected that the results of this study provide better guidance to the user community and contribute to future development of a system operation model for the transmission grid operator.
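    A SOM organizes reference vectors on a 2-D lattice so that nearby lattice nodes represent similar patterns. The minimal training loop below runs on invented 2-D data with an invented 4×4 lattice and schedule; the study's actual SOM operates on sea-level-pressure fields with its own configuration:

```python
import numpy as np

# Minimal self-organizing map: find the best-matching unit (BMU) for each
# sample and pull the BMU and its lattice neighbours toward the sample.
rng = np.random.default_rng(4)
data = rng.standard_normal((500, 2))            # invented 2-D samples

rows, cols = 4, 4
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
weights = rng.standard_normal((rows * cols, 2)) # reference vectors

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                 # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.5        # decaying neighbourhood radius
    for x in data:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)   # lattice distances
        h = np.exp(-d2 / (2 * sigma ** 2))             # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)

# Quantisation error: mean distance from each sample to its nearest unit
qe = float(np.mean([np.min(np.linalg.norm(weights - x, axis=1)) for x in data]))
print(round(qe, 3))
```

    Once trained, attribution amounts to mapping each day (or each forecast member) to its BMU and reading off the statistics, such as ramp frequency, associated with that weather pattern.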

  13. A Research on Development of The Multi-mode Flood Forecasting System Version Management

    Science.gov (United States)

    Shen, J.-C.; Chang, C. H.; Lien, H. C.; Wu, S. J.; Horng, M. J.

    2009-04-01

    With global economic and technological development, the degree of urbanization and population density have risen. At the same time, natural buffer spaces and resources have been weakened year after year, which not only makes potential environmental disasters more and more serious but also expands the economic losses and the damage to the natural environment that disasters cause at all levels. In view of this, countries around the world actively participate in cross-sectoral integration of disaster prevention technology research and development; in addition, the development of specialized disaster prevention technologies, network integration technology, high-speed data transmission, mechanisms for supporting disaster management decision-making, and cross-border global disaster information networks have become international trends in disaster prevention science and technology. In Taiwan, natural disasters have in recent years caused many losses of people's lives and property, so the establishment of disaster prevention applications has become very important. FEWS_Taiwan, a flood warning system developed by Delft Hydraulics and introduced by the Water Resources Agency (WRA), provides functionality for users to modify contents and to add basins, regions, data sources, models and so on. Despite this advantage, version differences arising from different users or different teams bring difficulties for synchronization and integration. At the same time, different research teams also add different kinds of meteorological and hydrological data. From the perspective of the WRA, standard operation procedures for system integration need to be planned so that the cost of version control due to version differences is reduced or eliminated. As for FEWS_Taiwan, this

  14. Seasonal maximum temperature prediction skill over Southern Africa: 1- vs 2-tiered forecasting systems

    CSIR Research Space (South Africa)

    Lazenby, MJ

    2011-09-01

    Full Text Available Seasonal maximum temperature prediction skill over Southern Africa: 1- vs. 2-tiered forecasting systems. Melissa J. Lazenby, University of Pretoria, Private Bag X20, Pretoria, 0028, South Africa; Willem A. Landman, Council for Scientific and Industrial Research.

  15. Impact of GFZ's Effective Angular Momentum Forecasts on Polar Motion Prediction

    Science.gov (United States)

    Dill, Robert; Dobslaw, Henryk

    2017-04-01

    The Earth System Modelling group at GeoForschungsZentrum (GFZ) Potsdam now offers 6-day forecasts of Earth rotation excitation due to atmospheric, oceanic, and hydrologic angular momentum changes that are consistent with its 40-year-long EAM series. Those EAM forecasts are characterized by an improved long-term consistency due to the introduction of a time-invariant high-resolution reference topography into the AAM processing that accounts for occasional NWP model changes. In addition, all tidal signals from both atmosphere and ocean have been separated, and the temporal resolution of both AAM and OAM has been increased to 3 hours. Analysis of an extended set of EAM short-term hindcasts revealed positive prediction skill for up to 6 days into the future when compared to a persistent forecast. Whereas UT1 predictions in particular rely on an accurate AAM forecast, skillful polar motion prediction requires high-quality OAM forecasts as well. We will present in this contribution the results from a multi-year hindcast experiment, demonstrating that the polar motion prediction currently available from Bulletin A can be improved, in particular for lead times between 2 and 5 days, by incorporating OAM forecasts. We will also report early results obtained at Observatoire de Paris on predicting polar motion by integrating GFZ's 6-day EAM forecasts into the Liouville equation in a routine setting that fully takes into account the operational latencies of all required input products.

  16. An overview of wind power forecast types and their use in large-scale integration of wind power

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov [ENFOR A/S, Horslholm (Denmark); Madsen, Henrik [Technical Univ. of Denmark, Lyngby (Denmark). Informatics and Mathematical Modelling

    2011-07-01

    Wind power forecast characteristics are described, and it is shown how analyses of actual decision problems can be used to derive the forecast characteristics important in a given situation. Generally, characteristics related to resolution in space and time, together with the required maximal forecast horizon, are easily identified. However, identification of the forecast characteristics required for optimal decision support requires a more thorough investigation, which is illustrated by examples. Generally, quantile forecasts of the future wind power production are required, but the transformation of a quantile forecast into actual decisions is highly dependent on the precise formulation of the decision problem. Furthermore, when the consequences of neighbouring time steps interact, quantile forecasts are not sufficient. It is argued that a general solution in such cases is to base the decision on reliable scenarios of the future wind power production. (orig.)
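    How a quantile forecast maps onto a decision can be illustrated with a newsvendor-style bidding problem: if producing less than the bid costs p_short per MWh and producing more costs p_long, minimizing expected imbalance cost leads to bidding the quantile p_long/(p_short + p_long) of the production distribution. The penalty rates and the scenario distribution below are invented, and this particular decision rule is a textbook illustration rather than one taken from the paper:

```python
import numpy as np

# Sketch: from a production distribution to a cost-optimal market bid.
# Setting the derivative of expected cost
#   E[p_short*(b - X)+ + p_long*(X - b)+]
# to zero gives F(b) = p_long / (p_short + p_long).
p_short, p_long = 40.0, 10.0                       # EUR/MWh penalty rates, invented
tau = p_long / (p_short + p_long)                  # optimal quantile = 0.2

rng = np.random.default_rng(5)
production_scenarios = rng.gamma(4.0, 25.0, 5000)  # MWh outcomes, invented

bid = float(np.quantile(production_scenarios, tau))
print(round(bid, 1), tau)
```

    Note the asymmetry: because shortfalls are four times as expensive as surpluses here, the optimal bid sits well below the mean production, which is exactly why point forecasts alone are insufficient for such decisions.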

  17. Wind power forecast using wavelet neural network trained by improved Clonal selection algorithm

    International Nuclear Information System (INIS)

    Chitsaz, Hamed; Amjady, Nima; Zareipour, Hamidreza

    2015-01-01

    Highlights: • Presenting a Morlet wavelet neural network for wind power forecasting. • Proposing an improved Clonal selection algorithm for training the model. • Applying the Maximum Correntropy Criterion to evaluate the training performance. • Extensive testing of the proposed wind power forecast method on real-world data. - Abstract: With the integration of wind farms into electric power grids, accurate wind power prediction is becoming increasingly important for the operation of these power plants. In this paper, a new forecasting engine for wind power prediction is proposed. The proposed engine has the structure of a Wavelet Neural Network (WNN), with the activation functions of the hidden neurons constructed from multi-dimensional Morlet wavelets. This forecast engine is trained by a new improved Clonal selection algorithm, which optimizes the free parameters of the WNN for wind power prediction. Furthermore, the Maximum Correntropy Criterion (MCC) is utilized instead of Mean Squared Error as the error measure in the training phase of the forecasting model. The proposed wind power forecaster is tested with real-world hourly data of system-level wind power generation in Alberta, Canada. In order to demonstrate the efficiency of the proposed method, it is compared with several other wind power forecast techniques. The obtained results confirm the validity of the developed approach.

  18. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms which integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS: the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame, in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the two-step procedure, which makes use of the traditional bundle adjustment, is presented.
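    The quantities being calibrated, lever-arm offset and boresight rotation, can be made concrete: given the absolute pose (rotation R, position t) of two cameras in a common frame, the mounting parameters of camera 2 relative to camera 1 follow from simple frame composition. The poses below are invented, and this is only the geometric definition, not the paper's estimation procedure:

```python
import numpy as np

def rot_z(a):
    """Rotation by angle a (rad) about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Invented absolute poses of two cameras in a common mapping frame
R1, t1 = rot_z(0.10), np.array([5.0, 2.0, 1.5])
R2, t2 = rot_z(0.35), np.array([5.2, 2.1, 1.5])

boresight = R1.T @ R2            # relative rotation, camera 1 -> camera 2
lever_arm = R1.T @ (t2 - t1)     # offset of camera 2 in camera 1's frame

angle = float(np.arctan2(boresight[1, 0], boresight[0, 0]))  # 0.35 - 0.10
print(round(angle, 2), np.round(lever_arm, 3))
```

    A calibration procedure estimates these mounting parameters so that they stay fixed as the platform moves, rather than recomputing them from per-epoch poses as done here for illustration.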

  19. Forecasting Housing Approvals in Australia: Do Forecasters Herd?

    DEFF Research Database (Denmark)

    Stadtmann, Georg; Pierdzioch; Rülke

    2012-01-01

    Price trends in housing markets may reflect herding of market participants. A natural question is whether such herding, to the extent that it occurred, reflects herding in forecasts of professional forecasters. Using more than 6,000 forecasts of housing approvals for Australia, we did not find...

  20. Gold sales forecasting: The Box-Jenkins methodology

    Directory of Open Access Journals (Sweden)

    Johannes Tshepiso Tsoku

    2017-02-01

Full Text Available The study employs the Box-Jenkins Methodology to forecast South African gold sales. For a resource economy like South Africa, where metals and minerals account for a high proportion of GDP and export earnings, the decline in gold sales is very disturbing. The Box-Jenkins time series technique was used to perform time series analysis of monthly gold sales for the period January 2000 to June 2013 with the following steps: model identification, model estimation, diagnostic checking and forecasting. Furthermore, the prediction accuracy is tested using the mean absolute percentage error (MAPE). From the analysis, a seasonal ARIMA(4,1,4)×(0,1,1)12 model was found to be the “best fit model”, with an MAPE value of 11% indicating that the model is fit to be used to predict or forecast future gold sales for South Africa. In addition, the forecast values show that there will be a decrease in the overall gold sales for the first six months of 2014. It is hoped that the study will help the public and private sectors to understand the gold sales or output scenario and later plan the gold mining activities in South Africa. Furthermore, it is hoped that this research paper has demonstrated the significance of the Box-Jenkins technique for this area of research and that it will be applied in the future.
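
The accuracy measure used above, the mean absolute percentage error (MAPE), is straightforward to compute. A small sketch with hypothetical monthly sales figures (not the paper's data):

```python
def mape(actual, forecast):
    """Mean absolute percentage error in percent; values near 10%, as in
    the study above, indicate a usable fit.  Zero actuals are skipped
    because the percentage error is undefined for them."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs((a - f) / a) for a, f in pairs) / len(pairs)

# Hypothetical monthly gold-sales figures and forecasts (illustrative only).
actual = [100.0, 120.0, 90.0, 110.0]
forecast = [90.0, 130.0, 95.0, 100.0]
```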

  1. Adaptively smoothed seismicity earthquake forecasts for Italy

    Directory of Open Access Journals (Sweden)

    Yan Y. Kagan

    2010-11-01

Full Text Available We present a model for estimation of the probabilities of future earthquakes of magnitudes m ≥ 4.95 in Italy. This model is a modified version of that proposed for California, USA, by Helmstetter et al. [2007] and Werner et al. [2010a], and it approximates seismicity using a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We have estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog, and a longer instrumental and historic catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and reliable, we used small earthquakes of m ≥ 2.95 to reveal active fault structures and probable future epicenters. By calibrating the model with these two catalogs of different durations to create two forecasts, we intend to quantify the loss (or gain) of predictability incurred when only a short, but recent, data record is available. Both forecasts were scaled to five and ten years, and have been submitted to the Italian prospective forecasting experiment of the global Collaboratory for the Study of Earthquake Predictability (CSEP). An earlier forecast from the model was submitted by Helmstetter et al. [2007] to the Regional Earthquake Likelihood Model (RELM) experiment in California, and with more than half of the five-year experimental period over, the forecast has performed better than the others.
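
The adaptive spatial kernel described above can be sketched in miniature: each past epicenter gets a Gaussian kernel whose bandwidth is its distance to the k-th nearest neighbour, so the smoothed rate is sharp in dense seismic zones and broad in sparse ones. The coordinates, the choice of k, and the flat 2-D geometry are illustrative simplifications of the paper's method:

```python
import math

def knn_bandwidths(points, k=2):
    """Adaptive bandwidth per epicenter: distance to its k-th nearest
    neighbour (small in clusters, large in sparse regions)."""
    bws = []
    for i, (xi, yi) in enumerate(points):
        d = sorted(math.hypot(xi - xj, yi - yj)
                   for j, (xj, yj) in enumerate(points) if j != i)
        bws.append(d[k - 1])
    return bws

def smoothed_density(x, y, points, bandwidths):
    """Smoothed seismicity rate at (x, y): sum of normalized 2-D Gaussian
    kernels centred on the past epicenters."""
    total = 0.0
    for (xi, yi), h in zip(points, bandwidths):
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        total += math.exp(-r2 / (2.0 * h * h)) / (2.0 * math.pi * h * h)
    return total

# Three tightly clustered epicenters and one isolated event (toy data).
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
bw = knn_bandwidths(pts, k=2)
```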

  2. Study of hourly and daily solar irradiation forecast using diagonal recurrent wavelet neural networks

    International Nuclear Information System (INIS)

    Cao Jiacong; Lin Xingchun

    2008-01-01

In recent years, an accurate forecast of solar irradiation has been required for various solar energy applications and environmental impact analyses. Comparatively, various irradiation forecast models based on artificial neural networks (ANN) perform much better in accuracy than many conventional prediction models. However, the forecast precision of most existing ANN-based forecast models has not been satisfactory to researchers and engineers so far, and the generalization capability of these networks needs further improvement. Combining the prominent dynamic properties of a recurrent neural network (RNN) with the enhanced ability of a wavelet neural network (WNN) in mapping nonlinear functions, a diagonal recurrent wavelet neural network (DRWNN) is newly established in this paper to perform fine forecasting of hourly and daily global solar irradiance. Some additional steps, e.g. applying historical information of cloud cover to sample data sets and the cloud cover from the weather forecast to network input, are adopted to help enhance the forecast precision. Besides, a specially scheduled two-phase training algorithm is adopted. As examples, both hourly and daily irradiance forecasts are completed using sample data sets in Shanghai and Macau, and comparisons between irradiation models show that the DRWNN models are definitely more accurate
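
A diagonal recurrent wavelet network combines self-recurrent hidden units (each neuron feeds back only to itself) with wavelet activations. The sketch below is a toy one-layer version; the Morlet-style activation constant and the weights are assumptions for illustration, not the paper's architecture:

```python
import math

def morlet(x):
    """Real-valued Morlet-style wavelet, a common wavelon activation
    (the record does not specify its wavelet; this choice is illustrative)."""
    return math.cos(1.75 * x) * math.exp(-x * x / 2.0)

class DiagonalRecurrentWaveletLayer:
    """Diagonal recurrence: each hidden wavelon i keeps one self-feedback
    weight w_diag[i], so
        s_i(t) = w_diag[i] * h_i(t-1) + sum_j w_in[i][j] * x_j(t)
        h_i(t) = morlet(s_i(t))
    """
    def __init__(self, w_in, w_diag):
        self.w_in = w_in        # input weights, one row per wavelon
        self.w_diag = w_diag    # one self-recurrent weight per wavelon
        self.h = [0.0] * len(w_diag)

    def step(self, x):
        self.h = [morlet(wd * hi + sum(w * xj for w, xj in zip(row, x)))
                  for row, wd, hi in zip(self.w_in, self.w_diag, self.h)]
        return self.h

# Two wavelons, one input; with zero input the state still evolves
# through the recurrent feedback, which is the "dynamic" property above.
layer = DiagonalRecurrentWaveletLayer([[1.0], [0.5]], [0.3, -0.2])
h1 = layer.step([0.0])
h2 = layer.step([0.0])
```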

  3. Seasonal forecasting of discharge for the Raccoon River, Iowa

    Science.gov (United States)

    Slater, Louise; Villarini, Gabriele; Bradley, Allen; Vecchi, Gabriel

    2016-04-01

The state of Iowa (central United States) is regularly afflicted by severe natural hazards such as the 2008/2013 floods and the 2012 drought. To improve preparedness for these catastrophic events and allow Iowans to make more informed decisions about the most suitable water management strategies, we have developed a framework for medium- to long-range probabilistic seasonal streamflow forecasting for the Raccoon River at Van Meter, an 8900-km2 catchment located in central-western Iowa. Our flow forecasts use statistical models to predict seasonal discharge for low to high flows, with lead forecasting times ranging from one to ten months. Historical measurements of daily discharge are obtained from the U.S. Geological Survey (USGS) at the Van Meter stream gage, and used to compute quantile time series from minimum to maximum seasonal flow. The model is forced with basin-averaged total seasonal precipitation records from the PRISM Climate Group and annual row crop production acreage from the U.S. Department of Agriculture's National Agricultural Statistics Services database. For the forecasts, we use corn and soybean production from the previous year (persistence forecast) as a proxy for the impacts of agricultural practices on streamflow. The monthly precipitation forecasts are provided by eight Global Climate Models (GCMs) from the North American Multi-Model Ensemble (NMME), with lead times ranging from 0.5 to 11.5 months, and a resolution of 1 decimal degree. Additionally, precipitation from the month preceding each season is used to characterize antecedent soil moisture conditions. The accuracy of our modelled (1927-2015) and forecasted (2001-2015) discharge values is assessed by comparison with the observed USGS data. We explore the sensitivity of forecast skill over the full range of lead times, flow quantiles, forecast seasons, and with each GCM. Forecast skill is also examined using different formulations of the statistical models, as well as NMME forecast
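
The quantile preprocessing mentioned above (daily discharge collapsed into seasonal quantile series from minimum to maximum flow) can be sketched as follows; the linear-interpolation scheme and the example flows are illustrative, not the study's exact procedure:

```python
def seasonal_quantiles(daily_flows, probs=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Collapse one season of daily discharge into quantiles spanning the
    seasonal minimum to the seasonal maximum, with linear interpolation
    between order statistics."""
    xs = sorted(daily_flows)
    n = len(xs)
    out = []
    for p in probs:
        pos = p * (n - 1)          # fractional index into the sorted flows
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        out.append(xs[lo] + (pos - lo) * (xs[hi] - xs[lo]))
    return out
```

Repeating this for every season in the record yields one time series per quantile, each of which can then be modelled separately.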

  4. Forecasting freight flows

    DEFF Research Database (Denmark)

    Lyk-Jensen, Stéphanie

    2011-01-01

Trade patterns and transport markets are changing as a result of the growth and globalization of international trade, and forecasting future freight flow has to rely on trade forecasts. Forecasting freight flows is critical for matching infrastructure supply to demand and for assessing investment...... constitute a valuable input to freight models for forecasting future capacity problems.

  5. Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.

    Science.gov (United States)

    Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra

    2014-04-01

To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance.
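
The mean absolute scaled error (MASE) reported above scales the forecast's MAE by the in-sample MAE of the naïve one-step method, so values below 1 (such as the 0.53-0.68 range in the study) mean the forecast beats the naïve benchmark. A sketch with hypothetical visit counts:

```python
def mase(actual, forecast, train):
    """Mean absolute scaled error (Hyndman & Koehler): out-of-sample MAE
    divided by the in-sample MAE of the naive one-step forecast."""
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive = sum(abs(train[i] - train[i - 1])
                for i in range(1, len(train))) / (len(train) - 1)
    return mae / naive

# Hypothetical monthly ED visit counts (illustrative only).
train = [100.0, 110.0, 105.0, 115.0]          # in-sample history
value = mase([120.0, 118.0], [117.0, 121.0], train)
```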

  6. Probabilistic Electricity Price Forecasting Models by Aggregation of Competitive Predictors

    Directory of Open Access Journals (Sweden)

    Claudio Monteiro

    2018-04-01

Full Text Available This article presents original probabilistic price forecasting meta-models (PPFMCP models), by aggregation of competitive predictors, for day-ahead hourly probabilistic price forecasting. The best twenty predictors of the EEM2016 EPF competition are used to create ensembles of hourly spot price forecasts. For each hour, the parameter values of the probability density function (PDF) of a Beta distribution for the output variable (hourly price) can be directly obtained from the expected and variance values associated to the ensemble for such hour, using three aggregation strategies of predictor forecasts corresponding to three PPFMCP models. A Reliability Indicator (RI) and a Loss function Indicator (LI) are also introduced to give a measure of uncertainty of probabilistic price forecasts. The three PPFMCP models were satisfactorily applied to the real-world case study of the Iberian Electricity Market (MIBEL). Results from PPFMCP models showed that PPFMCP model 2, which uses aggregation by weight values according to daily ranks of predictors, was the best probabilistic meta-model from a point of view of mean absolute errors, as well as of RI and LI. PPFMCP model 1, which uses the averaging of predictor forecasts, was the second best meta-model. PPFMCP models allow evaluations of risk decisions based on the price to be made.
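
The abstract notes that the Beta PDF parameters are obtained directly from the ensemble's expected value and variance. Assuming the standard method-of-moments mapping (and a price already rescaled to [0, 1]), that step looks like:

```python
def beta_from_moments(mean, var):
    """Method-of-moments Beta(alpha, beta) parameters from a mean in (0, 1)
    and a variance; requires var < mean * (1 - mean) for valid parameters.
        common = mean*(1-mean)/var - 1
        alpha  = mean * common
        beta   = (1 - mean) * common
    """
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# Round-trip check: Beta(2, 2) has mean 0.5 and variance 0.05;
# Beta(6, 14) has mean 0.3 and variance 0.01 (illustrative values).
alpha, beta = beta_from_moments(0.5, 0.05)
a2, b2 = beta_from_moments(0.3, 0.01)
```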

  7. Hybrid Stochastic Forecasting Model for Management of Large Open Water Reservoir with Storage Function

    Science.gov (United States)

    Kozel, Tomas; Stary, Milos

    2017-12-01

The main advantage of stochastic forecasting is the fan of possible values that a deterministic forecast cannot provide: the future development of a random process is described better by stochastic than by deterministic forecasting, and discharge in a measurement profile can be categorized as a random process. The content of this article is the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NS) and zone models, and forecasts values of average monthly flow from input values of average monthly flow, the learned neural network and random numbers. Part of the data was sorted into one moving zone, created around the last measured average monthly flow, and the correlation matrix was assembled only from data belonging to the zone. The model was compiled for forecasts of 1 to 12 months, using backward monthly flows (NS inputs) from 2 to 11 months for model construction. The data were rid of asymmetry with the help of the Box-Cox rule (Box, Cox, 1964), with the value r found by optimization; in the next step the data were transformed to a standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year-long real flow series was used to compile the model: the first 75 years were used for calibration of the model (the matrix of input-output relationships), and the last 15 years were used only for validation. Outputs of the model were compared with the real flow series. For comparison between the real flow series (100% forecast success) and the forecasts, an application to the management of an artificially made reservoir was used. The course of water reservoir management using a Genetic algorithm (GE) + the real flow series was compared with a Fuzzy model (Fuzzy) + the forecast made by the Moving zone model. During the evaluation process the best size of the zone was sought. Results show that the highest number of inputs did not give the best results and the ideal size of the zone is in the interval from 25 to 35, when the course of management was almost the same for
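
The Box-Cox (1964) transform used above to remove the skew of the flow series before mapping it to a standard normal distribution has a simple closed form. A sketch (the transform parameter, called r in the abstract, is denoted lam here):

```python
import math

def box_cox(x, lam):
    """Box-Cox (1964) power transform for a positive observation x:
        y = (x**lam - 1) / lam   if lam != 0
        y = ln(x)                if lam == 0
    The lam == 0 case is the continuous limit of the power case."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam
```

In the model above the parameter would be chosen by optimization over the flow record (e.g. maximizing normality of the transformed flows).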

  8. Spatial forecast of landslides in three gorges based on spatial data mining.

    Science.gov (United States)

    Wang, Xianmin; Niu, Ruiqing

    2009-01-01

The Three Gorges is a region with a very high landslide distribution density and a concentrated population. In Three Gorges there are often landslide disasters, and the potential risk of landslides is tremendous. In this paper, focusing on Three Gorges, which has a complicated landform, spatial forecasting of landslides is studied by establishing 20 forecast factors (spectra, texture, vegetation coverage, water level of reservoir, slope structure, engineering rock group, elevation, slope, aspect, etc.). China-Brazil Earth Resources Satellite (Cbers) images were adopted based on a C4.5 decision tree to mine spatial forecast landslide criteria in Guojiaba Town (Zhigui County) in Three Gorges and, based on this knowledge, perform intelligent spatial landslide forecasts for Guojiaba Town. All landslides lie in the dangerous and unstable regions, so the forecast result is good. The method proposed in the paper is compared with seven other methods: IsoData, K-Means, Mahalanobis Distance, Maximum Likelihood, Minimum Distance, Parallelepiped and Information Content Model. The experimental results show that the method proposed in this paper has a high forecast precision, noticeably higher than that of the other seven methods.
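
C4.5, used above to mine the forecast criteria, selects splits by gain ratio (information gain normalized by the split information of the candidate feature) rather than raw gain. A minimal sketch with hypothetical landslide labels; the feature values are illustrative:

```python
import math

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def gain_ratio(feature, labels):
    """C4.5's split criterion: information gain divided by split
    information, which penalizes features with many distinct values."""
    n = len(labels)
    gain = entropy(labels)
    split_info = 0.0
    for v in set(feature):
        subset = [y for f, y in zip(feature, labels) if f == v]
        w = len(subset) / n
        gain -= w * entropy(subset)
        split_info -= w * math.log2(w)
    return gain / split_info if split_info else 0.0
```

A perfectly predictive binary feature scores 1.0; a feature independent of the labels scores 0.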

  9. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR

  10. Robust forecast comparison

    OpenAIRE

    Jin, Sainan; Corradi, Valentina; Swanson, Norman

    2015-01-01

    Forecast accuracy is typically measured in terms of a given loss function. However, as a consequence of the use of misspecified models in multiple model comparisons, relative forecast rankings are loss function dependent. This paper addresses this issue by using a novel criterion for forecast evaluation which is based on the entire distribution of forecast errors. We introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we establish a ...

  11. Multi-Step Ka/Ka Dichroic Plate with Rounded Corners for NASA's 34m Beam Waveguide Antenna

    Science.gov (United States)

    Veruttipong, Watt; Khayatian, Behrouz; Hoppe, Daniel; Long, Ezra

    2013-01-01

A multi-step Ka/Ka dichroic plate Frequency Selective Surface (FSS) structure is designed, manufactured and tested for use in NASA's Deep Space Network (DSN) 34m Beam Waveguide (BWG) antennas. The proposed design allows ease of manufacturing and the ability to handle the increased transmit power (reflected off the FSS) of the DSN BWG antennas from 20 kW to 100 kW. The dichroic is designed using HFSS and results agree well with measured data, considering the manufacturing tolerances that could be achieved on the dichroic.

  12. High-spatial-resolution electron density measurement by Langmuir probe for multi-point observations using tiny spacecraft

    Science.gov (United States)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Trondsen, E.; Clausen, L. B. N.; Miloch, W. J.; Moen, J. I.

    2017-11-01

    A method for evaluating electron density using a single fixed-bias Langmuir probe is presented. The technique allows for high-spatio-temporal resolution electron density measurements, which can be effectively carried out by tiny spacecraft for multi-point observations in the ionosphere. The results are compared with the multi-needle Langmuir probe system, which is a scientific instrument developed at the University of Oslo comprising four fixed-bias cylindrical probes that allow small-scale plasma density structures to be characterized in the ionosphere. The technique proposed in this paper can comply with the requirements of future small-sized spacecraft, where the cost-effectiveness, limited space available on the craft, low power consumption and capacity for data-links need to be addressed. The first experimental results in both the plasma laboratory and space confirm the efficiency of the new approach. Moreover, detailed analyses on two challenging issues when deploying the DC Langmuir probe on a tiny spacecraft, which are the limited conductive area of the spacecraft and probe surface contamination, are presented in the paper. It is demonstrated that the limited conductive area, depending on applications, can either be of no concern for the experiment or can be resolved by mitigation methods. Surface contamination has a small impact on the performance of the developed probe.

  13. One-Step Generation of Multi-Qubit GHZ and W States in Superconducting Transmon Qubit System

    International Nuclear Information System (INIS)

    Gao Guilong; Huang Shousheng; Wang Mingfeng; Jiang Nianquan; Cai Genchang

    2012-01-01

    We propose a one-step method to prepare multi-qubit GHZ and W states with transmon qubits capacitively coupled to a superconducting transmission line resonator (TLR). Compared with the scheme firstly introduced by Wang et al. [Phys. Rev. B 81 (2010) 104524], our schemes have longer dephasing time and much shorter operation time because the transmon qubits we used are not only more robust to the decoherence and the unavoidable parameter variations, but also have much stronger coupling constant with TLR. Based on the favourable properties of transmons and TLR, our method is more feasible in experiment. (general)

  14. Bayesian modeling of the mass and density of asteroids

    Science.gov (United States)

    Dotson, Jessie L.; Mathias, Donovan

    2017-10-01

Mass and density are two of the fundamental properties of any object. In the case of near-Earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to (potential) impact and planning possible mitigation options. The density of an asteroid can illuminate the structure of the asteroid. A low density can be indicative of a rubble pile structure whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter to understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass/density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass-radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the mass and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probabilistically forecasting masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties and intrinsic scatter.
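
While the record describes hierarchical Bayesian models, the basic idea of a probabilistic mass forecast under uncertainty can be illustrated with a toy Monte Carlo propagation through m = rho * (pi/6) * d^3. The diameter, its uncertainty, and the uniform bulk-density prior below are hypothetical, not the paper's model:

```python
import math
import random

def mass_samples(diameter_km, diam_sigma, density_lo, density_hi,
                 n=10000, seed=1):
    """Toy probabilistic mass forecast: propagate a Gaussian diameter
    uncertainty and a uniform bulk-density prior through the mass of a
    sphere, m = rho * (pi/6) * d^3, by Monte Carlo sampling."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        d = rng.gauss(diameter_km, diam_sigma) * 1000.0   # km -> m
        rho = rng.uniform(density_lo, density_hi)          # kg/m^3
        out.append(rho * math.pi / 6.0 * d ** 3)
    return out

# Hypothetical 300 m body, 10% diameter uncertainty, rubble-pile to
# stony bulk densities; the result is a mass distribution, not a point.
samples = sorted(mass_samples(0.3, 0.03, 1500.0, 3500.0))
median = samples[len(samples) // 2]
```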

  15. Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique

    Directory of Open Access Journals (Sweden)

    Jan D. Keller

    2008-12-01

Full Text Available The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of ensemble spread of free atmosphere parameters (geopotential and temperature) and ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts with spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3 month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
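
The breeding technique of Toth and Kalnay can be illustrated on a toy scalar model: run a control and a perturbed forecast, rescale their difference back to the initial amplitude, re-add it to the control, and repeat, so the perturbation aligns with the fastest-growing error direction. The logistic map below stands in for the full GME model and is purely illustrative:

```python
def step(x, r=3.7):
    """Toy nonlinear 'model' (chaotic logistic map) standing in for one
    forecast cycle of the real NWP model."""
    return r * x * (1.0 - x)

def breed(x0, delta0=1e-3, cycles=20):
    """One breeding experiment: evolve control and perturbed states,
    rescale the difference to its initial size each cycle, and return
    the final bred perturbation."""
    ctrl, pert = x0, x0 + delta0
    for _ in range(cycles):
        ctrl, pert = step(ctrl), step(pert)
        diff = pert - ctrl
        diff *= delta0 / abs(diff)   # rescale to the initial amplitude
        pert = ctrl + diff           # re-add bred perturbation to control
    return diff

d = breed(0.2)
```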

  16. The NWRA Classification Infrastructure: description and extension to the Discriminant Analysis Flare Forecasting System (DAFFS)

    Science.gov (United States)

    Leka, K. D.; Barnes, Graham; Wagner, Eric

    2018-04-01

    A classification infrastructure built upon Discriminant Analysis (DA) has been developed at NorthWest Research Associates for examining the statistical differences between samples of two known populations. Originating to examine the physical differences between flare-quiet and flare-imminent solar active regions, we describe herein some details of the infrastructure including: parametrization of large datasets, schemes for handling "null" and "bad" data in multi-parameter analysis, application of non-parametric multi-dimensional DA, an extension through Bayes' theorem to probabilistic classification, and methods invoked for evaluating classifier success. The classifier infrastructure is applicable to a wide range of scientific questions in solar physics. We demonstrate its application to the question of distinguishing flare-imminent from flare-quiet solar active regions, updating results from the original publications that were based on different data and much smaller sample sizes. Finally, as a demonstration of "Research to Operations" efforts in the space-weather forecasting context, we present the Discriminant Analysis Flare Forecasting System (DAFFS), a near-real-time operationally-running solar flare forecasting tool that was developed from the research-directed infrastructure.
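
The extension of discriminant analysis to probabilistic classification via Bayes' theorem can be sketched in one dimension with equal-variance Gaussian class densities (a parametric simplification; the paper's DA is non-parametric and multi-dimensional, and the parameter values below are illustrative):

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian class-conditional density p(x | class)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def flare_probability(x, mu_quiet, mu_flare, sigma, prior_flare):
    """Bayes' theorem on top of the two discriminant densities:
        P(flare | x) = p(x|flare) P(flare) /
                       [p(x|flare) P(flare) + p(x|quiet) P(quiet)]
    """
    pf = gaussian(x, mu_flare, sigma) * prior_flare
    pq = gaussian(x, mu_quiet, sigma) * (1.0 - prior_flare)
    return pf / (pf + pq)
```

With equal priors the posterior is 0.5 exactly halfway between the class means, and a lower flare prior pulls the posterior down, which is how climatological rarity enters the forecast.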

  17. Performance Assessment of Black Box Capacity Forecasting for Multi-Market Trade Application

    Directory of Open Access Journals (Sweden)

    Pamela MacDougall

    2017-10-01

Full Text Available With the growth of renewable generated electricity in the energy mix, large energy storage and flexible demand, particularly aggregated demand response, is becoming a front runner as a new participant in the wholesale energy markets. One of the biggest barriers for the integration of aggregator services into market participation is knowledge of the current and future flexible capacity. To calculate the available flexibility, the current aggregator pilot and simulation implementations use lower level measurements and device specifications. This type of implementation is not scalable due to computational constraints and could conflict with end-user privacy rights. Black box machine learning approaches have been proven to accurately estimate the available capacity of a cluster of heating devices using only aggregated data. This study will investigate the accuracy of this approach when applied to a heterogeneous virtual power plant (VPP). Firstly, a sensitivity analysis of the machine learning model is performed when varying the underlying device makeup of the VPP. Further, the forecasted flexible capacity of a heterogeneous residential VPP was applied to a trade strategy, which maintains a day-ahead schedule, as well as offers flexibility to the imbalance market. This performance is then compared when using the same strategy with no capacity forecasting, as well as with perfect knowledge. It was shown that the highest average error, regardless of the VPP makeup, was still less than 9%. Further, when applying the forecasted capacity to a trading strategy, 89% of the optimal performance can be met. This resulted in a reduction of monthly costs by approximately 20%.

  18. Forecasting Skill

    Science.gov (United States)

    1981-01-01

for the third and fourth day precipitation forecasts. A marked improvement was shown for the consensus 24 hour precipitation forecast, and small... Zuckerberg (1980) found a small long term skill increase in forecasts of heavy snow events for nine eastern cities. Other National Weather Service...and maximum temperature) are each awarded marks 2, 1, or 0 according to whether the forecast is correct...

  19. Effect of caries-affected dentin on one-step universal and multi-step etch-and-rinse adhesives’ bond strength

    Directory of Open Access Journals (Sweden)

    Clecila MÜLLER

    2017-10-01

Full Text Available Abstract Objective To evaluate the influence of caries-affected dentin on bond strength of a universal one-step and a multi-step etch-and-rinse adhesive system. Material and method Enamel of 60 human third molars with and without caries was removed to expose dentin. The teeth were randomly assigned to six groups: Single Bond Universal (3M ESPE, St. Paul, MN, USA) in etch-and-rinse and in self-etch mode and Prime & Bond NT (Dentsply Co, Konstanz, Germany), all on sound and caries-affected dentin. The smear layer of the 30 sound dentin specimens was standardized by polishing with 600-grit SiC paper under water cooling. Residual infected dentin of the 30 caries-affected specimens was removed with a number 4 CA carbide bur until no soft carious tissue was detectable by tactile-visual inspection. Cylinders of a light-cured composite resin (Filtek Z350 XT, 3M ESPE) were built up using starch tubes and a microshear test was performed until failure. The data were analyzed by one-way ANOVA and Tukey’s post hoc test. Result Significant differences in microshear bond strength (μSBS) were observed for the caries-affected groups, but not for sound dentin. The μSBS of Single Bond Universal was not influenced by the application protocol on sound dentin; however, it was lower in the caries-affected group with both application protocols. The μSBS for Prime & Bond NT was not influenced by the dentin conditions. Conclusion Caries-affected dentin decreased the bond strength of Single Bond Universal in comparison to sound dentin. The bond strength of Prime & Bond NT was not altered by substrate conditions.

  20. The forecasting method of rock-burst and the application based on overlying multi-strata spatial structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Cun-Wen Wang; Fu-Xing Jiang; Qing-Guo Sun; Chun-Jiang Sun; Ming Zhang; Zhen-Wen Ji; Xiu-Feng Zhang [University of Science and Technology, Beijing, Beijing (China). Civil & Environmental Engineering School

    2009-02-15

Based on the overlying multi-strata spatial structure theory and mechanical analysis, the paper discusses the relationship between 'S' shape spatial strata structure movement and mining stress distribution. Coal out-burst forecasting based on 'S' shape spatial strata structure movement was studied. Microseismic monitoring in Huafeng Coal Mine in Shandong Province showed that coal out-burst will occur when the advancing distance of the longwall face is equal to one, two or three times the length of the longwall face respectively. During these periods, the mined areas approach a square shape while the 'S' shape spatial strata structure acts strongly. Based on this, the time and position of coal out-burst in No. 1410 longwall face of Huafeng Coal Mine was predicted. By using large diameter and deep drill holes in the coal seam and deep drill holes with blasting in the roof at those danger areas, the No. 1410 longwall face safely advanced through those danger areas. During those periods the microseismic monitoring system detected very strong mine quakes. 15 refs., 1 fig., 2 tabs.

  1. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  2. Evaluating information in multiple horizon forecasts: The DOE's energy price forecasts

    International Nuclear Information System (INIS)

    Sanders, Dwight R.; Manfredo, Mark R.; Boris, Keith

    2009-01-01

    The United States Department of Energy's (DOE) quarterly price forecasts for energy commodities are examined to determine the incremental information provided at the one- through four-quarter forecast horizons. A direct test for determining information content at alternative forecast horizons, developed by Vuchelen and Gutierrez [Vuchelen, J. and Gutierrez, M.-I. 'A Direct Test of the Information Content of the OECD Growth Forecasts.' International Journal of Forecasting. 21(2005):103-117.], is used. The results suggest that the DOE's price forecasts for crude oil, gasoline, and diesel fuel do indeed provide incremental information out to three quarters ahead, while natural gas and electricity forecasts are informative out to the four-quarter horizon. In contrast, the DOE's coal price forecasts at two, three, and four quarters ahead provide no incremental information beyond that provided for the one-quarter horizon. Recommendations on how to use these results for making forecast adjustments are also provided. (author)
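
    A rough way to operationalize "incremental information at shorter horizons" is a nested (encompassing) regression: does adding the shorter-horizon forecast raise explanatory power over the longer-horizon forecast alone? The sketch below is a simplified stand-in, not the exact Vuchelen-Gutierrez statistic; the helper name and toy data are illustrative.

```python
import numpy as np

def incremental_info(actual, f_long, f_short):
    """Return R^2 of two nested OLS fits: actual on the longer-horizon
    forecast alone, and on both forecasts. A larger second R^2 suggests
    the shorter-horizon forecast carries incremental information."""
    y = np.asarray(actual, dtype=float)
    X1 = np.column_stack([np.ones_like(y), f_long])
    X2 = np.column_stack([np.ones_like(y), f_long, f_short])

    def r2(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    return r2(X1), r2(X2)

# toy example: the one-quarter forecast tracks the outcome more closely
rng = np.random.default_rng(0)
truth = rng.normal(size=200)
f4 = truth + rng.normal(scale=1.0, size=200)   # four-quarter-ahead forecast
f1 = truth + rng.normal(scale=0.3, size=200)   # one-quarter-ahead forecast
r2_long, r2_both = incremental_info(truth, f4, f1)
```

A formal application would add a significance test on the added coefficient rather than compare raw R² values.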

  3. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    Science.gov (United States)

    Roberts, J. Brent; Robertson, Franklin R.; Bosilovich, Michael; Lyon, Bradfield; Funk, Chris

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogeneous hidden Markov model and an analogue-based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.
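
    The analogue-based downscaling approach mentioned can be sketched as a nearest-neighbour lookup: match the coarse forecast anomaly to its closest historical months and average their fine-scale fields. The array shapes, the Euclidean metric, and the helper name below are assumptions, not details from the study.

```python
import numpy as np

def analogue_downscale(coarse_fcst, coarse_hist, fine_hist, k=3):
    """Find the k historical months whose coarse-grid anomaly pattern is
    closest (Euclidean) to the forecast anomaly, and average their
    co-registered fine-grid fields."""
    d = np.linalg.norm(coarse_hist - coarse_fcst, axis=1)  # distance per month
    idx = np.argsort(d)[:k]                                # k best analogues
    return fine_hist[idx].mean(axis=0)

rng = np.random.default_rng(1)
coarse_hist = rng.normal(size=(120, 4))    # 120 historical months, 4 coarse cells
fine_hist = rng.normal(size=(120, 64))     # matching 8x8 fine-scale fields
fcst = coarse_hist[7] + 0.01               # a forecast resembling month 7
downscaled = analogue_downscale(fcst, coarse_hist, fine_hist)
```

Real implementations typically stratify the analogue search by calendar month and add a stochastic disaggregation step for daily intermittency.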

  4. MSSM Forecast for the LHC

    CERN Document Server

    Cabrera, Maria Eugenia; de Austri, Roberto Ruiz

    2009-01-01

    We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of the fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of $M_Z$ is considered. This allows the whole parameter space to be scanned, with arbitrarily large soft terms. Still the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable when using flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. Then we incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e. the forecast of the MSSM. Since not all the experimental i...

  5. Abundance and composition of indigenous bacterial communities in a multi-step biofiltration-based drinking water treatment plant.

    Science.gov (United States)

    Lautenschlager, Karin; Hwang, Chiachi; Ling, Fangqiong; Liu, Wen-Tso; Boon, Nico; Köster, Oliver; Egli, Thomas; Hammes, Frederik

    2014-10-01

    Indigenous bacterial communities are essential for biofiltration processes in drinking water treatment systems. In this study, we examined the microbial community composition and abundance of three different biofilter types (rapid sand, granular activated carbon, and slow sand filters) and their respective effluents in a full-scale, multi-step treatment plant (Zürich, CH). Detailed analysis of organic carbon degradation underpinned biodegradation as the primary function of the biofilter biomass. The biomass was present in concentrations ranging between 2-5 × 10¹⁵ cells/m³ in all filters but was phylogenetically, enzymatically and metabolically diverse. Based on 16S rRNA gene-based 454 pyrosequencing analysis for microbial community composition, similar microbial taxa (predominantly Proteobacteria, Planctomycetes, Acidobacteria, Bacteriodetes, Nitrospira and Chloroflexi) were present in all biofilters and in their respective effluents, but the ratio of microbial taxa was different in each filter type. This change was also reflected in the cluster analysis, which revealed a change of 50-60% in microbial community composition between the different filter types. This study documents the direct influence of the filter biomass on the microbial community composition of the final drinking water, particularly when the water is distributed without post-disinfection. The results provide new insights on the complexity of indigenous bacteria colonizing drinking water systems, especially in different biofilters of a multi-step treatment plant. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.

    Science.gov (United States)

    Araújo, Ricardo de A

    2012-04-01

    Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation arises from all these techniques and is known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in stock market phenomena reconstruction. In this paper, we propose a suitable model inspired by concepts in mathematical morphology (MM) and lattice theory (LT). This model is generically called the increasing morphological perceptron (IMP). Also, we present a gradient steepest-descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the problem of the non-differentiability of morphological operations. Into the learning process we have included a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear time series forecasting problems from the Brazilian stock market. Additionally, two natural-phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial time series. At the end, the obtained results are discussed and compared to results found using models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
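
    The core lattice operation behind such morphological perceptrons is a dilation node, y = max_j(x_j + w_j), whose non-differentiability is typically handled by updating only the maximizing term (a subgradient). The toy single-node sketch below illustrates that idea only; it is not the paper's full IMP, and the names and learning rate are illustrative.

```python
import numpy as np

def dilation_forward(x, w):
    """Morphological (dilation) node: y = max over j of (x_j + w_j)."""
    return np.max(x + w)

def train_step(x, w, target, lr=0.1):
    """Subgradient step for squared error: only the maximizing input's
    weight moves, which sidesteps the non-differentiability of max."""
    j = np.argmax(x + w)
    y = x[j] + w[j]
    w = w.copy()
    w[j] -= lr * (y - target)
    return w, y

# fit the node so its output on a fixed input converges to the target
w = np.zeros(3)
for _ in range(200):
    w, y = train_step(np.array([1.0, 2.0, 0.5]), w, target=5.0)
```

For a fixed input the output approaches the target geometrically, since each step shrinks the error by the factor (1 - lr).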

  7. Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery

    Science.gov (United States)

    Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.

    2017-12-01

    Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3-D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3-D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection-based solar insolation forecasting, obtained from a bottom-up perspective with the spatial resolution and latency needed to predict high-ramp-rate events, are strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost upward-looking visible light CCD sky cameras positioned at 2 km spacing over an area of 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of 200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array a team of 100 citizen scientists using self-owned PDA cameras is being

  8. Inflow forecasting using Artificial Neural Networks for reservoir operation

    Directory of Open Access Journals (Sweden)

    C. Chiamsathit

    2016-05-01

    In this study, multi-layer perceptron (MLP) artificial neural networks have been applied to forecast one-month-ahead inflow for the Ubonratana reservoir, Thailand. To assess how well the forecast inflows performed in the operation of the reservoir, simulations were carried out guided by the system's rule curves. As a basis of comparison, four inflow situations were considered: (1) inflow known and assumed to be the historic (Type A); (2) inflow known and assumed to be the forecast (Type F); (3) inflow known and assumed to be the historic mean for the month (Type M); and (4) inflow unknown, with the release decision conditioned only on the starting reservoir storage (Type N). Reservoir performance was summarised in terms of reliability, resilience, vulnerability and sustainability. It was found that the Type F inflow situation produced the best performance while Type N was the worst performing. This clearly demonstrates the importance of good inflow information for effective reservoir operation.
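
    The reliability/resilience/vulnerability summary used in such studies follows a standard pattern (Hashimoto-style metrics). A minimal sketch, assuming a simple supply-versus-demand deficit definition rather than the paper's exact formulas:

```python
import numpy as np

def performance(supply, demand):
    """Hashimoto-style reservoir performance metrics:
    reliability   - fraction of periods in which demand is fully met
    resilience    - probability a failure period is followed by a success
    vulnerability - mean deficit over failure periods (0 if none)"""
    supply = np.asarray(supply, float)
    demand = np.asarray(demand, float)
    fail = supply < demand
    reliability = 1.0 - fail.mean()
    if fail[:-1].sum() > 0:
        # transitions from failure at t to success at t+1
        resilience = (fail[:-1] & ~fail[1:]).sum() / fail[:-1].sum()
    else:
        resilience = 1.0
    deficits = (demand - supply)[fail]
    vulnerability = deficits.mean() if deficits.size else 0.0
    return reliability, resilience, vulnerability

rel, res, vul = performance([10, 6, 10, 10], [8, 8, 8, 8])
```

In the toy series one of four periods fails (by 2 units) and recovers immediately, giving reliability 0.75, resilience 1.0 and vulnerability 2.0.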

  9. Seasonal forecasting of groundwater levels in natural aquifers in the United Kingdom

    Science.gov (United States)

    Mackay, Jonathan; Jackson, Christopher; Pachocka, Magdalena; Brookshaw, Anca; Scaife, Adam

    2014-05-01

    Groundwater aquifers comprise the world's largest freshwater resource and provide resilience to climate extremes which could become more frequent under future climate change. Prolonged dry conditions can induce groundwater drought, often characterised by significantly low groundwater levels which may persist for months to years. In contrast, lasting wet conditions can result in anomalously high groundwater levels which result in flooding, potentially at large economic cost. Using computational models to produce groundwater level forecasts allows appropriate management strategies to be considered in advance of extreme events. The majority of groundwater level forecasting studies to date use data-based models, which exploit the long response time of groundwater levels to meteorological drivers and make forecasts based only on the current state of the system. Instead, seasonal meteorological forecasts can be used to drive hydrological models and simulate groundwater levels months into the future. Such approaches have not been used in the past due to a lack of skill in these long-range forecast products. However, systems such as the latest version of the Met Office Global Seasonal Forecast System (GloSea5) are now showing increased skill up to a 3-month lead time. We demonstrate the first groundwater level ensemble forecasting system using a multi-member ensemble of hindcasts from GloSea5 between 1996 and 2009 to force 21 simple lumped conceptual groundwater models covering most of the UK's major aquifers. We present the results from this hindcasting study and demonstrate that the system can be used to forecast groundwater levels with some skill up to three months into the future.
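
    One simple way to quantify hindcast skill of the kind described above is the correlation of the ensemble mean with observations at each lead time. The sketch below uses made-up array shapes (members × years × leads) and synthetic data; it is not the study's actual verification procedure.

```python
import numpy as np

def ensemble_skill(hindcasts, observed):
    """Correlation skill of the ensemble mean against observations,
    computed separately per lead time.
    hindcasts: (members, years, leads); observed: (years, leads)."""
    em = hindcasts.mean(axis=0)  # ensemble mean, (years, leads)
    skills = []
    for lead in range(observed.shape[1]):
        skills.append(np.corrcoef(em[:, lead], observed[:, lead])[0, 1])
    return np.array(skills)

rng = np.random.default_rng(3)
signal = rng.normal(size=(14, 3))                  # 14 hindcast years, 3 leads
obs = signal + 0.5 * rng.normal(size=signal.shape)
hc = signal[None] + np.array([1.0, 1.5, 2.0])[None, None, :] \
     * rng.normal(size=(20, 14, 3))                # noisier at longer leads
skill = ensemble_skill(hc, obs)
```

Probabilistic scores (e.g. CRPS or rank histograms) would complement this deterministic measure for an ensemble system.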

  10. Fast plane wave density functional theory molecular dynamics calculations on multi-GPU machines

    International Nuclear Information System (INIS)

    Jia, Weile; Fu, Jiyun; Cao, Zongyan; Wang, Long; Chi, Xuebin; Gao, Weiguo; Wang, Lin-Wang

    2013-01-01

    Plane wave pseudopotential (PWP) density functional theory (DFT) calculation is the most widely used method for material simulations, but its absolute speed has stagnated due to the inability to use large-scale CPU-based computers. By a drastic redesign of the algorithm, and by moving all the major computational parts onto the GPU, we have reached a speed of 12 s per molecular dynamics (MD) step for a 512-atom system using 256 GPU cards. This is about 20 times faster than the CPU version of the code regardless of the number of CPU cores used. Our tests and analysis on different GPU platforms and configurations shed light on the optimal GPU deployments for PWP-DFT calculations. An 1800-step MD simulation is used to study the liquid-phase properties of GaInP.

  11. National Forecast Charts

    Science.gov (United States)


  12. ARTIFICIAL NEURAL NETWORK AND WAVELET DECOMPOSITION IN THE FORECAST OF GLOBAL HORIZONTAL SOLAR RADIATION

    Directory of Open Access Journals (Sweden)

    Luiz Albino Teixeira Júnior

    2015-04-01

    This paper proposes a method (denoted WD-ANN) that combines Artificial Neural Networks (ANN) and Wavelet Decomposition (WD) to generate short-term global horizontal solar radiation forecasts, essential information for evaluating the electrical power generated from the conversion of solar energy into electrical energy. The WD-ANN method consists of two basic steps: first, the level-p wavelet decomposition of the time series of interest is performed, generating p + 1 orthonormal wavelet components; second, these p + 1 components are inserted simultaneously into an ANN in order to generate the short-term forecast. The results showed that the proposed WD-ANN method substantially improved performance over the traditional ANN method.
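
    The first step, a level-p decomposition yielding p + 1 orthonormal components, can be illustrated with a hand-rolled Haar transform. This is an assumed stand-in for whichever wavelet family the paper used; in practice a library such as PyWavelets would do this.

```python
import numpy as np

def haar_decompose(x, p):
    """Level-p orthonormal Haar DWT, returning p + 1 component arrays:
    [approximation_p, detail_p, ..., detail_1].
    Assumes len(x) is divisible by 2**p."""
    a = np.asarray(x, float)
    details = []
    for _ in range(p):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))  # detail coefficients
        a = (even + odd) / np.sqrt(2)              # coarser approximation
    return [a] + details[::-1]

comps = haar_decompose(np.arange(8.0), 2)  # 3 components for p = 2
```

Because the transform is orthonormal, the components conserve the signal's energy; in the WD-ANN scheme each component (or features from it) would become a parallel input to the network.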

  13. Forecasting hotspots using predictive visual analytics approach

    Science.gov (United States)

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.

  14. Improving operational flood forecasting through data assimilation

    Science.gov (United States)

    Rakovec, Oldrich; Weerts, Albrecht; Uijlenhoet, Remko; Hazenberg, Pieter; Torfs, Paul

    2010-05-01

    Accurate flood forecasts have been a challenging topic in hydrology for decades. Uncertainty in hydrological forecasts is due to errors in the initial state (e.g. forcing errors in historical mode), errors in model structure and parameters and, last but not least, errors in the model forcings (weather forecasts) during the forecast mode. More accurate flood forecasts can be obtained through data assimilation by merging observations with model simulations. This enables the sources of uncertainty in the flood forecasting system to be identified. Our aim is to assess the different sources of error that affect the initial state and to investigate how they propagate through hydrological models with different levels of spatial variation, starting from lumped models. The knowledge thus obtained can then be used in a data assimilation scheme to improve the flood forecasts. This study presents the first results of this framework and focuses on quantifying precipitation errors and their effect on discharge simulations within the Ourthe catchment (1600 km²), which is situated in the Belgian Ardennes and is one of the larger subbasins of the Meuse River. Inside the catchment, hourly rain gauge information from 10 different locations is available over a period of 15 years. Based on these time series, the bootstrap method has been applied to generate precipitation ensembles. These were then used to simulate the catchment's discharges at the outlet. The corresponding streamflow ensembles were further assimilated with observed river discharges to update the model states of lumped hydrological models (R-PDM, HBV) through Residual Resampling. This particle filtering technique is a sequential data assimilation method and makes no prior assumption about the probability density function of the model states, which, in contrast to the Ensemble Kalman filter, does not have to be Gaussian. Our further research will be aimed at quantifying and reducing the sources of uncertainty that affect the initial
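
    Residual Resampling, the particle-filter step named above, is standard enough to sketch: each particle gets a deterministic number of copies from the integer part of N·w, and the remaining slots are filled by multinomial draws on the leftover weight. The function name and interface are illustrative.

```python
import numpy as np

def residual_resample(weights, rng=None):
    """Residual resampling: deterministic copies floor(N * w_i) per
    particle, then multinomial draws on the residual weight.
    Returns the indices of the resampled particles."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, float)
    N = len(w)
    counts = np.floor(N * w).astype(int)   # deterministic copies
    residual = N * w - counts              # leftover weight per particle
    n_rest = N - counts.sum()
    if n_rest > 0:
        counts += rng.multinomial(n_rest, residual / residual.sum())
    return np.repeat(np.arange(N), counts)

idx = residual_resample([0.5, 0.25, 0.125, 0.125], np.random.default_rng(0))
```

Compared to plain multinomial resampling, the deterministic part reduces the variance introduced by the resampling step itself.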

  15. A Novel Molten Salt Reactor Concept to Implement the Multi-Step Time-Scheduled Transmutation Strategy

    International Nuclear Information System (INIS)

    Csom, Gyula; Feher, Sandor; Szieberthj, Mate

    2002-01-01

    Nowadays the molten salt reactor (MSR) concept seems to revive as one of the most promising systems for the realization of transmutation. In molten salt reactors and subcritical systems the fuel and the material to be transmuted circulate dissolved in some molten salt. The main advantage of this reactor type is the possibility of continuous feeding and reprocessing of the fuel. In the present paper a novel molten salt reactor concept is introduced and its transmutation capabilities are studied. The goal is the development of a transmutation technique, along with a device implementing it, which yields higher transmutation efficiencies than those of the known procedures and thus results in radioactive waste whose load on the environment is reduced in both magnitude and duration. The procedure is multi-step time-scheduled transmutation, in which transformation is done in several consecutive steps of different neutron flux and spectrum. In the new MSR concept, named the 'multi-region' MSR (MRMSR), the primary circuit is made up of a few separate loops, in which salt-fuel mixtures of different compositions are circulated. The loop sections constituting the core region are coupled only neutronically and thermally. This new concept makes possible the utilization of the spatial dependence of the spectrum as well as the advantageous features of liquid fuel, such as the possibility of continuous chemical processing. In order to compare a 'conventional' MSR and the proposed MRMSR in terms of efficiency, preliminary calculational results are shown. Further calculations to find the optimal implementation of this new concept and to emphasize its other advantageous features are under way. (authors)

  16. Forecasting telecommunication new service demand by analogy method and combined forecast

    Directory of Open Access Journals (Sweden)

    Lin Feng-Jenq

    2005-01-01

    In the forecasting field, we usually face the more difficult problem of forecasting market demand for a new service or product, defined as one for which historical data are absent in the market; models can hardly be used to execute the forecasting work directly. In the Taiwan telecommunication industry, after liberalization in 1996, many new services have opened continually. For optimal investment, it is necessary that the operators who have been granted concessions and licenses forecast demand for each new service within their planning process. Although there are some methods to solve or avoid this predicament, in this paper we propose a forecasting procedure that integrates the concept of the analogy method and the idea of the combined forecast to generate new-service forecasts. The first half of this paper describes the procedure of the analogy method and the approach of combined forecasting, and the second half provides the case of forecasting low-tier phone demand in Taiwan to illustrate the procedure's feasibility.
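
    The combined-forecast idea can be sketched with one common weighting rule: weight each analogue's forecast by the inverse of its past mean-squared error. This is an assumed scheme for illustration, not necessarily the weighting the paper adopts, and the toy numbers are invented.

```python
import numpy as np

def combine(forecasts, errors):
    """Combine candidate forecasts with inverse-MSE weights.
    forecasts: next-period forecast per candidate;
    errors: past forecast errors per candidate (rows)."""
    mse = np.mean(np.asarray(errors, float) ** 2, axis=1)
    w = (1.0 / mse) / (1.0 / mse).sum()   # normalized inverse-MSE weights
    return w @ np.asarray(forecasts, float), w

# two analogue services: their past errors and their next-period forecasts
fc, w = combine([100.0, 140.0],
                [[5, -5, 4], [20, -18, 15]])
```

The historically more accurate analogue dominates, so the combined value lands much nearer its forecast while still hedging against it.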

  17. Exploring the interactions between forecast accuracy, risk perception and perceived forecast reliability in reservoir operator's decision to use forecast

    Science.gov (United States)

    Shafiee-Jood, M.; Cai, X.

    2017-12-01

    Advances in streamflow forecasts at different time scales offer promise for proactive flood management and improved risk management. Despite the huge potential, previous studies have found that water resources managers are often not willing to incorporate streamflow forecast information into decision making, particularly in risky situations. While the low accuracy of forecast information is often cited as the main reason, some studies have found that implementation of streamflow forecasts is sometimes impeded by institutional obstacles and behavioral factors (e.g., risk perception). In fact, a seminal study by O'Connor et al. (2005) found that risk perception is the strongest determinant of forecast use while managers' perception of forecast reliability is not significant. In this study, we aim to address this issue again. However, instead of using survey data and regression analysis, we develop a theoretical framework to assess the user-perceived value of streamflow forecasts. The framework includes a novel behavioral component which incorporates both risk perception and perceived forecast reliability. The framework is then used in a hypothetical problem where a reservoir operator should react to probabilistic flood forecasts with different reliabilities. The framework will allow us to explore the interactions between risk perception and perceived forecast reliability, and between the behavioral components and information accuracy. The findings will provide insights to improve the usability of flood forecast information through better communication and education.

  18. Shared investment projects and forecasting errors: setting framework conditions for coordination and sequencing data quality activities.

    Science.gov (United States)

    Leitner, Stephan; Brauneis, Alexander; Rausch, Alexandra

    2015-01-01

    In this paper, we investigate the impact of inaccurate forecasting on the coordination of distributed investment decisions. In particular, by setting up a computational multi-agent model of a stylized firm, we investigate the case of investment opportunities that are mutually carried out by organizational departments. The forecasts of concern pertain to the initial amount of money necessary to launch and operate an investment opportunity, to the expected intertemporal distribution of cash flows, and to the departments' efficiency in operating the investment opportunity at hand. We propose a budget allocation mechanism for coordinating such distributed decisions. The paper provides guidance on how to set framework conditions, in terms of the number of investment opportunities considered in one round of funding and the number of departments operating one investment opportunity, so that the coordination mechanism is highly robust to forecasting errors. Furthermore, we show that, in some setups, a certain extent of misforecasting is desirable from the firm's point of view as it supports the achievement of the corporate objective of value maximization. We then address the question of how to improve forecasting quality in the best possible way, and provide policy advice on how to sequence activities for improving forecasting quality so that the robustness of the coordination mechanism to errors increases in the best possible way. At the same time, we show that wrong decisions regarding the sequencing can lead to a decrease in robustness. Finally, we conduct a comprehensive sensitivity analysis and prove that, in particular for relatively good forecasters, most of our results are robust to changes in setting the parameters of our multi-agent simulation model.

  20. Forecasting air quality time series using deep learning.

    Science.gov (United States)

    Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse

    2018-04-13

    This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours with low error rates. The LSTM was able to forecast the duration of continuous O3 exceedances as well. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps of less than eight time steps with incremental steps based on first-order differences of neighboring time periods. The data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 features to 5 features, resulting in improved accuracy as measured by Mean Absolute Error (MAE). Parameter sensitivity analysis identified that the look-back nodes associated with the RNN were a significant source of error if not aligned with the prediction horizon. Overall, MAEs of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated calculated values closer to the expected value based on the time and season. Decision trees were used to identify input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long-range air pollution
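
    The gap-filling rule described (incremental steps across gaps shorter than eight samples, based on first-order differences of the neighbors) behaves much like linear interpolation between the bounding observations. The sketch below is a hedged approximation with an illustrative helper name, not the paper's exact imputation method.

```python
import numpy as np

def impute_short_gaps(x, max_gap=8):
    """Fill NaN runs shorter than max_gap by linear interpolation between
    the bounding observations; longer runs (and runs touching either end
    of the series) are left as NaN."""
    x = np.asarray(x, float).copy()
    isnan = np.isnan(x)
    n = len(x)
    i = 0
    while i < n:
        if isnan[i]:
            j = i
            while j < n and isnan[j]:
                j += 1                     # j is the first valid index after the run
            if 0 < i and j < n and (j - i) < max_gap:
                # incremental steps from x[i-1] to x[j]
                x[i:j] = np.linspace(x[i - 1], x[j], j - i + 2)[1:-1]
            i = j
        else:
            i += 1
    return x

filled = impute_short_gaps([1.0, np.nan, np.nan, 4.0])
```

A season-aware scheme, as the paper describes, would additionally condition the filled values on time of day and season rather than on the neighbors alone.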

  1. The micro-step motor controller

    International Nuclear Information System (INIS)

    Hong, Kwang Pyo; Lee, Chang Hee; Moon, Myung Kook; Choi, Bung Hun; Choi, Young Hyun; Cheon, Jong Gu

    2004-11-01

    The developed micro-step motor controller can handle four stepping-motor axes simultaneously and provides a high-power bipolar driving mechanism with constant-current mode. It can be controlled easily via manual key functions, and the motor driving status is displayed on the front-panel VFD. Owing to the development of several kinds of communication and driving protocols, a PC can operate several micro-step motor controllers at once through a multi-drop connection.

  2. Dynamic Statistical Models for Pyroclastic Density Current Generation at Soufrière Hills Volcano

    Science.gov (United States)

    Wolpert, Robert L.; Spiller, Elaine T.; Calder, Eliza S.

    2018-05-01

    To mitigate volcanic hazards from pyroclastic density currents, volcanologists generate hazard maps that provide long-term forecasts of areas of potential impact. Several recent efforts in the field develop new statistical methods for application of flow models to generate fully probabilistic hazard maps that both account for, and quantify, uncertainty. However, a limitation to the use of most statistical hazard models, and a key source of uncertainty within them, is the time-averaged nature of the datasets by which the volcanic activity is statistically characterized. Where the level, or directionality, of volcanic activity frequently changes, e.g. during protracted eruptive episodes, or at volcanoes that are classified as persistently active, it is not appropriate to make short-term forecasts based on longer time-averaged metrics of the activity. Thus, here we build, fit and explore dynamic statistical models for the generation of pyroclastic density currents from Soufrière Hills Volcano (SHV) on Montserrat, including their respective collapse directions and flow volumes, based on 1996-2008 flow datasets. The development of this approach allows short-term behavioral changes to be taken into account in probabilistic volcanic hazard assessments. We show that collapses from the SHV lava dome follow a clear pattern, and that a series of smaller flows in a given direction often culminate in a larger collapse, after which the directionality of the flows changes. Such models enable short-term forecasting (weeks to months) that can reflect evolving conditions such as dome and crater morphology changes and non-stationary eruptive behavior such as extrusion rate variations. For example, the probability of inundation of the Belham Valley in the first 180 days of a forecast period is about twice as high for lava domes facing Northwest toward that valley as it is for domes pointing East toward the Tar River Valley. As rich multi-parametric volcano monitoring datasets become

  3. FEM simulation of multi step forming of thick sheet

    NARCIS (Netherlands)

    Wisselink, H.H.; Huetink, Han

    2004-01-01

    A case study has been performed on the forming of an industrial product. This product, a bracket, is made of 5 mm thick sheet in multiple steps. The process consists of a bending step followed by a drawing and a flanging step. FEM simulations have been used to investigate this forming process. First,

  4. Numerical simulation of 3-D incompressible, multi-phase flows over cavitating projectiles

    Energy Technology Data Exchange (ETDEWEB)

    Owis, F.M.; Nayfeh, A.H. [Blacksburg State University, Dept. of Engineering Science and Mechanics, MC 0219, Virginia Polytechnic Institute, VA (United States)

    2004-04-01

    The hydrodynamic cavitation over axisymmetric projectiles is computed using the unsteady incompressible Navier-Stokes equations for multi-fluid elements. The governing equations are discretized on a structured grid using an upwind difference scheme with flux limiters. A preconditioning dual-time stepping method is used for the unsteady computations. The eigensystem is derived for the Jacobian matrices. This eigensystem is suitable for high-density-ratio multi-fluid flows, and it provides high numerical stability and fast convergence. This method can be used to compute single- as well as multi-phase flows. Cavitating flows over projectiles with different geometries are computed and the results are in good agreement with available experimental data and other published computations. (authors)

  5. The Discriminant Analysis Flare Forecasting System (DAFFS)

    Science.gov (United States)

    Leka, K. D.; Barnes, Graham; Wagner, Eric; Hill, Frank; Marble, Andrew R.

    2016-05-01

    The Discriminant Analysis Flare Forecasting System (DAFFS) has been developed under NOAA/Small Business Innovative Research funds to quantitatively improve upon the NOAA/SWPC flare prediction. In Phase-I of this project, it was demonstrated that DAFFS could indeed improve most of the standard flare prediction data products from NOAA/SWPC by the requested 25%. In Phase-II of this project, a prototype has been developed and is presently running autonomously at NWRA. DAFFS uses near-real-time data from NOAA/GOES, SDO/HMI, and the NSO/GONG network to issue both region- and full-disk forecasts of solar flares, based on multi-variable non-parametric Discriminant Analysis. Presently, DAFFS provides forecasts which match those provided by NOAA/SWPC in terms of thresholds and validity periods (including 1-, 2-, and 3-day forecasts), although issued twice daily. Of particular note regarding DAFFS capabilities are the redundant system design, automatically-generated validation statistics and the large range of customizable options available. As part of this poster, a description of the data used, algorithm, performance and customizable options will be presented, as well as a demonstration of the DAFFS prototype. DAFFS development at NWRA is supported by NOAA/SBIR contracts WC-133R-13-CN-0079 and WC-133R-14-CN-0103, with additional support from NASA contract NNH12CG10C, plus acknowledgment to the SDO/HMI and NSO/GONG facilities and NOAA/SWPC personnel for data products, support, and feedback. DAFFS is presently ready for Phase-III development.
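
A minimal sketch of the core technique named here, two-class discriminant analysis separating flaring from non-flaring regions. This uses the linear (shared-covariance Gaussian) variant; the two predictors, class shift and flare rate are invented stand-ins, not the actual DAFFS parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-ins for two photospheric predictors of a flaring region
# (e.g. total unsigned flux, polarity-inversion-line length), standardised units.
n = 1000
flaring = rng.random(n) < 0.15                 # climatological flare rate (invented)
X = rng.normal(0, 1, (n, 2))
X[flaring] += np.array([1.5, 1.0])             # flaring regions are shifted in feature space

# Two-class linear discriminant analysis: class means and pooled covariance.
mu0, mu1 = X[~flaring].mean(0), X[flaring].mean(0)
Xc = np.vstack([X[~flaring] - mu0, X[flaring] - mu1])
S = Xc.T @ Xc / (n - 2)                        # pooled covariance estimate
w = np.linalg.solve(S, mu1 - mu0)              # discriminant direction

# Decision threshold incorporating the class priors. Non-parametric variants
# (as in DAFFS) replace the Gaussian class densities with kernel estimates.
p1 = flaring.mean()
c = 0.5 * (mu1 + mu0) @ w + np.log((1 - p1) / p1)
predict_flare = X @ w > c

hit_rate = np.mean(predict_flare[flaring])
false_alarm = np.mean(predict_flare[~flaring])
```

The discriminant direction weights the predictors by how well they separate the two classes relative to their common scatter; the prior term shifts the threshold toward the rarer (flaring) class.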

  6. Contributions of dopamine-related genes and environmental factors to highly sensitive personality: a multi-step neuronal system-level approach.

    Directory of Open Access Journals (Sweden)

    Chunhui Chen

    Full Text Available Traditional behavioral genetic studies (e.g., twin and adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTLs) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environmental factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, these polymorphisms' main effects and interactions were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained that the probability of obtaining these findings by chance was very low, with p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional
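
The multi-step pipeline described (screen loci one at a time, fit a joint regression on the survivors, then use permutation to ask how often the whole pipeline would achieve the observed fit by chance) can be sketched on synthetic data. All sample sizes mirror the abstract, but the genotypes, trait and "true" loci are simulated; the screening step here uses simple correlations where the study used ANOVA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 480 subjects, 98 candidate polymorphisms (0/1/2 genotypes),
# a trait driven by three loci plus noise. All values are illustrative.
n, p = 480, 98
geno = rng.integers(0, 3, size=(n, p)).astype(float)
true_loci = [3, 17, 42]
trait = geno[:, true_loci].sum(axis=1) + rng.normal(0, 2.0, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Step 1: screen loci individually (correlation here; the paper used ANOVA).
scores = np.array([abs(np.corrcoef(geno[:, j], trait)[0, 1]) for j in range(p)])
selected = np.argsort(scores)[-10:]            # keep the 10 strongest candidates

# Step 2: joint multiple regression on the selected loci.
r2_obs = r_squared(geno[:, selected], trait)

# Step 3: permutation test -- rerun the WHOLE pipeline on shuffled traits so
# that the selection bias of Step 1 is built into the null distribution.
null = []
for _ in range(200):
    y_perm = rng.permutation(trait)
    sc = np.array([abs(np.corrcoef(geno[:, j], y_perm)[0, 1]) for j in range(p)])
    sel = np.argsort(sc)[-10:]
    null.append(r_squared(geno[:, sel], y_perm))
p_value = (np.sum(np.array(null) >= r2_obs) + 1) / (len(null) + 1)
```

Permuting before re-selecting is the crucial detail: selecting 10 of 98 predictors inflates R² even under the null, and the permutation distribution accounts for exactly that inflation.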

  7. The forecaster's added value

    Science.gov (United States)

    Turco, M.; Milelli, M.

    2009-09-01

    skill scores of two competitive forecasts. It is important to underline that the conclusions refer to the analysis of the Piemonte operational alert system, so they cannot be directly taken as universally true. But we think that some of the main lessons that can be derived from this study could be useful for the meteorological community. In detail, the main conclusions are the following: - despite the overall improvement at the global scale and the fact that the resolution of the limited area models has increased considerably over recent years, the QPF produced by the meteorological models involved in this study has not improved enough to allow its direct use; that is, the subjective HQPF continues to offer the best performance; - in the forecast process, the step where humans have the largest added value with respect to mathematical models is communication. In fact, the human characterisation and communication of the forecast uncertainty to end users cannot be replaced by any computer code; - finally, although there is no novelty in this study, we would like to show that the correct application of appropriate statistical techniques permits a better definition and quantification of the errors and, most importantly, allows a correct (unbiased) communication between forecasters and decision makers.

  8. Spatial Forecast of Landslides in Three Gorges Based On Spatial Data Mining

    Directory of Open Access Journals (Sweden)

    Xianmin Wang

    2009-03-01

    Full Text Available The Three Gorges is a region with a very high landslide distribution density and a concentrated population. Landslide disasters occur frequently in Three Gorges, and the potential risk of landslides is tremendous. In this paper, focusing on Three Gorges, which has a complicated landform, spatial forecasting of landslides is studied by establishing 20 forecast factors (spectra, texture, vegetation coverage, water level of reservoir, slope structure, engineering rock group, elevation, slope, aspect, etc.). China-Brazil Earth Resources Satellite (CBERS) images and a C4.5 decision tree were adopted to mine spatial landslide forecast criteria in Guojiaba Town (Zhigui County) in Three Gorges, and based on this knowledge, intelligent spatial landslide forecasts were performed for Guojiaba Town. All landslides lie in the dangerous and unstable regions, so the forecast result is good. The method proposed in the paper is compared with seven other methods: IsoData, K-Means, Mahalanobis Distance, Maximum Likelihood, Minimum Distance, Parallelepiped and Information Content Model. The experimental results show that the method proposed in this paper has a high forecast precision, noticeably higher than that of the other seven methods.
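
The heart of the C4.5 rule-mining step is choosing, at each tree node, the split with the best gain ratio (information gain normalised by split entropy). A minimal sketch of one such node on a single invented factor (slope angle) follows; the threshold, noise level and toy rule are illustrative, not from the paper:

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a binary label vector."""
    p = np.bincount(y, minlength=2) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def gain_ratio(x, y, threshold):
    """C4.5 split criterion: information gain divided by split information."""
    left = x <= threshold
    if left.all() or (~left).all():
        return 0.0
    h_split = left.mean() * entropy(y[left]) + (~left).mean() * entropy(y[~left])
    gain = entropy(y) - h_split
    frac = np.array([left.mean(), (~left).mean()])
    split_info = -(frac * np.log2(frac)).sum()
    return gain / split_info

# Illustrative stand-in for one of the paper's 20 forecast factors.
rng = np.random.default_rng(1)
slope = rng.uniform(0, 60, 500)               # slope angle, degrees
landslide = (slope > 35).astype(int)          # toy rule: steep slopes fail
flip = rng.random(500) < 0.05                 # 5% label noise
landslide[flip] ^= 1

# One C4.5 node: pick the candidate threshold with the best gain ratio.
cands = np.linspace(5, 55, 51)
best = max(cands, key=lambda t: gain_ratio(slope, landslide, t))
```

C4.5 applies this criterion recursively over all factors; the resulting paths from root to leaf are the "spatial forecast landslide criteria" that the paper mines.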

  9. Influence of wind energy forecast in deterministic and probabilistic sizing of reserves

    Energy Technology Data Exchange (ETDEWEB)

    Gil, A.; Torre, M. de la; Dominguez, T.; Rivas, R. [Red Electrica de Espana (REE), Madrid (Spain). Dept. Centro de Control Electrico

    2010-07-01

    One of the challenges in large-scale wind energy integration in electrical systems is coping with wind forecast uncertainties at the time of sizing generation reserves. These reserves must be sized large enough that they do not compromise security of supply or the balance of the system, but economic efficiency must also be kept in mind. This paper describes two methods of sizing spinning reserves that take wind forecast uncertainties into account: a deterministic method using a probabilistic wind forecast, and a probabilistic method using stochastic variables. The deterministic method calculates the spinning reserve needed by adding components, each intended to cover a single uncertainty: demand errors, the largest thermal generation loss and wind forecast errors. The probabilistic method assumes that demand forecast errors, short-term thermal group unavailability and wind forecast errors are independent stochastic variables and calculates the probability density function of the three variables combined. These methods are being used in the case of the Spanish peninsular system, in which wind energy accounted for 14% of the total electrical energy produced in the year 2009 and which is one of the systems in the world with the highest wind penetration levels. (orig.)
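
The contrast between the two sizing philosophies can be sketched with a Monte Carlo approximation of the combined probability density function. All magnitudes below (error standard deviations, unit size, outage probability, risk level) are invented for illustration and are not the figures used by Red Eléctrica de España:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000                                   # Monte Carlo samples; figures in MW (invented)

demand_err = rng.normal(0, 300, n)            # demand forecast error
wind_err = rng.normal(0, 500, n)              # wind power forecast error
# Short-term thermal unavailability: the largest unit (1000 MW) trips with 2% probability.
outage = 1000 * (rng.random(n) < 0.02)

# Deterministic sizing: cover each uncertainty in full, then add the components.
deterministic = 3 * 300 + 1000 + 3 * 500      # 3-sigma margins plus the largest unit

# Probabilistic sizing: the three variables are independent, so the combined
# shortfall is their sum; size the reserve at a chosen risk level (here 99.9%).
shortfall = demand_err + wind_err + outage
probabilistic = np.quantile(shortfall, 0.999)
```

Because the three uncertainties rarely peak simultaneously, the quantile of the combined distribution is typically well below the deterministic sum of worst cases, which is the economic argument for the probabilistic method.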

  10. DeMand: A tool for evaluating and comparing device-level demand and supply forecast models

    DEFF Research Database (Denmark)

    Neupane, Bijay; Siksnys, Laurynas; Pedersen, Torben Bach

    2016-01-01

    Fine-grained device-level predictions of both shiftable and non-shiftable energy demand and supply are vital in order to take advantage of Demand Response (DR) for efficient utilization of Renewable Energy Sources. The selection of an effective device-level load forecast model is a challenging task......, mainly due to the diversity of the models and the lack of proper tools and datasets that can be used to validate them. In this paper, we introduce the DeMand system for fine-tuning, analyzing, and validating the device-level forecast models. The system offers several built-in device-level measurement...... datasets, forecast models, features, and error measures, thus semi-automating most of the steps of the forecast model selection and validation process. This paper presents the architecture and data model of the DeMand system; and provides a use-case example on how one particular forecast model...

  11. Do we need demographic data to forecast plant population dynamics?

    Science.gov (United States)

    Tredennick, Andrew T.; Hooten, Mevin B.; Adler, Peter B.

    2017-01-01

    Rapid environmental change has generated growing interest in forecasts of future population trajectories. Traditional population models built with detailed demographic observations from one study site can address the impacts of environmental change at particular locations, but are difficult to scale up to the landscape and regional scales relevant to management decisions. An alternative is to build models using population-level data that are much easier to collect over broad spatial scales than individual-level data. However, it is unknown whether models built using population-level data adequately capture the effects of density-dependence and environmental forcing that are necessary to generate skillful forecasts. Here, we test the consequences of aggregating individual responses when forecasting the population states (percent cover) and trajectories of four perennial grass species in a semi-arid grassland in Montana, USA. We parameterized two population models for each species, one based on individual-level data (survival, growth and recruitment) and one on population-level data (percent cover), and compared their forecasting accuracy and forecast horizons with and without the inclusion of climate covariates. For both models, we used Bayesian ridge regression to weight the influence of climate covariates for optimal prediction. In the absence of climate effects, we found no significant difference between the forecast accuracy of models based on individual-level data and models based on population-level data. Climate effects were weak, but increased forecast accuracy for two species. Increases in accuracy with climate covariates were similar between model types. In our case study, percent cover models generated forecasts as accurate as those from a demographic model. For the goal of forecasting, models based on aggregated individual-level data may offer a practical alternative to data-intensive demographic models. Long time series of percent cover data already exist

  12. On practical challenges of decomposition-based hybrid forecasting algorithms for wind speed and solar irradiation

    International Nuclear Information System (INIS)

    Wang, Yamin; Wu, Lei

    2016-01-01

    This paper presents a comprehensive analysis on practical challenges of empirical mode decomposition (EMD) based algorithms on wind speed and solar irradiation forecasts that have been largely neglected in the literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may significantly differ from those used in training forecasting models. In turn, forecasting models established by original sub-series may not be suitable for newly decomposed sub-series and have to be trained more frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods could be worse than the non-decomposition based forecasting model, and are not effective in practical cases. Finally, the approximated forecasting model based on EMD is proposed to mitigate the challenges and achieve better forecasting results than existing EMD-based forecasting algorithms and the non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to

  13. Forecasting Global Rainfall for Points Using ECMWF's Global Ensemble and Its Applications in Flood Forecasting

    Science.gov (United States)

    Pillosu, F. M.; Hewson, T.; Mazzetti, C.

    2017-12-01

    Prediction of local extreme rainfall has historically been the remit of nowcasting and high resolution limited area modelling, which represent only limited areas, may not be spatially accurate, give reasonable results only for limited lead times (based statistical post-processing software ("ecPoint-Rainfall, ecPR", operational in 2017) that uses ECMWF Ensemble (ENS) output to deliver global probabilistic rainfall forecasts for points up to day 10. Firstly, ecPR applies a new notion of "remote calibration", which 1) allows us to replicate a multi-centennial training period using only one year of data, and 2) provides forecasts for anywhere in the world. Secondly, the software applies an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals, and of where biases in the model can be improved upon. A long-term verification has shown that the post-processed rainfall has better reliability and resolution at every lead time compared with ENS, and for large totals, ecPR outputs have the same skill at day 5 that the raw ENS has at day 1 (ROC area metric). ecPR could be used as input for hydrological models if its probabilistic output is modified according to the input requirements of hydrological models. Indeed, ecPR does not provide information on where the highest total is likely to occur inside the gridbox, nor on the spatial distribution of rainfall values nearby. "Scenario forecasts" could be a solution. They are derived from locating the rainfall peak in sensitive positions (e.g. urban areas), and then redistributing the remaining quantities in the gridbox, modifying traditional spatial correlation characterization methodologies (e.g. variogram analysis) in order to take account, for instance, of the type of rainfall forecast (stratiform, convective). Such an approach could be a turning point in the field of medium-range global real-time riverine flood forecasts. This presentation will

  14. Forecasting world natural gas supply

    International Nuclear Information System (INIS)

    Al-Fattah, S. M.; Startzman, R. A.

    2000-01-01

    Using the multi-cyclic Hubbert approach, country-specific gas supply models were developed for 53 countries, enabling production forecasts for virtually all of the world's gas. Supply models for groupings such as OPEC, non-OPEC and OECD countries were also developed and analyzed. Results of the modeling study indicate that the world's supply of natural gas will peak in 2014, followed by an annual decline at the rate of one per cent per year. North American gas production is reported to be currently at its peak at 29 Tcf/yr; Western Europe will reach its peak supply in 2002 with 12 Tcf. According to this forecast, the main sources of natural gas supply in the future will be the countries of the former Soviet Union and the Middle East. Between them, they possess about 62 per cent of the world's ultimate recoverable natural gas (4,880 Tcf). It should be noted that these estimates do not include unconventional gas from tight gas reservoirs, coalbed methane, gas shales and gas hydrates. These unconventional sources will undoubtedly play an important role in the gas supply of countries such as the United States and Canada. 18 refs., 2 tabs., 18 figs
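
The multi-cyclic Hubbert approach models a country's production as a sum of logistic-derivative "cycles", one per development wave. A minimal sketch follows; the cycle parameters (ultimate recoveries, peak years, widths) are invented for illustration, not fitted values from the paper:

```python
import numpy as np

def hubbert_cycle(t, q_ultimate, t_peak, width):
    """One Hubbert cycle: the derivative of a logistic curve.
    q_ultimate: ultimate recovery of the cycle; t_peak: peak year;
    width: steepness parameter in years."""
    x = (t - t_peak) / width
    return q_ultimate / width * np.exp(-x) / (1 + np.exp(-x)) ** 2

# A country's production as the sum of two cycles (all numbers invented).
years = np.arange(1950, 2101)
prod = (hubbert_cycle(years, 400.0, 1995, 12.0)     # first development wave
        + hubbert_cycle(years, 250.0, 2020, 10.0))  # later discoveries

peak_year = years[np.argmax(prod)]
total = prod.sum()   # with 1-year steps, approximates the sum of ultimate recoveries
```

Fitting the multi-cyclic model amounts to estimating these per-cycle parameters from historical production, after which the curve extrapolates the future decline; the area under the full curve recovers the ultimate recoverable volume.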

  15. MONTEBURNS 2.0: An Automated, Multi-Step Monte Carlo Burnup Code System

    International Nuclear Information System (INIS)

    2007-01-01

    A - Description of program or function: MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights of the changes in Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC, nor from the NEA Databank. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included. The following changes have been made: 1) An increase in the amount of removal group information that must be provided for each material in each step in the feed input file. 2) The capability to use CINDER90 instead of ORIGEN2.1 as the depletion/decay part of the code. 3) ORIGEN2.2 can also be used instead of ORIGEN2.1 in MONTEBURNS. 4) The correction of including the capture cross sections to metastable as well as ground states if applicable for an isotope (i.e. Am-241 and Am-243 in particular). 5) The ability to use an MCNP input file that has a title card starting with 'm' (this was a bug in the first version of MONTEBURNS). 6) A decrease in run time for cases involving decay-only steps (power of 0.0). MONTEBURNS does not run MCNP to calculate cross sections for a step unless it is an irradiation step. 7) The ability to change the cross section libraries used each step. If different cross section libraries are desired for multiple steps. 8

  16. Probabilistic forecasts of wind power generation accounting for geographically dispersed information

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Trombe, Pierre-Julien

    2014-01-01

    be optimized by accounting for spatio-temporal effects that are so far merely considered. The way these effects may be included in relevant models is described for the case of both parametric and nonparametric approaches to generating probabilistic forecasts. The resulting predictions are evaluated on the real...... of the first order moments of predictive densities. The best performing approach, based on adaptive quantile regression, using spatially corrected point forecasts as input, consistently outperforms the state-of-the-art benchmark based on local information only, by 1.5%-4.6%, depending upon the lead time.
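
Quantile regression, the core of the best-performing approach named above, fits a model by minimising the pinball (quantile) loss. A minimal non-adaptive sketch follows, mapping a point forecast to a conditional quantile via subgradient descent; the data, model form and learning schedule are invented, and the adaptive, spatially corrected variant of the paper is not reproduced here:

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    """Pinball loss: the criterion minimised by tau-quantile regression."""
    diff = y - q_hat
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Toy setup: predict the tau-quantile of normalised wind power from a point forecast.
rng = np.random.default_rng(3)
point_fc = rng.uniform(0, 1, 2000)                    # normalised point forecast
actual = np.clip(point_fc + rng.normal(0, 0.1, 2000), 0, 1)

tau = 0.9
a, b = 0.0, 0.5                                       # linear model: q = a + b * x
lr = 0.05
for _ in range(3000):                                 # subgradient descent on pinball loss
    q = a + b * point_fc
    g = np.where(actual > q, -tau, 1 - tau)           # d(loss)/d(q) per sample
    a -= lr * g.mean()
    b -= lr * (g * point_fc).mean()

coverage = np.mean(actual <= a + b * point_fc)        # should approach tau
final_loss = pinball_loss(actual, a + b * point_fc, tau)
```

At the optimum the subgradient averages to zero exactly when a fraction tau of observations falls below the predicted quantile, which is why empirical coverage is the natural sanity check. Adaptive variants update the coefficients recursively as new observations arrive.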

  17. Energy sprawl, land taking and distributed generation: towards a multi-layered density

    International Nuclear Information System (INIS)

    Moroni, Stefano; Antoniucci, Valentina; Bisello, Adriano

    2016-01-01

    The transition from fossil fuels to renewable resources is highly desirable to reduce air pollution, and improve energy efficiency and security. Many observers are concerned, however, that the diffusion of systems based on renewable resources may give rise to energy sprawl, i.e. an increasing occupation of available land to build new energy facilities of this kind. These critics foresee a transition from the traditional fossil-fuel systems, towards a renewable resource system likewise based on large power stations and extensive energy grids. A different approach can be taken to reduce the risk of energy sprawl, and this will happen if the focus is as much on renewable sources as on the introduction of distributed renewable energy systems based on micro plants (photovoltaic panels on the roofs of buildings, micro wind turbines, etc.) and on multiple micro-grids. Policy makers could foster local energy enterprises by: introducing new enabling rules; making more room for contractual communities; simplifying the compliance process; proposing monetary incentives and tax cuts. We conclude that the diffusion of innovation in this field will lead not to an energy sprawl but to a new energy system characterized by a multi-layered density: a combination of technology, organization, and physical development. - Highlights: • Energy sprawl is not a necessary consequence of the transition to renewable sources. • A polycentric, distributed renewable energy system reduces land consumption. • This polycentric model is founded on building-related renewable energy production and micro-grids. • Enabling rules, simplified compliance, and tax cuts can foster this result. • The concept of multi-layered density is proposed as a new framework for interpreting this scenario.

  18. A Model for Forecasting Enlisted Student IA Billet Requirements

    Science.gov (United States)

    2016-03-01

    (List-of-figures residue: Figure 1, "Training that counts as student IA execution"; Figure 2, "Learning center to resource sponsor alignment".) The second type of chargeable training is "initial entry training" [3]. Reference [4] describes the process the Navy uses to build the student IA program. It comprises three steps. The first is to forecast student IA E

  19. Annual electricity consumption forecasting by neural network in high energy consuming industrial sectors

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Sohrabkhani, S.

    2008-01-01

    This paper presents an artificial neural network (ANN) approach for forecasting annual electricity consumption in high energy consuming industrial sectors. The chemicals, basic metals and non-metal minerals industries are defined as high energy consuming industries. It is claimed that, due to high fluctuations of energy consumption in such industries, conventional regression models do not forecast energy consumption correctly and precisely. Although ANNs have typically been used to forecast short-term consumption, this paper shows that they are a more precise approach for forecasting annual consumption in such industries. Furthermore, an ANN based on a supervised multi-layer perceptron (MLP) is used to show that it can estimate annual consumption with less error. Actual data from high energy consuming (intensive) industries in Iran from 1979 to 2003 are used to illustrate the applicability of the ANN approach. This study shows the advantage of the ANN approach through analysis of variance (ANOVA). Furthermore, the ANN forecast is compared with actual data and the conventional regression model through ANOVA to show its superiority. This is the first study to present an algorithm based on the ANN and ANOVA for forecasting long-term electricity consumption in high energy consuming industries.
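
The paper's central claim, that an MLP captures fluctuating annual consumption better than a conventional regression, can be illustrated with a tiny one-hidden-layer network trained by gradient descent. The series below is invented (25 "annual" points with a nonlinear fluctuation), and the architecture and training schedule are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented stand-in for 1979-2003 annual consumption with strong fluctuations.
years = np.arange(1979, 2004)
t = (years - years.min()) / (years.max() - years.min())   # scale inputs to [0, 1]
consumption = 50 + 40 * t + 15 * np.sin(6 * t) + rng.normal(0, 2, t.size)
y = (consumption - consumption.mean()) / consumption.std()  # standardise target

# One-hidden-layer MLP (tanh hidden units, linear output), full-batch gradient descent.
h = 8
W1 = rng.normal(0, 1, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 1, (h, 1)); b2 = np.zeros(1)
x = t.reshape(-1, 1)
lr = 0.05
for _ in range(8000):
    z = np.tanh(x @ W1 + b1)                 # hidden activations
    pred = (z @ W2 + b2).ravel()             # network output
    err = pred - y
    gW2 = z.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dz = (err[:, None] @ W2.T) * (1 - z ** 2) / len(y)   # backprop through tanh
    gW1 = x.T @ dz; gb1 = dz.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mlp_mse = np.mean((pred - y) ** 2)

# Conventional linear trend for comparison.
coef = np.polyfit(t, y, 1)
lin_mse = np.mean((np.polyval(coef, t) - y) ** 2)
```

The linear model is forced to leave the fluctuating component in its residuals, while the hidden tanh units can absorb it, which mirrors the paper's argument for ANNs on highly fluctuating annual series.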

  20. Using Cloud-to-Ground Lightning Climatologies to Initialize Gridded Lightning Threat Forecasts for East Central Florida

    Science.gov (United States)

    Lambert, Winnie; Sharp, David; Spratt, Scott; Volkmer, Matthew

    2005-01-01

    Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC (0700 AM EST) each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to increase consistency between forecasters while enabling them to focus on

  1. The Next Level in Automated Solar Flare Forecasting: the EU FLARECAST Project

    Science.gov (United States)

    Georgoulis, M. K.; Bloomfield, D.; Piana, M.; Massone, A. M.; Gallagher, P.; Vilmer, N.; Pariat, E.; Buchlin, E.; Baudin, F.; Csillaghy, A.; Soldati, M.; Sathiapal, H.; Jackson, D.; Alingery, P.; Argoudelis, V.; Benvenuto, F.; Campi, C.; Florios, K.; Gontikakis, C.; Guennou, C.; Guerra, J. A.; Kontogiannis, I.; Latorre, V.; Murray, S.; Park, S. H.; Perasso, A.; Sciacchitano, F.; von Stachelski, S.; Torbica, A.; Vischi, D.

    2017-12-01

    We attempt an informative description of the Flare Likelihood And Region Eruption Forecasting (FLARECAST) project, the European Commission's first large-scale investment to explore the limits of reliability and accuracy achieved for the forecasting of major solar flares. We outline the consortium, top-level objectives and first results of the project, highlighting the diversity and fusion of expertise needed to deliver what was promised. The project's final product, featuring an openly accessible, fully modular and free-to-download flare forecasting facility, will be delivered in early 2018. The project's three objectives, namely, science, research-to-operations and dissemination / communication, are also discussed: in terms of science, we encapsulate our close-to-final assessment of how close (or far) we are from practically exploitable solar flare forecasting. In terms of R2O, we briefly describe the architecture of the FLARECAST infrastructure that includes rigorous validation for each forecasting step. From the three different communication levers of the project we finally focus on lessons learned from the two-way interaction with the community of stakeholders and governmental organizations. The FLARECAST project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 640216.

  2. Grand European and Asian-Pacific multi-model seasonal forecasts: maximization of skill and of potential economical value to end-users

    Science.gov (United States)

    Alessandri, Andrea; Felice, Matteo De; Catalano, Franco; Lee, June-Yi; Wang, Bin; Lee, Doo Young; Yoo, Jin-Ho; Weisheimer, Antje

    2018-04-01

    Multi-model ensembles (MMEs) are powerful tools in dynamical climate prediction, as they account for the overconfidence and the uncertainties of single-model ensembles. Previous works suggested that the potential benefit that can be expected from using an MME grows with the independence of the contributing Seasonal Prediction Systems. In this work we combine the two MME Seasonal Prediction Systems (SPSs) independently developed by the European (ENSEMBLES) and the Asian-Pacific (APCC/CliPAS) communities. To this aim, all the possible multi-model combinations obtained by putting together the 5 models from ENSEMBLES and the 11 models from APCC/CliPAS have been evaluated. The grand ENSEMBLES-APCC/CliPAS MME significantly enhances the skill in predicting 2m temperature and precipitation compared to previous estimates from the contributing MMEs. Our results show that, in general, the best combinations of SPSs are obtained by mixing ENSEMBLES and APCC/CliPAS models, and that only a limited number of SPSs is required to obtain the maximum performance. The number and selection of models that perform best usually differ depending on the region/phenomenon under consideration, so that all models are useful in some cases. It is shown that the incremental performance contribution tends to be higher when adding one model from ENSEMBLES to APCC/CliPAS MMEs and vice versa, confirming that the benefit of using MMEs grows with the independence of the contributing models. To verify the above results in a real-world application, the grand ENSEMBLES-APCC/CliPAS MME is used to predict retrospective energy demand over Italy, as provided by TERNA (the Italian Transmission System Operator) for the period 1990-2007. The results demonstrate the useful application of MME seasonal predictions for energy demand forecasting over Italy.
    A significant enhancement of the potential economic value of forecasting energy demand is shown when using the
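The exhaustive evaluation of model combinations described above can be sketched in a few lines. The snippet below is a toy illustration on synthetic forecasts (the model names, biases, and data are invented, not the ENSEMBLES/APCC-CliPAS hindcasts): it scores the ensemble mean of every non-empty subset of a model pool and picks the subset with the lowest RMSE.

```python
import itertools
import random

random.seed(0)

# Toy setup (hypothetical, not the ENSEMBLES/APCC-CliPAS hindcasts): each
# "model" forecasts a scalar anomaly for 50 verification cases, with its
# own bias and noise level.
truth = [random.gauss(0.0, 1.0) for _ in range(50)]
models = {
    "m%d" % i: [t + random.gauss(0.3 * i - 0.6, 0.8) for t in truth]
    for i in range(5)
}

def rmse(forecast, obs):
    return (sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(obs)) ** 0.5

# Score the ensemble mean of every non-empty model subset, mirroring the
# exhaustive evaluation of all possible multi-model combinations.
scores = {}
for k in range(1, len(models) + 1):
    for combo in itertools.combinations(models, k):
        ens = [sum(models[m][j] for m in combo) / k for j in range(len(truth))]
        scores[combo] = rmse(ens, truth)

best = min(scores, key=scores.get)
print("best subset:", best, "RMSE: %.3f" % scores[best])
```

With real hindcasts the scoring function would be an anomaly correlation or a probabilistic skill score rather than RMSE, but the combinatorial structure is the same: 2^n - 1 subsets, each verified against observations.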

  3. Active Mycobacterium Infection Due to Intramuscular BCG Administration Following Multi-Steps Medication Errors

    Directory of Open Access Journals (Sweden)

    MohammadReza Rafati

    2015-10-01

    Full Text Available Bacillus Calmette-Guérin (BCG) is indicated for treatment of primary or relapsing flat urothelial cell carcinoma in situ (CIS) of the urinary bladder. Disseminated infectious complications occasionally occur when BCG is used as a vaccine or as intravesical therapy. Intramuscular (IM) or intravenous (IV) administration of BCG is a rare medication error, and one that is more likely to produce systemic infection. This report presents the case of a 13-year-old patient in whom several consecutive medication errors occurred, involving the physician's handwriting, pharmacy dispensing, nursing administration and the patient's family. The physician wrote βHCG instead of HCG in the prescription. βHCG was read as BCG by the pharmacy staff, and 6 vials of intravesical BCG were administered IM twice a week for 3 consecutive weeks. The patient experienced fever and chills after each injection, but he was admitted 2 months after the first IM administration of BCG with fever and pancytopenia. Unfortunately, the physicians diagnosed the medication error only four months after the drug was used, during a second admission due to cellulitis at the sites of BCG injection. The use of handwritten prescriptions and inappropriate abbreviations, inadequate time spent taking a brief medical history in the pharmacy, failure to verify the name, dose and route before medication administration, and failure to consider medication error as an important differential diagnosis all contributed to this multi-step medication error.

  4. Estimation of winter wheat canopy nitrogen density at different growth stages based on Multi-LUT approach

    Science.gov (United States)

    Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang

    2017-10-01

    Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach, based on N-PROSAIL model parameters set at different growth stages, was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with the measured CND, with determination coefficient (R2) and corresponding root mean square error (RMSE) values of 0.80 and 1.16 g m-2, respectively. Estimation of one sample took only 6 ms on a test machine with an Intel(R) Core(TM) i5-2430 @2.40GHz quad-core CPU. These results confirm the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
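As a rough illustration of how a lookup-table retrieval of this kind works (and why a single estimate can take only milliseconds), the sketch below replaces the N-PROSAIL forward model with a made-up two-band reflectance function; all names and coefficients are hypothetical.

```python
# Toy sketch of LUT-based retrieval; the N-PROSAIL forward model is replaced
# by a made-up two-band reflectance function, and all names and coefficients
# are hypothetical.
def toy_canopy_reflectance(cnd):
    # Pretend a red and an NIR band respond linearly to canopy N density.
    red = 0.30 - 0.02 * cnd
    nir = 0.35 + 0.03 * cnd
    return (red, nir)

# Build the lookup table once per growth stage over a grid of CND values.
grid = [i * 0.1 for i in range(81)]                 # 0.0 .. 8.0 g m-2
lut = [(cnd, toy_canopy_reflectance(cnd)) for cnd in grid]

def retrieve(observed):
    # Nearest LUT entry in spectral space gives the estimated CND; a linear
    # scan over a small table is why one retrieval takes only milliseconds.
    return min(lut, key=lambda e: sum((a - b) ** 2
                                      for a, b in zip(e[1], observed)))[0]

obs = toy_canopy_reflectance(3.14)                  # synthetic "measurement"
print("retrieved CND: %.1f g m-2" % retrieve(obs))
```

The real approach builds one LUT per growth stage from N-PROSAIL simulations over many parameter combinations, so the per-stage table absorbs the phenological variation that a single global LUT would blur.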

  5. Modelling and Forecasting of Rice Yield in support of Crop Insurance

    Science.gov (United States)

    Weerts, A.; van Verseveld, W.; Trambauer, P.; de Vries, S.; Conijn, S.; van Valkengoed, E.; Hoekman, D.; Hengsdijk, H.; Schrevel, A.

    2016-12-01

    The Government of Indonesia has embarked on a policy to bring crop insurance to all of Indonesia's farmers. To support the Indonesian government, the G4INDO project (www.g4indo.org) is developing an integrated platform for judging and handling insurance claims. The platform brings together remotely sensed data (both visible and radar) and hydrologic and crop modelling and forecasting to improve predictions in one forecasting platform (i.e. Delft-FEWS, Werner et al., 2013). The hydrological model and the crop model (LINTUL) are coupled on a time-stepping basis in the OpenStreams framework (see https://github.com/openstreams/wflow) and deployed in a Delft-FEWS forecasting platform to support seasonal forecasting of water availability and crop yield. We will first present the general idea of the project and the integrated platform (including Sentinel 1 & 2 data), followed by first (reforecast) results of the coupled models for predicting water availability and crop yield in the Brantas catchment in Java, Indonesia. Werner, M., Schellekens, J., Gijsbers, P., Van Dijk, M., Van den Akker, O. and Heynert, K., 2013. The Delft-FEWS flow forecasting system, Environmental Modelling & Software; 40:65-77. DOI: 10.1016/j.envsoft.2012.07.010.

  6. Forecasting oil price trends using wavelets and hidden Markov models

    International Nuclear Information System (INIS)

    Souza e Silva, Edmundo G. de; Souza e Silva, Edmundo A. de; Legey, Luiz F.L.

    2010-01-01

    The crude oil price is influenced by a great number of factors, most of which interact in very complex ways. For this reason, forecasting it through a fundamentalist approach is a difficult task. An alternative is to use time series methodologies, with which the price's past behavior is conveniently analyzed, and used to predict future movements. In this paper, we investigate the usefulness of a nonlinear time series model, known as hidden Markov model (HMM), to predict future crude oil price movements. Using an HMM, we develop a forecasting methodology that consists of, basically, three steps. First, we employ wavelet analysis to remove high frequency price movements, which can be assumed as noise. Then, the HMM is used to forecast the probability distribution of the price return accumulated over the next F days. Finally, from this distribution, we infer future price trends. Our results indicate that the proposed methodology might be a useful decision support tool for agents participating in the crude oil market. (author)
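The three-step pipeline can be caricatured with standard-library stand-ins: a moving average in place of the wavelet filter, and a two-state Markov chain on discretized returns in place of the Gaussian-emission HMM. Everything below (data, window, state definitions) is a hypothetical simplification, not the authors' implementation.

```python
import random

random.seed(1)

# Synthetic daily "prices": a slowly alternating trend plus noise
# (hypothetical data, not crude oil).
prices = [50.0]
for t in range(300):
    drift = 0.05 if (t // 75) % 2 == 0 else -0.05
    prices.append(prices[-1] + drift + random.gauss(0.0, 0.4))

# Step 1 (stand-in for wavelet denoising): moving-average smoothing to
# strip the high-frequency movements treated as noise.
w = 5
smooth = [sum(prices[i - w + 1:i + 1]) / w for i in range(w - 1, len(prices))]

# Step 2 (stand-in for the HMM): a two-state Markov chain on the sign of
# the smoothed daily returns, estimated from transition counts.
symbols = [1 if b > a else 0 for a, b in zip(smooth, smooth[1:])]
counts = [[1.0, 1.0], [1.0, 1.0]]          # Laplace smoothing
for s, s_next in zip(symbols, symbols[1:]):
    counts[s][s_next] += 1
P = [[c / sum(row) for c in row] for row in counts]

# Step 3: propagate today's state F days ahead and read off the implied
# probability of an upward trend over that horizon.
F = 5
dist = [0.0, 0.0]
dist[symbols[-1]] = 1.0
for _ in range(F):
    dist = [dist[0] * P[0][j] + dist[1] * P[1][j] for j in (0, 1)]
print("P(up regime in %d days): %.3f" % (F, dist[1]))
```

The paper's HMM carries continuous emission densities and hidden regimes rather than observed up/down symbols, but the forecasting logic is the same: infer the current state, push it through the transition structure, and read a distribution over future returns rather than a point estimate.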

  7. Performance of fuzzy approach in Malaysia short-term electricity load forecasting

    Science.gov (United States)

    Mansor, Rosnalini; Zulkifli, Malina; Yusof, Muhammad Mat; Ismail, Mohd Isfahani; Ismail, Suzilah; Yin, Yip Chee

    2014-12-01

    Many activities, such as economic, educational and manufacturing activities, would be paralysed by a limited supply of electricity, while a surplus contributes to high operating costs. Electricity load forecasting is therefore important in order to avoid shortage or excess. Previous findings showed that festive celebrations have an effect on short-term electricity load forecasting. Being a multi-cultural country, Malaysia has many major festive celebrations, such as Eidul Fitri, Chinese New Year and Deepavali, but they are moving holidays due to their non-fixed dates on the Gregorian calendar. This study emphasises the performance of the fuzzy approach in forecasting electricity load when the presence of moving holidays is considered. An Autoregressive Distributed Lag model was estimated using simulated data, incorporating the model simplification concept (manual or automatic), day types (weekdays or weekend), public holidays and lags of electricity load. The results indicated that day types, public holidays and several lags of electricity load were significant in the model. Overall, model simplification improves fuzzy performance due to fewer variables and rules.

  8. A SOM clustering pattern sequence-based next symbol prediction method for day-ahead direct electricity load and price forecasting

    International Nuclear Information System (INIS)

    Jin, Cheng Hao; Pok, Gouchol; Lee, Yongmi; Park, Hyun-Woo; Kim, Kwang Deuk; Yun, Unil; Ryu, Keun Ho

    2015-01-01

    Highlights: • A novel pattern sequence-based direct time series forecasting method was proposed. • Due to the use of SOM’s topology preserving property, only SOM can be applied. • SCPSNSP only deals with the cluster patterns, not each specific time series value. • SCPSNSP performs better than recently developed forecasting algorithms. - Abstract: In this paper, we propose a new day-ahead direct time series forecasting method for competitive electricity markets based on clustering and next symbol prediction. In the clustering step, the pattern sequence and its topology relations are obtained from self-organizing map time series clustering. In the next symbol prediction step, with each cluster label in the pattern sequence represented as a pair of its topologically identical coordinates, an artificial neural network is used to predict the topological coordinates of the next day by training on the relationship between the previous daily pattern sequence and its next-day pattern. According to the obtained topology relations, the nearest nonzero-hits pattern is assigned to the next day so that the whole time series values can be directly forecasted from the assigned cluster pattern. The proposed method was evaluated on the Spanish, Australian and New York electricity markets and compared with PSF and some of the most recently published forecasting methods. Experimental results show that the proposed method outperforms the best forecasting methods by at least 3.64%

  9. Inflation Forecast Contracts

    OpenAIRE

    Gersbach, Hans; Hahn, Volker

    2012-01-01

    We introduce a new type of incentive contract for central bankers: inflation forecast contracts, which make central bankers’ remunerations contingent on the precision of their inflation forecasts. We show that such contracts enable central bankers to influence inflation expectations more effectively, thus facilitating more successful stabilization of current inflation. Inflation forecast contracts improve the accuracy of inflation forecasts, but have adverse consequences for output. On balanc...

  10. Modeling and forecasting exchange rate volatility in time-frequency domain

    Czech Academy of Sciences Publication Activity Database

    Baruník, Jozef; Křehlík, Tomáš; Vácha, Lukáš

    2016-01-01

    Roč. 251, č. 1 (2016), s. 329-340 ISSN 0377-2217 R&D Projects: GA ČR GA13-32263S EU Projects: European Commission 612955 - FINMAP Institutional support: RVO:67985556 Keywords : Realized GARCH * Wavelet decomposition * Jumps * Multi-period-ahead volatility forecasting Subject RIV: AH - Economics Impact factor: 3.297, year: 2016 http://library.utia.cas.cz/separaty/2016/E/barunik-0456184.pdf

  11. AIRLINE ACTIVITY FORECASTING BY REGRESSION MODELS

    Directory of Open Access Journals (Sweden)

    Н. Білак

    2012-04-01

    Full Text Available Linear and nonlinear regression models are proposed which take into account the trend equation and seasonality indices for analysing and reconstructing the volume of passenger traffic over past periods and predicting it for future years, together with an algorithm for forming these models based on statistical analysis over the years. The desired model is the first step in the synthesis of more complex models, which will enable forecasting of the airline's passenger (income) levels with the highest accuracy and timeliness.

  12. Petascale Diagnostic Assessment of the Global Portfolio Rainfall Space Missions' Ability to Support Flood Forecasting

    Science.gov (United States)

    Reed, P. M.; Chaney, N.; Herman, J. D.; Wood, E. F.; Ferringer, M. P.

    2015-12-01

    This research represents a multi-institutional collaboration between Cornell University, The Aerospace Corporation, and Princeton University that has completed a Petascale diagnostic assessment of the current 10 satellite missions providing rainfall observations. Our diagnostic assessment has required four core tasks: (1) formally linking high-resolution astrodynamics design and coordination of space assets with their global hydrological impacts within a Petascale "many-objective" global optimization framework, (2) developing a baseline diagnostic evaluation of a 1-degree resolution global implementation of the Variable Infiltration Capacity (VIC) model to establish the required satellite observation frequencies and coverage to maintain acceptable global flood forecasts, (3) evaluating the limitations and vulnerabilities of the full suite of current satellite precipitation missions including the recently approved Global Precipitation Measurement (GPM) mission, and (4) conceptualizing the next generation spaced-based platforms for water cycle observation. Our team exploited over 100 Million hours of computing access on the 700,000+ core Blue Waters machine to radically advance our ability to discover and visualize key system tradeoffs and sensitivities. This project represents to our knowledge the first attempt to develop a 10,000 member Monte Carlo global hydrologic simulation at one degree resolution that characterizes the uncertain effects of changing the available frequencies of satellite precipitation on drought and flood forecasts. The simulation—optimization components of the work have set a theoretical baseline for the best possible frequencies and coverages for global precipitation given unlimited investment, broad international coordination in reconfiguring existing assets, and new satellite constellation design objectives informed directly by key global hydrologic forecasting requirements. Our research poses a step towards realizing the integrated

  13. Using subseasonal-to-seasonal (S2S) extreme rainfall forecasts for extended-range flood prediction in Australia

    Science.gov (United States)

    White, C. J.; Franks, S. W.; McEvoy, D.

    2015-06-01

    Meteorological and hydrological centres around the world are looking at ways to improve their capacity to be able to produce and deliver skilful and reliable forecasts of high-impact extreme rainfall and flooding events on a range of prediction timescales (e.g. sub-daily, daily, multi-week, seasonal). Making improvements to extended-range rainfall and flood forecast models, assessing forecast skill and uncertainty, and exploring how to apply flood forecasts and communicate their benefits to decision-makers are significant challenges facing the forecasting and water resources management communities. This paper presents some of the latest science and initiatives from Australia on the development, application and communication of extreme rainfall and flood forecasts on the extended-range "subseasonal-to-seasonal" (S2S) forecasting timescale, with a focus on risk-based decision-making, increasing flood risk awareness and preparedness, capturing uncertainty, understanding human responses to flood forecasts and warnings, and the growing adoption of "climate services". The paper also demonstrates how forecasts of flood events across a range of prediction timescales could be beneficial to a range of sectors and society, most notably for disaster risk reduction (DRR) activities, emergency management and response, and strengthening community resilience. Extended-range S2S extreme flood forecasts, if presented as easily accessible, timely and relevant information are a valuable resource to help society better prepare for, and subsequently cope with, extreme flood events.

  14. High-Latitude Neutral Density Structures Investigated by Utilizing Multi-Instrument Satellite Data and NRLMSISE-00 Simulations

    Science.gov (United States)

    Horvath, Ildiko; Lovell, Brian C.

    2018-02-01

    This study investigates various types of neutral density features developed in the cusp region during magnetically active and quiet times. Multi-instrument Challenging Minisatellite Payload data provide neutral density, electron temperature, neutral wind speed, and small-scale field-aligned current (SS-FAC) values. Gravity Recovery and Climate Experiment neutral density data are also employed. During active times, cusp densities or density spikes appeared with their underlying flow channels (FCs) and enhanced SS-FACs implying upwelling, fueled by Joule heating, within/above FCs. Both the moderate nightside cusp enhancements under disturbed conditions and the minor dayside cusp enhancements under quiet conditions developed without any underlying FC and enhanced SS-FACs implying the role of particle precipitation in their development. Observations demonstrate the relations of FCs, density spikes, and upwelling-related divergent flows and their connections to the underlying (1) dayside magnetopause reconnection depositing magnetospheric energy into the high-latitude region and (2) Joule heating-driven disturbance dynamo effects. Results provide observational evidence that the moderate nightside cusp enhancements and the minor dayside cusp enhancements detected developed due to direct heating by weak particle precipitation. Chemical compositions related to the dayside density spike and low cusp densities are modeled by Naval Research Laboratory Mass Spectrometer Incoherent Scatter Radar Extended 2000. Modeled composition outputs for the dayside density spike's plasma environment depict some characteristic upwelling signatures. Oppositely, in the case of low dayside cusp densities, composition outputs show opposite characteristics due to the absence of upwelling.

  15. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Shangjie [Tianjin Key Laboratory of Process Measurement and Control, School of Electrical Engineering and Automation, Tianjin University, Tianjin (China); Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Hara, Wendy; Wang, Lei; Buyyounouski, Mark K.; Le, Quynh-Thu; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Li, Ruijiang, E-mail: rli2@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States)

    2017-03-15

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
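The core fusion step, combining an intensity-conditioned and a location-conditioned estimate into one posterior, can be sketched as precision-weighted averaging of two Gaussians. The numbers below are invented for illustration, and the paper's actual conditional densities are richer than Gaussians, so this is only the Bayesian skeleton of the idea.

```python
# Toy sketch of the fusion step (not the authors' implementation): two
# conditional estimates of a voxel's electron density -- one given its MR
# intensity, one given its location in the registered atlas -- are each
# modelled as Gaussians and combined by precision weighting, i.e. the
# normalized product of the two densities.
def fuse(mu_int, var_int, mu_loc, var_loc):
    w_int, w_loc = 1.0 / var_int, 1.0 / var_loc
    mu_post = (w_int * mu_int + w_loc * mu_loc) / (w_int + w_loc)
    var_post = 1.0 / (w_int + w_loc)
    return mu_post, var_post

# Intensity weakly suggests bone-like density; the atlas location strongly
# suggests soft tissue (all numbers invented for illustration).
mu, var = fuse(mu_int=1.5, var_int=0.2, mu_loc=1.05, var_loc=0.02)
print("posterior mean %.3f, variance %.4f" % (mu, var))
```

The posterior lands close to the low-variance (location) estimate, which is how a spatial prior can keep a noisy intensity reading from being misclassified as bone.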

  16. Daily air quality index forecasting with hybrid models: A case in China

    International Nuclear Information System (INIS)

    Zhu, Suling; Lian, Xiuyuan; Liu, Haixia; Hu, Jianming; Wang, Yuanyuan; Che, Jinxing

    2017-01-01

    Air quality is closely related to quality of life. Air pollution forecasting plays a vital role in air pollution warning and control. However, it is difficult to attain accurate forecasts of air pollution indexes because the original data are non-stationary and chaotic. The existing forecasting methods, such as multiple linear models, autoregressive integrated moving average (ARIMA) and support vector regression (SVR), cannot fully capture the information in series of pollution indexes. Therefore, new effective techniques need to be proposed to forecast air pollution indexes. The main purpose of this research is to develop effective forecasting models for regional air quality indexes (AQI) that address the problems above and enhance forecasting accuracy. Two hybrid models (EMD-SVR-Hybrid and EMD-IMFs-Hybrid) are therefore proposed to forecast AQI data. The main steps of the EMD-SVR-Hybrid model are as follows: the data pre-processing technique EMD (empirical mode decomposition) is utilized to sift the original AQI data to obtain one group of smoother IMFs (intrinsic mode functions) and a noise series, where the IMFs contain the important information (level, fluctuations and others) from the original AQI series. LS-SVR is applied to forecast the sum of the IMFs, and then S-ARIMA (seasonal ARIMA) is employed to forecast the residual sequence of the LS-SVR. In addition, EMD-IMFs-Hybrid first forecasts the IMFs separately via statistical models and sums the forecasting results of the IMFs as EMD-IMFs; then, S-ARIMA is employed to forecast the residuals of EMD-IMFs. To validate the proposed hybrid models, AQI data from June 2014 to August 2015 collected from Xingtai in China are utilized as a test case for the empirical research. In terms of several forecasting assessment measures, the AQI forecasting results for Xingtai show that the two proposed hybrid models are superior to ARIMA, SVR, GRNN, EMD-GRNN, Wavelet-GRNN and Wavelet-SVR. Therefore, the

  17. New Approach To Hour-By-Hour Weather Forecast

    Science.gov (United States)

    Liao, Q. Q.; Wang, B.

    2017-12-01

    Fine hourly forecasts at a single weather station are required in many production and everyday applications. Most previous MOS (Model Output Statistics) approaches used a linear regression model, which can hardly capture the nonlinear nature of weather prediction, and forecast accuracy has not been sufficient at high temporal resolution. This study predicts future meteorological elements, including temperature, precipitation, relative humidity and wind speed, in a local region over a relatively short period of time at the hourly level. By means of hour-by-hour NWP (Numerical Weather Prediction) meteorological fields from Forcastio (https://darksky.net/dev/docs/forecast) and real-time instrumental observations from 29 stations in Yunnan and 3 stations in Tianjin, China, from June to October 2016, hour-by-hour predictions are made up to 24 hours ahead. This study presents an ensemble approach that combines information from the instrumental observations themselves and from NWP. An autoregressive moving-average (ARMA) model is used to predict future values of the observation time series. The newest NWP products are put into the equations derived from the multiple linear regression MOS technique. The residual series of the MOS outputs are handled with an autoregressive (AR) model for the linear property present in the time series. Due to the complexity of the nonlinear properties of atmospheric flow, a support vector machine (SVM) is also introduced. Basic data quality control and cross-validation make it possible to optimize the model function parameters and to perform 24-hour-ahead residual reduction with the AR/SVM model. Results show that the AR model technique is better than the corresponding multi-variate MOS regression method, especially in the first 4 hours when the predictor is temperature. The MOS-AR combined model, which is comparable to the MOS-SVM model, outperforms MOS. Both of their root mean square errors and correlation coefficients for 2 m temperature reach 1.6 degrees Celsius and 0.91, respectively. The
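A minimal sketch of the MOS-plus-AR residual correction described above, on synthetic data (the coefficients and series are invented): a linear MOS regression of observations on an NWP predictor, followed by an AR(1) fit to the MOS residuals that is used to correct the next forecast.

```python
import random

random.seed(2)

# Synthetic hourly data (hypothetical): an NWP temperature forecast x and
# an observation y whose MOS residual is AR(1)-correlated in time.
n = 200
x = [15.0 + 8.0 * random.random() for _ in range(n)]
eps, e = [], 0.0
for _ in range(n):
    e = 0.7 * e + random.gauss(0.0, 0.5)   # persistent residual
    eps.append(e)
y = [1.1 * xi + 2.0 + ei for xi, ei in zip(x, eps)]

# MOS step: ordinary least squares of the observation on the NWP predictor.
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# AR(1) step: fit phi on the MOS residual series, then correct the next
# MOS forecast with the propagated last residual.
phi = sum(r0 * r1 for r0, r1 in zip(resid, resid[1:])) / sum(r * r for r in resid[:-1])
x_next = 20.0                               # next NWP forecast (assumed)
mos = a + b * x_next
corrected = mos + phi * resid[-1]           # the MOS-AR forecast
print("phi=%.2f  mos=%.2f  mos-ar=%.2f" % (phi, mos, corrected))
```

This also shows why the AR correction helps most in the first few lead hours: the correction term phi**h * resid[-1] decays geometrically with lead time h, so the benefit fades as the residual autocorrelation washes out.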

  18. Underground structure pattern and multi AO reaction with step feed concept for upgrading a large wastewater treatment plant

    Science.gov (United States)

    Peng, Yi; Zhang, Jie; Li, Dong

    2018-03-01

    A large wastewater treatment plant (WWTP), built with a US treatment technology, could no longer meet the new demands of the urban environment and the need for reclaimed water in China. Thus, a multi AO reaction process (anaerobic/oxic/anoxic/oxic/anoxic/oxic) WWTP with an underground structure was proposed to carry out the upgrade project. Four main new technologies were applied: (1) multi AO reaction with step-feed technology; (2) deodorization; (3) new energy-saving technology, such as a water-source heat pump and an optical fiber lighting system; (4) dependable support measures for the old WWTP's water quality during the new WWTP's construction. After construction, the upgraded WWTP had saved two thirds of the land occupation, increased treatment capacity by 80% and improved the effluent standard more than twofold. Moreover, it has become a benchmark of ecological negative capital turning into positive capital.

  19. Using HPC within an operational forecasting configuration

    Science.gov (United States)

    Jagers, H. R. A.; Genseberger, M.; van den Broek, M. A. F. H.

    2012-04-01

    Various natural disasters are caused by high-intensity events, for example: extreme rainfall can in a short time cause major damage in river catchments, storms can cause havoc in coastal areas. To assist emergency response teams in operational decisions, it's important to have reliable information and predictions as soon as possible. This starts before the event by providing early warnings about imminent risks and estimated probabilities of possible scenarios. In the context of various applications worldwide, Deltares has developed an open and highly configurable forecasting and early warning system: Delft-FEWS. Finding the right balance between simulation time (and hence prediction lead time) and simulation accuracy and detail is challenging. Model resolution may be crucial to capture certain critical physical processes. Uncertainty in forcing conditions may require running large ensembles of models; data assimilation techniques may require additional ensembles and repeated simulations. The computational demand is steadily increasing and data streams become bigger. Using HPC resources is a logical step; in different settings Delft-FEWS has been configured to take advantage of distributed computational resources available to improve and accelerate the forecasting process (e.g. Montanari et al, 2006). We will illustrate the system by means of a couple of practical applications including the real-time dynamic forecasting of wind driven waves, flow of water, and wave overtopping at dikes of Lake IJssel and neighboring lakes in the center of The Netherlands. Montanari et al., 2006. Development of an ensemble flood forecasting system for the Po river basin, First MAP D-PHASE Scientific Meeting, 6-8 November 2006, Vienna, Austria.

  20. Maintaining a Local Data Integration System in Support of Weather Forecast Operations

    Science.gov (United States)

    Watson, Leela R.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian

    2010-01-01

    Since 2000, both the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) have used a local data integration system (LDIS) as part of their forecast and warning operations. Each has benefited from 3-dimensional analyses that are delivered to forecasters every 15 minutes across the peninsula of Florida. The intent is to generate products that enhance short-range weather forecasts issued in support of NWS MLB and SMG operational requirements within East Central Florida. The current LDIS uses the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) package as its core, which integrates a wide variety of national, regional, and local observational data sets. It assimilates all available real-time data within its domain and is run at a finer spatial and temporal resolution than current national- or regional-scale analysis packages. As such, it provides local forecasters with a more comprehensive and complete understanding of evolving fine-scale weather features. Recent efforts have been undertaken to update the LDIS through the formal tasking process of NASA's Applied Meteorology Unit. The goals include upgrading LDIS with the latest version of ADAS, incorporating new sources of observational data, and making adjustments to shell scripts written to govern the system. A series of scripts run a complete modeling system consisting of the preprocessing step, the main model integration, and the post-processing step. The preprocessing step prepares the terrain, surface characteristics data sets, and the objective analysis for model initialization. Data ingested through ADAS include (but are not limited to) Level II Weather Surveillance Radar- 1988 Doppler (WSR-88D) data from six Florida radars, Geostationary Operational Environmental Satellites (GOES) visible and infrared satellite imagery, surface and upper air observations throughout Florida from NOAA's Earth System Research Laboratory/Global Systems Division

  1. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
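For readers unfamiliar with the Lee-Carter family these ten methods extend, here is a compact sketch of the basic model log m(x,t) = a_x + b_x k_t on synthetic data, using the first-order approximation (column sums of centred log rates) instead of the SVD, and a random walk with drift for k_t; all inputs are made up.

```python
import random

random.seed(3)

ages, years = 5, 20

# Hypothetical log mortality surface generated from a true Lee-Carter
# structure log m(x,t) = a_x + b_x * k_t plus noise.
a = [-6.0 + 0.8 * x for x in range(ages)]
b = [0.1 + 0.05 * x for x in range(ages)]          # sums to 1, as required
k = [10.0 - 1.0 * t for t in range(years)]
logm = [[a[x] + b[x] * k[t] + random.gauss(0.0, 0.02) for t in range(years)]
        for x in range(ages)]

# Estimation: a_x as the age-specific mean; k_t as the column sum of the
# centred rates (first-order approximation, valid under sum_x b_x = 1);
# b_x by regressing the centred rates on k_t.
a_hat = [sum(row) / years for row in logm]
k_hat = [sum(logm[x][t] - a_hat[x] for x in range(ages)) for t in range(years)]
b_hat = [sum((logm[x][t] - a_hat[x]) * k_hat[t] for t in range(years)) /
         sum(kt * kt for kt in k_hat) for x in range(ages)]

# Forecast: k_t as a random walk with drift, the standard Lee-Carter step.
drift = (k_hat[-1] - k_hat[0]) / (years - 1)
h = 10                                              # forecast horizon
k_fc = k_hat[-1] + drift * h
logm_fc = [a_hat[x] + b_hat[x] * k_fc for x in range(ages)]
print("%d-step-ahead log rates:" % h, ["%.2f" % v for v in logm_fc])
```

The Hyndman-Ullah variants compared in the paper replace this single SVD-style factor with several smoothed principal components and forecast each score series separately, which is where the gains in interval accuracy come from.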

  2. A Novel Nonlinear Combined Forecasting System for Short-Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Chengshi Tian

    2018-03-01

    Full Text Available Short-term load forecasting plays an indispensable role in electric power systems, and it is not only an extremely challenging task but also a concern for all of society due to its complex nonlinearity characteristics. However, most previous combined forecasting models were based on optimizing weight coefficients to develop a linear combined forecasting model, ignoring the fact that a linear combination only considers the contribution of the linear terms to improving the model’s performance, which leads to poor forecasting results because of the significance of the neglected and potential nonlinear terms. In this paper, a novel nonlinear combined forecasting system, which consists of three modules (an improved data pre-processing module, a forecasting module and an evaluation module), is developed for short-term load forecasting. Different from the simple data pre-processing of most previous studies, the improved data pre-processing module based on longitudinal data selection is successfully developed in this system, which further improves the effectiveness of data pre-processing and thus enhances the final forecasting performance. Furthermore, a modified support vector machine is developed to integrate all the individual predictors and obtain the final prediction, which successfully overcomes the drawbacks of the linear combined model. Moreover, an evaluation module is incorporated to perform a scientific evaluation of the developed system. Half-hourly electrical load data from New South Wales are employed to verify the effectiveness of the developed forecasting system, and the results reveal that the developed nonlinear forecasting system can be employed in dispatching and planning for smart grids.

  3. Forecasting value-at-risk and expected shortfall using fractionally integrated models of conditional volatility: international evidence

    OpenAIRE

    Degiannakis, Stavros; Floros, Christos; Dent, P.

    2013-01-01

    The present study compares the performance of the long memory FIGARCH model, with that of the short memory GARCH specification, in the forecasting of multi-period Value-at-Risk (VaR) and Expected Shortfall (ES) across 20 stock indices worldwide. The dataset is comprised of daily data covering the period from 1989 to 2009. The research addresses the question of whether or not accounting for long memory in the conditional variance specification improves the accuracy of the VaR and ES forecasts ...
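As background to the FIGARCH-versus-GARCH comparison, a one-step-ahead VaR under the short-memory GARCH(1,1) benchmark can be sketched in a few lines. For brevity this sketch simulates from known (hypothetical) parameters rather than estimating them by maximum likelihood as the study would, and uses conditional normality:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
w, a, b = 0.05, 0.08, 0.90          # hypothetical GARCH(1,1) parameters
r = np.zeros(n)
s2 = np.full(n, w / (1 - a - b))    # start at the unconditional variance
for t in range(1, n):
    s2[t] = w + a * r[t - 1] ** 2 + b * s2[t - 1]
    r[t] = np.sqrt(s2[t]) * rng.normal()

# one-step-ahead 99% VaR under conditional normality (z_0.01 ~ -2.326)
s2_next = w + a * r[-1] ** 2 + b * s2[-1]
var_99 = -2.326 * np.sqrt(s2_next)

# in-sample check: violations of the conditional VaR should be near 1%
hits = float((r[1:] < -2.326 * np.sqrt(s2[1:])).mean())
print(round(var_99, 3), round(hits, 4))
```

A FIGARCH specification would replace the single lag of squared returns with a fractionally integrated (long-memory) lag structure; the VaR step itself is unchanged.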

  4. Reconstructing latent dynamical noise for better forecasting observables

    Science.gov (United States)

    Hirata, Yoshito

    2018-03-01

    I propose a method for reconstructing multi-dimensional dynamical noise inspired by the embedding theorem of Muldoon et al. [Dyn. Stab. Syst. 13, 175 (1998)] by regarding multiple predictions as different observables. Then, applying the embedding theorem by Stark et al. [J. Nonlinear Sci. 13, 519 (2003)] for a forced system, I produce time series forecast by supplying the reconstructed past dynamical noise as auxiliary information. I demonstrate the proposed method on toy models driven by auto-regressive models or independent Gaussian noise.

  5. Electricity demand load forecasting of the Hellenic power system using an ARMA model

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.Sp. [ASPETE - School of Pedagogical and Technological Education Department of Electrical Engineering Educators N. Heraklion, 141 21 Athens (Greece); Ekonomou, L.; Chatzarakis, G.E.; Skafidas, P.D. [ASPETE-School of Pedagogical and Technological Education, Department of Electrical Engineering Educators, N. Heraklion, 141 21 Athens (Greece); Karampelas, P. [Hellenic American University, IT Department, 12 Kaplanon Str., 106 80 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24 100 Kalamata (Greece); Katsikas, S.K. [University of Piraeus, Department of Technology Education and Digital Systems, 150 Androutsou St., 18 532 Piraeus (Greece)

    2010-03-15

    Effective modeling and forecasting require the efficient use of the information contained in the available data so that essential data properties can be extracted and projected into the future. As far as electricity demand load forecasting is concerned, time series analysis has the advantage of being statistically adaptive to data characteristics, whereas econometric methods are often subject to errors and uncertainties in model specification and in knowledge of the causal variables. This paper presents a new method for electricity demand load forecasting based on multi-model partitioning theory and compares its performance with three well established time series analysis techniques, namely the Corrected Akaike Information Criterion (AICC), Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The suitability of the proposed method is illustrated through an application to actual electricity demand load of the Hellenic power system, demonstrating the reliability and effectiveness of the method and its usefulness in studies concerning electricity consumption and electricity price forecasts. (author)
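The three information criteria used as benchmarks (AIC, AICC, BIC) can be illustrated on a simple autoregressive order-selection task. The simulated AR(2) series and least-squares fitting below are illustrative stand-ins, not the paper's multi-model partitioning method:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = np.zeros(n)                      # simulate an AR(2) "demand-like" series
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

def criteria(p):
    """Least-squares AR(p) fit; returns (AIC, AICC, BIC)."""
    y = x[p:]
    X = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    m = len(y)
    sigma2 = np.mean((y - X @ coef) ** 2)
    k = p + 1                        # AR coefficients plus innovation variance
    aic = m * np.log(sigma2) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (m - k - 1)
    bic = m * np.log(sigma2) + k * np.log(m)
    return aic, aicc, bic

orders = range(1, 9)
scores = np.array([criteria(p) for p in orders])
best = [orders[j] for j in scores.argmin(axis=0)]
print(best)                          # selected order per criterion
```

AICC adds a small-sample correction to AIC, while BIC penalizes extra parameters more heavily, so on longer series it tends to choose the more parsimonious order.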

  6. Development of a fuel depletion sensitivity calculation module for multi-cell problems in a deterministic reactor physics code system CBZ

    International Nuclear Information System (INIS)

    Chiba, Go; Kawamoto, Yosuke; Narabayashi, Tadashi

    2016-01-01

    Highlights: • A new functionality of fuel depletion sensitivity calculations is developed in a code system CBZ. • This is based on the generalized perturbation theory for fuel depletion problems. • The theory with a multi-layer depletion step division scheme is described. • Numerical techniques employed in actual implementation are also provided. - Abstract: A new functionality of fuel depletion sensitivity calculations is developed as one module in a deterministic reactor physics code system CBZ. This is based on the generalized perturbation theory for fuel depletion problems. The theory for fuel depletion problems with a multi-layer depletion step division scheme is described in detail. Numerical techniques employed in actual implementation are also provided. Verification calculations are carried out for a 3 × 3 multi-cell problem consisting of two different types of fuel pins. It is shown that the sensitivities of nuclide number densities after fuel depletion with respect to the nuclear data calculated by the new module agree well with reference sensitivities calculated by direct numerical differentiation. To demonstrate the usefulness of the new module, fuel depletion sensitivities in different multi-cell arrangements are compared and non-negligible differences are observed. Nuclear data-induced uncertainties of nuclide number densities obtained with the calculated sensitivities are also compared.

  7. Long-range forecast of all India summer monsoon rainfall using adaptive neuro-fuzzy inference system: skill comparison with CFSv2 model simulation and real-time forecast for the year 2015

    Science.gov (United States)

    Chaudhuri, S.; Das, D.; Goswami, S.; Das, S. K.

    2016-11-01

    All India summer monsoon rainfall (AISMR) characteristics play a vital role in policy planning and the national economy of the country. In view of the significant impact of the monsoon system on regional as well as global climate systems, accurate prediction of summer monsoon rainfall has become a challenge. The objective of this study is to develop an adaptive neuro-fuzzy inference system (ANFIS) for long-range forecasting of AISMR. NCEP/NCAR reanalysis data of temperature and of zonal and meridional wind at different pressure levels have been taken to construct the input matrix of ANFIS. The membership of the input parameters for AISMR as high, medium or low is estimated with a trapezoidal membership function. The fuzzified standardized input parameters and the de-fuzzified target output are trained with artificial neural network models. The ANFIS forecast of AISMR is compared with a non-hybrid multi-layer perceptron (MLP) model, a radial basis function network (RBFN) and a multiple linear regression (MLR) model. Forecast error analyses reveal that ANFIS provides the best forecast of AISMR, with a minimum prediction error of 0.076, whereas the errors with the MLP, RBFN and MLR models are 0.22, 0.18 and 0.73 respectively. During validation against observations, ANFIS outperforms these comparative models. The performance of the ANFIS model is verified through different statistical skill scores, which also confirm its aptitude for forecasting AISMR. The forecast skill of ANFIS is also observed to be better than that of Climate Forecast System version 2. The real-time ANFIS forecast indicates the possibility of deficit (65-75 cm) AISMR in the year 2015.
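The trapezoidal membership function mentioned in the abstract has a standard closed form: zero outside the support, one on the plateau, linear ramps between. A minimal sketch, with entirely hypothetical breakpoints for the low/medium/high classes of a standardized predictor:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# hypothetical low/medium/high classes for a standardized circulation index
low    = lambda z: trapmf(z, -4.0, -3.0, -1.0,  0.0)
medium = lambda z: trapmf(z, -1.0,  0.0,  0.0,  1.0)   # triangular special case
high   = lambda z: trapmf(z,  0.0,  1.0,  3.0,  4.0)

print(low(-2.0), medium(0.5), high(0.5))
```

In an ANFIS the breakpoints themselves become tunable parameters, adjusted jointly with the network weights during training.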

  8. The forecaster's added value in QPF

    Science.gov (United States)

    Turco, M.; Milelli, M.

    2010-03-01

    Despite the overall improvement at the global scale, and the fact that the resolution of limited-area models has increased considerably over recent years, the QPF produced by the meteorological models involved in this study has not improved enough to allow its direct use: the subjective HQPF continues to offer the best performance for the period +24 h/+48 h (i.e. the warning period in the Piemonte system). In the forecast process, the step where humans add the largest value with respect to mathematical models is communication; the human characterization and communication of forecast uncertainty to end users cannot be replaced by any computer code. Finally, although there is no novelty in this study, we would like to show that the correct application of appropriate statistical techniques permits a better definition and quantification of the errors and, most importantly, allows correct (unbiased) communication between forecasters and decision makers.

  9. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    Science.gov (United States)

    Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.

    2018-04-01

    A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.

  10. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    Directory of Open Access Journals (Sweden)

    A. Gelfan

    2018-04-01

    Full Text Available A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April–June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.

  11. Three-dimensional data assimilation and reanalysis of radiation belt electrons: Observations over two solar cycles, and operational forecasting.

    Science.gov (United States)

    Kellerman, A. C.; Shprits, Y.; Kondrashov, D. A.; Podladchikova, T.; Drozdov, A.; Subbotin, D.; Makarevich, R. A.; Donovan, E.; Nagai, T.

    2015-12-01

    Understanding of the dynamics of Earth's radiation belts is critical to accurate modeling and forecasting of space weather conditions, both of which are important for the design and protection of our space-borne assets. In the current study, we utilize the Versatile Electron Radiation Belt (VERB) code, multi-spacecraft measurements, and a split-operator Kalman filter to reconstruct the global state of the radiation belt system in the CRRES era and the current era. The reanalysis has revealed a never-before-seen four-belt structure in the radiation belts during the March 1991 superstorm, and highlights several important aspects of the competition between the source, acceleration, loss, and transport of particles. In addition to the above, performing reanalysis in adiabatic coordinates relies on specification of the Earth's magnetic field and of the associated observational and model errors. We determine the observational errors for the Kalman filter directly from cross-spacecraft phase-space density (PSD) conjunctions, and obtain the error in VERB by comparison with reanalysis over a long time period. Specification of the errors associated with several magnetic field models provides important insight into the applicability of such models for radiation belt research. The comparison of CRRES-era reanalysis with Van Allen Probes era reanalysis allows us to perform a global comparison of the dynamics of the radiation belts during different parts of the solar cycle and during different solar cycles. The data assimilative model is presently used to perform operational forecasts of the radiation belts (http://rbm.epss.ucla.edu/realtime-forecast/).
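At a single grid point, the Kalman filter at the heart of such reanalysis reduces to the familiar predict/update cycle, in which the gain balances model-error and observation-error variances. A scalar sketch with hypothetical numbers, including the handling of epochs without a spacecraft conjunction:

```python
import numpy as np

def kalman_step(x_prev, P_prev, z_obs, Q, R):
    """One predict/update cycle for a scalar state (e.g. log PSD at a node).
    x_prev, P_prev : previous analysis state and its variance
    z_obs          : observation (np.nan when no spacecraft conjunction)
    Q, R           : model-error and observation-error variances
    """
    x_pred, P_pred = x_prev, P_prev + Q          # persistence-model forecast
    if np.isnan(z_obs):
        return x_pred, P_pred                    # no data: carry the forecast
    K = P_pred / (P_pred + R)                    # Kalman gain
    return x_pred + K * (z_obs - x_pred), (1 - K) * P_pred

truth = 5.0                                      # hypothetical true log PSD
x, P = 0.0, 10.0                                 # vague initial guess
for z in [4.8, np.nan, 5.3, 5.1, np.nan, 4.9]:
    x, P = kalman_step(x, P, z, Q=0.1, R=0.5)
print(round(x, 2), round(P, 3))
```

The record's use of cross-spacecraft PSD conjunctions corresponds to estimating R empirically, while comparing VERB against long reanalysis runs calibrates Q.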

  12. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    Science.gov (United States)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
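The core idea above, that position noise inflates the apparent velocity variance by a term proportional to 1/dt², which can be cancelled by combining measurements at several inter-frame intervals, can be sketched on synthetic data (the real method operates on PTV/PIV displacement fields; all numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_u, sigma_b = 200_000, 1.0, 0.05  # true velocity std, position-noise std
u = sigma_u * rng.normal(size=n)          # true particle velocities

def apparent_velocity_var(dt):
    """Velocity variance estimated from noisy positions separated by dt."""
    noise = sigma_b * rng.normal(size=(2, n))
    x0 = noise[0]                         # position at frame 1 (plus noise)
    x1 = u * dt + noise[1]                # position at frame 2 (plus noise)
    return np.var((x1 - x0) / dt)

dts = np.array([0.02, 0.04, 0.08, 0.16])
v = np.array([apparent_velocity_var(dt) for dt in dts])

# var_app(dt) = sigma_u^2 + 2 sigma_b^2 / dt^2 : fit a line in 1/dt^2;
# the intercept is the noise-free velocity variance
slope, intercept = np.polyfit(1.0 / dts**2, v, 1)
print(round(intercept, 3), round(slope, 5))
```

The single-dt estimate at the smallest interval is badly inflated by noise, while the multi-dt intercept recovers the true variance, mirroring the paper's improvement in velocity statistics without filtering or windowing.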

  13. Magnetogram Forecast: An All-Clear Space Weather Forecasting System

    Science.gov (United States)

    Barghouty, Nasser; Falconer, David

    2015-01-01

    Solar flares and coronal mass ejections (CMEs) are the drivers of severe space weather. Forecasting the probability of their occurrence is critical to improving space weather forecasts. The National Oceanic and Atmospheric Administration (NOAA) currently uses the McIntosh active region category system, in which each active region on the disk is assigned to one of 60 categories, and uses the historical flare rates of that category to make an initial forecast that can then be adjusted by the NOAA forecaster. Flares and CMEs are caused by the sudden release of energy from the coronal magnetic field by magnetic reconnection. It is believed that the rate of flare and CME occurrence in an active region is correlated with the free energy of the active region. While the free energy cannot be measured directly with present observations, proxies of the free energy can instead be used to characterize the relative free energy of an active region. The Magnetogram Forecast (MAG4) (output is available at the Community Coordinated Modeling Center) was conceived and designed as a data-based, all-clear forecasting system to support the operational goals of NASA's Space Radiation Analysis Group. The MAG4 system automatically downloads near-real-time line-of-sight Helioseismic and Magnetic Imager (HMI) magnetograms from the Solar Dynamics Observatory (SDO) satellite, identifies active regions on the solar disk, measures a free-energy proxy, and then applies forecasting curves to convert the free-energy proxy into predicted event rates for X-class flares, M- and X-class flares, CMEs, fast CMEs, and solar energetic particle events (SPEs). The forecast curves themselves are derived from a sample of 40,000 magnetograms from 1,300 active region samples observed by the Solar and Heliospheric Observatory Michelson Doppler Imager. Figure 1 is an example of MAG4 visual output.

  14. Medium Range Forecasts Representation (and Long Range Forecasts?)

    Science.gov (United States)

    Vincendon, J.-C.

    2009-09-01

    The progress of numerical forecasting prompts interest in ever more distant ranges, so we now supply more and more forecasts several days ahead. Nevertheless, precautions are necessary to provide the most reliable and relevant information possible. Whether delivered in a TV bulletin or on any other medium (Internet, mobile phone), the interpretation and representation of a medium-range forecast (5-15 days) must differ from those of a short-range forecast. Indeed, the predictability of a meteorological phenomenon decreases gradually with range, and it decreases all the more quickly when the phenomenon is of small scale. After a few days, the probabilistic character of a forecast therefore becomes widely dominant. That is why, for around ten years, the Meteo-France forecasts for D+4 to D+7 have been accompanied by a confidence index: a figure between 1 and 5, where the closer it is to 5, the greater the confidence in the supplied forecast. In practice, one index is supplied for the period D+4/D+5 and another for D+6/D+7, and each day can receive a different forecast and be represented independently. We thus supply a global 24-hour tendency, with less and less precise symbols as the range increases. Concrete examples will be presented. For two years now, we have also published forecasts for D+8/D+9, accompanied by an indication of confidence ("good reliability" or "to be confirmed"). These two days are grouped together on a single map because, at this range, the described tendency is relevant over a duration of about 48 hours and at a spatial scale slightly larger than the synoptic scale. We therefore avoid producing more than two zones of weather types over France and restrict ourselves to giving an evolution for the temperatures (steady, rising or falling). Newspapers have begun to publish this information, and televisions should follow soon. It is particularly

  15. Spatial Analytic Hierarchy Process Model for Flood Forecasting: An Integrated Approach

    International Nuclear Information System (INIS)

    Matori, Abd Nasir; Yusof, Khamaruzaman Wan; Hashim, Mustafa Ahmad; Lawal, Dano Umar; Balogun, Abdul-Lateef

    2014-01-01

    Various flood-influencing factors such as rainfall, geology, slope gradient, land use, soil type, drainage density and temperature are generally considered in flood hazard assessment. However, the lack of appropriate handling and integration of data from different sources is a challenge that can make any spatial forecast difficult and inaccurate. The availability of accurate flood maps and a thorough understanding of the subsurface conditions can considerably enhance flood disaster management. This study presents an approach that attempts to address this drawback by using a Geographic Information System (GIS)-based Analytic Hierarchy Process (AHP) model as a spatial forecasting tool. In pursuit of these objectives, a spatial forecast of flood-susceptible zones in the study area was made. Five criteria/factors believed to influence flood generation in the study area were selected. Priority weights were assigned to each criterion/factor based on Saaty's nine-point scale of preference, and the weights were then normalized through the AHP. The model was integrated into a GIS in order to produce a flood forecasting map.
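The AHP weighting step described above, pairwise comparisons on Saaty's 1-9 scale, principal-eigenvector priority weights, and a consistency check, can be sketched as follows. The 5 × 5 comparison matrix is hypothetical, not the study's:

```python
import numpy as np

# hypothetical pairwise comparison matrix for five flood factors
# (rainfall, slope, land use, soil, drainage density) on Saaty's 1-9 scale
A = np.array([
    [1,   3,   5,   5,   7],
    [1/3, 1,   3,   3,   5],
    [1/5, 1/3, 1,   1,   3],
    [1/5, 1/3, 1,   1,   3],
    [1/7, 1/5, 1/3, 1/3, 1],
], dtype=float)

# priority weights: principal eigenvector, normalized to sum to one
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# consistency ratio CR = CI / RI; judgments are usually accepted for CR < 0.10
n = len(A)
CI = (eigvals.real[k] - n) / (n - 1)
RI = 1.12                          # Saaty's random index for n = 5
cr = CI / RI
print(np.round(w, 3), round(cr, 3))
```

The normalized weights `w` are then applied to the rasterized criterion layers in the GIS overlay to score flood susceptibility per cell.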

  16. The strategy of professional forecasting

    DEFF Research Database (Denmark)

    Ottaviani, Marco; Sørensen, Peter Norman

    2006-01-01

    We develop and compare two theories of professional forecasters' strategic behavior. The first theory, reputational cheap talk, posits that forecasters endeavor to convince the market that they are well informed. The market evaluates their forecasting talent on the basis of the forecasts and the realized state. If the market expects forecasters to report their posterior expectations honestly, then forecasts are shaded toward the prior mean. With correct market expectations, equilibrium forecasts are imprecise but not shaded. The second theory posits that forecasters compete in a forecasting contest with pre-specified rules. In a winner-take-all contest, equilibrium forecasts are excessively differentiated...

  17. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Coimbra, Carlos F. M. [Univ. of California, San Diego, CA (United States)]

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We model the SMUD service region as its own balancing region and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing an appropriate amount of energy resources and reserves, and to provide operators with a prediction of the generation fleet's behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis of the expected solar variability per region for the SMUD system; day-ahead (DA) and real-time (RT) load forecasts for the entire service area; 1 year of intra-hour CPR forecasts for cluster centers; 1 year of smart re-forecasting CPR forecasts in real time for determination of irreducible errors; and uncertainty quantification of the integrated solar load for both distributed and central-station (selected locations within the service region) PV generation.

  18. Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Baker, Kyri; Summers, Tyler

    2016-12-01

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
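The distribution-free approximation of a chance constraint mentioned above can be contrasted with the Gaussian case in a few lines. For an upper voltage limit with forecast-error standard deviation sigma and violation probability eps (all numbers hypothetical), the deterministic surrogate tightens the limit by a margin that depends on the assumed error distribution:

```python
from math import sqrt
from statistics import NormalDist

sigma, eps = 0.01, 0.05        # forecast-error std (p.u.) and violation prob.
v_max = 1.05                   # hypothetical upper voltage limit, per unit

z_gauss = NormalDist().inv_cdf(1 - eps)          # Gaussian quantile
margin_gauss = z_gauss * sigma
margin_free = sqrt((1 - eps) / eps) * sigma      # Chebyshev-type, any distribution

# enforce the mean voltage below the tightened (deterministic) limit
print(round(v_max - margin_gauss, 4), round(v_max - margin_free, 4))
```

The distribution-free margin is substantially larger, which is exactly the conservatism the abstract refers to: the resulting bound holds for arbitrary forecast-error distributions at the price of a tighter operating region.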

  19. Mutual Information-Based Inputs Selection for Electric Load Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Nenad Floranović

    2013-02-01

    Full Text Available Providing accurate load forecasts to electric utility corporations is essential in order to reduce their operational costs and increase profits. Hence, training set selection is an important preprocessing step which has to be considered in practice in order to increase the accuracy of load forecasts. The use of mutual information (MI) has recently been proposed in regression tasks, mostly for feature selection and for identifying the real instances in training sets that contain noise and outliers. This paper proposes a methodology for training set selection in a least squares support vector machines (LS-SVMs) load forecasting model. A new application of the concept of MI is presented for the selection of a training set based on MI computed between initial training set instances and testing set instances. Accordingly, several LS-SVM models have been trained, based on the proposed methodology, for hourly prediction of electric load one day ahead. The results obtained from a real-world data set indicate that the proposed method increases the accuracy of load forecasting as well as reducing the size of the initial training set needed for model training.
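A basic histogram estimate of mutual information, in the spirit of the MI-based selection described above (the paper's exact estimator is not specified in the record), can be sketched as:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of MI (in nats) between two 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
load = rng.normal(size=5000)                     # stand-in target series
informative = load + 0.3 * rng.normal(size=5000) # e.g. same hour yesterday
irrelevant = rng.normal(size=5000)               # unrelated candidate input
print(mutual_information(informative, load) > mutual_information(irrelevant, load))
```

Ranking candidate inputs (or candidate training instances) by such an MI score against the target is the selection principle the record describes; the LS-SVM is then trained only on the retained subset.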

  20. Short-Term Forecasting of Electric Loads Using Nonlinear Autoregressive Artificial Neural Networks with Exogenous Vector Inputs

    Directory of Open Access Journals (Sweden)

    Jaime Buitrago

    2017-01-01

    Full Text Available Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN) with exogenous multi-variable input (NARX). The proposed implementation of the network is new: the neural network is trained in open loop using actual load and weather data, and then the network is placed in closed loop to generate a forecast using the predicted load as the feedback input. Unlike existing short-term load forecasting methods using ANNs, the proposed method uses its own output as an input in order to improve the accuracy, thus effectively implementing a feedback loop for the load and making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast on the order of 1% have been achieved, a 30% improvement on the average error obtained with feedforward ANNs, ARMAX and state-space methods, which can result in large savings by avoiding the commissioning of unnecessary power plants. The New England electrical load data are used to train and validate the forecast prediction.
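The open-loop-training/closed-loop-forecasting scheme described above can be sketched with a linear autoregressive-with-exogenous-input model standing in for the neural network. The synthetic load and temperature series and all parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 1500
hour = np.arange(T) % 24
temp = 10 + 8 * np.sin(2 * np.pi * hour / 24)        # exogenous input
load = np.zeros(T)                                   # synthetic hourly load
for t in range(1, T):
    load[t] = (40 + 0.6 * load[t - 1] + 2.0 * temp[t]
               + 10 * np.sin(2 * np.pi * hour[t] / 24)
               + rng.normal(scale=0.3))

def phi(l_prev, tmp):
    return np.array([1.0, l_prev, tmp])              # bias, load lag, exogenous

# open-loop training (teacher forcing): the true lagged load is the input
split = T - 24
X = np.array([phi(load[t - 1], temp[t]) for t in range(1, split)])
y = load[1:split]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# closed loop: forecast the last 24 h, feeding predictions back as the lag
l_prev, preds = load[split - 1], []
for t in range(split, T):
    l_hat = float(phi(l_prev, temp[t]) @ beta)
    preds.append(l_hat)
    l_prev = l_hat                                   # output becomes input

mape = float(np.mean(np.abs((np.array(preds) - load[split:]) / load[split:])) * 100)
print(round(mape, 2))
```

Replacing the least-squares fit with a trained ANN gives the NARX scheme of the record: identical open-loop/closed-loop mechanics, but a nonlinear map from the feedback and exogenous inputs to the next-hour load.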