Earthquake number forecasts testing
Kagan, Yan Y.
2017-10-01
and kurtosis both tend to zero for large earthquake rates: for the Gaussian law, these values are identically zero. A calculation of NBD skewness and kurtosis based on the values of the first two statistical moments of the distribution shows a rapid increase in these higher moments. However, the observed catalogue values of skewness and kurtosis rise even faster. This means that for small time intervals the earthquake number distribution is even more heavy-tailed than the NBD predicts. Therefore, for small time intervals, we propose using appropriately smoothed empirical number distributions to test forecast earthquake numbers.
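A minimal sketch (mine, not the paper's code) of the moment calculation described above: recover the NBD shape parameters from a given mean and variance, then evaluate the implied skewness and excess kurtosis. The parametrization and the example rates are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): skewness and excess
# kurtosis of a negative binomial distribution (NBD) recovered from its
# first two moments. Parametrization: r > 0 "successes" and probability p,
# with mean = r(1-p)/p and variance = r(1-p)/p**2, so variance > mean.

def nbd_shape_from_moments(mean, variance):
    """Return (r, p) of the NBD matching the given mean and variance."""
    if variance <= mean:
        raise ValueError("NBD requires variance > mean (overdispersion)")
    p = mean / variance
    r = mean ** 2 / (variance - mean)
    return r, p

def nbd_skewness_kurtosis(mean, variance):
    """Skewness and excess kurtosis implied by the first two moments."""
    r, p = nbd_shape_from_moments(mean, variance)
    skew = (2.0 - p) / (r * (1.0 - p)) ** 0.5
    ex_kurt = 6.0 / r + p ** 2 / (r * (1.0 - p))
    return skew, ex_kurt

# For a fixed overdispersion ratio, both moments shrink toward the Gaussian
# limit (zero) as the rate grows -- the behaviour described in the abstract.
for rate in (1.0, 10.0, 100.0):
    print(rate, nbd_skewness_kurtosis(rate, 2.0 * rate))
```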
Web-Based Real Time Earthquake Forecasting and Personal Risk Management
Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.
2012-12-01
Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and
Interevent times in a new alarm-based earthquake forecasting model
Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed
2013-09-01
This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss rate and the alarm rate. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
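A minimal sketch (an assumption of mine, not the authors' code) of the MR statistic as defined above, i.e. the inverse of the index of dispersion of interevent times; the ERS sampling and spatial mapping steps are omitted.

```python
# Minimal sketch (assumption): the moment ratio (MR) alarm index described
# above, computed as the inverse of the index of dispersion (Fano factor)
# of earthquake interevent times.

def moment_ratio(interevent_times):
    """MR = mean / variance of interevent times (inverse Fano factor)."""
    n = len(interevent_times)
    mean = sum(interevent_times) / n
    var = sum((t - mean) ** 2 for t in interevent_times) / n
    return mean / var

# Clustered seismicity inflates the variance of interevent times relative
# to a steady background, so lower MR scores indicate a smaller background
# fraction in the sampled window.
print(moment_ratio([12.0, 3.5, 40.2, 7.1, 22.8]))  # times in days
```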
Earthquake forecasting and warning
Energy Technology Data Exchange (ETDEWEB)
Rikitake, T.
1983-01-01
This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings of precursory phenomena are included. A summary of circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all magnitudes ≥ 7.0) in China and the 1978 Izu-Oshima earthquake in Japan is presented. This book fails to comprehensively summarize recent advances in earthquake prediction research.
Short- and Long-Term Earthquake Forecasts Based on Statistical Models
Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner
2017-04-01
The epidemic-type aftershock sequence (ETAS) models have been experimentally used to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and for the 2012 Emilia earthquake sequence. These forecasts represented the first two pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake, the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy, which was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the Seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. The communication to the public is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy before opening the information to the public. With regard to long-term time-dependent earthquake forecasting, the application of a newly developed simulation algorithm to the Calabria region provided typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.
Measuring the effectiveness of earthquake forecasting in insurance strategies
Mignan, A.; Muir-Wood, R.
2009-04-01
Given the difficulty of judging whether the skill of a particular earthquake forecasting methodology is offset by the inevitable false alarms and missed predictions, it is important to find a means to weigh the successes and failures according to a common currency. Rather than judge subjectively the relative costs and benefits of predictions, we develop a simple method to determine whether the use of earthquake forecasts can increase the profitability of active financial risk management strategies employed in standard insurance procedures. Three types of risk management transactions are employed: (1) insurance underwriting, (2) reinsurance purchasing and (3) investment in CAT bonds. In each case, premiums are collected based on modelled technical risk costs, and losses are modelled for the portfolio in force at the time of the earthquake. A set of predetermined actions follows from the announcement of any change in earthquake hazard, so that, for each earthquake forecaster, the financial performance of an active risk management strategy can be compared with the equivalent passive strategy, in which no notice is taken of earthquake forecasts. Overall performance can be tracked through time to determine which strategy gives the best long-term financial performance. This will be determined by whether the skill in forecasting the location and timing of a significant earthquake (where loss is avoided) is outweighed by false predictions (when no premium is collected). This methodology is to be tested in California, where catastrophe modeling is reasonably mature and where a number of researchers issue earthquake forecasts.
A prospective earthquake forecast experiment in the western Pacific
Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan
2012-09-01
Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
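One of the likelihood-based consistency checks mentioned here can be sketched simply: a CSEP-style number test ("N-test") comparing the forecast total rate with the observed count through Poisson tail probabilities. This is a minimal sketch under my own naming; the example numbers are made up.

```python
# Minimal sketch (assumption): a CSEP-style number test. delta1 is small
# when far more events occurred than forecast, delta2 is small when far
# fewer occurred; either flags an inconsistent forecast.
from math import exp

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam); 0 for n < 0."""
    if n < 0:
        return 0.0
    term = total = exp(-lam)
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def n_test(n_observed, lam_forecast):
    delta1 = 1.0 - poisson_cdf(n_observed - 1, lam_forecast)  # too many?
    delta2 = poisson_cdf(n_observed, lam_forecast)            # too few?
    return delta1, delta2

print(n_test(12, lam_forecast=7.5))  # illustrative numbers only
```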
An interdisciplinary approach for earthquake modelling and forecasting
Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.
2016-12-01
Earthquakes are among the most serious disasters, and they may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the process at present is controlled by the events themselves (self-exciting) and by all external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
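To make the combined self-/mutually exciting idea concrete, here is a minimal sketch of such a conditional intensity; the exponential kernels and every parameter value are illustrative assumptions, not the Ogata-Utsu formulation itself.

```python
# Minimal sketch (assumption): lambda(t) = background + self-excitation by
# past earthquakes + external excitation by past non-seismic signals.
from math import exp

def intensity(t, quake_times, signal_times,
              mu=0.1,             # background rate -- illustrative
              a=0.5, beta=1.0,    # self-exciting kernel a*beta*exp(-beta*u)
              b=0.2, gamma=0.5):  # external kernel   b*gamma*exp(-gamma*u)
    self_term = sum(a * beta * exp(-beta * (t - ti))
                    for ti in quake_times if ti < t)
    ext_term = sum(b * gamma * exp(-gamma * (t - sj))
                   for sj in signal_times if sj < t)
    return mu + self_term + ext_term

print(intensity(10.0, quake_times=[2.0, 8.5], signal_times=[9.0]))
```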
Modeling, Forecasting and Mitigating Extreme Earthquakes
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow the study of extreme events and of the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).
Segou, Margarita
2016-01-01
I perform a retrospective forecast experiment in the most rapid extensive continental rift worldwide, the western Corinth Gulf (wCG, Greece), aiming to predict shallow seismicity (depth statistics, four physics-based (CRS) models, combining static stress change estimations and the rate-and-state laboratory law, and one hybrid model. For the latter models, I incorporate the stress changes imparted by 31 earthquakes with magnitude M ≥ 4.5 in the extended area of wCG. Special attention is given to the 3-D representation of active faults, acting as potential receiver planes for the estimation of static stress changes. I use reference seismicity between 1990 and 1995, corresponding to the learning phase of the physics-based models, and I evaluate the forecasts for six months following the 1995 M = 6.4 Aigio earthquake using log-likelihood performance metrics. For the ETAS realizations, I use seismic events with magnitude M ≥ 2.5 within daily update intervals to enhance their predictive power. To assess the role of background seismicity, I implement a stochastic reconstruction (aka declustering), aiming to answer whether M > 4.5 earthquakes correspond to spontaneous events and to identify, if possible, different triggering characteristics between aftershock sequences and swarm-type seismicity periods. I find that: (1) ETAS models outperform CRS models in most time intervals, achieving a very low rejection ratio RN = 6 per cent when I test their efficiency to forecast the total number of events inside the study area, (2) the best rejection ratio for CRS models reaches RN = 17 per cent when I use varying target depths and receiver plane geometry, (3) 75 per cent of the 1995 Aigio aftershocks that occurred within the first month can be explained by static stress changes, (4) highly variable performance of both statistical and physical models is suggested by large confidence intervals of information gain per earthquake and (5) generic ETAS models can adequately
García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón
2016-05-01
Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus, governmental decision-makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates fluctuations of the Gutenberg-Richter distribution parameter into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). Since then, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
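The MRT algorithm itself is not reproduced in the abstract, but the Gutenberg-Richter parameter it tracks is standard; below is a minimal sketch of Aki's maximum-likelihood b-value estimator (with Utsu's binning correction), the kind of distribution parameter whose fluctuations such an algorithm could translate into alarm windows. The sample magnitudes are invented.

```python
# Minimal sketch (assumption): maximum-likelihood b-value of the
# Gutenberg-Richter law above a completeness magnitude m_c, with the
# half-bin correction for magnitudes binned at width dm.
from math import e, log10, sqrt

def b_value(magnitudes, m_c, dm=0.1):
    """Return the b-value estimate and a rough standard error (b/sqrt(N))."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    b = log10(e) / (mean_m - (m_c - dm / 2.0))
    return b, b / sqrt(len(mags))

# Tracking b in sliding windows of a swarm would expose the fluctuations
# that an MRT-style scheme turns into increased-probability time windows.
print(b_value([2.1, 2.3, 2.2, 2.8, 2.4, 3.1, 2.6, 2.2], m_c=2.0))
```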
Adaptively smoothed seismicity earthquake forecasts for Italy
Directory of Open Access Journals (Sweden)
Yan Y. Kagan
2010-11-01
We present a model for estimation of the probabilities of future earthquakes of magnitudes m ≥ 4.95 in Italy. This model is a modified version of that proposed for California, USA, by Helmstetter et al. [2007] and Werner et al. [2010a], and it approximates seismicity using a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We have estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog, and a longer instrumental and historic catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and reliable, we used small earthquakes of m ≥ 2.95 to reveal active fault structures and probable future epicenters. By calibrating the model with these two catalogs of different durations to create two forecasts, we intend to quantify the loss (or gain) of predictability incurred when only a short, but recent, data record is available. Both forecasts were scaled to five and ten years, and have been submitted to the Italian prospective forecasting experiment of the global Collaboratory for the Study of Earthquake Predictability (CSEP). An earlier forecast from the model was submitted by Helmstetter et al. [2007] to the Regional Earthquake Likelihood Model (RELM) experiment in California, and with more than half of the five-year experimental period over, the forecast has performed better than the others.
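As an illustration of the adaptive smoothing idea, here is a minimal sketch in which each epicenter's Gaussian kernel bandwidth is its distance to the k-th nearest neighbouring epicenter, so dense zones are smoothed tightly and sparse zones broadly; the kernel shape, k, the floor bandwidth and the toy coordinates are my assumptions, not the paper's calibrated choices.

```python
# Minimal sketch (assumption): adaptively smoothed epicenter density.
from math import exp, pi, hypot

def adaptive_bandwidths(events, k=3, h_min=0.5):
    """Per-event bandwidth: distance (km) to the k-th nearest neighbour."""
    bands = []
    for i, (x, y) in enumerate(events):
        dists = sorted(hypot(x - xo, y - yo)
                       for j, (xo, yo) in enumerate(events) if j != i)
        bands.append(max(dists[k - 1], h_min))
    return bands

def rate_density(x, y, events, bands):
    """Smoothed event density at (x, y), in events per km^2."""
    return sum(exp(-((x - xo) ** 2 + (y - yo) ** 2) / (2.0 * h * h))
               / (2.0 * pi * h * h)
               for (xo, yo), h in zip(events, bands))

events = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.4), (10.0, 10.0)]
print(rate_density(0.5, 0.2, events, adaptive_bandwidths(events, k=2)))
```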
Gambling scores for earthquake predictions and forecasts
Zhuang, Jiancang
2010-04-01
This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
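A minimal sketch of the bookkeeping described above for the discrete (binary alarm) case; the fair-odds rule follows the abstract's description, while the stakes and reference probabilities are illustrative assumptions.

```python
# Minimal sketch (assumption): gambling-score bookkeeping. Staking r
# reputation points on an alarm whose reference-model probability is p0
# returns r*(1-p0)/p0 on success (fair odds: expected gain is zero under
# the reference model) and loses r on failure.

def settle_bet(stake, p0, event_occurred):
    if event_occurred:
        return stake * (1.0 - p0) / p0
    return -stake

reputation = 100.0
for stake, p0, hit in [(10, 0.05, True), (10, 0.05, False), (5, 0.2, False)]:
    reputation += settle_bet(stake, p0, hit)
print(reputation)  # above 100: the forecaster beat the reference model
```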
Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)
Jordan, T. H.
2010-12-01
The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and
The Value, Protocols, and Scientific Ethics of Earthquake Forecasting
Jordan, Thomas H.
2013-04-01
Earthquakes are different from other common natural hazards because precursory signals diagnostic of the magnitude, location, and time of impending seismic events have not yet been found. Consequently, the short-term, localized prediction of large earthquakes at high probabilities with low error rates (false alarms and failures-to-predict) is not yet feasible. An alternative is short-term probabilistic forecasting based on empirical statistical models of seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains up to 1000 relative to long-term forecasts. The value of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing operational forecasting protocols in this sort of "low-probability environment." This paper will explore the complex interrelations among the valuation of low-probability earthquake forecasting, which must account for social intangibles; the protocols of operational forecasting, which must factor in large uncertainties; and the ethics that guide scientists as participants in the forecasting process, who must honor scientific principles without doing harm. Earthquake forecasts possess no intrinsic societal value; rather, they acquire value through their ability to influence decisions made by users seeking to mitigate seismic risk and improve community resilience to earthquake disasters. According to the recommendations of the International Commission on Earthquake Forecasting (www.annalsofgeophysics.eu/index.php/annals/article/view/5350), operational forecasting systems should appropriately separate the hazard-estimation role of scientists from the decision-making role of civil protection authorities and individuals. They should
Retrospective stress-forecasting of earthquakes
Gao, Yuan; Crampin, Stuart
2015-04-01
Observations of changes in azimuthally varying shear-wave splitting (SWS) above swarms of small earthquakes monitor stress-induced changes to the stress-aligned vertical microcracks pervading the upper crust, lower crust, and uppermost ~400 km of the mantle. (The microcracks are intergranular films of hydrolysed melt in the mantle.) Earthquakes release stress, and an appropriate amount of stress for the relevant magnitude must accumulate before each event. Iceland is on an extension of the Mid-Atlantic Ridge, where two transform zones uniquely run onshore. These onshore transform zones provide semi-continuous swarms of small earthquakes, the only place worldwide where SWS can be routinely monitored. Elsewhere SWS must be monitored above temporally active occasional swarms of small earthquakes, or in infrequent SKS and other teleseismic reflections from the mantle. Observations of changes in SWS time-delays are attributed to stress-induced changes in crack aspect-ratios, allowing stress-accumulation and stress-relaxation to be identified. Monitoring SWS in SW Iceland in 1988, stress-accumulation before an impending earthquake was recognised, and emails were exchanged between the University of Edinburgh (EU) and the Iceland Meteorological Office (IMO). On 10 November 1988, EU emailed IMO that a M5 earthquake could occur soon on a seismically active fault plane where seismicity was still continuing following a M5.1 earthquake six months earlier. Three days later, IMO emailed EU that a M5 earthquake had just occurred on the specified fault plane. We suggest this is a successful earthquake stress-forecast, where we refer to the procedure as stress-forecasting earthquakes, as opposed to predicting or forecasting, to emphasise the different formalism. Lack of funds has prevented us from monitoring SWS on Iceland seismograms; however, we have identified similar characteristic behaviour of SWS time-delays above swarms of small earthquakes which have enabled us to
International Aftershock Forecasting: Lessons from the Gorkha Earthquake
Michael, A. J.; Blanpied, M. L.; Brady, S. R.; van der Elst, N.; Hardebeck, J.; Mayberry, G. C.; Page, M. T.; Smoczyk, G. M.; Wein, A. M.
2015-12-01
Following the M7.8 Gorkha, Nepal, earthquake of April 25, 2015, the USGS issued a series of aftershock forecasts. The initial impetus for these forecasts was a request from the USAID Office of US Foreign Disaster Assistance to support their Disaster Assistance Response Team (DART), which coordinated US Government disaster response, including search and rescue, with the Government of Nepal. Because of the possible utility of the forecasts to people in the region and other response teams, the USGS released these forecasts publicly through the USGS Earthquake Program web site. The initial forecast used the Reasenberg and Jones (Science, 1989) model with generic parameters developed for active deep continental regions based on the Garcia et al. (BSSA, 2012) tectonic regionalization. These were then updated to reflect a lower productivity and higher decay rate based on the observed aftershocks, although reliance on teleseismic observations, with a high magnitude of completeness, limited the amount of data. After the 12 May M7.3 aftershock, the forecasts used an Epidemic Type Aftershock Sequence model to better characterize the multiple sources of earthquake clustering. This model provided better estimates of aftershock uncertainty. The forecast messages were crafted based on lessons learned from the Christchurch earthquake, along with input from the U.S. Embassy staff in Kathmandu. Challenges included how to balance simple messaging with forecasts over a variety of time periods (week, month, and year), whether to characterize probabilities with words such as those suggested by the IPCC (IPCC, 2010), how to word the messages in a way that would translate accurately into Nepali and not alarm the public, and how to present the probabilities of unlikely but possible large and potentially damaging aftershocks, such as the M7.3 event, which had an estimated probability of only 1-in-200 for the week in which it occurred.
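The initial forecasts were based on the Reasenberg and Jones (1989) model; below is a minimal sketch of that model's aftershock rate and window probability. The parameter values shown are common generic placeholders, not the Garcia et al. regionalized values or the Nepal-updated ones.

```python
# Minimal sketch (assumption): Reasenberg & Jones (1989) aftershock model.
# Rate of aftershocks with magnitude >= m at time t (days) after a
# mainshock of magnitude M:
#   lambda(t, m) = 10**(a + b*(M - m)) / (t + c)**p,
# and P(>= 1 such event in [t1, t2]) = 1 - exp(-integral of lambda).
from math import exp, log

def prob_aftershock(m, M, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Probability of at least one M >= m aftershock in day window [t1, t2]."""
    k = 10.0 ** (a + b * (M - m))
    if abs(p - 1.0) < 1e-9:
        integral = k * log((t2 + c) / (t1 + c))
    else:
        integral = k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return 1.0 - exp(-integral)

# Illustrative only: a week-two window after an M7.8 mainshock.
print(prob_aftershock(m=7.3, M=7.8, t1=7.0, t2=14.0))
```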
Directory of Open Access Journals (Sweden)
Chien-Chih Chen
2006-01-01
Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the M 7.3 1999 Chi-Chi, Taiwan, earthquake. We show that a previously proposed forecast method that is based on evaluating changes in seismic intensity on a regional basis is superior to a forecast based only on the magnitude of seismic intensity in the same region. Our results confirm earlier suggestions that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous activation or quiescence, and that signatures of these processes can be detected in seismicity data using appropriate methods.
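A minimal sketch of the contingency-table scores that underlie such ROC diagrams, with one alarm/outcome pair per spatial cell; the helper name and toy data are illustrative assumptions.

```python
# Minimal sketch (assumption): contingency-table scores for binary
# alarm-based forecasts evaluated cell by cell.

def contingency_scores(alarms, events):
    """alarms, events: parallel booleans, one pair per spatial cell."""
    a = sum(f and o for f, o in zip(alarms, events))          # hits
    b = sum(f and not o for f, o in zip(alarms, events))      # false alarms
    c = sum(not f and o for f, o in zip(alarms, events))      # misses
    d = sum(not f and not o for f, o in zip(alarms, events))  # correct negatives
    hit_rate = a / (a + c)
    false_alarm_rate = b / (b + d)
    return hit_rate, false_alarm_rate

# Sweeping the alarm threshold over a continuous score (e.g. a seismic
# intensity change) traces the ROC curve of (false_alarm_rate, hit_rate).
print(contingency_scores([True, True, False, False],
                         [True, False, True, False]))
```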
Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.
2012-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including the sea area, the Japanese mainland and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented. These results provide new knowledge concerning statistical forecasting models. We have started a study for constructing a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity of the area ranges from the shallower part to a depth of 80 km due to the subducting Philippine Sea plate and Pacific plate, we need to study the effect of the depth distribution. We will develop models for forecasting based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0 as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HIST-ETAS models (Ogata, 2011) to see whether those models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP
Hagiwara, Yoshinori; Motosaka, Masato; Mitsuji, Kazuya; Nobata, Arihide (Obayashi Corporation Technical Research Institute; Graduate School of Engineering, Tohoku University; Faculty of Education, Art and Science, Yamagata University)
2011-01-01
The Japan Meteorological Agency (JMA) has provided Earthquake Early Warning (EEW) to advanced users since August 1, 2006. Advanced EEW users can forecast seismic ground motion (e.g., seismic intensity, peak ground acceleration) from the earthquake information in the EEW, but there are limits to the accuracy and the earliness of the forecast. This paper describes a regression equation to decrease the error and increase the rapidity of forecasting ground motion parameters from Real Time Earth...
Parsons, Thomas E.; Segou, Margaret; Sevilgen, Volkan; Milner, Kevin; Field, Edward; Toda, Shinji; Stein, Ross S.
2014-01-01
We calculate stress changes resulting from the M = 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.
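A minimal sketch of the Coulomb failure stress change that methods (1) and (2) resolve on receiver faults; the effective friction coefficient, the triggering-threshold remark, and the example stress values are illustrative assumptions.

```python
# Minimal sketch (assumption): Coulomb failure stress change on a receiver
# fault, dCFS = d_tau + mu_eff * d_sigma_n, with d_tau the shear stress
# change in the slip direction, d_sigma_n the normal stress change
# (unclamping positive), and mu_eff the effective friction coefficient.

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    """dCFS in MPa; positive values bring the receiver closer to failure."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# A common (though debated) working threshold treats |dCFS| >= 0.01 MPa
# (0.1 bar) as potentially significant for triggering.
print(coulomb_stress_change(0.05, -0.02))  # 0.042 MPa
```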
yoder, M. R.; Rundle, J. B.; Turcotte, D. L.
2012-12-01
The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or "1/f", nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this "1/f problem," it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitude of earthquake the region is expected to experience. From this, we introduce a new type of time dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f type, problems can be constrained from scaling relations and finite extents. [Figure caption: record-breaking hazard map of southern California, 2012-08-06; "warm" colors indicate local acceleration (elevated hazard).]
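One ingredient of the catalog-partitioning framework sketched above can be written down directly: under pure Gutenberg-Richter scaling, the number of small events observed in a region over some period constrains the magnitude the region is "sized" for on that timescale. The completeness magnitude and b-value below are illustrative assumptions, not the authors' calibration.

```python
# Minimal sketch (assumption): Gutenberg-Richter link between catalog size
# and local magnitude potential: among N events with magnitude >= m_c and
# b-value b, the expected largest magnitude is roughly m_c + log10(N)/b.
from math import log10

def expected_max_magnitude(n_events, m_c=2.5, b=1.0):
    return m_c + log10(n_events) / b

# A region producing ~3,000 M >= 2.5 events per decade is "sized" for
# roughly one M ~ 6 event per decade under pure GR scaling.
print(expected_max_magnitude(3000))  # ~6.0
```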
Directory of Open Access Journals (Sweden)
Jiancang Zhuang
2012-07-01
Based on the ETAS (epidemic-type aftershock sequence) model, which is used for describing the features of short-term clustering of earthquake occurrence, this paper presents some theories and techniques related to evaluating the probability distribution of the maximum magnitude in a given space-time window, where the Gutenberg-Richter law for earthquake magnitude distribution cannot be directly applied. It is seen that the distribution of the maximum magnitude in a given space-time volume is determined in the long term by the background seismicity rate and the magnitude distribution of the largest events in each earthquake cluster. The techniques introduced were applied to the seismicity in the Japan region in the period from 1926 to 2009. It was found that the regions most likely to have big earthquakes are along the Tohoku (northeastern Japan) Arc and the Kuril Arc, both with much higher probabilities than the offshore Nankai and Tokai regions.
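For contrast with the ETAS-based theory summarized above, the Poisson baseline for the maximum-magnitude distribution can be sketched in a few lines; the rate, completeness and b-value are illustrative assumptions, and the clustering corrections that are the paper's actual subject are omitted.

```python
# Minimal sketch (assumption): with Poisson event counts of rate Lambda
# above completeness m_c and an exponential (GR) magnitude law,
#   P(M_max <= m) = exp(-Lambda * 10**(-b*(m - m_c))).
from math import exp

def prob_max_below(m, rate, m_c=4.0, b=1.0):
    tail = 10.0 ** (-b * (m - m_c))  # P(one event exceeds m | M >= m_c)
    return exp(-rate * tail)

# Probability of at least one M >= 7 event among 500 expected M >= 4 events:
print(1.0 - prob_max_below(7.0, rate=500.0))  # ~0.39
```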
Method for forecasting an earthquake from precursor signals
International Nuclear Information System (INIS)
Farnworth, D.F.
1996-01-01
A method for forecasting an earthquake from precursor signals by employing characteristic first electromagnetic signals, second, seismically induced electromagnetic signals, seismically induced mechanical signals, and infrasonic acoustic signals which have been observed to precede an earthquake. From a first electromagnetic signal, a magnitude, depth beneath the surface of the earth, distance, latitude, longitude, and first and second forecasts of the time of occurrence of the impending earthquake may be derived. From a second, seismically induced electromagnetic signal and the mechanical signal, third and fourth forecasts of the time of occurrence of an impending earthquake determined from the analysis above, a magnitude, depth beneath the surface of the earth and fourth and fifth forecasts of the time of occurrence of the impending earthquake may be derived. The forecasts of time available from the above analyses range from up to five weeks to substantially within one hour in advance of the earthquake. (author)
Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)
Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.
2013-12-01
Earthquake forecasts have been computed by a variety of countries world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (JBR et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. This method computes large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space so that global forecasts can be computed in real time presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving up these datasets over the web on web-based platforms such as those at www.quakesim.org, www.e-decider.org, and www.openhazards.com.
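A minimal sketch of the natural-time counting idea behind NTW as summarized here: probabilities come from a Weibull law in the count of small events since the last large one. The Weibull parameters and the conditional-probability helper are my illustrative assumptions, not the published calibration.

```python
# Minimal sketch (assumption): "time" is counted as the number n of small
# earthquakes since the last large one, and the probability that a large
# event has occurred by natural time n is taken from a Weibull CDF.
from math import exp

def ntw_probability(n_small, tau=1000.0, beta=1.4):
    """Weibull CDF in natural time; tau and beta are illustrative."""
    return 1.0 - exp(-((n_small / tau) ** beta))

def conditional_gain(n_small, dn=100, tau=1000.0, beta=1.4):
    """Probability of a large event within the next dn small earthquakes,
    conditioned on none having occurred yet."""
    f0 = ntw_probability(n_small, tau, beta)
    f1 = ntw_probability(n_small + dn, tau, beta)
    return (f1 - f0) / (1.0 - f0)

print(conditional_gain(1500))
```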
Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F
2011-10-04
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
Applications of the gambling score in evaluating earthquake predictions and forecasts
Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2010-05-01
This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the outputs of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
Prospective testing of Coulomb short-term earthquake forecasts
Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.
2009-12-01
Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily, and allows daily updates of the models. However, lots can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be generated by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of
Statistical physics approach to earthquake occurrence and forecasting
Energy Technology Data Exchange (ETDEWEB)
Arcangelis, Lucilla de [Department of Industrial and Information Engineering, Second University of Naples, Aversa (CE) (Italy); Godano, Cataldo [Department of Mathematics and Physics, Second University of Naples, Caserta (Italy); Grasso, Jean Robert [ISTerre, IRD-CNRS-OSUG, University of Grenoble, Saint Martin d'Hères (France); Lippiello, Eugenio, E-mail: eugenio.lippiello@unina2.it [Department of Mathematics and Physics, Second University of Naples, Caserta (Italy)]
2016-04-25
There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence, which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and to provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow one to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for
Fractals and Forecasting in Earthquakes and Finance
Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.
2011-12-01
It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with ideas similar to those used for large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that earthquakes and financial crashes can both be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets, it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
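The magnitude-frequency and moment-frequency forms mentioned above are linked by the standard moment-magnitude relation log10 M0 = 1.5m + 9.1 (N m); a minimal sketch of the conversion, with an assumed illustrative a-value and b = 1, is below.

```python
# Minimal sketch (assumption): Gutenberg-Richter, log10 N(>=m) = a - b*m,
# rewritten as the moment-frequency power law N(>=M0) ~ M0**(-2b/3).

def gr_count(m, a=5.0, b=1.0):
    """Events per year with magnitude >= m (a, b illustrative)."""
    return 10.0 ** (a - b * m)

def moment_from_magnitude(m):
    """Scalar moment in N*m via log10 M0 = 1.5*m + 9.1."""
    return 10.0 ** (1.5 * m + 9.1)

# Doubling the moment multiplies the count by 2**(-2b/3) ~ 0.63 for b = 1.
m1, m2 = 6.0, 6.2007  # magnitudes whose moments differ by a factor of ~2
print(gr_count(m2) / gr_count(m1))                            # ~0.63
print(moment_from_magnitude(m2) / moment_from_magnitude(m1))  # ~2.0
```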
Varenna workshop report. Operational earthquake forecasting and decision making
Directory of Open Access Journals (Sweden)
Warner Marzocchi
2015-09-01
A workshop on Operational earthquake forecasting and decision making was convened in Varenna, Italy, on June 8-11, 2014, under the sponsorship of the EU FP 7 REAKT (Strategies and tools for Real-time EArthquake risK reducTion) project, the Seismic Hazard Center at the Istituto Nazionale di Geofisica e Vulcanologia (INGV), and the Southern California Earthquake Center (SCEC). The main goal was to survey the interdisciplinary issues of operational earthquake forecasting (OEF), including the problems that OEF raises for decision making and risk communication. The workshop was attended by 64 researchers from universities, research centers, and governmental institutions in 11 countries. Participants and the workshop agenda are listed in the appendix. The workshop comprised six topical sessions structured around three main themes: the science of operational earthquake forecasting, decision making in a low-probability environment, and communicating hazard and risk. Each topic was introduced by a moderator and surveyed by a few invited speakers, who were then empaneled for an open discussion. The presentations were followed by poster sessions. During a wrap-up session on the last day, the reporters for each topical session summarized the main points that they had gleaned from the talks and open discussions. This report attempts to distill this workshop record into a brief overview of the workshop themes and to describe the range of opinions expressed during the discussions.
Earthquake focal mechanism forecasting in Italy for PSHA purposes
Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola
2018-01-01
In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future earthquakes. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming at reducing the epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity rates, the forecast of the focal mechanism of the next large earthquakes as input data. The data set used for this purpose consists of focal mechanisms taken from the latest stress map release for Italy, containing 392 well-constrained solutions of events from 1908 to 2015 with Mw ≥ 4 and depths from 0 down to 40 km. The data set considers polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015 it takes into account only the Centroid Moment Tensor (CMT)-like earthquake focal solutions, for data homogeneity. The forecasting model is rooted in the Total Weighted Moment Tensor concept, which weights information from past focal mechanisms evenly distributed in space according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the model estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large earthquakes, the expected moment tensor and the related maximum horizontal stress orientation. These results will be available for the new PSHA model for Italy under development. Finally, to evaluate the reliability of the forecasts, we test them with an independent data set consisting of some of the strongest earthquakes with Mw ≥ 3.9 that occurred during 2016 in different Italian tectonic provinces.
Operational Earthquake Forecasting and Decision-Making in a Low-Probability Environment
Jordan, T. H.; the International Commission on Earthquake ForecastingCivil Protection
2011-12-01
Operational earthquake forecasting (OEF) is the dissemination of authoritative information about the time dependence of seismic hazards to help communities prepare for potentially destructive earthquakes. Most previous work on the public utility of OEF has anticipated that forecasts would deliver high probabilities of large earthquakes; i.e., deterministic predictions with low error rates (false alarms and failures-to-predict) would be possible. This expectation has not been realized. An alternative to deterministic prediction is probabilistic forecasting based on empirical statistical models of aftershock triggering and seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains in excess of 100 relative to long-term forecasts. The utility of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing OEF in this sort of "low-probability environment." The need to move more quickly has been underscored by recent seismic crises, such as the 2009 L'Aquila earthquake sequence, in which an anxious public was confused by informal and inaccurate earthquake predictions. After the L'Aquila earthquake, the Italian Department of Civil Protection appointed an International Commission on Earthquake Forecasting (ICEF), which I chaired, to recommend guidelines for OEF utilization. Our report (Ann. Geophys., 54, 4, 2011; doi: 10.4401/ag-5350) concludes: (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and need to convey epistemic uncertainties. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. (c) All operational models should be evaluated
Earthquake forecasting studies using radon time series data in Taiwan
Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong
2017-04-01
For a few decades, a growing number of studies have shown the usefulness of data in the field of seismogeochemistry, interpreted as geochemical precursory signals for impending earthquakes, and radon is identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. The continuous time series radon data for earthquake studies have been recorded, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve the data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtering of these environmental parameters, in order to create a real-time database that helps our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open-source programming language, to carry out statistical computation on the data. To integrate our data with our working procedure, we use the popular open-source web application stack AMP (Apache, MySQL, and PHP), creating a website that can effectively show and help us manage the real-time database.
Lessons of L'Aquila for Operational Earthquake Forecasting
Jordan, T. H.
2012-12-01
The L'Aquila earthquake of 6 Apr 2009 (magnitude 6.3) killed 309 people and left tens of thousands homeless. The mainshock was preceded by a vigorous seismic sequence that prompted informal earthquake predictions and evacuations. In an attempt to calm the population, the Italian Department of Civil Protection (DPC) convened its Commission on the Forecasting and Prevention of Major Risk (MRC) in L'Aquila on 31 March 2009 and issued statements about the hazard that were widely received as an "anti-alarm"; i.e., a deterministic prediction that there would not be a major earthquake. On October 23, 2012, a court in L'Aquila convicted the vice-director of DPC and six scientists and engineers who attended the MRC meeting on charges of criminal manslaughter, and it sentenced each to six years in prison. A few weeks after the L'Aquila disaster, the Italian government convened an International Commission on Earthquake Forecasting for Civil Protection (ICEF) with the mandate to assess the status of short-term forecasting methods and to recommend how they should be used in civil protection. The ICEF, which I chaired, issued its findings and recommendations on 2 Oct 2009 and published its final report, "Operational Earthquake Forecasting: Status of Knowledge and Guidelines for Implementation," in Aug 2011 (www.annalsofgeophysics.eu/index.php/annals/article/view/5350). As defined by the Commission, operational earthquake forecasting (OEF) involves two key activities: the continual updating of authoritative information about the future occurrence of potentially damaging earthquakes, and the officially sanctioned dissemination of this information to enhance earthquake preparedness in threatened communities. Among the main lessons of L'Aquila is the need to separate the role of science advisors, whose job is to provide objective information about natural hazards, from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms
On one approach to an earthquakes forecasting problems solution
International Nuclear Information System (INIS)
Khugaev, A.V.; Koblik, Yu.N.; Rakhmanov, T.T.
2007-01-01
The problem of earthquake forecasting is of great practical importance, but it is extremely complex and remains unsolved. The report considers the analysis of data obtained from measurements of radioactive gas emission (for example, radon, thoron, and actinon) from the Earth's surface and from measurements of magnetic-field anomalies, together with their correlation with seismic activity. In the general case the problem is unlikely to have a complete analytic solution, owing to its nonlinearity, its many parameters, and the influence of random factors. It is suggested that a useful solution can be found only through a reasonable combination of empirical knowledge gained from long-term observations, its generalization, and numerical simulation. The proposed calculation method rests on a correlation analysis between seismic activity and, for example, variations in radioactive gas emission, in which each earthquake precursor signal is represented as the sum of two components: a regular component and an irregular one. Of key interest is the analysis of the irregular component, which is caused by random factors. In solving the problem, the irregular component of the Earth's magnetic field, which can be measured with high precision using magnetic sensors, is determined. The mathematical apparatus of the analysis adopts the approach used to determine the irregular component of the galactic magnetic field. Values of the irregular component, and the field scales at which it plays a considerable role, are obtained. In addition, the work discusses an approach for solving the problem of complex surface oscillations with the precision required by practical applications.
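As a minimal sketch of the regular/irregular decomposition described above (my construction, not the authors' algorithm), one can take the regular component to be a smoothed trend and the irregular component to be the residual:

```python
import numpy as np

def decompose(signal, window=24):
    """Split a time series into a regular (smoothed) and an irregular
    (residual) component via a centered moving average. Edge samples
    are distorted by the convolution boundary; fine for a sketch."""
    signal = np.asarray(signal, dtype=float)
    kernel = np.ones(window) / window
    regular = np.convolve(signal, kernel, mode="same")
    irregular = signal - regular
    return regular, irregular

# Synthetic gas-emission series: slow seasonal trend plus random noise.
rng = np.random.default_rng(1)
t = np.arange(720)
emission = 10 + 0.5 * np.sin(2 * np.pi * t / 360) + rng.normal(0, 0.2, t.size)
reg, irr = decompose(emission)
print(f"irregular-component std: {irr.std():.3f}")
```

The correlation analysis the report describes would then be carried out between the irregular component and a measure of seismic activity.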
Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximilian J.
2017-01-01
We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event is considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as
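For readers unfamiliar with ETAS, the generic conditional intensity (the standard form of the model, not the specific UCERF3-ETAS parameterization) is

```latex
\lambda(t, x \mid \mathcal{H}_t) \;=\; \mu(x) \;+\;
  \sum_{i:\, t_i < t} \frac{K\, e^{\alpha (m_i - m_0)}}{(t - t_i + c)^{p}}\; g(x - x_i),
```

where mu(x) is the background rate, the sum runs over prior events of magnitude m_i at times t_i and locations x_i, (K, alpha, c, p) are productivity and Omori-decay parameters, m_0 is the minimum magnitude, and g is a spatial triggering kernel.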
Spatial organization of foreshocks as a tool to forecast large earthquakes.
Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C
2012-01-01
An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and it is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models.
Retrospective Evaluation of the Five-Year and Ten-Year CSEP-Italy Earthquake Forecasts
Werner, M. J.; Zechar, J. D.; Marzocchi, W.; Wiemer, S.
2010-01-01
On 1 August 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of the CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented eighteen five-year and ten...
Norbeck, J. H.; Rubinstein, J. L.
2017-12-01
The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.
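The abstract does not reproduce the governing equations. As an illustration of the cited rate-and-state behavior, the Python sketch below integrates Dieterich's (1994) seismicity-rate formulation under a step in stressing rate; the parameter values are arbitrary illustrations, not the values calibrated for Oklahoma and Kansas.

```python
import numpy as np

def seismicity_rate(t, s_dot, s_dot_bg=1.0, a_sigma=0.1, r_bg=1.0):
    """Integrate Dieterich's (1994) seismicity-rate model:
        dgamma/dt = (1 - gamma * s_dot) / a_sigma,
        R = r_bg / (gamma * s_dot_bg).
    t        : time grid (units consistent with the stressing rates)
    s_dot    : stressing rate at each time step (array, same length as t)
    s_dot_bg : background stressing rate (steady state gives R = r_bg)
    a_sigma  : constitutive parameter A*sigma (sets transient timescale)
    """
    gamma = 1.0 / s_dot_bg            # start at background steady state
    rate = np.empty(t.size)
    for i in range(t.size):
        rate[i] = r_bg / (gamma * s_dot_bg)
        if i + 1 < t.size:
            dt = t[i + 1] - t[i]
            gamma += dt * (1.0 - gamma * s_dot[i]) / a_sigma
    return rate

t = np.linspace(0.0, 5.0, 2001)
s_dot = np.where(t < 1.0, 1.0, 4.0)   # stressing rate quadruples at t = 1
R = seismicity_rate(t, s_dot)
print(f"steady rate after step: {R[-1]:.2f} (expected ~4x background)")
```

The run reproduces the two properties quoted in the abstract: the steady rate scales with the ratio of current to background stressing rate, and the transient decays over a timescale A*sigma / s_dot, inversely proportional to stressing rate.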
Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre
Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip
2013-04-01
Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate the Rock Physicist (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between Geoscientists, Rock Physicists and Computer Scientist. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments which includes data from the UCL Laboratory and from Creep2 project which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in
Retrospective evaluation of the five-year and ten-year CSEP-Italy earthquake forecasts
Directory of Open Access Journals (Sweden)
Stefan Wiemer
2010-11-01
Full Text Available On August 1, 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP launched a prospective and comparative earthquake predictability experiment in Italy. The goal of this CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented 18 five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We have considered here the twelve time-independent earthquake forecasts among this set, and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. We present the results of the tests that measure the consistencies of the forecasts according to past observations. As well as being an evaluation of the time-independent forecasts submitted, this exercise provides insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between robustness of results and experiment duration. We conclude with suggestions for the design of future earthquake predictability experiments.
Retrospective Evaluation of the Long-Term CSEP-Italy Earthquake Forecasts
Werner, M. J.; Zechar, J. D.; Marzocchi, W.; Wiemer, S.
2010-12-01
On 1 August 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of the CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented eighteen five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We considered the twelve time-independent earthquake forecasts among this set and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. Here, we present the results of tests that measure the consistency of the forecasts with the past observations. Besides being an evaluation of the submitted time-independent forecasts, this exercise provided insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between the robustness of results and experiment duration.
A way to synchronize models with seismic faults for earthquake forecasting
DEFF Research Database (Denmark)
González, Á.; Gómez, J.B.; Vázquez-Prada, M.
2006-01-01
Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual....... Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault...... models between themselves and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture area of the synthetic earthquakes of this model on other models, the latter become partially synchronized...
Michael, A. J.; Field, E. H.; Hardebeck, J.; Llenos, A. L.; Milner, K. R.; Page, M. T.; Perry, S. C.; van der Elst, N.; Wein, A. M.
2016-12-01
After the Mw 5.8 Pawnee, Oklahoma, earthquake of September 3, 2016, the USGS issued a series of aftershock forecasts for the next month and year. These forecasts were aimed at the emergency response community, those making decisions about well operations in the affected region, and the general public. The forecasts were generated manually using methods planned for automatically released Operational Aftershock Forecasts. The underlying method is from Reasenberg and Jones (Science, 1989), with improvements recently published in Page et al. (BSSA, 2016), implemented in a Java graphical user interface and presented in a template that is under development. The methodological improvements include initial models based on the tectonic regime as defined by Garcia et al. (BSSA, 2012) and the inclusion of both uncertainty in the clustering parameters and natural random variability. We did not utilize the time-dependent magnitude of completeness model from Page et al. because it applies only to teleseismic events recorded by NEIC. The parameters for Garcia's Generic Active Continental Region underestimated the modified-Omori decay parameter and underestimated the aftershock rate by a factor of 2. The sequence following the Mw 5.7 Prague, Oklahoma, earthquake of November 6, 2011, was about 3 to 4 times more productive than the Pawnee sequence. The high productivity of these potentially induced sequences is consistent with an increase in productivity in Oklahoma since 2009 (Llenos and Michael, BSSA, 2013) and makes a general tectonic model inapplicable to sequences in this region. Soon after the mainshock occurred, the forecasts relied on sequence-specific parameters. After one month, the Omori decay parameter p is less than one, implying a very long-lived sequence. However, the decay parameter is known to be biased low at early times due to secondary aftershock triggering, and the p-value determined early in the sequence may be inaccurate for long-term forecasting.
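For context, the Reasenberg and Jones (1989) model referenced above gives the aftershock rate as a power-law decay in time scaled by the mainshock magnitude; a minimal sketch of the expected-count calculation follows. The parameter values are the oft-quoted generic California values, used here purely as illustrative defaults, not the sequence-specific Pawnee parameters.

```python
import numpy as np

def rj_expected_count(t1, t2, mainshock_mag, mc,
                      a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with M >= mc in the window
    [t1, t2] (days after the mainshock) under the Reasenberg-Jones rate
        lambda(t) = 10**(a + b*(Mm - mc)) * (t + c)**(-p),
    integrated analytically over the window."""
    k = 10.0 ** (a + b * (mainshock_mag - mc))
    if np.isclose(p, 1.0):
        return k * (np.log(t2 + c) - np.log(t1 + c))
    return k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

# Expected M>=3 aftershocks of a Mw 5.8 mainshock: first week, first month.
print(rj_expected_count(0.0, 7.0, mainshock_mag=5.8, mc=3.0))
print(rj_expected_count(0.0, 30.0, mainshock_mag=5.8, mc=3.0))
```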
Ergodicity and Phase Transitions and Their Implications for Earthquake Forecasting.
Klein, W.
2017-12-01
Forecasting earthquakes, or even predicting the statistical distribution of events on a given fault, is extremely difficult. One reason for this difficulty is the large number of fault characteristics that can affect the distribution and timing of events. The range of stress transfer, the level of noise, and the nature of the friction force all influence the type of events, and the values of these parameters can vary from fault to fault and also with time. In addition, the geometrical structure of the faults and the correlation of events on different faults play an important role in determining event sizes and their distribution. Another reason for the difficulty is that the important fault characteristics are not easily measured. The noise level, fault structure, stress transfer range, and the nature of the friction force are extremely difficult, if not impossible, to ascertain. Given this lack of information, one of the most useful approaches to understanding the effect of fault characteristics, and the way they interact, is to develop and investigate models of faults and fault systems. In this talk I will present results obtained from a series of models of varying abstraction and compare them with data from actual faults. We are able to provide a physical basis for several observed phenomena, such as the earthquake cycle, the fact that some faults display Gutenberg-Richter scaling and others do not, and the fact that some faults exhibit quasi-periodic characteristic events and others do not. I will also discuss some surprising results, such as the fact that some faults are in thermodynamic equilibrium depending on the stress transfer range and the noise level. An example of an important conclusion that can be drawn from this work is that the statistical distribution of earthquake events can vary from fault to fault, and that an indication of an impending large event, such as accelerating moment release, may be relevant on some faults but not on others.
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process, including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; and (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The ground motion prediction relationship of Benouar (1994) proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared with average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the
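As a small worked example of how such rebuilding cost factors enter a loss calculation (a generic mean-damage-ratio computation, not Impact Forecasting's actual engine), one can weight each EMS-98 grade's cost factor by the probability of reaching that grade:

```python
# Rebuilding cost factors per EMS-98 damage grade, from the abstract above.
COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def mean_damage_ratio(grade_probs):
    """Expected repair cost as a fraction of replacement value.

    grade_probs : mapping damage grade (1-5) -> probability that a
                  building of a given type reaches that grade at a
                  given shaking intensity (hypothetical inputs).
    """
    return sum(COST_FACTOR[g] * p for g, p in grade_probs.items())

# Hypothetical damage-grade probabilities for one building class at one
# intensity level (illustrative numbers only):
probs = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}
print(f"mean damage ratio: {mean_damage_ratio(probs):.3f}")
```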
UCERF3: A new earthquake forecast for California's complex fault system
Field, Edward H.; ,
2015-01-01
With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.
Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.
2012-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP), through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance, and we started the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. For the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of the rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance in magnitude forecasting. On the other hand, the observations are hardly consistent with the spatial distributions of most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing centre is improving the evaluation system for the 1-day class experiment so that forecasting and testing results are completed within one day. The special issue of 1st part titled Earthquake Forecast
Directory of Open Access Journals (Sweden)
Francesco Visini
2010-11-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) selected Italy as a testing region for probabilistic earthquake forecast models in October 2008. The model we have submitted for the two medium-term forecast periods of 5 and 10 years (from 2009) is a time-dependent, geologically based earthquake rupture forecast that is defined for central Italy only (11-15˚ E; 41-45˚ N). The model took into account three separate layers of seismogenic sources: background seismicity; seismotectonic provinces; and individual faults that can produce major earthquakes (seismogenic boxes). For CSEP testing purposes, the background seismicity layer covered a range of magnitudes from 5.0 to 5.3, with seismicity rates obtained from truncated Gutenberg-Richter relationships for cells centered on the CSEP grid. The seismotectonic provinces layer then returned the expected rates of medium-to-large earthquakes following a traditional Cornell-type approach. Finally, for the seismogenic boxes layer, the rates were based on the geometry and kinematics of the faults, to which different earthquake recurrence models were assigned, ranging from pure Gutenberg-Richter behavior to characteristic events, with the intermediate behavior termed the hybrid model. The results for different magnitude ranges highlight the contribution of each of the three layers to the total computation. The expected rates for M > 6.0 on April 1, 2009 (thus computed before the 2009 MW 6.3 L'Aquila earthquake) are of particular interest. They showed local maxima in the two seismogenic-box sources of Paganica and Sulmona, the former of which was activated by the L'Aquila earthquake of April 6, 2009. Earthquake rates as of August 1, 2009 (now under test) also showed a maximum close to the Sulmona source for MW ~6.5; significant seismicity rates (10^-4 to 10^-3 in 5 years) for destructive events (magnitude up to 7.0) were located in other individual sources identified as being capable of such
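As a generic illustration of how grid-cell rates can be derived from a truncated Gutenberg-Richter relationship (not the authors' exact implementation; the a- and b-values below are hypothetical), the rate in each magnitude bin follows from differencing the cumulative relation 10^(a - bM) between the truncation bounds:

```python
import numpy as np

def truncated_gr_bin_rates(a, b, m_min, m_max, dm=0.1):
    """Annual rates per magnitude bin for a Gutenberg-Richter relation
    log10 N(>=M) = a - b*M, truncated to [m_min, m_max]."""
    edges = np.arange(m_min, m_max + dm / 2, dm)
    cum = 10.0 ** (a - b * edges)
    cum -= 10.0 ** (a - b * m_max)      # impose zero rate above m_max
    return edges[:-1], cum[:-1] - cum[1:]

# Hypothetical cell with a = 2.0, b = 1.0, truncated to M 5.0-5.3:
mags, rates = truncated_gr_bin_rates(a=2.0, b=1.0, m_min=5.0, m_max=5.3)
for m, r in zip(mags, rates):
    print(f"M {m:.1f}-{m + 0.1:.1f}: {r:.5f} events/yr")
```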
Rundle, J. B.
2017-12-01
Earthquakes and financial markets share surprising similarities [1]. For example, the well-known VIX index, which by definition is the implied volatility of the Standard and Poor's 500 index, behaves in a very similar quantitative fashion to time series of earthquake rates. Both display sudden increases at the time of an earthquake or an announcement of the US Federal Reserve Open Market Committee [2], and both decay as an inverse power of time. Both can be regarded as examples of first-order phase transitions [1], and display the fractal and scaling behavior associated with critical transitions, such as power-law magnitude-frequency relations in the tails of the distributions. Early quantitative investors such as Edward Thorp and John Kelly invented novel methods to mitigate or manage risk in games of chance such as blackjack, and in markets using hedging techniques that are still in widespread use today. The basic idea is the concept of proportional betting, where the gambler/investor bets a fraction of the bankroll whose size is determined by the "edge," or inside knowledge of the real (and changing) odds. For earthquake systems, the "edge" over nature can only exist in the form of a forecast (probability of a future earthquake); a nowcast (knowledge of the current state of an earthquake fault system); or a timecast (statistical estimate of the waiting time until the next major earthquake). In our terminology, a forecast is a model, while the nowcast and timecast are analysis methods using observed data only (no model). We also focus on defined geographic areas rather than on faults, thereby eliminating the need to consider specific fault data or fault interactions. Data used are online earthquake catalogs, generally since 1980. Forecasts are based on the Weibull (1952) probability law, and only a handful of parameters are needed. These methods allow the development of real-time hazard and risk estimation using cloud-based technologies, and permit the application of
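The abstract names the Weibull law as the basis for its forecasts. A minimal sketch of a Weibull-based conditional ("timecast"-style) probability follows; this is the generic renewal calculation with made-up parameters, not the authors' calibrated values.

```python
import math

def weibull_conditional_prob(t_elapsed, dt, tau, beta):
    """P(event in (t, t+dt] | no event by t) for a Weibull law with
    survival function S(t) = exp(-(t/tau)**beta)."""
    s_now = math.exp(-((t_elapsed / tau) ** beta))
    s_later = math.exp(-(((t_elapsed + dt) / tau) ** beta))
    return 1.0 - s_later / s_now

# Hypothetical: scale tau = 100 yr, shape beta = 1.5, 80 years elapsed;
# probability of an event in the next 10 years:
print(f"{weibull_conditional_prob(80.0, 10.0, 100.0, 1.5):.3f}")
```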
Earthquake data base for Romania
International Nuclear Information System (INIS)
Rizescu, M.; Ghica, D.; Grecu, B.; Popa, M.; Borcia, I. S.
2002-01-01
A new earthquake database for Romania is being constructed, comprising complete and up-to-date earthquake information that is user-friendly and rapidly accessible. One main component of the database consists of the catalogue of earthquakes that have occurred in Romania from 984 up to the present. The catalogue contains information related to locations and other source parameters, when available, and links to waveforms of important earthquakes. The other very important component is the 'strong motion database', developed for strong intermediate-depth Vrancea earthquakes for which instrumental data were recorded. Different parameters characterizing strong motion properties, such as effective peak acceleration, effective peak velocity, corner periods Tc and Td, and global response-spectrum-based intensities, were computed and recorded in this database. Also included is information on the recording seismic stations, such as maps giving their positions, photographs of the instruments, and site conditions (free-field or on buildings). Through the volume and quality of the gathered data, and through its friendly user interface, the Romanian earthquake database provides a very useful tool for the geosciences and civil engineering in their effort towards reducing seismic risk in Romania. (authors)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2015-04-01
Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The methods have been categorized into time-independent, time-dependent and hybrid methods, the last group comprising methods that use data beyond historical earthquake statistics. Such a categorization distinguishes purely statistical approaches, in which historical earthquake data represent the only direct data source, from algorithms that incorporate further information, e.g., spatial data on fault distributions, or that incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account, to identify methods that can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g., for the determination of the completeness magnitude, or whether the modified Omori law was used or not. Target temporal scales are identified, as is the publication history. All these different aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state of the art.
Baddari, Kamel; Makdeche, Said; Bellalem, Fouzi
2013-02-01
Based on the moment magnitude scale, a probabilistic model was developed to predict the occurrence of strong earthquakes in the seismoactive area of Zemmouri, Algeria. First, the distributions of the earthquake magnitudes Mi were described using the distribution function F0(m), which fits the magnitudes treated as independent random variables. Second, the distribution function F0(m) of the variables Mi was used to deduce the distribution functions G(x) and H(y) of the variables Yi = log M0,i and Zi = M0,i, where the (Yi) and (Zi) are independent. Third, a forecast of the moments of future earthquakes in the studied area is given.
Failures and suggestions in Earthquake forecasting and prediction
Sacks, S. I.
2013-12-01
Seismologists have had poor success in earthquake prediction. However, wide-ranging observations from earlier great earthquakes show that precursory data can exist. In particular, two aspects seem promising. In agreement with simple physical modeling, b-values decrease in highly loaded fault zones for years before failure. Potentially more useful, in high-stress regions the breakdown of dilatant patches leading to failure can yield observations related to expelled water. The volume increase (dilatancy) caused by high shear stresses decreases the pore pressure. Eventually, water flows back in, restoring the pore pressure, promoting failure and expelling the extra water. Of course, in a generally stressed region there may be many small patches that fail, as observed before the 1975 Haicheng earthquake. Only a few days before the major event will most of the dilatancy breakdown occur in the fault zone itself, as for the destructive 1976 Tangshan event. 'Water release' effects have been observed before the 1923 great Kanto earthquake, the 1984 Yamasaki event, the 1975 Haicheng and 1976 Tangshan earthquakes, and also the 1995 Kobe earthquake. While there are obvious difficulties in water-release observations, not least because there is currently no observational network anywhere, the historical data do suggest some promise if we broaden our approach to this difficult subject.
Zarola, Amit; Sil, Arjun
2018-04-01
This study presents forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated catalog of earthquakes of magnitude Mw ≥ 6.0 that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L indicates a better-fitting model, and a lower value a worse one. The time of the future earthquake is forecast by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), the 13 April 2016 Myanmar earthquake (M 6.9) and the 24 August 2016 Myanmar earthquake (M 6.8) lie in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area, indicating that the proposed techniques and models yield good forecasting accuracy.
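A minimal sketch of the model-comparison step (fitting the four candidate laws and ranking them by log-likelihood) is shown below, using synthetic interevent times rather than the actual northeast India catalog; note that scipy's "fisk" distribution is the log-logistic.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a catalog of interevent times (years).
rng = np.random.default_rng(2)
intervals = stats.gamma.rvs(a=2.0, scale=5.0, size=40, random_state=rng)

candidates = {
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "log-logistic": stats.fisk,
}
for name, dist in candidates.items():
    params = dist.fit(intervals, floc=0)           # fix location at zero
    lnL = np.sum(dist.logpdf(intervals, *params))  # log-likelihood
    print(f"{name:>12}: ln L = {lnL:.2f}")
```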
Flood Forecasting Based on TIGGE Precipitation Ensemble Forecast
Directory of Open Access Journals (Sweden)
Jinyin Ye
2016-01-01
TIGGE (the THORPEX Interactive Grand Global Ensemble) was a major part of THORPEX (The Observing System Research and Predictability Experiment). It integrates ensemble precipitation products from all the major forecast centers in the world and provides a systematic evaluation of the multimodel ensemble prediction system. Development of a meteorologic-hydrologic coupled flood forecasting model and early-warning model based on the TIGGE precipitation ensemble forecast can provide probabilistic flood forecasts, extend the lead time of flood forecasts, and gain more time for decision-makers to make the right decision. In this study, precipitation ensemble forecast products from ECMWF, NCEP, and CMA are used to drive the distributed hydrologic model TOPX. We focus on the Yi River catchment and aim to build a flood forecast and early-warning system. The results show that the meteorologic-hydrologic coupled model can satisfactorily predict the flow process of four flood events. The predicted occurrence times of the peak discharges are close to the observations; however, the magnitudes of the peak discharges differ significantly owing to the varying performance of the ensemble prediction systems. The coupled forecasting model can accurately predict the occurrence time of the peak and the corresponding risk probability of the peak discharge, based on their probability distributions, which can provide users a strong theoretical foundation and valuable information as a promising new approach.
Donovan, J.; Jordan, T. H.
2012-12-01
Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
Smartphone-Based Earthquake and Tsunami Early Warning in Chile
Brooks, B. A.; Baez, J. C.; Ericksen, T.; Barrientos, S. E.; Minson, S. E.; Duncan, C.; Guillemot, C.; Smith, D.; Boese, M.; Cochran, E. S.; Murray, J. R.; Langbein, J. O.; Glennie, C. L.; Dueitt, J.; Parra, H.
2016-12-01
Many locations around the world face high seismic hazard but do not have the resources required to establish traditional earthquake and tsunami early warning systems (E/TEW) that utilize scientific-grade seismological sensors. MEMS accelerometers and GPS chips embedded in, or added inexpensively to, smartphones are sensitive enough to provide robust E/TEW if they are deployed in sufficient numbers. We report on a pilot project in Chile, one of the most productive earthquake regions worldwide. There, magnitude 7.5+ earthquakes occur roughly every 1.5 years, and larger tsunamigenic events pose significant local and trans-Pacific hazard. The smartphone-based network described here is being deployed in parallel to the build-out of a scientific-grade network for E/TEW. Our sensor package comprises a smartphone with internal MEMS and an external GPS chipset that provides satellite-based augmented positioning and phase-smoothing. Each station is independent of local infrastructure: solar-powered and relying on cellular SIM cards for communications. An Android app performs initial onboard processing and transmits both accelerometer and GPS data to a server employing the FinDer-BEFORES algorithm to detect earthquakes, producing an acceleration-based line-source model for smaller-magnitude earthquakes or a joint seismic-geodetic finite-fault distributed slip model for sufficiently large earthquakes. Either source model provides accurate ground shaking forecasts, while distributed slip models for larger offshore earthquakes can be used to infer seafloor deformation for local tsunami warning. The network will comprise 50 stations by Sept. 2016 and 100 stations by Dec. 2016. Since Nov. 2015, batch processing has detected, located, and estimated the magnitudes of Mw > 5 earthquakes. Operational since June 2016, the system has successfully detected two earthquakes > M5 (M5.5, M5.1) that occurred within 100 km of our network, while producing zero false alarms.
Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Shumway, Allison; McNamara, Daniel E.; Williams, Robert; Llenos, Andrea L.; Ellsworth, William L.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.
2017-01-01
We produce a one‐year 2017 seismic‐hazard forecast for the central and eastern United States from induced and natural earthquakes that updates the 2016 one‐year forecast; this map is intended to provide information to the public and to facilitate the development of induced seismicity forecasting models, methods, and data. The 2017 hazard model applies the same methodology and input logic tree as the 2016 forecast, but with an updated earthquake catalog. We also evaluate the 2016 seismic‐hazard forecast to improve future assessments. The 2016 forecast indicated high seismic hazard (greater than 1% probability of potentially damaging ground shaking in one year) in five focus areas: Oklahoma–Kansas, the Raton basin (Colorado/New Mexico border), north Texas, north Arkansas, and the New Madrid Seismic Zone. During 2016, several damaging induced earthquakes occurred in Oklahoma within the highest hazard region of the 2016 forecast; all of the 21 moment magnitude (M) ≥4 and 3 M≥5 earthquakes occurred within the highest hazard area in the 2016 forecast. Outside the Oklahoma–Kansas focus area, two earthquakes with M≥4 occurred near Trinidad, Colorado (in the Raton basin focus area), but no earthquakes with M≥2.7 were observed in the north Texas or north Arkansas focus areas. Several observations of damaging ground‐shaking levels were also recorded in the highest hazard region of Oklahoma. The 2017 forecasted seismic rates are lower in regions of induced activity due to lower rates of earthquakes in 2016 compared with 2015, which may be related to decreased wastewater injection caused by regulatory actions or by a decrease in unconventional oil and gas production. Nevertheless, the 2017 forecasted hazard is still significantly elevated in Oklahoma compared to the hazard calculated from seismicity before 2009.
Earthquakes and forecast reliability: thermoactivation and mesomechanics of the focal zone
Kalinnikov, I. I.; Manukin, A. B.; Matyunin, V. P.
2017-06-01
In our assessment, invoking the fundamental laws of physics, in particular, treating an earthquake as a macroprocess culminating in a peak, together with the thermofluctuational activation of mechanical stresses in certain media, makes it possible to move beyond the traditional framing of the earthquake prediction problem. Many formal parameters from the statistical processing of geophysical data can be given a physical meaning related to the mesomechanics of structural changes in a stressed solid body. Measures for improving the efficiency of observations, and of their mathematical processing, for solving the forecasting problem are specified.
Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model
Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,
2013-01-01
In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of
Volcanic Eruption Forecasts From Accelerating Rates of Drumbeat Long-Period Earthquakes
Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario
2018-02-01
Accelerating rates of quasiperiodic "drumbeat" long-period earthquakes (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP rates increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics. Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.
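The inverse power-law trend from material failure theory implies, for the common case of exponent p = 1, that the inverse event rate decreases linearly toward zero at the failure time, so t_f can be estimated from the x-intercept of a straight-line fit. Below is a minimal retrospective sketch of that standard failure forecast method, offered as an illustration; it is not the Bayesian Markov chain Monte Carlo point-process approach the authors actually apply.

```python
import numpy as np

def forecast_failure_time(t, rate):
    """Failure forecast method (p = 1): fit inverse rate vs. time with a
    straight line; its x-intercept estimates the failure time t_f."""
    inv_rate = 1.0 / np.asarray(rate, dtype=float)
    slope, intercept = np.polyfit(t, inv_rate, 1)
    return -intercept / slope

# Synthetic accelerating LP event rates with true t_f = 10 (arbitrary units):
t = np.linspace(0.0, 9.0, 50)
rate = 100.0 / (10.0 - t)            # rate ~ (t_f - t)**(-1)
print(f"estimated t_f: {forecast_failure_time(t, rate):.2f}")
```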
Data base pertinent to earthquake design basis
International Nuclear Information System (INIS)
Sharma, R.D.
1988-01-01
Mitigation of earthquake risk from impending strong earthquakes is possible provided the hazard can be assessed and translated into appropriate design inputs. This requires defining the seismic risk problem, isolating the risk factors, and quantifying risk in terms of physical parameters that are suitable for application in design. Like all other geological phenomena, past earthquakes hold the key to the understanding of future ones. Quantification of seismic risk at a site calls for investigating the earthquake aspects of the site region and building a data base. The scope of such investigations is illustrated in Figures 1 and 2. A more detailed definition of the earthquake problem in engineering design is given elsewhere (Sharma, 1987). The present document discusses the earthquake data base required to support a seismic risk evaluation programme in the context of the existing state of the art. (author). 8 tables, 10 figs., 54 refs
Satellite based Ocean Forecasting, the SOFT project
Stemmann, L.; Tintoré, J.; Moneris, S.
2003-04-01
Knowledge of future oceanic conditions would have an enormous impact on marine-related human activities. For this reason, a number of international efforts are under way to obtain reliable and manageable ocean forecasting systems. Among the possible techniques that can be used to estimate the near-future state of the ocean, an ocean forecasting system based on satellite imagery is being developed through the Satellite-based Ocean ForecasTing (SOFT) project. SOFT, established by the European Commission, considers the development of a forecasting system for ocean space-time variability based on satellite data, using Artificial Intelligence techniques. This system will be merged with numerical simulation approaches, via assimilation techniques, to obtain a hybrid SOFT-numerical forecasting system of improved performance. The results of the project will provide efficient forecasting of sea-surface temperature structures, currents, dynamic height, and the biological activity associated with chlorophyll fields. All these quantities could give valuable information for the planning and management of human activities in marine environments, such as navigation, fisheries, pollution control, or coastal management. A detailed identification of present and new needs, and of the potential end-users concerned by such an operational tool, is being performed. The project will study solutions adapted to these specific needs.
Short-term wind power combined forecasting based on error forecast correction
International Nuclear Information System (INIS)
Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng
2016-01-01
Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
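As a toy illustration of the error-forecast-correction idea (fit a model to past forecast errors, then subtract the predicted error from the next forecast), the sketch below uses a simple linear error model; it is a simplified stand-in for the paper's support vector machine / extreme learning machine correctors, with entirely synthetic data.

```python
import numpy as np

# Synthetic "wind power" series and a deliberately biased base forecast.
rng = np.random.default_rng(3)
actual = 50 + 10 * np.sin(np.arange(200) / 10) + rng.normal(0, 1, 200)
base_forecast = actual + 3.0 + rng.normal(0, 1, 200)   # systematic error

# Fit an AR(1)-style linear model to past errors: e[t] ~ a*e[t-1] + b.
errors = base_forecast - actual
a, b = np.polyfit(errors[100:-1], errors[101:], 1)      # train on 2nd half

# One-step-ahead corrected forecast for the final step.
predicted_error = a * errors[-2] + b
corrected = base_forecast[-1] - predicted_error
print(f"base error:      {base_forecast[-1] - actual[-1]:+.2f}")
print(f"corrected error: {corrected - actual[-1]:+.2f}")
```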
Sellnow, D. D.; Sellnow, T. L.
2017-12-01
Earthquake scientists are without doubt experts in understanding earthquake probabilities, magnitudes, and intensities, as well as the potential consequences of them to community infrastructures and inhabitants. One critical challenge these scientific experts face, however, rests with communicating what they know to the people they want to help. Helping scientists translate scientific information to non-scientists is something Drs. Tim and Deanna Sellnow have been committed to for decades. As such, they have compiled a host of data-driven best practices for communicating effectively to non-scientific publics about earthquake forecasting, probabilities, and warnings. In this session, they will summarize what they have learned as it may help earthquake scientists, emergency managers, and other key spokespersons share these important messages to disparate publics in ways that result in positive outcomes, the most important of which is saving lives.
Combination of biased forecasts: Bias correction or bias based weights?
Wenzel, Thomas
1999-01-01
Most of the literature on the combination of forecasts deals with the assumption of unbiased individual forecasts. Here, we consider the case of biased forecasts and discuss two different combination techniques that result in an unbiased combined forecast: on the one hand we correct the individual forecasts, and on the other we calculate bias-based weights. A simulation study gives some insight into the situations in which each method should be used.
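A toy numerical sketch of the two techniques follows (my construction; the biases, noise levels, and the particular weight rule, weights chosen so the biases cancel, are illustrative assumptions rather than the paper's exact estimators):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
truth = 100.0 + rng.normal(0, 5.0, n)            # quantity being forecast
f1 = truth + 4.0 + rng.normal(0, 2.0, n)          # forecaster 1, bias +4
f2 = truth - 2.0 + rng.normal(0, 3.0, n)          # forecaster 2, bias -2

train = slice(0, 250)                              # past data with outcomes
b1 = (f1[train] - truth[train]).mean()             # estimated biases
b2 = (f2[train] - truth[train]).mean()

test = slice(250, n)
# Technique 1: correct each forecast for its bias, then average.
combo_corrected = 0.5 * ((f1[test] - b1) + (f2[test] - b2))
# Technique 2: bias-based weights w and 1-w with w*b1 + (1-w)*b2 = 0,
# so the biases cancel in the combination (requires opposite-sign biases).
w = -b2 / (b1 - b2)
combo_weighted = w * f1[test] + (1 - w) * f2[test]

for name, c in [("bias-corrected", combo_corrected),
                ("bias-based weights", combo_weighted)]:
    print(f"{name:>20}: mean error = {(c - truth[test]).mean():+.3f}")
```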
An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...
Nomura, Shunichi; Ogata, Yosihiko
2016-04-01
We propose a Bayesian method of probability forecasting for recurrent earthquakes on inland active faults in Japan. Renewal processes with the Brownian Passage Time (BPT) distribution are applied to over half of the active faults in Japan by the Headquarters for Earthquake Research Promotion (HERP) of Japan. Long-term forecasting with the BPT distribution needs two parameters: the mean and the coefficient of variation (COV) of the recurrence intervals. HERP applies a common COV parameter to all of these faults because most of them have very few documented paleoseismic events, not enough to estimate reliable COV values for the individual faults. However, different COV estimates have been proposed for the same paleoseismic catalog in related work; applying different COV estimates can make a critical difference to the forecast, so the COV should be carefully selected for individual faults. Recurrence intervals on a fault are, on average, determined by the long-term slip rate caused by tectonic motion, but they are perturbed by nearby seismicity that influences the surrounding stress field. The COVs of recurrence intervals depend on such stress perturbations, and so have spatial trends due to the heterogeneity of tectonic motion and seismicity. We therefore introduce a spatial structure on the COV parameter via Bayesian modeling with a Gaussian process prior, so that the COVs on active faults are correlated and take similar values for closely located faults. We find that the spatial trends in the estimated COV values coincide with the density of active faults in Japan. We also show Bayesian forecasts computed from the proposed model using a Markov chain Monte Carlo method. Our forecasts differ from HERP's forecasts especially on the active faults where HERP's forecasts are very high or low.
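For reference, the BPT distribution is the inverse Gaussian; with mean recurrence m and coefficient of variation alpha it maps onto scipy's invgauss parameterization via mu = alpha**2 and scale = m / alpha**2. A minimal sketch of the long-term conditional probability computed in such forecasts (with illustrative parameters, not HERP's values) is:

```python
from scipy import stats

def bpt_conditional_prob(t_elapsed, dt, mean, cov):
    """P(rupture in (t, t+dt] | no rupture by t) for a BPT (inverse
    Gaussian) renewal model with given mean recurrence and COV."""
    dist = stats.invgauss(mu=cov**2, scale=mean / cov**2)
    return 1.0 - dist.sf(t_elapsed + dt) / dist.sf(t_elapsed)

# Hypothetical fault: mean recurrence 1000 yr, COV 0.5, 800 yr elapsed;
# probability of rupture within the next 30 years:
print(f"{bpt_conditional_prob(800.0, 30.0, 1000.0, 0.5):.4f}")
```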
Earthquake forecast for the Wasatch Front region of the Intermountain West
DuRoss, Christopher B.
2016-04-18
The Working Group on Utah Earthquake Probabilities has assessed the probability of large earthquakes in the Wasatch Front region. There is a 43 percent probability of one or more magnitude 6.75 or greater earthquakes and a 57 percent probability of one or more magnitude 6.0 or greater earthquakes in the region in the next 50 years. These results highlight the threat of large earthquakes in the region.
Ebrahimian, Hossein; Jalayer, Fatemeh
2017-08-29
In the immediate aftermath of a strong earthquake and in the presence of an ongoing aftershock sequence, scientific advisories in terms of seismicity forecasts play a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short term. We propose robust forecasting of seismicity based on the ETAS model, by exploiting the link between Bayesian inference and Markov Chain Monte Carlo simulation. The methodology considers the uncertainty not only in the model parameters, conditioned on the available catalogue of events that occurred before the forecasting interval, but also in the sequence of events that are going to happen during the forecasting interval. We demonstrate the methodology by retrospective early forecasting of seismicity associated with the 2016 Amatrice seismic sequence in central Italy. We provide robust spatio-temporal short-term seismicity forecasts for various time intervals in the first few days elapsed after each of the three main events within the sequence, which predict the observed seismicity to within plus or minus two standard deviations of the mean estimate within the few hours elapsed after each main event.
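The temporal ETAS conditional intensity underlying such forecasts can be written compactly. The sketch below evaluates it for a toy two-event sequence with illustrative parameter values; the paper's Bayesian treatment of parameter uncertainty is not reproduced here.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.2, K=0.05,
                   alpha=1.0, c=0.01, p=1.2, m0=3.0):
    """Temporal ETAS conditional intensity lambda(t) (events/day):
    background rate mu plus modified-Omori aftershock terms triggered
    by all events before time t. Parameter values are illustrative."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0))
                       * (dt + c) ** (-p))

# Toy sequence: a mainshock at t=0 (M6.0) and an aftershock at t=0.5 d (M4.5).
times = np.array([0.0, 0.5])
mags = np.array([6.0, 4.5])
print(f"lambda(1 day)   = {etas_intensity(1.0, times, mags):.2f} events/day")
print(f"lambda(10 days) = {etas_intensity(10.0, times, mags):.3f} events/day")
```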
Staged decision making based on probabilistic forecasting
Booister, Nikéh; Verkade, Jan; Werner, Micha; Cranston, Michael; Cumiskey, Lydia; Zevenbergen, Chris
2016-04-01
Flood forecasting systems reduce, but cannot eliminate, uncertainty about the future. Probabilistic forecasts explicitly show that uncertainty remains. However, compared to deterministic forecasts, a dimension is added ('probability' or 'likelihood'), and this added dimension makes decision making slightly more complicated. One decision-support technique is the cost-loss approach, which defines whether or not to issue a warning or implement mitigation measures (a risk-based method). With the cost-loss method, a warning is issued when the ratio of the response costs to the damage reduction is less than or equal to the probability of the possible flood event. This cost-loss method is not widely used, because it is motivated by economic values alone and is relatively static (no reasoning, just a yes/no decision). Nevertheless it has high potential to improve risk-based decision making based on probabilistic flood forecasting, because no other methods are known that deal with probabilities in decision making. The main aim of this research was to explore ways of making probability-based decision making with the cost-loss method more applicable in practice. The exploration began by identifying other situations in which decisions were taken based on uncertain forecasts or predictions. These cases spanned a range of degrees of uncertainty: from known uncertainty to deep uncertainty. Based on the types of uncertainties, concepts for dealing with situations and responses were analysed and possible applicable concepts were chosen. Out of this analysis the concepts of flexibility and robustness appeared to fit the existing method. Instead of taking big decisions with bigger consequences at once, the idea is that actions and decisions are cut up into smaller pieces, with the decision to implement made on the basis of the economic costs of decisions and measures and the reduction in flood impact. The more lead-time there is in
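The cost-loss rule described above reduces to a one-line comparison; the following sketch shows it with hypothetical cost and loss figures.

```python
def issue_warning(cost, loss_reduction, flood_probability):
    """Cost-loss rule described above: act when the cost of responding,
    relative to the damage it would avert, does not exceed the forecast
    probability of the flood event."""
    return cost / loss_reduction <= flood_probability

# Hypothetical: mitigation costs 20k and averts 200k of damage, so
# C/L = 0.1 and any forecast probability of at least 10% triggers a warning.
print(issue_warning(20_000, 200_000, 0.25))   # True  -> warn
print(issue_warning(20_000, 200_000, 0.05))   # False -> do not warn
```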
Radar Based Flow and Water Level Forecasting in Sewer Systems
DEFF Research Database (Denmark)
Thorndahl, Søren; Rasmussen, Michael R.; Grum, M.
2009-01-01
This paper describes the first radar-based forecast of flow and/or water level in sewer systems in Denmark. The rainfall is successfully forecasted with a lead time of 1-2 hours, and flows/levels are forecasted an additional ½-1½ hours ahead using models describing the behaviour of the sewer system. Bot...
Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes
Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.
2014-01-01
Predicting how large an earthquake can be, and where and when it will strike, remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
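The abstract notes that the model reproduces the Gutenberg-Richter law. As background, the sketch below estimates the G-R b-value from a synthetic catalog using the standard Aki maximum-likelihood formula; this is textbook material, not the paper's fusion-fission model.

```python
import numpy as np

def b_value_mle(mags, m_c):
    """Maximum-likelihood b-value of the Gutenberg-Richter law
    log10 N(>=M) = a - b*M (Aki, 1965), ignoring magnitude binning."""
    mags = np.asarray(mags)
    mags = mags[mags >= m_c]
    return np.log10(np.e) / (mags.mean() - m_c)

# Synthetic G-R catalog with true b = 1.0 above completeness Mc = 2.0:
# magnitudes above Mc are exponential with scale log10(e)/b.
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
print(f"estimated b = {b_value_mle(mags, 2.0):.2f}")   # ~1.0
```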
Action-based flood forecasting for triggering humanitarian action
Coughlan de Perez, Erin; van den Hurk, Bart; van Aalst, Maarten K.; Amuron, Irene; Bamanya, Deus; Hauser, Tristan; Jongma, Brenden; Lopez, Ana; Mason, Simon; Mendler de Suarez, Janot; Pappenberger, Florian; Rueth, Alexandra; Stephens, Elisabeth; Suarez, Pablo; Wagemaker, Jurjen; Zsoter, Ervin
2016-09-01
Too often, credible scientific early warning information of increased disaster risk does not result in humanitarian action. With financial resources tilted heavily towards response after a disaster, disaster managers have limited incentive and ability to process complex scientific data, including uncertainties. These incentives are beginning to change, with the advent of several new forecast-based financing systems that provide funding based on a forecast of an extreme event. Given the changing landscape, here we demonstrate a method to select and use appropriate forecasts for specific humanitarian disaster prevention actions, even in a data-scarce location. This action-based forecasting methodology takes into account the parameters of each action, such as action lifetime, when verifying a forecast. Forecasts are linked with action based on an understanding of (1) the magnitude of previous flooding events and (2) the willingness to act "in vain" for specific actions. This is applied in the context of the Uganda Red Cross Society forecast-based financing pilot project, with forecasts from the Global Flood Awareness System (GloFAS). Using this method, we define the "danger level" of flooding, and we select the probabilistic forecast triggers that are appropriate for specific actions. Results from this methodology can be applied globally across hazards and fed into a financing system that ensures that automatic, pre-funded early action will be triggered by forecasts.
Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.
2017-12-01
Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
Demand forecast model based on CRM
Cai, Yuancui; Chen, Lichao
2006-11-01
As customer-centred management thinking becomes internalized day by day, forecasting customer demand becomes more and more important. In the demand forecasting of customer relationship management, traditional forecasting methods have great limitations because of the large uncertainty in demand; this calls for new models that meet the demands of development. In this paper, the notion is to forecast demand according to the characteristics of the potential customer, and then to model on that basis. The model first depicts the customer using uniform multiple indexes. Secondly, the model acquires characteristic customers on the basis of a data warehouse and data mining technology. Finally, the most similar characteristic customer is obtained by comparison, and the demands of a new customer are forecast from that most similar characteristic customer.
Weather forecasting based on hybrid neural model
Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.
2017-11-01
Making deductions and expectations about climate has been a challenge throughout mankind's history. Accurate meteorological predictions help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research treats climate as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model aims at precise forecasting given the specifics of climate-anticipating frameworks. The study concentrates on data representing Saudi Arabian weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative moistness, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons to represent rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for measuring forecast accuracy. On an individual standing, MLP forecasting results are better than RBF; however, the proposed simplified hybrid neural model comes out with better forecasting accuracy compared to both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions
White, Randall; McCausland, Wendy
2016-01-01
We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly, we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity, including: its swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and a large non-double-couple component to focal mechanisms. Most importantly, we show that the intruded magma volume can be estimated simply from the cumulative seismic moment of the VT seismicity via log10 V = 0.77 log10 ΣMo − 5.32, with volume, V, in cubic meters and seismic moment, Mo, in newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
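The quoted moment-volume regression is straightforward to apply; a minimal sketch, assuming cumulative moment is supplied in newton meters:

```python
import math

def intruded_volume_m3(cumulative_moment_nm):
    """Intruded magma volume from cumulative VT seismic moment, using the
    empirical regression quoted above:
        log10 V = 0.77 * log10(sum Mo) - 5.32   (V in m^3, Mo in N m)."""
    return 10 ** (0.77 * math.log10(cumulative_moment_nm) - 5.32)

# Example: a VT swarm with cumulative moment 1e16 N m (roughly the moment
# of a single Mw ~4.6 earthquake) implies about 1e7 m^3 of intruded magma.
print(f"{intruded_volume_m3(1e16):.2e} m^3")
```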
Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model
Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza
2017-08-01
Producing an accurate seismic hazard map and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to its interpretability, a fuzzy predictor can capture both the nonlinearity and the chaotic behavior of data where data are limited. In this paper, the earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using the fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. According to our results, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
Dynamic evaluation of seismic hazard and risks based on the Unified Scaling Law for Earthquakes
Kossobokov, V. G.; Nekrasova, A.
2016-12-01
We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A + B·(6 − M) + C·log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L, A characterizes the average annual rate of strong (M = 6) earthquakes, B determines the balance between magnitude ranges, and C estimates the fractal dimension of the seismic locus in projection onto the Earth's surface. The parameters A, B, and C of the USLE are used to assess, first, the expected maximum magnitude in a time interval at each seismically prone cell of a uniform grid covering the region of interest, and then the corresponding expected ground shaking parameters. After rigorous testing against the available seismic evidence from the past (e.g., historically reported macro-seismic intensities or paleo data), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructure. The hazard maps for a given territory change dramatically when the methodology is applied over a moving time window of a certain size, e.g. about a decade long for an intermediate-term regional assessment, or exponentially increasing intervals for daily local strong-aftershock forecasting. This dynamical assessment of seismic hazard and risks is illustrated by applications to the territory of the Greater Caucasus and Crimea and to the two-year series of aftershocks of the 11 October 2008 Kurchaloy, Chechnya earthquake, whose case history appears encouraging for further systematic testing as a potential short-term forecasting tool.
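A minimal sketch of evaluating the USLE rate formula quoted above; the A, B, C coefficients and cell size here are hypothetical.

```python
import math

def usle_annual_number(magnitude, length_km, A, B, C):
    """Expected annual number of magnitude-M earthquakes in a cell of
    linear dimension L under the USLE quoted above:
        log10 N(M, L) = A + B*(6 - M) + C*log10 L."""
    return 10 ** (A + B * (6.0 - magnitude) + C * math.log10(length_km))

# Hypothetical cell: A = -1.0 (one M6 event per ~10 yr at unit scale),
# B = 0.9, C = 1.2, cell size 50 km.
for m in (5.0, 6.0, 7.0):
    print(f"M{m}: N = {usle_annual_number(m, 50.0, -1.0, 0.9, 1.2):.4f} per yr")
```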
Li, Xiang; He, Hongrang; Chen, Chaohui; Miao, Ziqing; Bai, Shigang
2017-10-01
A convection-allowing ensemble forecast experiment on a squall line was conducted based on the breeding growth mode (BGM). Meanwhile, the probability matched mean (PMM) and neighborhood ensemble probability (NEP) methods were used to optimize the associated precipitation forecast. The ensemble forecast predicted the precipitation tendency accurately and was closer to the observation than the control forecast. For heavy rainfall, the precipitation center produced by the ensemble forecast was also better. The Fractions Skill Score (FSS) results indicated that the ensemble mean was skillful for light rainfall, while the PMM produced a better probability distribution of precipitation for heavy rainfall. Preliminary results demonstrated that convection-allowing ensemble forecasting can improve precipitation forecast skill by providing valuable probability forecasts. It is necessary to employ new methods, such as the PMM and NEP, to generate precipitation probability forecasts. Nonetheless, the lack of spread and the overprediction of precipitation by the ensemble members are still problems that need to be solved.
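As background on the NEP method mentioned above, the sketch below computes a neighborhood ensemble probability field for a toy ensemble. The threshold, neighborhood size, and the uniform-filter implementation are generic assumptions rather than the study's configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_ensemble_probability(ens, threshold, size=3):
    """NEP as commonly defined: the fraction of (member, neighborhood
    grid point) pairs whose precipitation exceeds `threshold`.

    ens: array of shape (n_members, ny, nx)."""
    exceed = (ens > threshold).astype(float)
    point_prob = exceed.mean(axis=0)          # plain ensemble probability
    return uniform_filter(point_prob, size=size, mode="nearest")

# Toy 20-member ensemble of 6-h rain totals on a 50x50 grid (mm).
rng = np.random.default_rng(2)
ens = rng.gamma(shape=2.0, scale=5.0, size=(20, 50, 50))
nep = neighborhood_ensemble_probability(ens, threshold=25.0, size=5)
print(nep.shape, float(nep.max()))
```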
Operational forecasting based on a modified Weather Research and Forecasting model
Energy Technology Data Exchange (ETDEWEB)
Lundquist, J; Glascoe, L; Obrecht, J
2010-03-18
Accurate short-term forecasts of wind resources are required for efficient wind farm operation and ultimately for the integration of large amounts of wind-generated power into electrical grids. Siemens Energy Inc. and Lawrence Livermore National Laboratory, with the University of Colorado at Boulder, are collaborating on the design of an operational forecasting system for large wind farms. The basis of the system is the numerical weather prediction tool, the Weather Research and Forecasting (WRF) model; large-eddy simulations and data assimilation approaches are used to refine and tailor the forecasting system. The representation of the atmospheric boundary layer is modified based on high-resolution large-eddy simulations of the atmospheric boundary layer. These large-eddy simulations incorporate wake effects from upwind turbines on downwind turbines, and represent complex atmospheric variability due to complex terrain, surface features, and atmospheric stability. Real-time hub-height wind speed and other meteorological data streams from existing wind farms are incorporated into the modeling system to enable uncertainty quantification through probabilistic forecasts. A companion investigation has identified optimal boundary-layer physics options for low-level forecasts in complex terrain, toward employing decadal WRF simulations to anticipate large-scale changes in wind resource availability due to global climate change.
Vesicular stomatitis forecasting based on Google Trends.
Wang, JianYing; Zhang, Tong; Lu, Yi; Zhou, GuangYa; Chen, Qin; Niu, Bing
2018-01-01
Vesicular stomatitis (VS) is an important viral disease of livestock. The main feature of VS is irregular blisters that occur on the lips, tongue, oral mucosa, hoof crown and nipple. Humans can also be infected with vesicular stomatitis and develop meningitis. This study analyses the 2014 American VS outbreaks in order to accurately predict vesicular stomatitis outbreak trends. American VS outbreak data were collected from the OIE. The data for VS keywords were obtained by inputting 24 disease-related keywords into Google Trends. After calculating the Pearson and Spearman correlation coefficients, it was found that there was a relationship between outbreaks and keywords derived from Google Trends. Finally, prediction models were constructed based on qualitative classification and quantitative regression. For the regression model, the Pearson correlation coefficients between the predicted outbreaks and actual outbreaks are 0.953 and 0.948, respectively. For the qualitative classification model, we constructed five classification predictive models and chose the best one as the result. The results showed that the SN (sensitivity), SP (specificity) and ACC (prediction accuracy) values of the best classification predictive model are 78.52%, 72.5% and 77.14%, respectively. This study applied Google search data to construct a qualitative classification model and a quantitative regression model. The results show that the method is effective and that these two models yield more accurate forecasts.
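The keyword-screening step described above amounts to computing Pearson and Spearman correlations. A minimal sketch on synthetic weekly series follows; the actual 24 keywords and OIE outbreak counts are not reproduced.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical weekly series: VS outbreak counts and one Google Trends
# keyword index (the study screened 24 keywords this way).
rng = np.random.default_rng(3)
trend = rng.uniform(0, 100, 52)
outbreaks = np.rint(0.3 * trend + rng.normal(0, 5, 52)).clip(min=0)

r_p, p_p = pearsonr(trend, outbreaks)
r_s, p_s = spearmanr(trend, outbreaks)
print(f"Pearson r = {r_p:.2f} (p = {p_p:.1e}); Spearman rho = {r_s:.2f}")
```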
Earthquake response analysis of a base isolated building
International Nuclear Information System (INIS)
Mazda, T.; Shiojiri, H.; Sawada, Y.; Harada, O.; Kawai, N.; Ontsuka, S.
1989-01-01
Recently, seismic isolation has become a popular method in the design of important structures and equipment against earthquakes. However, it is desirable to accumulate demonstration data on the reliability of seismically isolated structures and to establish analysis methods for those structures. Based on the above recognition, vibration tests of a base isolated building were carried out in Tsukuba Science City. Since then, many earthquake records have been obtained at the building. In order to examine the validity of numerical models, earthquake response analyses have been executed using both a lumped-mass model and a finite-element model.
Biasi, Glenn; Scharer, Katherine M.; Weldon, Ray; Dawson, Timothy E.
2016-01-01
The 98-year open interval since the most recent ground-rupturing earthquake in the greater San Andreas boundary fault system would not be predicted by the quasi-periodic recurrence statistics from paleoseismic data. We examine whether the current hiatus could be explained by uncertainties in earthquake dating. Using seven independent paleoseismic records, 100-year intervals may have occurred circa 1150, 1400, and 1700 AD, but they occur in a third or less of sample records drawn at random. A second method, sampling from dates conditioned on the existence of a gap of varying length, suggests century-long gaps occur 3-10% of the time. A combined record with more sites would lead to lower probabilities. Systematic over-interpretation of the data is considered an unlikely explanation. Instead, some form of non-stationary behaviour seems required, perhaps through long-range fault interaction. Earthquake occurrence since 1000 AD is not inconsistent with the long-term cyclicity suggested by long runs of earthquake simulators.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the
Geomagnetic Dst index forecast based on IMF data only
Directory of Open Access Journals (Sweden)
G. Pallocchia
2006-05-01
In the past years several operational Dst forecasting algorithms, based on both IMF and solar wind plasma parameters, have been developed and used. We describe an Artificial Neural Network (ANN) algorithm which calculates the Dst index on the basis of IMF data only and discuss its performance for several individual storms. Moreover, we briefly comment on the physical grounds which allow the Dst forecasting based on IMF only.
Earthquake insurance pricing: a risk-based approach.
Lin, Jeng-Hsiang
2018-04-01
Flat earthquake premiums are set 'uniformly' for a variety of buildings in many countries, neglecting the fact that the risk of earthquake damage to buildings depends on a wide range of factors. How these factors influence insurance premiums is worth studying further. Proposed herein is a risk-based approach to estimating the earthquake insurance rates of buildings. Examples of applying the approach to buildings located in Taipei city of Taiwan were examined. Then, the earthquake insurance rates for the buildings investigated were calculated and tabulated. To carry out insurance rating, the buildings were classified into 15 model building types according to their construction materials and building height. Seismic design levels were also considered in insurance rating in response to the effect of seismic zone and construction years of buildings. This paper may be of interest to insurers, actuaries, and private and public sectors of insurance.
Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs
Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan
2016-04-01
Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in the numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. An ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to gain new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small-multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more
Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide
Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.
2017-12-01
GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, the algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare the peak ground acceleration (PGA) predicted from first-alert solutions with that recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.
Generalization of information-based concepts in forecast verification
Tödter, J.; Ahrens, B.
2012-04-01
This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed, then the generalization to continuous forecasts is shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN is illustrated, together with an algorithm to evaluate its components reliability, resolution, and uncertainty for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
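For reference, the two scores being compared are simple to compute for binary events. This minimal sketch evaluates the Brier Score and the Ignorance Score for a small set of probabilistic forecasts; the clipping constant is an implementation assumption to avoid infinite penalties.

```python
import numpy as np

def brier_score(p, o):
    """Mean squared difference between forecast probability p and the
    binary outcome o (1 if the event occurred, else 0)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def ignorance_score(p, o):
    """IGN = mean of -log2 of the probability assigned to the outcome
    that actually occurred; clipping avoids -log2(0)."""
    p = np.clip(np.asarray(p, float), 1e-6, 1 - 1e-6)
    o = np.asarray(o, float)
    return np.mean(-np.log2(np.where(o == 1, p, 1 - p)))

probs = [0.9, 0.7, 0.2, 0.1]
obs = [1, 1, 0, 1]
print(f"BS  = {brier_score(probs, obs):.3f}")
print(f"IGN = {ignorance_score(probs, obs):.3f}")
```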
Deep Neural Network Based Demand Side Short Term Load Forecasting
Directory of Open Access Journals (Sweden)
Seunghyoung Ryu
2016-12-01
In the smart grid, one of the most important research areas is load forecasting; it spans from traditional time series analyses to recent machine learning approaches and mostly focuses on forecasting aggregated electricity consumption. However, the importance of demand side energy management, including individual load forecasting, is becoming critical. In this paper, we propose deep neural network (DNN)-based load forecasting models and apply them to a demand side empirical load database. The DNNs are trained in two different ways: with pre-training by a restricted Boltzmann machine, and using the rectified linear unit without pre-training. The DNN forecasting models are trained on individual customers' electricity consumption data and regional meteorological elements. To verify the performance of the DNNs, forecasting results are compared with a shallow neural network (SNN), a double seasonal Holt-Winters (DSHW) model and the autoregressive integrated moving average (ARIMA) model. The mean absolute percentage error (MAPE) and relative root mean square error (RRMSE) are used for verification. Our results show that DNNs exhibit accurate and robust predictions compared to the other forecasting models; e.g., MAPE and RRMSE are reduced by up to 17% and 22% compared to SNN and 9% and 29% compared to DSHW.
ECONOMIC FORECASTS BASED ON ECONOMETRIC MODELS USING EViews 5
Directory of Open Access Journals (Sweden)
Cornelia Tomescu-Dumitrescu
2009-05-01
Forecasting the evolution of economic phenomena represents, to a large extent, the final objective of econometrics. It also represents a real test of the validity of the elaborated model. Unlike forecasts based on the study of time series, which have a recognizable inertial character, the forecasts generated by an econometric model with simultaneous equations aim to outline the future of important economic variables through the direct and indirect influences exerted on them by the exogenous variables. To ease the computations required for producing forecasts from econometric models, the use of specialized software is indicated. One such program is EViews, which is applied because it significantly reduces the time devoted to econometric analysis and assures high accuracy of computation and interpretation of the results.
Modern earthquake engineering offshore and land-based structures
Jia, Junbo
2017-01-01
This book addresses applications of earthquake engineering for both offshore and land-based structures. It is self-contained as a reference work and covers a wide range of topics, including topics related to engineering seismology, geotechnical earthquake engineering, structural engineering, as well as special contents dedicated to design philosophy, determination of ground motions, shock waves, tsunamis, earthquake damage, seismic response of offshore and arctic structures, spatial varied ground motions, simplified and advanced seismic analysis methods, sudden subsidence of offshore platforms, tank liquid impacts during earthquakes, seismic resistance of non-structural elements, and various types of mitigation measures, etc. The target readership includes professionals in offshore and civil engineering, officials and regulators, as well as researchers and students in this field.
Directory of Open Access Journals (Sweden)
Yuqi Dong
2016-12-01
Accurate short-term electrical load forecasting plays a pivotal role in the national economy and people's livelihood by providing effective future plans and ensuring a reliable supply of sustainable electricity. Although considerable work has been done to select suitable models and optimize model parameters for short-term electrical load forecasting, few models are built based on the characteristics of the time series themselves, which has a great impact on forecasting accuracy. For that reason, this paper proposes a hybrid model based on data decomposition that considers the periodicity, trend and randomness of the original electrical load time series data. After preprocessing and analyzing the original time series, a generalized regression neural network optimized by a genetic algorithm is used to forecast the short-term electrical load. The experimental results demonstrate that the proposed hybrid model not only achieves a good fitting ability, but also approximates the actual values well when dealing with non-linear time series data with periodicity, trend and randomness.
Directory of Open Access Journals (Sweden)
S. Zhang
2018-03-01
Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, which are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides at 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model can be operated in a highly efficient way and gives more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
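A minimal sketch of the Monte Carlo scheme described above, using an infinite-slope safety factor with uncertain cohesion and friction angle. The slope geometry, the sampling intervals, and the omission of pore pressure are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def landslide_probability(n=10_000, seed=0):
    """P(Fs < 1) for an infinite-slope stability model with uncertain
    soil parameters, in the spirit of the scheme described above."""
    rng = np.random.default_rng(seed)
    beta = np.radians(35.0)          # slope angle (assumed)
    gamma, z = 18.0, 2.0             # unit weight (kN/m^3), soil depth (m)
    c = rng.uniform(3.0, 8.0, n)     # cohesion (kPa), sampled in an interval
    phi = np.radians(rng.uniform(25.0, 35.0, n))   # friction angle

    # Infinite-slope safety factor (pore pressure omitted for brevity):
    fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (
        gamma * z * np.sin(beta) * np.cos(beta))
    return np.mean(fs < 1.0)

print(f"P(landslide) = {landslide_probability():.3f}")
```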
Short-term Power Load Forecasting Based on Balanced KNN
Lv, Xianlong; Cheng, Xingong; YanShuang; Tang, Yan-mei
2018-03-01
To improve the accuracy of load forecasting, a short-term load forecasting model based on a balanced KNN algorithm is proposed. According to the load characteristics, massive historical power load data are divided into scenes by the K-means algorithm. In view of unbalanced load scenes, the balanced KNN algorithm is proposed to classify scenes accurately. A locally weighted linear regression algorithm is then used to fit and predict the load. Adopting the Apache Hadoop programming framework for cloud computing, the proposed algorithm model is parallelized and improved to enhance its ability to deal with massive, high-dimensional data. The analysis of household electricity consumption data for a residential district was done on a 23-node cloud computing cluster, and experimental results show that the load forecasting accuracy and execution time of the proposed model are better than those of the traditional forecasting algorithm.
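A toy sketch of the pipeline described above (K-means scene partition, KNN scene classification, local linear regression). Plain distance-weighted KNN stands in for the paper's balanced KNN variant, and all data and parameters are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

# Toy daily load profiles (kW, 24 hourly values each); real inputs would
# be historical smart-meter data.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 2.0, (200, 24)) for m in (20.0, 35.0, 50.0)])

# Step 1: partition the history into load scenes with K-means.
scenes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: classify a new day into a scene (distance-weighted KNN as a
# stand-in for the balanced-KNN variant, which handles unbalanced scenes).
knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X, scenes)
new_day = rng.normal(35.0, 2.0, (1, 24))
scene = knn.predict(new_day)[0]

# Step 3: fit a local linear model within the matched scene to predict
# a target such as the next-day peak load.
members = X[scenes == scene]
y_peak = members.max(axis=1) + rng.normal(0, 0.5, len(members))
model = LinearRegression().fit(members, y_peak)
print(f"scene {scene}, forecast peak = {model.predict(new_day)[0]:.1f} kW")
```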
Electricity Price Forecasting Based on AOSVR and Outlier Detection
Institute of Scientific and Technical Information of China (English)
Zhou Dianmin; Gao Lin; Gao Feng
2005-01-01
Electricity price is the first consideration for all participants in the electric power market, and its characteristics are related to both the market mechanism and variation in the behaviors of market participants. It is necessary to build a real-time price forecasting model with adaptive capability; and because there are outliers in the price data, they should be detected and filtered out when training the forecasting model by regression methods. In view of these points, this paper presents an electricity price forecasting method based on accurate on-line support vector regression (AOSVR) and outlier detection. Numerical testing results show that the method is effective in forecasting electricity prices in the electric power market.
Environmental noise forecasting based on support vector machine
Fu, Yumei; Zan, Xinwu; Chen, Tianyi; Xiang, Shihan
2018-01-01
As an important pollution source, noise pollution has long been a research focus. Especially in recent years, noise pollution has become seriously harmful to the human environment, so research on noise pollution is a very active topic. Some noise monitoring technologies and monitoring systems are applied in environmental noise testing, measurement and evaluation, but research on environmental noise forecasting is weak. In this paper, a real-time environmental noise monitoring system is introduced briefly. This monitoring system operates in Mianyang City, Sichuan Province, monitoring and collecting the environmental noise of more than 20 enterprises in the district. Based on this large amount of noise data, noise forecasting by the Support Vector Machine (SVM) is studied in detail. Compared with a time series forecasting model and an artificial neural network forecasting model, the SVM forecasting model has advantages such as a smaller required data size and higher precision and stability. The noise forecasting results based on the SVM can provide an important and accurate reference for the prevention and control of environmental noise.
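A minimal sketch of SVM-based noise forecasting in the spirit described above, using lagged values of a synthetic hourly noise series as features. The lag depth, kernel, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy hourly noise-level series (dB) with a daily cycle plus noise.
rng = np.random.default_rng(5)
t = np.arange(500)
noise_db = 60 + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.5, t.size)

# Build lagged features: predict the next hour from the previous 24.
lags = 24
X = np.column_stack([noise_db[i:i + t.size - lags] for i in range(lags)])
y = noise_db[lags:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:-48], y[:-48])                  # hold out the last 48 hours
pred = model.predict(X[-48:])
print(f"test RMSE = {np.sqrt(np.mean((pred - y[-48:]) ** 2)):.2f} dB")
```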
SOFT project: a new forecasting system based on satellite data
Pascual, Ananda; Orfila, A.; Alvarez, Alberto; Hernandez, E.; Gomis, D.; Barth, Alexander; Tintore, Joaquim
2002-01-01
The aim of the SOFT project is to develop a new ocean forecasting system by using a combination of satellite data, evolutionary programming and numerical ocean models. To achieve this objective two steps are proposed: (1) to obtain an accurate ocean forecasting system using genetic algorithms based on satellite data; and (2) to integrate the above new system into existing deterministic numerical models. Evolutionary programming will be employed to build 'intelligent' systems that, learning from the past ocean variability and considering the present ocean state, will be able to infer near-future ocean conditions. Validation of the forecast skill will be carried out by comparing the forecast fields with satellite and in situ observations. Validation with satellite observations will provide the expected errors in the forecasting system. Validation with in situ data will indicate the capability of the satellite-based forecast information to improve the performance of the numerical ocean models. This latter validation will be accomplished considering in situ measurements in a specific oceanographic area at two different periods of time. The first set of observations will be employed to feed the hybrid systems while the second set will be used to validate the hybrid and traditional numerical model results.
M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey
Murru, Maura; Akinci, Aybige; Falcone, Giuseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.
2016-01-01
We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
International Nuclear Information System (INIS)
Xiao, Liye; Qian, Feng; Shao, Wei
2017-01-01
Highlights: • Propose a hybrid architecture based on a modified bat algorithm for multi-step wind speed forecasting. • Improve the accuracy of multi-step wind speed forecasting. • Modify the bat algorithm with CG to improve optimization performance. - Abstract: As one of the most promising sustainable energy sources, wind energy plays an important role in energy development because it is clean and non-polluting. Generally, wind speed forecasting, which has an essential influence on wind power systems, is regarded as a challenging task. Analyses based on single-step wind speed forecasting have been widely used, but their results are insufficient for ensuring the reliability and controllability of wind power systems. In this paper, a new forecasting architecture based on decomposing algorithms and modified neural networks is successfully developed for multi-step wind speed forecasting. Four different hybrid models are contained in this architecture, and to further improve the forecasting performance, a modified bat algorithm (BA) with the conjugate gradient (CG) method is developed to optimize the initial weights between layers and the thresholds of the hidden layer of the neural networks. To investigate the forecasting abilities of the four models, wind speed data collected from four different wind power stations in Penglai, China, were used as a case study. The numerical experiments showed that the hybrid model including singular spectrum analysis and a general regression neural network with CG-BA (SSA-CG-BA-GRNN) achieved the most accurate forecasting results in one-step to three-step wind speed forecasting.
A Crowdsourcing-based Taiwan Scientific Earthquake Reporting System
Liang, W. T.; Lee, J. C.; Lee, C. F.
2017-12-01
To immediately collect field observations of any earthquake-induced ground damage, such as surface fault rupture, landslides, rock falls, liquefaction, and landslide-triggered dams or lakes, we are developing an earthquake damage reporting system that relies particularly on school teachers as volunteers who have taken a series of training courses organized by this project. This Taiwan Scientific Earthquake Reporting (TSER) system is based on the Ushahidi mapping platform, which has been widely used for crowdsourcing for different purposes. Participants may add an app-like icon for mobile devices from this website at https://ies-tser.iis.sinica.edu.tw. Right after a potentially damaging earthquake occurs in the Taiwan area, trained volunteers will be notified/dispatched to the source area to carry out field surveys and describe the ground damage through this system. If the internet is available, they may also upload relevant images from the field right away. The collected information will be shared with the public after a quick screening by the on-duty scientists. To prepare for the next strong earthquake, we set up a specific project on TSER for sharing spectacular/remarkable geologic features wherever possible. This helps volunteers get used to the system and share any teachable material on this platform. This experimental, science-oriented crowdsourcing system was launched early this year. Together with a DYFI-like intensity reporting system, the Taiwan Quake-Catcher Network, and some online games and teaching materials, citizen seismology has improved greatly in Taiwan over the last decade. All these products are now either operated or promoted at the Taiwan Earthquake Research Center (TEC). With these newly developed platforms and materials, we aim not only to raise earthquake awareness and preparedness, but also to encourage public participation in earthquake science in Taiwan.
Aggregated wind power generation probabilistic forecasting based on particle filter
International Nuclear Information System (INIS)
Li, Pai; Guan, Xiaohong; Wu, Jiang
2015-01-01
Highlights: • A new method for probabilistic forecasting of aggregated wind power generation. • A dynamic system is established based on a numerical weather prediction model. • The new method handles the non-Gaussian and time-varying wind power uncertainties. • The particle filter is applied to forecast predictive densities of wind generation. - Abstract: The probability distribution of aggregated wind power generation in a region is an important issue for power system daily operation. This paper presents a novel method to forecast the predictive densities of the aggregated wind power generation from several geographically distributed wind farms, considering the non-Gaussian and non-stationary characteristics of wind power uncertainties. Based on a mesoscale numerical weather prediction model, a dynamic system is established to formulate the relationship between the atmospheric and near-surface wind fields of geographically distributed wind farms. A recursive backtracking framework based on the particle filter is applied to estimate the atmospheric state from the near-surface wind power generation measurements and to forecast possible samples of the aggregated wind power generation. The predictive densities of the aggregated wind power generation are then estimated from these predicted samples by a kernel density estimator. In case studies, the new method is tested on a system of 9 wind farms in the Midwestern United States. The testing results show that the new method provides competitive interval forecasts for the aggregated wind power generation compared with conventional statistics-based models, which validates its effectiveness.
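As background, the sketch below runs a bootstrap particle filter on a toy scalar state-space model and draws one-step-ahead predictive samples. The model, noise levels, and resampling scheme are drastic simplifications of the paper's NWP-based dynamic system.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy state-space model standing in for the atmospheric state:
#   x_t = 0.9 x_{t-1} + w,  w ~ N(0, 0.5^2)   (latent regional wind state)
#   y_t = x_t + v,          v ~ N(0, 1.0^2)   (observed aggregated generation)
T, N = 100, 2000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 0.5)
y = x_true + rng.normal(0, 1.0, T)

# Bootstrap particle filter: propagate, weight by the likelihood, resample.
particles = rng.normal(0, 1, N)
for t in range(T):
    particles = 0.9 * particles + rng.normal(0, 0.5, N)   # propagate
    w = np.exp(-0.5 * (y[t] - particles) ** 2)            # N(y; x, 1) likelihood
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)        # resample

# One-step-ahead predictive samples, summarized as a central interval
# (a kernel density estimator over `pred` would give the full density).
pred = 0.9 * particles + rng.normal(0, 0.5, N)
lo, hi = np.percentile(pred, [5, 95])
print(f"90% predictive interval for t+1: [{lo:.2f}, {hi:.2f}]")
```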
DROUGHT FORECASTING BASED ON MACHINE LEARNING OF REMOTE SENSING AND LONG-RANGE FORECAST DATA
Directory of Open Access Journals (Sweden)
J. Rhee
2016-06-01
Full Text Available The reduction of drought impacts may be achieved through sustainable drought management and proactive measures against drought disaster. Accurate and timely provision of drought information is essential. In this study, drought forecasting models were developed to provide high-resolution drought information, based on drought indicators, for ungauged areas. The models predict two drought indices: the 6-month Standardized Precipitation Index (SPI6) and the 6-month Standardized Precipitation Evapotranspiration Index (SPEI6). An interpolation method based on multiquadric splines as well as three machine learning models, Decision Tree, Random Forest, and Extremely Randomized Trees, were tested to enhance the provision of drought initial conditions based on remote sensing data, since initial conditions are one of the most important factors for drought forecasting. The machine learning-based methods performed better than the interpolation method for both classification and regression, and the methods using climatology data outperformed those using long-range forecast data. Overall, the model combining climatological data with machine learning performed best.
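A minimal sketch of the tree-ensemble regression step, using scikit-learn's Random Forest and Extremely Randomized Trees on synthetic stand-ins for the remote-sensing predictors and the SPI6 target; the paper's actual features and data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
# Hypothetical predictors: remote-sensing indicators (e.g. vegetation and
# land-surface-temperature anomalies) plus climatology fields per grid cell.
X = rng.normal(size=(2000, 6))
# Synthetic stand-in for the SPI6 target.
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2, -0.1]) + rng.normal(0.1, 0.3, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for model in (RandomForestRegressor(n_estimators=200, random_state=0),
              ExtraTreesRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))
```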
Directory of Open Access Journals (Sweden)
Rabounski D.
2014-04-01
Full Text Available The Shnoll effect manifests itself in the fine structure of the noise registered in very stable processes, where the magnitude of the signal and the average noise remain unchanged. It is found in the periodic fluctuation of the fine structure of the noise according to the cosmic cycles connected with stars, the Sun, and the Moon. The Shnoll effect is explained herein, within the framework of General Relativity, as the twin/entangled synchronization states of the observer's reference frame. The states are repeated while the observer travels, in common with the Earth, through the cosmic grid of the geodesic synchronization paths that connect his local reference frame with the reference frames of other cosmic bodies. These synchronization periods match the periods that are manifested due to the Shnoll effect, regardless of which process produces the noise. These synchronization periods are expected to exist in the noise of natural processes of any type (physics, biology, social, etc.) as well as in such artificial processes as computer-software random-number generation. This conclusion accords with what was registered according to the Shnoll effect. The theory not only explains the Shnoll effect but also allows for forecasting fluctuations in the stock exchange market, fluctuations of weather, earthquakes, and other cataclysms.
Fuzzy forecasting based on fuzzy-trend logical relationship groups.
Chen, Shyi-Ming; Wang, Nai-Yi
2010-10-01
In this paper, we present a new method to predict the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) based on fuzzy-trend logical relationship groups (FTLRGs). The proposed method divides fuzzy logical relationships into FTLRGs based on the trend of adjacent fuzzy sets appearing in the antecedents of the fuzzy logical relationships. First, we apply an automatic clustering algorithm to cluster the historical data into intervals of different lengths and define fuzzy sets on these intervals. The historical data are then fuzzified into fuzzy sets to derive fuzzy logical relationships, which are divided into FTLRGs for forecasting the TAIEX. Moreover, we also apply the proposed method to forecast enrollments and inventory demand, respectively. The experimental results show that the proposed method achieves higher average forecasting accuracy rates than the existing methods.
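The following sketch illustrates the general idea of trend-keyed relationship groups under strong simplifications: equal-width intervals instead of the paper's automatic clustering, and a midpoint-average defuzzification. It is a toy version of the grouping logic, not the authors' algorithm.

```python
import numpy as np

def fuzzify(series, n_sets=7):
    """Assign each value to a fuzzy set A_k defined on equal-width intervals
    (the paper uses automatically clustered, variable-length intervals)."""
    edges = np.linspace(series.min(), series.max(), n_sets + 1)
    labels = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, n_sets - 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return labels, mids

series = np.cumsum(np.random.default_rng(2).normal(0, 1, 300)) + 100  # toy index
labels, mids = fuzzify(series)

# Build trend-keyed groups: the relationship A_t -> A_{t+1} is filed under
# the trend (down/flat/up) of the adjacent antecedent pair (A_{t-1}, A_t).
groups = {}
for t in range(1, len(labels) - 1):
    trend = int(np.sign(labels[t] - labels[t - 1]))  # -1, 0, +1
    groups.setdefault((labels[t], trend), []).append(labels[t + 1])

# One-step forecast: average midpoint of consequents in the matching group.
key = (labels[-1], int(np.sign(labels[-1] - labels[-2])))
forecast = mids[groups[key]].mean() if key in groups else mids[labels[-1]]
print(forecast)
```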
Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data
Funning, G. J.; Cockett, R.
2012-12-01
InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median
Directory of Open Access Journals (Sweden)
V. Tramutoli
1996-06-01
Full Text Available An autoregressive model was selected to describe geoelectrical time series. An objective technique was subsequently applied to analyze and discriminate values above (below) an a priori fixed threshold possibly related to seismic events. A complete check of the model and the main guidelines to estimate the occurrence probability of extreme events are reported. A first application of the proposed technique is discussed through the analysis of the experimental data recorded by an automatic station located in Tito, a small town on the Apennine chain in Southern Italy. This region was hit by the November 1980 Irpinia-Basilicata earthquake and is one of the most active areas of the Mediterranean region. After a preliminary filtering procedure to reduce the influence of external parameters (i.e., meteo-climatic effects), it was demonstrated that the geoelectrical residual time series are well described by a second-order autoregressive model. Our findings outline a statistical methodology to evaluate the efficiency of electrical seismic precursors.
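As an illustration of the modeling step, here is a second-order autoregressive fit with statsmodels on a synthetic stand-in for the filtered residual series, with a simple 3-sigma rule flagging candidate extreme values; the threshold choice is an assumption for the sketch, not the paper's procedure.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(3)
# Stand-in for the filtered geoelectrical residual series (meteo-climatic
# effects already removed); true dynamics here are AR(2) by construction.
e = rng.normal(0, 1, 1000)
x = np.zeros(1000)
for t in range(2, 1000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

fit = AutoReg(x, lags=2).fit()
print(fit.params)  # intercept and the two AR coefficients

# Flag values whose one-step-ahead prediction error exceeds an a priori
# threshold (3 sigma here) as candidate precursory anomalies.
pred = fit.predict(start=2, end=len(x) - 1)
resid_std = fit.resid.std()
anomalies = np.where(np.abs(x[2:] - pred) > 3 * resid_std)[0] + 2
print(len(anomalies), "candidate extreme values")
```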
Jones, K. B., II; Saxton, P. T.
2013-12-01
Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate to an identifiable precursor. Secondly, there is neither a networked array for finding epicentral locations, nor have there been any attempts to establish one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two
Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions
White, Randall A.; McCausland, Wendy
2016-01-01
We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity including: swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and large non-double couple component to focal mechanisms. Most importantly we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity from:
Pollutant forecasting error based on persistence of wind direction
International Nuclear Information System (INIS)
Cooper, R.E.
1978-01-01
The purpose of this report is to provide a means of estimating the reliability of forecasts of downwind pollutant concentrations from atmospheric puff releases. These forecasts are based on assuming the persistence of the wind direction determined at the time of release. This initial forecast will be used to deploy survey teams, to predict population centers that may be affected, and to estimate the amount of time available for emergency response. Reliability of forecasting is evaluated by developing a cumulative probability distribution of error as a function of lapsed time following an assumed release. The cumulative error is determined by comparing the forecast pollutant concentration with the concentration measured by sampling along the real-time meteorological trajectory. It may be concluded that the assumption of meteorological persistence for emergency response is not very good for periods longer than 3 hours. Even within this period, the possibility of large error exists due to wind direction shifts. These shifts could affect population areas totally different from those first indicated.
Combined time-varying forecast based on the proper scoring approach for wind power generation
DEFF Research Database (Denmark)
Chen, Xingying; Jiang, Yu; Yu, Kun
2017-01-01
Compared with traditional point forecasts, combined forecasts have been proposed as an effective method to provide more accurate forecasts than individual models. However, the literature and research focused on wind-power combined forecasts are relatively limited. Here, based on the forecasting error...... distribution, a proper scoring approach is applied to combine plausible models to form an overall time-varying model for next-day forecasts, rather than a weights-based combination. To validate the effectiveness of the proposed method, real data of 3 years were used for testing. Simulation results...... demonstrate that the proposed method improves the accuracy of overall forecasts, even compared with a numerical weather prediction....
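A minimal sketch of scoring-based, time-varying model selection, assuming an empirical CRPS as the proper score and two hypothetical ensemble models; the paper's actual combination scheme may differ in detail.

```python
import numpy as np

def crps_ensemble(ens, y):
    """Empirical CRPS of an ensemble forecast `ens` against observation `y`:
    mean |X - y| - 0.5 * mean |X - X'| over ensemble members."""
    ens = np.asarray(ens)
    return np.mean(np.abs(ens - y)) - 0.5 * np.mean(
        np.abs(ens[:, None] - ens[None, :]))

rng = np.random.default_rng(4)
truth = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.1, 200)
# Two hypothetical candidate models issuing ensemble forecasts each day.
model_a = truth[None, :] + rng.normal(0.0, 0.15, (50, 200))
model_b = truth[None, :] + rng.normal(0.2, 0.30, (50, 200))  # biased, wider

window = 30
chosen = []
for t in range(window, 200):
    # Score each model over the trailing window; use the better one next day.
    score_a = np.mean([crps_ensemble(model_a[:, s], truth[s])
                       for s in range(t - window, t)])
    score_b = np.mean([crps_ensemble(model_b[:, s], truth[s])
                       for s in range(t - window, t)])
    chosen.append("A" if score_a <= score_b else "B")
print(chosen.count("A"), chosen.count("B"))
```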
Rautenhaus, M.; Grams, C. M.; Schäfler, A.; Westermann, R.
2015-02-01
We present the application of interactive 3-D visualization of ensemble weather predictions to forecasting warm conveyor belt situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 campaign, a method to predict 3-D probabilities of the spatial occurrence of warm conveyor belts has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the ECMWF ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and forecast wind field resolution. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (three to seven days before take-off).
Machine learning based switching model for electricity load forecasting
Energy Technology Data Exchange (ETDEWEB)
Fan, Shu; Lee, Wei-Jen [Energy Systems Research Center, The University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States); Chen, Luonan [Department of Electronics, Information and Communication Engineering, Osaka Sangyo University, 3-1-1 Nakagaito, Daito, Osaka 574-0013 (Japan)
2008-06-15
In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma. (author)
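A rough sketch of the switching architecture, with scikit-learn's KMeans standing in for the Bayesian clustering by dynamics (which is not reproduced here) and one SVR trained per cluster on synthetic load data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(5)
# Synthetic hourly load with a daily cycle plus noise.
load = 100 + 20 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 2, 2000)

# Embed the series: predict the next hour from the previous 24 hours.
lags = 24
X = np.array([load[t - lags:t] for t in range(lags, len(load))])
y = load[lags:]

# Cluster the input vectors by their dynamics; KMeans is a stand-in for the
# paper's BCD classifier, used here only to partition the training data.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[:1800])

# Fit one SVR per cluster on that cluster's training data.
svrs = {}
for c in range(3):
    mask = km.labels_ == c
    svrs[c] = SVR(C=10.0, epsilon=0.1).fit(X[:1800][mask], y[:1800][mask])

# Switching forecast: route each new input to its cluster's SVR.
test_clusters = km.predict(X[1800:])
pred = np.array([svrs[c].predict(x[None, :])[0]
                 for c, x in zip(test_clusters, X[1800:])])
print(np.sqrt(np.mean((pred - y[1800:]) ** 2)))  # test RMSE
```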
Energy demand forecasting method based on international statistical data
International Nuclear Information System (INIS)
Glanc, Z.; Kerner, A.
1997-01-01
Poland is in a transition phase from a centrally planned to a market economy; data collected under former economic conditions do not reflect a market economy. Final energy demand forecasts are based on the assumption that the economic transformation in Poland will gradually lead the Polish economy, technologies and modes of energy use, to the same conditions as mature market economy countries. The starting point has a significant influence on the future energy demand and supply structure: final energy consumption per capita in 1992 was almost half the average of OECD countries; energy intensity, based on Purchasing Power Parities (PPP) and referred to GDP, is more than 3 times higher in Poland. A method of final energy demand forecasting based on regression analysis is described in this paper. The input data are: output of macroeconomic and population growth forecast; time series 1970-1992 of OECD countries concerning both macroeconomic characteristics and energy consumption; and energy balance of Poland for the base year of the forecast horizon. (author). 1 ref., 19 figs, 4 tabs
Takagawa, T.
2016-12-01
An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elemental methods. The first is a hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Monte Carlo sampling from a probability density function of a multidimensional normal distribution. The third is ensemble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation-based validation of the model was conducted. A tsunami scenario of an M9.1 Nankai earthquake was chosen as the validation target. Tsunami inundation around Nagoya Port was estimated by using synthetic tsunami waveforms at offshore GPS buoys. The error in the estimated tsunami inundation area was about 10% even when only ten minutes of observation data were used. The estimation accuracy of waveforms on and off land and of the spatial distribution of maximum tsunami inundation depth is also demonstrated.
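A minimal sketch of the second and third elements: sampling tsunami sources from a multivariate normal (with hypothetical posterior moments standing in for the ABIC inversion output) and running a toy stand-in for the inundation simulation to form ensemble exceedance statistics.

```python
import numpy as np

rng = np.random.default_rng(6)

# Suppose the hierarchical Bayesian inversion returned a posterior mean and
# covariance for slip on 10 subfaults (both hypothetical here).
mu = np.full(10, 5.0)  # metres of slip
idx = np.arange(10)
cov = 0.5 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 2.0)

# Monte Carlo sampling of tsunami sources from the multivariate normal.
sources = rng.multivariate_normal(mu, cov, size=200)

def inundation_depth(slip):
    # Stand-in for a full tsunami inundation simulation: a toy linear
    # response of coastal inundation depth to the slip distribution.
    return 0.3 * slip.mean()

depths = np.array([inundation_depth(s) for s in sources])
# Ensemble analysis: mean depth and probability of exceeding 1.5 m.
print(depths.mean(), (depths > 1.5).mean())
```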
International Nuclear Information System (INIS)
Wang, Yamin; Wu, Lei
2016-01-01
This paper presents a comprehensive analysis of practical challenges of empirical mode decomposition (EMD) based algorithms for wind speed and solar irradiation forecasts that have been largely neglected in the literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may significantly differ from those used in training the forecasting models. In turn, forecasting models established on the original sub-series may not be suitable for the newly decomposed sub-series and have to be retrained frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods can be worse than that of non-decomposition based forecasting models, and that they are not effective in practical cases. Finally, an approximated forecasting model based on EMD is proposed to mitigate the challenges and achieve better forecasting results than existing EMD-based forecasting algorithms and non-decomposition based forecasting models in practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to
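The sensitivity in challenge (1) can be demonstrated in a few lines, assuming the PyEMD package (pip name EMD-signal) for the decomposition; the drift between IMFs computed before and after appending new samples is exactly what forces frequent retraining.

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD ("EMD-signal") package is installed

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 500)
# Synthetic stand-in for a wind speed series: two oscillations plus noise.
wind = (np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 2.0 * t)
        + 0.3 * rng.normal(size=500))

# Decompose the original series, then the same series extended by 20 new
# samples, and compare the first IMF on the overlapping part.
imfs_old = EMD().emd(wind[:480])
imfs_new = EMD().emd(wind[:500])
overlap = min(imfs_old.shape[1], 480)
drift = np.abs(imfs_old[0, :overlap] - imfs_new[0, :overlap]).mean()
print(f"mean IMF-1 drift after adding 20 samples: {drift:.3f}")
```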
Engineering Seismic Base Layer for Defining Design Earthquake Motion
International Nuclear Information System (INIS)
Yoshida, Nozomu
2008-01-01
The common engineering assumption that the incident wave is uniform over a widespread area at the engineering seismic base layer is shown to be incorrect. An illustrative example is first shown, indicating that the earthquake motion at the ground surface evaluated by an analysis that treats the ground from the seismic bedrock to the ground surface as a whole (continuous analysis) differs from that obtained by an analysis in which the ground is separated at the engineering seismic base layer and analyzed separately (separate analysis). The reason is investigated by several approaches. Investigation based on the eigenvalue problem indicates that the first predominant period in the continuous analysis cannot be found in the separate analysis, and the predominant periods at higher orders do not match between the upper and lower ground in the separate analysis. The earthquake response analysis indicates that the reflected wave at the engineering seismic base layer is not zero, which means the conventional engineering seismic base layer does not behave as the term ''base'' suggests. All these results indicate that waves that travel down to depth after reflecting in the surface layer and reflect again at the seismic bedrock cannot be neglected in evaluating the response at the ground surface. In other words, interaction between the surface layer and the layers between the seismic bedrock and the engineering seismic base layer cannot be neglected in evaluating the earthquake motion at the ground surface
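The nonzero reflection at the base layer follows directly from the impedance contrast. A small sketch with the standard vertically incident SH-wave reflection coefficient and illustrative (hypothetical) layer properties:

```python
def impedance(rho, vs):
    """Seismic impedance of a layer: density times shear-wave velocity."""
    return rho * vs

def reflection_coefficient(rho_upper, vs_upper, rho_lower, vs_lower):
    """Displacement reflection coefficient at a horizontal interface for a
    vertically incident SH wave arriving from the lower (stiffer) layer:
    R = (Z_lower - Z_upper) / (Z_lower + Z_upper)."""
    z_u = impedance(rho_upper, vs_upper)
    z_l = impedance(rho_lower, vs_lower)
    return (z_l - z_u) / (z_l + z_u)

# Illustrative soft surface layer over an engineering base layer; the
# densities (kg/m^3) and velocities (m/s) are hypothetical round numbers.
r = reflection_coefficient(rho_upper=1800, vs_upper=150,
                           rho_lower=2000, vs_lower=400)
print(f"reflection coefficient at the base layer: {r:.2f}")  # clearly nonzero
```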
NA
2005-01-01
This paper presents a study of errors in forecasting the population of Metropolitan Statistical Areas and the Primary MSAs of Consolidated Metropolitan Statistical Areas and New England MAs. The forecasts are for the year 2000 and are based on a semi-structural model estimated by Mills and Lubelle using 1970 to 1990 census data on population, employment and relative real wages. This model allows the testing of regional effects on population and employment growth. The year 2000 forecasts are f...
Use of ground-based wind profiles in mesoscale forecasting
Schlatter, Thomas W.
1985-01-01
A brief review is presented of recent uses of ground-based wind profile data in mesoscale forecasting. Some of the applications are in real time, and some are after the fact. Not all of the work mentioned here has been published yet, but references are given wherever possible. As Gage and Balsley (1978) point out, sensitive Doppler radars have been used to examine tropospheric wind profiles since the 1970's. It was not until the early 1980's, however, that the potential contribution of these instruments to operational forecasting and numerical weather prediction became apparent. Profiler winds and radiosonde winds compare favorably, usually within a few m/s in speed and 10 degrees in direction (see Hogg et al., 1983), but the obvious advantage of the profiler is its frequent (hourly or more often) sampling of the same volume. The rawinsonde balloon is launched only twice a day and drifts with the wind. In this paper, I will: (1) mention two operational uses of data from a wind profiling system developed jointly by the Wave Propagation and Aeronomy Laboratories of NOAA; (2) describe a number of displays of these same data on a workstation for mesoscale forecasting developed by the Program for Regional Observing and Forecasting Services (PROFS); and (3) explain some interesting diagnostic calculations performed by meteorologists of the Wave Propagation Laboratory.
Field, Edward; Porter, Keith; Milner, Kevn
2017-01-01
We present a prototype operational loss model based on UCERF3-ETAS, the third Uniform California Earthquake Rupture Forecast with an Epidemic Type Aftershock Sequence (ETAS) component. As such, UCERF3-ETAS represents the first earthquake forecast to relax fault segmentation assumptions and to include multi-fault ruptures, elastic rebound, and spatiotemporal clustering, all of which seem important for generating realistic and useful aftershock statistics. UCERF3-ETAS is nevertheless an approximation of the system, so usefulness will vary and potential value needs to be ascertained in the context of each application. We examine this question with respect to statewide loss estimates, exemplifying how risk can be elevated by orders of magnitude due to triggered events following various scenario earthquakes. Two important considerations are the probability gains relative to loss likelihoods in the absence of main shocks, and the rapid decay of those gains with time. Significant uncertainties and model limitations remain, so we hope this paper will inspire similar analyses with respect to other risk metrics to help ascertain whether operationalization of UCERF3-ETAS would be worth the considerable resources required.
East Asian winter monsoon forecasting schemes based on the NCEP's climate forecast system
Tian, Baoqiang; Fan, Ke; Yang, Hongqing
2017-12-01
The East Asian winter monsoon (EAWM) is the major climate system in the Northern Hemisphere during boreal winter. In this study, we developed two schemes to improve the forecasting skill of the interannual variability of the EAWM index (EAWMI) using the interannual increment prediction method, also known as the DY method. First, we found that version 2 of the NCEP's Climate Forecast System (CFSv2) showed higher skill in predicting the EAWMI in DY form than in its original form. Exploiting this advantage, Scheme-I adds the EAWMI DY predicted by CFSv2 to the observed EAWMI of the previous year. This scheme showed higher forecasting skill than CFSv2 alone. Specifically, during 1983-2016, the temporal correlation coefficient between the Scheme-I-predicted and observed EAWMI was 0.47, exceeding the 99% significance level, with the root-mean-square error (RMSE) decreased by 12%. The autumn Arctic sea ice and North Pacific sea surface temperature (SST) are two important external forcing factors for the interannual variability of the EAWM. Therefore, a second (hybrid) prediction scheme, Scheme-II, was also developed. This scheme involves not only the EAWMI DY of CFSv2, but also the sea-ice concentration (SIC) observed the previous autumn in the Laptev and East Siberian seas and the temporal coefficients of the third mode of the North Pacific SST in DY form. We found that a negative SIC anomaly in the preceding autumn over the Laptev and East Siberian seas could lead to a significant enhancement of the Aleutian low and the East Asian westerly jet in the following winter. However, the intensity of the winter Siberian high was mainly affected by the third mode of the North Pacific autumn SST. Scheme-I and Scheme-II also showed higher predictive ability for the EAWMI in negative anomaly years compared to CFSv2. More importantly, the improvement in the prediction skill of the EAWMI by the new schemes, especially Scheme-II, could enhance the forecasting skill of
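A minimal sketch of Scheme-I's arithmetic on synthetic stand-in data: the predicted index is last year's observation plus the model's predicted interannual increment, verified with the same correlation and RMSE metrics.

```python
import numpy as np

rng = np.random.default_rng(8)
years = np.arange(1983, 2017)
obs = rng.normal(0, 1, len(years))  # synthetic stand-in for observed EAWMI
# Synthetic stand-in for the CFSv2-predicted interannual increment (DY):
# the true increment plus prediction error.
dy_pred = np.diff(obs) + rng.normal(0, 0.6, len(years) - 1)

# Scheme-I: predicted index = last year's observed index + predicted DY.
scheme1 = obs[:-1] + dy_pred

corr = np.corrcoef(scheme1, obs[1:])[0, 1]
rmse = np.sqrt(np.mean((scheme1 - obs[1:]) ** 2))
print(f"TCC = {corr:.2f}, RMSE = {rmse:.2f}")
```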
Physics-based forecasting of induced seismicity at Groningen gas field, the Netherlands
Dempsey, David; Suckale, Jenny
2017-08-01
Earthquakes induced by natural gas extraction from the Groningen reservoir, the Netherlands, put local communities at risk. Responsible operation of a reservoir whose gas reserves are of strategic importance to the country requires understanding of the link between extraction and earthquakes. We synthesize observations and a model for Groningen seismicity to produce forecasts for felt seismicity (M > 2.5) in the period February 2017 to 2024. Our model accounts for poroelastic earthquake triggering and rupture on the 325 largest reservoir faults, using an ensemble approach to model unknown heterogeneity and replicate earthquake statistics. We calculate probability distributions for key model parameters using a Bayesian method that incorporates the earthquake observations with a nonhomogeneous Poisson process. Our analysis indicates that the Groningen reservoir was not critically stressed prior to the start of production. Epistemic uncertainty and aleatoric uncertainty are incorporated into forecasts for three different future extraction scenarios. The largest expected earthquake was similar for all scenarios, with a 5% likelihood of exceeding M 4.0.
Forecast of Frost Days Based on Monthly Temperatures
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic model, autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav), following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during the month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They share the same seasonal behavior (differenced moving average model) and differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and an autoregressive and moving average model (Model 3). At the same time, the results point out that the minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
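The link between the two models can be written compactly: given a forecast monthly mean and the station's stable monthly standard deviation, the expected number of frost days follows from the normal CDF. A sketch, with hypothetical values:

```python
from scipy.stats import norm

def expected_frost_days(tmin_monthly_mean, sigma, n_days=30, threshold=0.0):
    """Expected number of frost days in a month, assuming daily minimum
    temperature is normally distributed around the forecast monthly mean
    with the station's (stable) monthly standard deviation."""
    p_frost = norm.cdf(threshold, loc=tmin_monthly_mean, scale=sigma)
    return n_days * p_frost

# Hypothetical ARIMA output for next month's mean minimum temperature (2 C)
# and a station standard deviation of 3 C:
print(expected_frost_days(tmin_monthly_mean=2.0, sigma=3.0))  # about 7.6 days
```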
Aiolfi, Marco; Capistrán, Carlos; Timmermann, Allan
2010-01-01
We consider combinations of subjective survey forecasts and model-based forecasts from linear and non-linear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based fore...
Wavelet-based verification of the quantitative precipitation forecast
Yano, Jun-Ichi; Jakubiak, Bogumil
2016-06-01
This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by two indices, for scale and for localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to exhibit strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards a goal of identifying a weak physical process contributing to forecast error, are also pointed out.
Hodgson, A.; van Praag, B.
2006-01-01
In this paper, we test whether directors’ (corporate insiders) trading in Australia, based on accounting accruals, provides incremental information in forecasting a firm's economic performance. We determine that directors’ trading on negative accruals in larger firms has greater forecasting content
Earthquakes clustering based on the magnitude and the depths in Molluca Province
Energy Technology Data Exchange (ETDEWEB)
Wattimanela, H. J., E-mail: hwattimaela@yahoo.com [Pattimura University, Ambon (Indonesia); Institute of Technology Bandung, Bandung (Indonesia); Pasaribu, U. S.; Indratno, S. W.; Puspito, A. N. T. [Institute of Technology Bandung, Bandung (Indonesia)
2015-12-22
In this paper, we present a model to classify the earthquakes that occurred in Molluca Province. We use the K-Means clustering method to classify the earthquakes based on their magnitude and depth. The result can be used for disaster mitigation and for designing evacuation routes in Molluca Province.
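A minimal sketch of the clustering step with scikit-learn, on a synthetic stand-in catalogue; scaling the two features first is our assumption, since depth and magnitude live on very different numeric scales.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
# Synthetic stand-in catalogue: magnitude and hypocentre depth (km).
mag = rng.uniform(3.0, 7.5, 500)
depth = rng.choice([15, 60, 150], 500) + rng.normal(0, 10, 500)
X = np.column_stack([mag, depth])

# Standardize so depth (tens to hundreds of km) does not dominate magnitude.
Xs = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)
for k in range(3):
    print(k, X[labels == k].mean(axis=0))  # mean magnitude and depth per class
```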
Research on light rail electric load forecasting based on ARMA model
Huang, Yifan
2018-04-01
This article compares a variety of time series models in light of the characteristics of power load forecasting, and establishes a light rail electric load forecasting model based on the ARMA model. The model is then used to forecast the load of a light rail system. The prediction results show that the accuracy of the model is high.
CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking
Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.
2017-12-01
We developed an automatic local earthquake detection and phase picking algorithm based on a fully convolutional neural network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equal in scale to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm shows significantly improved detection rate and precision in comparison with the STA/LTA and template matching algorithms.
Xu, Yongbin; Xie, Haihong; Wu, Liuyi
2018-05-01
Coal accounts for about 50% of total railway freight volume. As is widely acknowledged, the coal industry is vulnerable to the economic situation and national policies, and coal transportation volume fluctuates significantly under the new economic normal. Grasping the overall development trend of the railway coal transportation market therefore has important reference and guidance value for decision-making in the railway and coal industries. By analyzing economic indicators and policy implications, this paper expounds the trend of coal transportation volume, and combines the economic indicators most highly correlated with coal transportation volume with a traditional traffic prediction model to establish a combined forecasting model based on a back-propagation neural network. Testing of the prediction error shows that the method has higher accuracy and practical applicability.
Spatial Evaluation and Verification of Earthquake Simulators
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed M > 6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
Study of Earthquake Disaster Prediction System of Langfang city Based on GIS
Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei
2017-07-01
In this paper, in view of China's need to improve its capability for earthquake disaster prevention, we put forward an implementation plan for an earthquake disaster prediction system for Langfang city based on GIS. Built on a GIS spatial database and using coordinate transformation technology, GIS spatial analysis technology, and PHP development technology, the system applies a seismic damage factor algorithm to predict the damage to the city under earthquake disaster conditions of different intensities. The system adopts a B/S architecture and provides two-dimensional visualization of damage degree and spatial distribution, comprehensive query and analysis, and efficient decision-support functions to identify seismically weak parts of the city and enable rapid warning. The system has transformed the city's earthquake disaster reduction work from static planning to dynamic management, and improved the city's earthquake and disaster prevention capability.
Timmermann, Allan G
2005-01-01
Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this paper we analyse theoretically the factors that determine the advantages from combining forecasts (for example, the d...
Probabilistic runoff volume forecasting in risk-based optimization for RTC of urban drainage systems
DEFF Research Database (Denmark)
Löwe, Roland; Vezzaro, Luca; Mikkelsen, Peter Steen
2016-01-01
overflow risk. The stochastic control framework and the performance of the runoff forecasting models are tested in a case study in Copenhagen (76 km2 with 6 sub-catchments and 7 control points) using 2-h radar rainfall forecasts and inlet flows to control points computed from a variety of noisy...... smoothing. Simulations demonstrate notable improvements of the control efficiency when considering forecast information and additionally when considering forecast uncertainty, compared with optimization based on current basin fillings only....
Thermal infrared anomalies of several strong earthquakes.
Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying
2013-01-01
In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors for earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes reaching Ms 7.0, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall behavior of the anomalies includes two main stages: expanding first and narrowing later. We readily extracted and identified such seismic anomalies with the "time-frequency relative power spectrum" method. (2) Evident and distinct characteristic periods and magnitudes of anomalous thermal radiation exist for each case. (3) The thermal radiation anomalies are closely related to the geological structure. (4) The thermal radiation has distinctive characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting.
Operational forecast products and applications based on WRF/Chem
Hirtl, Marcus; Flandorfer, Claudia; Langer, Matthias; Mantovani, Simone; Olefs, Marc; Schellander-Gorgas, Theresa
2015-04-01
The responsibilities of the national weather service of Austria (ZAMG) include supporting the federal states and the public in questions connected to the protection of the environment, in the form of advisory and counseling services as well as expert opinions. The ZAMG conducts daily Air-Quality forecasts using the on-line coupled model WRF/Chem. The mother domain covers Europe, North Africa and parts of Russia; the nested domain covers the alpine region at a horizontal resolution of 4 km. Local emissions (Austria) are used in combination with European inventories (TNO and EMEP) for the simulations. The modeling system is presented, and results from the evaluation of the assimilation of pollutants using the 3D-VAR software GSI are shown. Currently, observational data (PM10 and O3) from the Austrian Air-Quality network and from European stations (EEA) are assimilated into the model on an operational basis. In addition, PM maps are produced using Aerosol Optical Thickness (AOT) observations from MODIS in combination with model data and machine learning techniques. The modeling system is operationally evaluated against different data sets. The emphasis of the application is on the forecast of pollutants, which are compared to the hourly values (PM10, O3 and NO2) of the Austrian Air-Quality network. As the meteorological conditions are important for transport and chemical processes, some parameters like wind and precipitation are automatically evaluated (SAL diagrams, maps, …) against other models (e.g. ECMWF, AROME, …) and ground stations via a web interface. The prediction of the AOT is also important for operators of solar power plants. In the past, numerical weather prediction (NWP) models at the ZAMG were used to predict the AOT based on cloud forecasts. These models do not consider the spatial and temporal variation of the aerosol distribution in the atmosphere, with a consequent impact on the accuracy of forecasts, especially during clear-sky days
Nowcasting Earthquakes and Tsunamis
Rundle, J. B.; Turcotte, D. L.
2017-12-01
The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk
Star point centroid algorithm based on background forecast
Wang, Jin; Zhao, Rujin; Zhu, Nan
2014-09-01
The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map photographed by an APS detector includes several noise sources which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecast is presented in this paper. Experiment proves the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not require stored calibration data. This algorithm has been applied successfully in a star tracker.
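A minimal sketch of a background-subtracted, intensity-weighted centroid; the median-of-border background "forecast" is a simplification of the paper's method, and all window values are synthetic.

```python
import numpy as np

def centroid_with_background_forecast(window, background):
    """Intensity-weighted centroid of a star spot after subtracting the
    forecast background level (negative residuals clipped to zero)."""
    residual = np.clip(window - background, 0.0, None)
    total = residual.sum()
    ys, xs = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    return (xs * residual).sum() / total, (ys * residual).sum() / total

rng = np.random.default_rng(10)
win = rng.normal(100.0, 2.0, (9, 9))                 # detector background + noise
win[4, 4] += 50; win[4, 5] += 30; win[3, 4] += 20    # star energy
# A simple background forecast: the median of the window's border pixels.
border = np.concatenate([win[0], win[-1], win[1:-1, 0], win[1:-1, -1]])
print(centroid_with_background_forecast(win, np.median(border)))
```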
Ensemble-based Probabilistic Forecasting at Horns Rev
DEFF Research Database (Denmark)
Pinson, Pierre; Madsen, Henrik
2009-01-01
forecasting methodology. In a first stage, ensemble forecasts of meteorological variables are converted to power through a suitable power curve model. This model employs local polynomial regression, and is adaptively estimated with an orthogonal fitting method. The obtained ensemble forecasts of wind power...
Short-Term Wind Power Interval Forecasting Based on an EEMD-RT-RVM Model
Directory of Open Access Journals (Sweden)
Haixiang Zang
2016-01-01
Full Text Available Accurate short-term wind power forecasting is important for improving the security and economic success of power grids. Existing wind power forecasting methods are mostly types of deterministic point forecasting. Deterministic point forecasting is vulnerable to forecasting errors and cannot effectively deal with the random nature of wind power. In order to solve the above problems, we propose a short-term wind power interval forecasting model based on ensemble empirical mode decomposition (EEMD), runs test (RT), and relevance vector machine (RVM). First, in order to reduce the complexity of the data, the original wind power sequence is decomposed into a plurality of intrinsic mode function (IMF) components and a residual (RES) component by using EEMD. Next, we use the RT method to reconstruct the components and obtain three new components characterized by the fine-to-coarse order. Finally, we obtain the overall forecasting results (with preestablished confidence levels) by superimposing the forecasting results of each new component. Our results show that, compared with existing methods, our proposed short-term interval forecasting method has smaller forecasting errors, narrower interval widths, and larger interval coverage percentages. Ultimately, our forecasting model is more suitable than other forecasting methods for engineering applications in new energy.
Ionospheric earthquake effects detection based on Total Electron Content (TEC) GPS Correlation
Sunardi, Bambang; Muslim, Buldan; Eka Sakya, Andi; Rohadi, Supriyanto; Sulastri; Murjaya, Jaya
2018-03-01
Advances in science and technology have shown that ground-based GPS receivers are able to detect ionospheric Total Electron Content (TEC) disturbances caused by various natural phenomena such as earthquakes. One study of the Tohoku (Japan) earthquake of March 11, 2011, magnitude M 9.0, showed TEC fluctuations observed across the GPS observation network around the disaster area. This paper discusses the detection of ionospheric earthquake effects using TEC GPS data. The case studies taken were the Kebumen earthquake of January 25, 2014, magnitude M 6.2; the Sumba earthquake of February 12, 2016, M 6.2; and the Halmahera earthquake of February 17, 2016, M 6.1. TEC-GIM (Global Ionosphere Map) correlation over 31 days was used to monitor TEC anomalies in the ionosphere. To rule out geomagnetic disturbances due to solar activity, we also compared against the Dst index in the same time window. The results showed an anomalous ratio of the correlation coefficient deviation to its standard deviation upon the occurrences of the Kebumen and Sumba earthquakes, but no similar anomaly for the Halmahera earthquake. Continuous monitoring of TEC GPS data is needed to detect earthquake effects in the ionosphere. This study gives hope for strengthening earthquake effect early warning systems using TEC GPS data. Further development of continuous TEC GPS observation methods based on the GPS observation network that already exists in Indonesia is needed to support such early warning systems.
Using adaptive network based fuzzy inference system to forecast regional electricity loads
International Nuclear Information System (INIS)
Ying, L.-C.; Pan, M.-C.
2008-01-01
Since accurate regional load forecasting is very important for improvement of the management performance of the electric industry, various regional load forecasting methods have been developed. The purpose of this study is to apply the adaptive network based fuzzy inference system (ANFIS) model to forecast the regional electricity loads in Taiwan and demonstrate the forecasting performance of this model. Based on the mean absolute percentage errors and statistical results, we can see that the ANFIS model has better forecasting performance than the regression model, artificial neural network (ANN) model, support vector machines with genetic algorithms (SVMG) model, recurrent support vector machines with genetic algorithms (RSVMG) model and hybrid ellipsoidal fuzzy systems for time series forecasting (HEFST) model. Thus, the ANFIS model is a promising alternative for forecasting regional electricity loads
Using adaptive network based fuzzy inference system to forecast regional electricity loads
Energy Technology Data Exchange (ETDEWEB)
Ying, Li-Chih [Department of Marketing Management, Central Taiwan University of Science and Technology, 11, Pu-tzu Lane, Peitun, Taichung City 406 (China); Pan, Mei-Chiu [Graduate Institute of Management Sciences, Nanhua University, 32, Chung Keng Li, Dalin, Chiayi 622 (China)
2008-02-15
Since accurate regional load forecasting is very important for improvement of the management performance of the electric industry, various regional load forecasting methods have been developed. The purpose of this study is to apply the adaptive network based fuzzy inference system (ANFIS) model to forecast the regional electricity loads in Taiwan and demonstrate the forecasting performance of this model. Based on the mean absolute percentage errors and statistical results, we can see that the ANFIS model has better forecasting performance than the regression model, artificial neural network (ANN) model, support vector machines with genetic algorithms (SVMG) model, recurrent support vector machines with genetic algorithms (RSVMG) model and hybrid ellipsoidal fuzzy systems for time series forecasting (HEFST) model. Thus, the ANFIS model is a promising alternative for forecasting regional electricity loads. (author)
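Both records above rank models by mean absolute percentage error; for reference, a small MAPE helper with hypothetical load values (not the paper's data):

```python
# The model ranking above rests on the mean absolute percentage error (MAPE).
import numpy as np

def mape(actual, forecast):
    """MAPE in percent: mean(|actual - forecast| / |actual|) * 100."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

loads = [120.3, 118.7, 125.1, 130.4]   # hypothetical regional loads (GWh)
anfis = [121.0, 117.9, 124.2, 131.5]   # hypothetical ANFIS forecasts
print(f"ANFIS MAPE = {mape(loads, anfis):.2f}%")
```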
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
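A sketch of the likelihood consistency idea (the RELM-style L-test): compute the joint Poisson log-likelihood of the observed catalogue under the binned forecast rates, then compare it against likelihoods of catalogues simulated from the same forecast. Rates and counts below are synthetic stand-ins:

```python
# Hedged sketch of a likelihood consistency (L-) test: the observed
# catalogue's joint Poisson log-likelihood is compared with those of
# catalogues simulated from the forecast itself.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
rates = rng.gamma(0.5, 0.2, size=1000)      # forecast rates per space-mag bin
observed = rng.poisson(rates)               # stand-in for a real catalogue

def joint_loglike(counts, rates):
    return poisson.logpmf(counts, rates).sum()

L_obs = joint_loglike(observed, rates)
L_sim = np.array([joint_loglike(rng.poisson(rates), rates)
                  for _ in range(1000)])
gamma = np.mean(L_sim <= L_obs)             # quantile score of the L-test
print(f"L-test quantile = {gamma:.2f} (very small values reject the model)")
```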
Uncertainty of Flood Forecasting Based on Radar Rainfall Data Assimilation
Directory of Open Access Journals (Sweden)
Xinchi Chen
2016-01-01
Precipitation is the core data input to hydrological forecasting. The uncertainty in precipitation forecast data can lead to poor performance of predictive hydrological models. Radar-based precipitation measurement offers advantages over ground-based measurement in the quantitative estimation of temporal and spatial aspects of precipitation, but errors inherent in this method will still act to reduce performance. Using data from the White Lotus River of Hubei Province, China, five methods were used to assimilate radar rainfall data transformed from the classified Z-R relationship, and the post-assimilation data were compared with precipitation measured by rain gauges. The five sets of assimilated rainfall data were then used as input to the Xinanjiang model. The effect of precipitation data input error on runoff simulation was analyzed quantitatively by disturbing the input data using the Breeding of Growing Modes method. The results of practical application demonstrated that the statistical weight integration and variational assimilation methods were superior. The corresponding performance in flood hydrograph prediction was also better using the statistical weight integration and variational methods compared to the others. It was found that the errors of radar rainfall data disturbed by the Breeding of Growing Modes had a tendency to accumulate through the hydrological model.
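The radar rainfall above is derived from a classified Z-R relationship; as an illustration, the canonical Marshall-Palmer coefficients (a = 200, b = 1.6) are used below, not the paper's classified, type-dependent values:

```python
# Convert radar reflectivity (dBZ) to rain rate R (mm/h) via Z = a * R**b.
# Marshall-Palmer coefficients shown; classified schemes vary a and b by
# rainfall type.
def zr_rain_rate(dbz, a=200.0, b=1.6):
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> Z in mm^6/m^3
    return (z_linear / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {zr_rain_rate(dbz):.1f} mm/h")
```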
Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Llenos, Andrea L.; Ellsworth, William L.; Michael, Andrew J.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.
2016-01-01
The U.S. Geological Survey (USGS) has produced a one‐year (2016) probabilistic seismic‐hazard assessment for the central and eastern United States (CEUS) that includes contributions from both induced and natural earthquakes that are constructed with probabilistic methods using alternative data and inputs. This hazard assessment builds on our 2016 final model (Petersen et al., 2016) by adding sensitivity studies, illustrating hazard in new ways, incorporating new population data, and discussing potential improvements. The model considers short‐term seismic activity rates (primarily 2014–2015) and assumes that the activity rates will remain stationary over short time intervals. The final model considers different ways of categorizing induced and natural earthquakes by incorporating two equally weighted earthquake rate submodels that are composed of alternative earthquake inputs for catalog duration, smoothing parameters, maximum magnitudes, and ground‐motion models. These alternatives represent uncertainties on how we calculate earthquake occurrence and the diversity of opinion within the science community. In this article, we also test sensitivity to the minimum moment magnitude between M 4 and M 4.7 and the choice of applying a declustered catalog with b=1.0 rather than the full catalog with b=1.3. We incorporate two earthquake rate submodels: in the informed submodel we classify earthquakes as induced or natural, and in the adaptive submodel we do not differentiate. The alternative submodel hazard maps both depict high hazard and these are combined in the final model. Results depict several ground‐shaking measures as well as intensity and include maps showing a high‐hazard level (1% probability of exceedance in 1 year or greater). Ground motions reach 0.6g horizontal peak ground acceleration (PGA) in north‐central Oklahoma and southern Kansas, and about 0.2g PGA in the Raton basin of Colorado and New Mexico, in central Arkansas, and in
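The hazard level quoted above ("1% probability of exceedance in 1 year") follows directly from an annual exceedance rate under the Poisson occurrence assumption used in such models; a one-line illustration with hypothetical rates:

```python
# Map annual exceedance rates to one-year exceedance probabilities under
# the Poisson assumption: P = 1 - exp(-rate * t). Rates are hypothetical.
import numpy as np

annual_rate = np.array([0.002, 0.01, 0.05])   # hypothetical exceedance rates
p_1yr = 1.0 - np.exp(-annual_rate * 1.0)      # P(at least one exceedance)
for lam, p in zip(annual_rate, p_1yr):
    print(f"rate {lam:.3f}/yr -> {100*p:.2f}% in one year")
```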
Directory of Open Access Journals (Sweden)
Weide Li
2017-01-01
Accurate electric power demand forecasting plays a key role in electricity markets and power systems. Electric power demand is usually a non-linear problem due to various unknown factors, which makes it difficult to obtain accurate predictions by traditional methods. The purpose of this paper is to propose a novel hybrid forecasting method for managing and scheduling electricity power. The proposed method, EEMD-SCGRNN-PSVR, combines ensemble empirical mode decomposition (EEMD), seasonal adjustment (S), cross validation (C), a general regression neural network (GRNN) and a support vector regression machine optimized by the particle swarm optimization algorithm (PSVR). The main idea of EEMD-SCGRNN-PSVR is to forecast separately the waveform and trend components hidden in the demand series, rather than forecasting the original electric demand directly. EEMD-SCGRNN-PSVR is used to predict one-week-ahead half-hourly electricity demand in two data sets (New South Wales (NSW) and the Victorian State (VIC)) in Australia. Experimental results show that the new hybrid model outperforms the other three models in terms of forecasting accuracy and model robustness.
Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.
2017-12-01
Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards `over-the-loop' (completely automated) large-domain simulations yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short to medium range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ a real-time 1/16th degree daily ensemble model forcings as well as downscaled Global Ensemble Forecasting System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the Mizu
Fuzzy logic-based analogue forecasting and hybrid modelling of horizontal visibility
Tuba, Zoltán; Bottyán, Zsolt
2018-04-01
Forecasting visibility is one of the greatest challenges in aviation meteorology. At the same time, high-accuracy visibility forecasts can significantly reduce, or make avoidable, weather-related risk in aviation. To improve visibility forecasting, this research links fuzzy logic-based analogue forecasting and post-processed numerical weather prediction model outputs in a hybrid forecast. The performance of the analogue forecasting model was improved by applying the Analytic Hierarchy Process. A linear combination of the two outputs was then applied to create an ultra-short-term hybrid visibility prediction which gradually shifts the focus from statistical to numerical products, taking advantage of each during the forecast period. This makes it possible to bring the numerical visibility forecast closer to the observations even if it is initially wrong. Complete verification of the categorical forecasts was carried out; results for persistence and terminal aerodrome forecasts (TAF) are also given for comparison. The average Heidke Skill Score (HSS) of the examined airports shows very similar results for analogue and hybrid forecasts, even at the end of the forecast period where the weight of the analogue prediction in the final hybrid output is only 0.1-0.2. However, in case of poor visibility (1000-2500 m), hybrid (0.65) and analogue forecasts (0.64) have a similar average HSS in the first 6 h of the forecast period, and perform better than persistence (0.60) or TAF (0.56). An important achievement is that the hybrid model takes into account the physics and dynamics of the atmosphere through the increasing weight of the numerical weather prediction. Despite this, its performance is similar to the most effective visibility forecasting methods and does not inherit the poor verification results of purely numerical outputs.
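The verification above is built on the Heidke Skill Score; a standard 2x2 contingency-table implementation with hypothetical counts:

```python
# Heidke Skill Score from a 2x2 contingency table: skill relative to the
# proportion correct expected by chance. Counts below are hypothetical.
def heidke_skill_score(hits, misses, false_alarms, correct_negatives):
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    return (a + d - expected) / (n - expected)

print(f"HSS = {heidke_skill_score(42, 18, 15, 425):.2f}")
```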
A Novel Flood Forecasting Method Based on Initial State Variable Correction
Directory of Open Access Journals (Sweden)
Kuang Li
2017-12-01
The influence of initial state variables on flood forecasting accuracy using conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. Historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning ISVC application are then discussed. The study results show that the ISVC is effective and applicable to flood forecasting tasks. It can significantly improve flood forecasting accuracy in most cases.
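A sketch of the ISVC idea: a particle swarm searches for the initial state value that minimizes the residual between measured and modelled flows over the start of the event. The "model" here is a toy linear reservoir, a stand-in for a real conceptual hydrological model:

```python
# ISVC-style correction sketch: PSO over one initial state variable (the
# initial storage S0) of a toy linear reservoir, minimizing the squared
# residual against "observed" flows from the initial event period.
import numpy as np

rng = np.random.default_rng(3)

def toy_model(storage0, k=0.2, steps=12):
    """Linear reservoir: Q_t = k * S_t, S_{t+1} = S_t - Q_t."""
    s, flows = storage0, []
    for _ in range(steps):
        q = k * s
        flows.append(q)
        s -= q
    return np.array(flows)

observed = toy_model(50.0) + rng.normal(0, 0.1, 12)   # truth: S0 = 50

def objective(s0):
    return np.sum((toy_model(s0) - observed) ** 2)

# Minimal particle swarm over S0.
n, w, c1, c2 = 20, 0.7, 1.5, 1.5
x = rng.uniform(1, 100, n)
v = np.zeros(n)
pbest = x.copy()
pbest_f = np.array([objective(xi) for xi in x])
gbest = pbest[pbest_f.argmin()]
for _ in range(60):
    r1, r2 = rng.random(n), rng.random(n)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 1, 100)
    f = np.array([objective(xi) for xi in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print(f"corrected initial storage ~ {gbest:.1f} (true 50.0)")
```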
Efficient Resources Provisioning Based on Load Forecasting in Cloud
Directory of Open Access Journals (Sweden)
Rongdong Hu
2014-01-01
Cloud providers should ensure QoS while maximizing resource utilization. One optimal strategy is to timely allocate resources in a fine-grained mode according to an application's actual resource demand. The necessary precondition of this strategy is obtaining future load information in advance. We propose a multi-step-ahead load forecasting method, KSwSVR, based on statistical learning theory, which suits the complex and dynamic characteristics of the cloud computing environment. It integrates an improved support vector regression algorithm and a Kalman smoother. Public trace data taken from multiple types of resources were used to verify its prediction accuracy, stability, and adaptability, in comparison with AR, BPNN, and standard SVR. Subsequently, based on the predicted results, a simple and efficient strategy is proposed for resource provisioning. A CPU allocation experiment indicated it can effectively reduce resource consumption while meeting service-level agreement requirements.
Market-based demand forecasting promotes informed strategic financial planning.
Beech, A J
2001-11-01
Market-based demand forecasting is a method of estimating future demand for a healthcare organization's services by using a broad range of data that describe the nature of demand within the organization's service area. Such data include the primary and secondary service areas, the service-area populations by various demographic groupings, discharge utilization rates, market size, and market share by service line and organizationwide. Based on observable market dynamics, strategic planners can make a variety of explicit assumptions about future trends regarding these data to develop scenarios describing potential future demand. Financial planners then can evaluate each scenario to determine its potential effect on selected financial and operational measures, such as operating margin, days cash on hand, and debt-service coverage, and develop a strategic financial plan that covers a range of contingencies.
Rough Precipitation Forecasts based on Analogue Method: an Operational System
Raffa, Mario; Mercogliano, Paola; Lacressonnière, Gwendoline; Guillaume, Bruno; Deandreis, Céline; Castanier, Pierre
2017-04-01
Within the Climate KIC partnership, the Wat-Ener-Cast (WEC) project, coordinated by ARIA Technologies, was funded with the goal of adapting water and energy operations, through tailored weather-related forecasts, to increased weather fluctuation and to climate change. The WEC products provide high-quality forecasts suited to risk-and-opportunity assessment dashboards for water and energy operational decisions, addressing the needs of sewage/water distribution operators, energy transport and distribution system operators, energy managers and wind energy producers. A common "energy-water" web platform, able to interface with the newest smart water-energy IT networks, has been developed. The main benefit of sharing resources through the WEC platform is the possibility to optimize the costs and procedures of safety and maintenance teams in case of alerts and, ultimately, to reduce overflows. Among the different services implemented on the WEC platform, ARIA has developed a product to support sewage/water distribution operators, based on a graduated forecast information system (at 48 h/24 h/12 h horizons) for heavy precipitation. At each horizon a different type of operation is implemented: 1) 48-hour horizon, organisation of the on-call team; 2) 24-hour horizon, update and confirmation of the on-call team; 3) 12-hour horizon, securing of human resources and equipment (emptying storage basins, pipe manipulations, ...). More specifically, CMCC has provided a statistical downscaling method to produce a "rough" daily local precipitation forecast at 24 hours, especially when high precipitation values are expected. This statistical technique is an adaptation of the analogue method based on ECMWF data (analysis and forecast at 24 hours). One of the main advantages of this technique is its lower computational burden and cost compared to running a Numerical Weather Prediction (NWP) model, even if, of course, it provides only this
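A sketch of the analogue principle behind the "rough" precipitation product: the current predictor field is compared with an archive, and the precipitation observed on the k most similar days forms the forecast. Fields and precipitation values are synthetic:

```python
# Analogue-method sketch: rank archived predictor fields by similarity to
# today's field and forecast from the precipitation of the k best analogues.
import numpy as np

rng = np.random.default_rng(4)
archive_fields = rng.normal(0, 1, (1000, 64))    # e.g. flattened ECMWF fields
archive_precip = rng.gamma(2.0, 3.0, 1000)       # daily precipitation (mm)
today = archive_fields[123] + rng.normal(0, 0.2, 64)

dist = np.linalg.norm(archive_fields - today, axis=1)
k = 10
analogues = np.argsort(dist)[:k]                 # k most similar days
forecast = archive_precip[analogues].mean()
print(f"analogue forecast: {forecast:.1f} mm "
      f"(90th percentile {np.quantile(archive_precip[analogues], 0.9):.1f} mm)")
```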
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method
Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...
Probing magma reservoirs to improve volcano forecasts
Lowenstern, Jacob B.; Sisson, Thomas W.; Hurwitz, Shaul
2017-01-01
When it comes to forecasting eruptions, volcano observatories rely mostly on real-time signals from earthquakes, ground deformation, and gas discharge, combined with probabilistic assessments based on past behavior [Sparks and Cashman, 2017]. There is comparatively less reliance on geophysical and petrological understanding of subsurface magma reservoirs.
Stuparu, Dana; Bachmann, Daniel; Bogaard, Tom; Twigt, Daniel; Verkade, Jan; de Bruijn, Karin; de Leeuw, Annemargreet
2017-04-01
Flood forecasts, warning and emergency response are important components in flood risk management. Most flood forecasting systems use models to translate weather predictions into forecasted discharges or water levels. However, this information is often not sufficient for real-time decisions. A sound understanding of the reliability of embankments and flood dynamics is needed to react in a timely manner and reduce the negative effects of the flood. Where are the weak points in the dike system? When, how much and where will the water flow? When and where is the greatest impact expected? Model-based flood impact forecasting tries to answer these questions by adding new dimensions to existing forecasting systems, providing forecasted information about: (a) the dike strength during the event (reliability), (b) the flood extent in case of an overflow or a dike failure (flood spread) and (c) the assets at risk (impacts). This work presents three case studies in which such a set-up is applied, and highlights their special features. Forecasting of dike strength. The first case study focuses on the forecast of dike strength in the Netherlands for the river Rhine branches Waal, Nederrijn and IJssel. A so-called reliability transformation is used to translate the predicted water levels at selected dike sections into failure probabilities during a flood event. The reliability of a dike section is defined by fragility curves - a summary of the dike strength conditional on the water level. The reliability information enhances the emergency management and inspection of embankments. Ensemble forecasting. The second case study shows the setup of a flood impact forecasting system in Dumfries, Scotland. The existing forecasting system is extended with a 2D flood spreading model in combination with the Delft-FIAT impact model. Ensemble forecasts are used to account for the uncertainty in the precipitation forecasts, which is useful to quantify the certainty of a forecasted flood event. From global
Design spectrums based on earthquakes recorded at tarbela
International Nuclear Information System (INIS)
Rizwan, M.; Ilyas, M.; Masood, A.
2008-01-01
The first seismological network in Pakistan was set up in early 1969 at Tarbela, the site of the country's largest water reservoir. The network consisted of analog accelerographs and seismographs. Since installation, many seismic events of different magnitudes have occurred and been recorded by the instruments. The recorded analog time histories have been digitized, and data from twelve earthquakes, irrespective of soil type, have been used to derive elastic design spectrums for Tarbela, Pakistan. The PGA scaling factors for each component, based on the risk analysis studies carried out for the region, are also given. The suggested design spectrums will be very useful for new construction in the region and its surroundings, and the digitized time histories will be useful for seismic response analysis of structures and seismic risk analysis of the region. (author)
Improving volcanic ash forecasts with ensemble-based data assimilation
Fu, Guangliang
2017-01-01
The 2010 Eyjafjallajökull volcano eruption had serious consequences to civil aviation. This has initiated a lot of research on volcanic ash forecasting in recent years. For forecasting the volcanic ash transport after eruption onset, a volcanic ash transport and diffusion model (VATDM) needs to be
Identified EM Earthquake Precursors
Jones, Kenneth, II; Saxton, Patrick
2014-05-01
Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list alluding to a transmission related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous, high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance making it a high quality insulator. Penetrative flow could not be corroborated as well, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude, and the capability of penetrating solid rock. Additionally, fracture spaces, either air or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field occurring with associated phases using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February, 2013. The antennae have mobility and observations were noted for
A stochastic HMM-based forecasting model for fuzzy time series.
Li, Sheng-Tun; Cheng, Yi-Chung
2010-10-01
Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.
Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method
Energy Technology Data Exchange (ETDEWEB)
Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cui, Mingjian [University of Texas at Dallas; Feng, Cong [University of Texas at Dallas; Wang, Zhenke [University of Texas at Dallas; Zhang, Jie [University of Texas at Dallas
2018-02-01
Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start-time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint
Energy Technology Data Exchange (ETDEWEB)
Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cui, Mingjian [Univ. of Texas-Dallas, Richardson, TX (United States); Feng, Cong [Univ. of Texas-Dallas, Richardson, TX (United States); Wang, Zhenke [Univ. of Texas-Dallas, Richardson, TX (United States); Zhang, Jie [Univ. of Texas-Dallas, Richardson, TX (United States)
2017-08-31
Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
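A sketch of the scenario-generation core under stated assumptions (scikit-learn's GaussianMixture, synthetic errors); direct mixture sampling stands in for the paper's analytic inverse-transform step, and the swinging-door ramp extraction is omitted:

```python
# Fit a Gaussian mixture to historical forecast errors, sample error
# scenarios, and superimpose them on the point forecast. Errors and the
# point forecast are synthetic stand-ins.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
errors = np.concatenate([rng.normal(-0.05, 0.02, 4000),
                         rng.normal(0.04, 0.05, 2000)])  # stand-in errors (p.u.)

gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(errors.reshape(-1, 1))

point_forecast = 0.6 * np.ones(24)       # hypothetical day-ahead forecast (p.u.)
n_scen = 500
samples = gmm.sample(n_scen * 24)[0].ravel()
rng.shuffle(samples)                     # sample() groups draws by component
scenarios = np.clip(point_forecast + samples.reshape(n_scen, 24), 0.0, 1.0)
print("5th/95th percentile band at hour 12:",
      np.percentile(scenarios[:, 12], [5, 95]).round(3))
```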
Operational foreshock forecasting: Fifteen years after
Ogata, Y.
2010-12-01
We are concerned with operational forecasting of the probability that events are foreshocks of a forthcoming earthquake that is significantly larger (the mainshock). Specifically, we define foreshocks as preshocks smaller than the mainshock by a magnitude gap of 0.5 or larger. The probability gain of a foreshock forecast is extremely high compared to long-term forecasts by renewal processes or various alarm-based intermediate-term forecasts, because of a large event's low occurrence rate in a short period and a narrow target region. Thus, it is desirable to establish operational foreshock probability forecasting as seismologists have done for aftershocks. When a series of earthquakes occurs in a region, we attempt to discriminate foreshocks from a swarm or a mainshock-aftershock sequence. Namely, after real-time identification of an earthquake cluster using methods such as the single-link algorithm, the probability is calculated by applying statistical features that discriminate foreshocks from other types of clusters, considering the events' stronger proximity in time and space and their tendency towards chronologically increasing magnitudes. These features were modeled for probability forecasting, and the coefficients of the model were estimated in Ogata et al. (1996) for the JMA hypocenter data (M≧4, 1926-1993). Fifteen years have now passed since the publication of that work, so we are able to present the performance and validation of the forecasts (1994-2009) using the same model. Taking isolated events into consideration, the probability of the first events in a potential cluster being foreshocks varies in a range between 0+% and 10+% depending on their locations. This conditional forecasting performs significantly better than the unconditional (average) foreshock probability of 3.7% throughout the Japan region. Furthermore, when we have additional events in a cluster, the forecast probabilities range more widely from nearly 0% to
Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.
2015-10-01
Clouds are the dominant source of variability in surface solar radiation and of uncertainty in its prediction. The increasing share of solar energy in the world-wide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a shortest-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud-type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Convective-type clouds in particular lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer used representatively for the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear-sky situations, which cause low GHI variability that is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
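The forecast skill referred to above is conventionally defined against persistence via RMSE; a small helper with hypothetical GHI values:

```python
# RMSE-based skill score against persistence: skill > 0 means the forecast
# beats the persistence reference. All values are hypothetical.
import numpy as np

def forecast_skill(obs, forecast, persistence):
    rmse_f = np.sqrt(np.mean((forecast - obs) ** 2))
    rmse_p = np.sqrt(np.mean((persistence - obs) ** 2))
    return 1.0 - rmse_f / rmse_p

ghi = np.array([540.0, 420.0, 610.0, 380.0])     # observed GHI (W/m2)
img_fc = np.array([520.0, 450.0, 590.0, 400.0])  # sky-imager forecast
pers = np.array([600.0, 540.0, 420.0, 610.0])    # persistence reference
print(f"skill vs persistence = {forecast_skill(ghi, img_fc, pers):.2f}")
```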
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on the ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to treat the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done for comparison with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model applied to variable selection with full variables has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method
Directory of Open Access Journals (Sweden)
Jun-He Yang
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on the ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to treat the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done for comparison with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model applied to variable selection with full variables has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
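A minimal sketch of the final modelling stage described in these records: a Random Forest regressor mapping selected daily atmospheric variables to next-day water level. Feature names and data are hypothetical, not the Shimen Reservoir dataset (assumes scikit-learn):

```python
# Random Forest regression from daily atmospheric features to water level.
# Features and targets are synthetic placeholders for the real dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n = 800
X = np.column_stack([rng.normal(15, 8, n),    # e.g. temperature (C)
                     rng.gamma(2, 5, n),      # e.g. daily rainfall (mm)
                     rng.normal(60, 15, n)])  # e.g. relative humidity (%)
level = 240 + 0.3 * X[:, 1] - 0.1 * X[:, 0] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:-100], level[:-100])             # hold out the last 100 days
pred = model.predict(X[-100:])
print("test RMSE:", np.sqrt(np.mean((pred - level[-100:]) ** 2)).round(2))
```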
Directory of Open Access Journals (Sweden)
Xingsheng Gu
2013-03-01
The accurate forecasting of carbon dioxide (CO2) emissions from fossil fuel energy consumption is a key requirement for making energy policy and environmental strategy. In this paper, a novel quantum harmony search (QHS) algorithm-based discounted mean square forecast error (DMSFE) combination model is proposed. In DMSFE combination forecasting models, almost all investigations assign the discounting factor (β) arbitrarily, since β varies between 0 and 1, and adopt one value for all individual models and forecasting periods. The original method does not consider the influences of the individual model and the forecasting period. This work contributes by changing β from one value to a matrix, taking the different models and forecasting periods into consideration, and by presenting a way of searching for the optimal β values using the QHS algorithm through optimizing the mean absolute percent error (MAPE) objective function. The QHS algorithm-based optimized DMSFE combination forecasting model is established and tested by forecasting the CO2 emissions of the world's top five CO2 emitters. Evaluation indexes such as MAPE, root mean squared error (RMSE) and mean absolute error (MAE) are employed to test the performance of the presented approach. The empirical analyses confirm the validity of the presented method, and the forecasting accuracy can be increased to a certain degree.
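The DMSFE weighting underlying the combination model can be sketched in a few lines; the paper's per-model, per-period β matrix is reduced here to a per-model vector for brevity, and the error values are hypothetical:

```python
# DMSFE combination weights: each model is weighted by the inverse of its
# discounted sum of squared forecast errors, w_i ∝ 1 / sum_t beta^(T-t) e_it^2.
import numpy as np

errors = np.array([[1.2, 0.8, 1.0, 0.6],     # model 1 errors over 4 periods
                   [0.5, 0.9, 0.7, 1.1],     # model 2
                   [0.9, 0.6, 0.8, 0.7]])    # model 3
betas = np.array([0.9, 0.8, 0.95])           # discount factor per model

T = errors.shape[1]
discount = betas[:, None] ** np.arange(T - 1, -1, -1)   # beta^(T-t)
dmsfe = (discount * errors ** 2).sum(axis=1)
weights = (1.0 / dmsfe) / (1.0 / dmsfe).sum()
print("combination weights:", weights.round(3))
```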
Short-term load forecasting by a neuro-fuzzy based approach
Energy Technology Data Exchange (ETDEWEB)
Ruey-Hsun Liang; Ching-Chi Cheng [National Yunlin University of Science and Technology (China). Dept. of Electrical Engineering
2002-02-01
An approach based on an artificial neural network (ANN) combined with a fuzzy system is proposed for short-term load forecasting. This approach was developed in order to reach the desired short-term load forecasting in an efficient manner. Over the past few years, ANNs have attained the ability to manage a great deal of system complexity and are now being proposed as powerful computational tools. In order to select the appropriate load as the input for the desired forecasting, the Pearson analysis method is first applied to choose two historical record load patterns that are similar to the forecasted load pattern. These two load patterns and the required weather parameters are then fuzzified and input into a neural network for training or testing the network. The back-propagation (BP) neural network is applied to determine the preliminary forecasted load. In addition, the rule base for the fuzzy inference machine contains important linguistic membership function terms with knowledge in the form of fuzzy IF-THEN rules. This produces the load correction inference from the historical information and past forecasted load errors to obtain an inferred load error. Adding the inferred load error to the preliminary forecasted load, we obtain the final forecasted load. The effectiveness of the proposed approach to the short-term load-forecasting problem is demonstrated using practical data from the Taiwan Power Company (TPC). (Author)
Deep Learning Based Solar Flare Forecasting Model. I. Results for Line-of-sight Magnetograms
Huang, Xin; Wang, Huaning; Xu, Long; Liu, Jinfu; Li, Rong; Dai, Xinghua
2018-03-01
Solar flares originate from the release of the energy stored in the magnetic field of solar active regions; the triggering mechanism for these flares, however, remains unknown. For this reason, conventional solar flare forecasting is essentially based on the statistical relationship between solar flares and measures extracted from observational data. In the current work, the deep learning method is applied to set up the solar flare forecasting model, in which forecasting patterns can be learned from line-of-sight magnetograms of solar active regions. In order to obtain a large amount of observational data to train the forecasting model and test its performance, a data set is created from line-of-sight magnetograms of active regions observed by SOHO/MDI and SDO/HMI from 1996 April to 2015 October and corresponding soft X-ray solar flares observed by GOES. The testing results of the forecasting model indicate that (1) the forecasting patterns can be automatically reached with the MDI data and they can also be applied to the HMI data; furthermore, these forecasting patterns are robust to the noise in the observational data; (2) the performance of the deep learning forecasting model is not sensitive to the given forecasting periods (6, 12, 24, or 48 hr); (3) the performance of the proposed forecasting model is comparable to that of the state-of-the-art flare forecasting models, even though the duration of the total magnetograms continuously spans 19.5 years. Case analyses demonstrate that the deep learning based solar flare forecasting model pays attention to areas with the magnetic polarity-inversion line or the strong magnetic field in magnetograms of active regions.
Forecasting Analysis of Shanghai Stock Index Based on ARIMA Model
Directory of Open Access Journals (Sweden)
Li Chenggang
2017-01-01
Prediction and analysis of the Shanghai Composite Index helps investors make informed decisions in the stock market and provides them with a reference. This paper selects the Shanghai Composite Index monthly closing price from January 2005 to October 2016 to construct an ARIMA model. The paper forecasts the last three monthly closing prices of the Shanghai Composite Index, which have already occurred, and compares them with the actual values, testing the accuracy and feasibility of the model for short-term forecasts of the index. Finally, the ARIMA model is used to forecast the Shanghai Composite Index closing price for the last two months of 2016.
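A minimal sketch of the described workflow, assuming statsmodels and a synthetic monthly series in place of the actual Shanghai Composite data; the (1,1,1) order is chosen purely for illustration:

```python
# Fit an ARIMA model on a monthly closing series and forecast the final
# three months, comparing against held-out "actuals" (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
idx = pd.date_range("2005-01", periods=142, freq="MS")   # Jan 2005 - Oct 2016
close = pd.Series(3000 + np.cumsum(rng.normal(0, 80, 142)), index=idx)

train, test = close[:-3], close[-3:]
fit = ARIMA(train, order=(1, 1, 1)).fit()    # order chosen illustratively
forecast = fit.forecast(steps=3)
print(pd.DataFrame({"actual": test.round(1), "forecast": forecast.round(1)}))
```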
Neural network based photovoltaic electrical forecasting in south Algeria
International Nuclear Information System (INIS)
Hamid Oudjana, S.; Hellal, A.; Hadj Mahammed, I
2014-01-01
Photovoltaic electrical forecasting is significant for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants, and it is an important task in renewable energy electrical system planning and operation. This paper explores the application of neural networks (NN) to the design of photovoltaic electrical forecasting systems for one week ahead, using weather databases that include the global irradiance and temperature of Ghardaia city (southern Algeria) for the year 2013, collected with a data acquisition system. Simulations were run and the results are discussed, showing that the neural network technique is capable of decreasing the photovoltaic electrical forecasting error. (author)
Earthquake cycles and physical modeling of the process leading up to a large earthquake
Ohnaka, Mitiyasu
2004-08-01
A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate-and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.
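As an illustration of what a slip-dependent constitutive law looks like, the widely used linear slip-weakening form is shown below; this is a generic instance, not necessarily the exact parameterization advocated in the paper:

```latex
% Linear slip-weakening: shear resistance tau decays from the peak value
% tau_p to the residual tau_r over a critical slip distance D_c.
\[
\tau(u) =
\begin{cases}
\tau_p - \left(\tau_p - \tau_r\right)\dfrac{u}{D_c}, & 0 \le u \le D_c,\\[4pt]
\tau_r, & u > D_c.
\end{cases}
\]
```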
Rautenhaus, M.; Grams, C. M.; Schäfler, A.; Westermann, R.
2015-07-01
We present the application of interactive three-dimensional (3-D) visualization of ensemble weather predictions to forecasting warm conveyor belt situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 (THORPEX - North Atlantic Waveguide and Downstream Impact Experiment) campaign, a method to predict 3-D probabilities of the spatial occurrence of warm conveyor belts (WCBs) has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the European Centre for Medium Range Weather Forecasts (ECMWF) ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and grid spacing of the forecast wind field. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (3 to 7 days before take-off).
Directory of Open Access Journals (Sweden)
M. Rautenhaus
2015-07-01
We present the application of interactive three-dimensional (3-D) visualization of ensemble weather predictions to forecasting warm conveyor belt situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 (THORPEX – North Atlantic Waveguide and Downstream Impact Experiment) campaign, a method to predict 3-D probabilities of the spatial occurrence of warm conveyor belts (WCBs) has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the European Centre for Medium Range Weather Forecasts (ECMWF) ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and grid spacing of the forecast wind field. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (3 to 7 days before take-off).
Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?
Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.
2012-01-01
In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.
Factor-based forecasting in the presence of outliers
DEFF Research Database (Denmark)
Kristensen, Johannes Tang
2014-01-01
Macroeconomic forecasting using factor models estimated by principal components has become a popular research topic with many both theoretical and applied contributions in the literature. In this paper we attempt to address an often neglected issue in these models: The problem of outliers...... in the data. Most papers take an ad-hoc approach to this problem and simply screen datasets prior to estimation and remove anomalous observations. We investigate whether forecasting performance can be improved by using the original unscreened dataset and replacing principal components with a robust...... apply the estimator in a simulated real-time forecasting exercise to test its merits. We use a newly compiled dataset of US macroeconomic series spanning the period 1971:2–2012:10. Our findings suggest that the chosen treatment of outliers does affect forecasting performance and that in many cases...
Forecast-based Interventions Can Reduce the Health and Economic Burden of Wildfires
We simulated public health forecast-based interventions during a wildfire smoke episode in rural North Carolina to show the potential for use of modeled smoke forecasts toward reducing the health burden and showed a significant economic benefit of reducing exposures. Daily and co...
Short-Term Wind Power Forecasting Using the Enhanced Particle Swarm Optimization Based Hybrid Method
Directory of Open Access Journals (Sweden)
Wen-Yeau Chang
2013-09-01
High penetration of wind power in the electricity system presents many challenges to power system operators, mainly due to the unpredictability and variability of wind power generation. Although wind energy may not be dispatched, an accurate forecasting method for wind speed and power generation can help power system operators reduce the risk of an unreliable electricity supply. This paper proposes an enhanced particle swarm optimization (EPSO) based hybrid forecasting method for short-term wind power forecasting. The hybrid forecasting method combines the persistence method, the back propagation neural network, and the radial basis function (RBF) neural network. The EPSO algorithm is employed to optimize the weight coefficients in the hybrid forecasting method. To demonstrate the effectiveness of the proposed method, it is tested on practical wind power generation data from a wind energy conversion system (WECS) installed on the Taichung coast of Taiwan. Comparisons of forecasting performance are made with the individual forecasting methods. Good agreement between the realistic values and forecast values is obtained; the test results show the proposed forecasting method is accurate and reliable.
Effect of the accuracy of price forecasting on profit in a Price Based Unit Commitment
International Nuclear Information System (INIS)
Delarue, Erik; Van Den Bosch, Pieterjan; D'haeseleer, William
2010-01-01
This paper discusses and quantifies the so-called loss of profit (i.e., the sub-optimality of profit) that can be expected in a Price Based Unit Commitment (PBUC), when incorrect price forecasts are used. For this purpose, a PBUC model has been developed and utilized, using Mixed Integer Linear Programming (MILP). Simulations are used to determine the relationship between the Mean Absolute Percentage Error (MAPE) of a certain price forecast and the loss of profit, for four different types of power plants. A Combined Cycle (CC) power plant and a pumped storage unit show highest sensitivity to incorrect forecasts. A price forecast with a MAPE of 15%, on average, yields 13.8% and 12.1% profit loss, respectively. A classic thermal power plant (coal fired) and cascade hydro unit are less affected by incorrect forecasts, with only 2.4% and 2.0% profit loss, respectively, at the same price forecast MAPE. This paper further demonstrates that if price forecasts show an average bias (upward or downward), using the MAPE as measure of the price forecast might not be sufficient to quantify profit loss properly. Profit loss in this case has been determined as a function of both shift and MAPE of the price forecast. (author)
Next-generation probabilistic seismicity forecasting
Energy Technology Data Exchange (ETDEWEB)
Hiemer, S.
2014-07-01
The development of probabilistic seismicity forecasts is one of the most important tasks of seismologists at present time. Such forecasts form the basis of probabilistic seismic hazard assessment, a widely used approach to generate ground motion exceedance maps. These hazard maps guide the development of building codes, and in the absence of the ability to deterministically predict earthquakes, good building and infrastructure planning is key to prevent catastrophes. Probabilistic seismicity forecasts are models that specify the occurrence rate of earthquakes as a function of space, time and magnitude. The models presented in this thesis are time-invariant mainshock occurrence models. Accordingly, the reliable estimation of the spatial and size distribution of seismicity are of crucial importance when constructing such probabilistic forecasts. Thereby we focus on data-driven approaches to infer these distributions, circumventing the need for arbitrarily chosen external parameters and subjective expert decisions. Kernel estimation has been shown to appropriately transform discrete earthquake locations into spatially continuous probability distributions. However, we show that neglecting the information from fault networks constitutes a considerable shortcoming and thus limits the skill of these current seismicity models. We present a novel earthquake rate forecast that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults applied to Californian and European data. Our model is independent from biases caused by commonly used non-objective seismic zonations, which impose artificial borders of activity that are not expected in nature. Studying the spatial variability of the seismicity size distribution is of great importance. The b-value of the well-established empirical Gutenberg-Richter model forecasts the rates of hazard-relevant large earthquakes based on the observed rates of abundant small events. We propose a
Next-generation probabilistic seismicity forecasting
International Nuclear Information System (INIS)
Hiemer, S.
2014-01-01
The development of probabilistic seismicity forecasts is one of the most important tasks of seismologists at present time. Such forecasts form the basis of probabilistic seismic hazard assessment, a widely used approach to generate ground motion exceedance maps. These hazard maps guide the development of building codes, and in the absence of the ability to deterministically predict earthquakes, good building and infrastructure planning is key to prevent catastrophes. Probabilistic seismicity forecasts are models that specify the occurrence rate of earthquakes as a function of space, time and magnitude. The models presented in this thesis are time-invariant mainshock occurrence models. Accordingly, the reliable estimation of the spatial and size distribution of seismicity are of crucial importance when constructing such probabilistic forecasts. Thereby we focus on data-driven approaches to infer these distributions, circumventing the need for arbitrarily chosen external parameters and subjective expert decisions. Kernel estimation has been shown to appropriately transform discrete earthquake locations into spatially continuous probability distributions. However, we show that neglecting the information from fault networks constitutes a considerable shortcoming and thus limits the skill of these current seismicity models. We present a novel earthquake rate forecast that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults applied to Californian and European data. Our model is independent from biases caused by commonly used non-objective seismic zonations, which impose artificial borders of activity that are not expected in nature. Studying the spatial variability of the seismicity size distribution is of great importance. The b-value of the well-established empirical Gutenberg-Richter model forecasts the rates of hazard-relevant large earthquakes based on the observed rates of abundant small events. We propose a
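A sketch of the kernel-smoothing step at the core of such forecasts: past epicentres become a continuous rate density via a Gaussian kernel (scipy's gaussian_kde); the thesis additionally smooths fault slip rates, which is omitted here, and all coordinates are synthetic:

```python
# Kernel-smoothed seismicity rate: turn discrete epicentres into a
# continuous spatial density, then scale to the observed event count.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)
lon = np.concatenate([rng.normal(-118.0, 0.3, 300), rng.normal(-116.5, 0.2, 200)])
lat = np.concatenate([rng.normal(34.0, 0.3, 300), rng.normal(33.0, 0.2, 200)])

kde = gaussian_kde(np.vstack([lon, lat]))
grid_lon, grid_lat = np.meshgrid(np.linspace(-119, -116, 50),
                                 np.linspace(32.5, 35, 50))
density = kde(np.vstack([grid_lon.ravel(), grid_lat.ravel()])).reshape(50, 50)
rate = density / density.sum() * 500   # scale to 500 events per interval
print("peak cell rate:", rate.max().round(2), "events per forecast interval")
```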
Data base and seismicity studies for Fagaras, Romania crustal earthquakes
International Nuclear Information System (INIS)
Moldovan, I.-A.; Enescu, B. D.; Pantea, A.; Constantin, A.; Bazacliu, O.; Malita, Z.; Moldoveanu, T.
2002-01-01
Besides the major impact of the Vrancea seismic region, one of the most important intermediate-depth earthquake sources of Europe, the Romanian crustal earthquake sources, from the Fagaras, Banat, Crisana, Bucovina or Dobrogea regions, have to be taken into consideration for seismicity studies or seismic hazard assessment. To determine the characteristics of the seismicity of the Fagaras seismogenic region, a revised and updated catalogue of Romanian earthquakes, recently compiled by Oncescu et al. (1999), is used. The catalogue contains 471 tectonic earthquakes and 338 induced earthquakes and is homogeneous starting with 1471 for I>VIII and starting with 1801 for I>VII. The catalogue is complete for magnitudes larger than 3 starting with 1982. In the studied zone only normal-faulting earthquakes occur, related to intracrustal fractures situated at 5 to 30 km depth. Most of them are of low energy, but once in a century a large destructive event occurs with epicentral intensity larger than VIII. The maximum expected magnitude is M GR = 6.5, and the epicenter distribution outlines significant clustering in the zones and on the lines mentioned in the tectonic studies. Taking into account the date of the last major earthquake (1916) and return periods of severely damaging shocks of over 85 years, a large shock is to be expected in the area very soon. That is why a seismicity and hazard study for this zone is necessary. In the paper, the variation of the b parameter (mean value 0.69), the activity value and the return periods are studied, and seismicity maps and different histograms are plotted. At the same time, explosions from the Campulung quarry are excluded from the catalogue. Because the catalogue contains the aftershocks of the 1916 earthquake, these shocks were also excluded for the seismicity studies. (authors)
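The b parameter quoted above (mean value 0.69) is conventionally estimated by maximum likelihood. A minimal sketch using the standard Aki/Utsu estimator and the completeness magnitude of 3 mentioned in the abstract; the sample magnitudes are hypothetical.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes at or above
    completeness m_c, with the usual half-bin-width correction dm."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# e.g. a catalogue complete above magnitude 3 (as in the Fagaras study)
print(b_value_mle([3.1, 3.4, 3.2, 4.0, 3.6, 3.3], m_c=3.0))
```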
Research on forecast technology of mine gas emission based on fuzzy data mining (FDM)
Energy Technology Data Exchange (ETDEWEB)
Xu Chang-kai; Wang Yao-cai; Wang Jun-wei [CUMT, Xuzhou (China). School of Information and Electrical Engineering
2004-07-01
The safe production of coal mines can be further improved by forecasting the quantity of gas emission based on the real-time and historical data saved by the gas monitoring system. Making use of the advantages of data warehouse and data mining technology for processing large quantities of redundant data, the method of forecasting mine gas emission quantity based on FDM and its application were studied. The construction of a fuzzy resemblance relation and clustering analysis were proposed, by which potential relationships inside the gas emission data may be found. The pattern-finding model and forecast model were presented, and the detailed approach to realizing this forecast was also proposed; these have been applied to forecast the gas emission quantity efficiently.
Short-term data forecasting based on wavelet transformation and chaos theory
Wang, Yi; Li, Cunbin; Zhang, Liang
2017-09-01
A sketch of the wavelet transformation and its applications is given. Considering the characteristics of time sequences, the Haar wavelet is used for data reduction. After processing, the effect of "data nails" (spikes) on forecasting is reduced. Chaos theory is also introduced, and a new chaos time series forecasting flow based on wavelet transformation is proposed. The largest Lyapunov exponent estimated from small data sets was larger than zero, verifying that the data still behave chaotically. On this basis, chaos time series methods can be used to forecast short-term behavior. Finally, an example analysis of prices from a real electricity market shows that the forecasting method increases the precision of the forecast more effectively and steadily.
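A minimal sketch of the Haar reduction step described above, in plain NumPy; the pairwise-averaging formulation, the padding rule and the toy price series are our assumptions.

```python
import numpy as np

def haar_reduce(x, levels=1):
    """One or more levels of the Haar DWT, keeping only the approximation
    coefficients as a smoothed, half-length series."""
    x = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(x) % 2:                      # pad odd-length series
            x = np.append(x, x[-1])
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    return x

prices = [30.1, 30.4, 29.8, 35.0, 30.2, 30.3, 30.1, 30.0]  # "data nail" at 35.0
print(haar_reduce(prices, levels=1))  # the spike's influence is attenuated
```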
GIS Based System for Post-Earthquake Crisis Management Using Cellular Network
Raeesi, M.; Sadeghi-Niaraki, A.
2013-09-01
Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere. Earthquakes cannot be predicted. Quick response after disasters like earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the locations of the destroyed areas. The search and rescue phase is usually maintained for many days. Reducing this time for surviving people is very important. A Geographical Information System (GIS) can be used for decreasing response time and improving management in critical situations. Position estimation within a short period of time is important. This paper proposes a GIS based system for post-earthquake disaster management. The system relies on several mobile positioning methods such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and specifying the critical points, the points are sent to a central site for managing the procedure of quick response. This solution establishes a quick way to manage the post-earthquake crisis.
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
International Nuclear Information System (INIS)
Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan
2014-01-01
This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculation. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (Mo), the moment magnitude (MW), the rupture duration (To) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We process teleseismic waveforms starting from the initial P-wave phase, band-pass filtered between 0.001 Hz and 5 Hz, using 84 broadband seismometers at epicentral distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with MW=7.8 and the 17 July 2006 Pangandaran earthquake with MW=7.7 meet the criteria for tsunami earthquakes, with energy-to-moment ratios around Θ=−6.1, long rupture durations To>100 s and high tsunami run-up H>7 m. The 2 September 2009 Tasikmalaya earthquake with MW=7.2, Θ=−5.1 and To=27 s is characterized as a small tsunamigenic earthquake
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
Energy Technology Data Exchange (ETDEWEB)
Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com [Badan Meteorologi Klimatologi Geofisika, Jl Angkasa I No. 2 Jakarta (Indonesia); Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan [Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)
2014-03-24
This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculation. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (Mo), the moment magnitude (MW), the rupture duration (To) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We process teleseismic waveforms starting from the initial P-wave phase, band-pass filtered between 0.001 Hz and 5 Hz, using 84 broadband seismometers at epicentral distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with MW=7.8 and the 17 July 2006 Pangandaran earthquake with MW=7.7 meet the criteria for tsunami earthquakes, with energy-to-moment ratios around Θ=−6.1, long rupture durations To>100 s and high tsunami run-up H>7 m. The 2 September 2009 Tasikmalaya earthquake with MW=7.2, Θ=−5.1 and To=27 s is characterized as a small tsunamigenic earthquake.
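A toy screening rule in the spirit of these two abstracts: the Θ and To thresholds are adapted from the values quoted above, the Mw-M0 conversion is the standard moment-magnitude relation, and everything else (names, example numbers) is illustrative, not the authors' procedure.

```python
import numpy as np

def classify_event(E_joules, M0_Nm, rupture_s):
    """Flag slow, long-duration ruptures with deficient radiated energy
    as 'tsunami earthquakes'; thresholds are illustrative assumptions."""
    theta = np.log10(E_joules / M0_Nm)          # energy-to-moment ratio
    Mw = (2.0 / 3.0) * (np.log10(M0_Nm) - 9.1)  # standard Mw from M0 (N m)
    if theta <= -5.5 and rupture_s > 100.0:
        label = "tsunami earthquake"
    else:
        label = "(small) tsunamigenic earthquake"
    return theta, Mw, label

M0 = 10 ** (1.5 * 7.8 + 9.1)                    # Mw 7.8 (Banyuwangi 1994)
print(classify_event(E_joules=M0 * 10**-6.1, M0_Nm=M0, rupture_s=120.0))
```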
GIS BASED SYSTEM FOR POST-EARTHQUAKE CRISIS MANAGEMENT USING CELLULAR NETWORK
Directory of Open Access Journals (Sweden)
M. Raeesi
2013-09-01
Full Text Available Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere. Earthquakes cannot be predicted. Quick response after disasters like earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the locations of the destroyed areas. The search and rescue phase is usually maintained for many days. Reducing this time for surviving people is very important. A Geographical Information System (GIS) can be used for decreasing response time and improving management in critical situations. Position estimation within a short period of time is important. This paper proposes a GIS based system for post-earthquake disaster management. The system relies on several mobile positioning methods such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and specifying the critical points, the points are sent to a central site for managing the procedure of quick response. This solution establishes a quick way to manage the post-earthquake crisis.
Bleier, T.; Heraud, J. A.; Dunson, J. C.
2015-12-01
QuakeFinder (QF) and its international collaborators have installed and currently maintain 165 three-axis induction magnetometer instrument sites in California, Peru, Taiwan, Greece, Chile and Sumatra. The data from these instruments are being analyzed for pre-quake signatures. This analysis consists of both private research by QuakeFinder and institutional collaborators (PUCP in Peru, NCU in Taiwan, PUCC in Chile, NOA in Greece, Syiah Kuala University in Indonesia, LASP at U of Colo., Stanford, and USGS). Recently, NASA Hq and QuakeFinder tried a new approach to help with the analysis of this huge (50+TB) data archive. A collaboration with Apirio/TopCoder, Harvard University, Amazon, QuakeFinder, and NASA Hq resulted in an open algorithm development contest called "Quest for Quakes", in which contestants (freelance algorithm developers) attempted to identify quakes from a subset of the QuakeFinder data (3TB). The contest included a $25K prize pool and contained 100 cases where earthquakes (and null sets) included data from up to 5 remote sites, near and far from quakes greater than M4. These data sets were made available through Amazon.com to hundreds of contestants over a two-week contest period. In a more traditional approach, several new algorithms were tried by actively sharing the QF data with universities over a longer period. These algorithms included Principal Component Analysis (PCA) and deep neural networks, in an effort to automatically identify earthquake signals within typical, noise-filled environments. This presentation examines the pros and cons of employing these two approaches, from both logistical and scientific perspectives.
Short-term solar irradiation forecasting based on Dynamic Harmonic Regression
International Nuclear Information System (INIS)
Trapero, Juan R.; Kourentzes, Nikolaos; Martin, A.
2015-01-01
Solar power generation is a crucial research area for countries that have a high dependency on fossil energy sources and is gaining prominence with the current shift to renewable sources of energy. In order to integrate the electricity generated by solar energy into the grid, solar irradiation must be reasonably well forecasted, since deviations of the forecasted value from the actual measured value involve significant costs. The present paper proposes a univariate Dynamic Harmonic Regression model set up in a State Space framework for short-term (1–24 h) solar irradiation forecasting. Hourly aggregated time series of the Global Horizontal Irradiation and the Direct Normal Irradiation are used to illustrate the proposed approach. This method provides a fast automatic identification and estimation procedure based on the frequency domain. Furthermore, the recursive algorithms applied offer adaptive predictions. The good forecasting performance is illustrated with solar irradiance measurements collected from ground-based weather stations located in Spain. The results show that the Dynamic Harmonic Regression achieves the lowest relative Root Mean Squared Error: about 30% and 47% for the Global and Direct irradiation components, respectively, for a forecast horizon of 24 h ahead. - Highlights: • Solar irradiation forecasts at short term are required to operate solar power plants. • This paper assesses the Dynamic Harmonic Regression to forecast solar irradiation. • Models are evaluated using hourly GHI and DNI data collected in Spain. • The results show that forecasting accuracy is improved by using the model proposed
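The full Dynamic Harmonic Regression is a state-space model with time-varying coefficients; as a hedged stand-in, the sketch below fits a static harmonic regression with daily harmonics by least squares, which conveys the frequency-domain idea without the recursive estimation. The toy GHI series and all names are assumptions.

```python
import numpy as np

def harmonic_design(t_hours, n_harmonics=3, period=24.0):
    """Design matrix: intercept plus sine/cosine pairs at the daily
    frequency and its harmonics (static stand-in for the DHR terms)."""
    cols = [np.ones_like(t_hours)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k / period
        cols += [np.sin(w * t_hours), np.cos(w * t_hours)]
    return np.column_stack(cols)

t = np.arange(0.0, 24.0 * 14)                       # two weeks of hourly steps
ghi = np.clip(800 * np.sin(2 * np.pi * (t % 24 - 6) / 24), 0, None) \
      + 30 * np.random.randn(t.size)                # toy GHI series
beta, *_ = np.linalg.lstsq(harmonic_design(t), ghi, rcond=None)
t_next = np.arange(t[-1] + 1, t[-1] + 25)           # 24 h ahead
forecast = harmonic_design(t_next) @ beta
```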
A new cascade NN based method to short-term load forecast in deregulated electricity market
International Nuclear Information System (INIS)
Kouhi, Sajjad; Keynia, Farshid
2013-01-01
Highlights: • We propose a new hybrid cascaded-NN-based method with WT for short-term load forecasting in a deregulated electricity market. • An efficient preprocessor consisting of normalization and shuffling of signals is presented. • In order to select the best inputs, a two-stage feature selection is presented. • A new cascaded structure consisting of three cascaded NNs is used as the forecaster. - Abstract: Short-term load forecasting (STLF) is a major topic in the efficient operation of power systems. The electricity load is a nonlinear signal with time dependent behavior. The area of electricity load forecasting still has an essential need for more accurate and stable load forecast algorithms. To improve the accuracy of prediction, a new hybrid forecast strategy based on a cascaded neural network is proposed for STLF. This method consists of a wavelet transform, an intelligent two-stage feature selection, and a cascaded neural network. The feature selection is used to remove the irrelevant and redundant inputs. The forecast engine is composed of a three-stage cascaded neural network (CNN) structure. This cascaded structure can efficiently extract the input/output mapping function of the nonlinear electricity load data. Adjustable parameters of the intelligent feature selection and the CNN are fine-tuned by a kind of cross-validation technique. The proposed STLF is tested on the PJM and New York electricity markets. It is concluded from the results that the proposed algorithm is a robust forecast method
International Nuclear Information System (INIS)
Haiyang, Yu; Yanmei, Liu; Guijun, Yang; Xiaodong, Yang; Chenwei, Nie; Dong, Ren
2014-01-01
To achieve dynamic winter wheat quality monitoring and forecasting in larger scale regions, the objective of this study was to design and develop a winter wheat quality monitoring and forecasting system using a remote sensing index and environmental factors. The winter wheat quality trend was forecasted before the harvest and the quality was monitored after the harvest, respectively. The traditional quality-vegetation index remote sensing monitoring and forecasting models were improved. Combined with latitude information, the vegetation index was used to estimate agronomy parameters related to winter wheat quality in the early stages for forecasting the quality trend. A combination of rainfall in May, temperature in May, illumination in late May, the soil available nitrogen content and other environmental factors established the quality monitoring model. Compared with a simple quality-vegetation index, the remote sensing monitoring and forecasting models used in this system achieve greatly improved accuracy. Winter wheat quality was monitored and forecasted based on the above models, and the system was completed based on WebGIS technology. Finally, the 2010 operation of the winter wheat quality monitoring system in Beijing is presented; the monitoring and forecasting results were output as thematic maps
Analytical investigations of the earthquake resistance of the support base of an oil-gas platform
Energy Technology Data Exchange (ETDEWEB)
Glagovskii, V. B.; Kassirova, N. A.; Turchina, O. A.; Finagenov, O. M.; Tsirukhin, N. A. [JSC 'VNIIG im. B. E. Vedeneeva' (Russian Federation)
2012-01-15
In designing stationary oil-gas recovery platforms on the continental shelf, the need arises to compute the estimated strength of their support base during seismic events. This paper is devoted to this estimation. The paper examines a structure consisting of the superstructure of an oil-gas platform and its gravity-type base. It is possible to install earthquake-insulating supports between them. Calculations performed for the design earthquake indicated that the design of the gravity base can resist a seismic effect without special additional measures. During the maximum design earthquake, moreover, significant stresses may develop in the zone of the base where the columns are connected to the upper slab of the caisson. In that case, the earthquake insulation considered for the top of the platform becomes critical.
Analytical investigations of the earthquake resistance of the support base of an oil-gas platform
International Nuclear Information System (INIS)
Glagovskii, V. B.; Kassirova, N. A.; Turchina, O. A.; Finagenov, O. M.; Tsirukhin, N. A.
2012-01-01
In designing stationary oil-gas recovery platforms on the continental shelf, the need arises to compute the estimated strength of their support base during seismic events. This paper is devoted to this estimation. The paper examines a structure consisting of the superstructure of an oil-gas platform and its gravity-type base. It is possible to install earthquake-insulating supports between them. Calculations performed for the design earthquake indicated that the design of the gravity base can resist a seismic effect without special additional measures. During the maximum design earthquake, moreover, significant stresses may develop in the zone of the base where the columns are connected to the upper slab of the caisson. In that case, the earthquake insulation considered for the top of the platform becomes critical.
Ionospheric scintillation forecasting model based on NN-PSO technique
Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.
2017-09-01
The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a neural network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and ionosonde drift velocity (hmF2 and foF2) data has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low-latitude regions.
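A compact sketch of the NN-PSO idea named in the abstract: a particle swarm searches the weight space of a tiny one-hidden-layer network instead of gradient descent. The swarm constants, network size and toy data are our assumptions, not the authors' configuration.

```python
import numpy as np

def pso_train(loss, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer used here to fit model weights
    (an illustrative stand-in for the paper's NN-PSO training)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim)); v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Tiny net mapping two drift-velocity proxies to S4 (toy data, assumed shapes)
Xd = np.random.rand(100, 2); yd = 0.3 * Xd[:, 0] + 0.1 * np.sin(Xd[:, 1])
def unpack(w): return w[:10].reshape(2, 5), w[10:15], w[15:20], w[20]
def predict(w, X):
    W1, b1, w2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ w2 + b2
w = pso_train(lambda w: np.mean((predict(w, Xd) - yd) ** 2), dim=21)
```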
Insulator Contamination Forecasting Based on Fractal Analysis of Leakage Current
Directory of Open Access Journals (Sweden)
Bing Luo
2012-07-01
Full Text Available In this paper, an artificial pollution test is carried out to study the leakage current of porcelain insulators. Fractal theory is adopted to extract the characteristics hidden in leakage current waveforms. Fractal dimensions of the leakage current for the security, forecast and danger zones are analyzed under four types of degrees of contamination. The mean value and the standard deviation of the fractal dimension in the forecast zone are calculated to characterize the differences. The analysis reveals large differences in the fractal dimension of leakage current under different contamination discharge stages and degrees. The experimental and calculation results suggest that the fractal dimension of a leakage current waveform can be used as a new indicator of the discharge process and contamination degree of insulators. The results provide new methods and valid indicators for forecasting contamination flashovers.
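Several fractal-dimension estimators could play the role described in this abstract; the sketch below uses the Higuchi method on a sampled leakage-current waveform as one plausible stand-in (the paper does not specify its estimator, so this choice, the k_max value and the toy waveform are assumptions).

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi estimate of the fractal dimension of a sampled waveform:
    mean curve length L(k) scales as k**(-D), so D is the slope of
    log L versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
            Lk.append(dist * norm / k)
        lengths.append(np.mean(Lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

t = np.linspace(0, 1, 2000)
leakage = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)  # toy current
print(higuchi_fd(leakage))   # drifts toward 2 as discharge noise grows
```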
Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song
2018-01-01
Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring
Earthquake acceleration amplification based on single microtremor test
Jaya Syahbana, Arifan; Kurniawan, Rahmat; Soebowo, Eko
2018-02-01
Understanding soil dynamics is needed to understand soil behaviour, including the parameters of earthquake acceleration amplification. Many researchers now conduct single microtremor tests to obtain the velocity amplification and natural periods of soil at test sites. However, these amplification parameters are rarely used, so a method is needed to convert the velocity amplification to acceleration amplification. This paper discusses the proposed process for converting the amplification value. The proposed method is to integrate the time histories of synthetic earthquake acceleration at the soil surface, derived from the deaggregation at that location, so that the time histories of earthquake velocity are obtained. The next step is to fit a curve between the amplification from the single microtremor test and the amplification of the synthetic earthquake velocity time histories. After obtaining the fitted velocity time histories, differentiation is conducted to obtain the fitted acceleration time histories. The final step is to compare the fitted acceleration against the time histories of the synthetic earthquake acceleration at bedrock to obtain the single microtremor acceleration amplification factor.
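The integrate-fit-differentiate loop described above reduces to cumulative trapezoidal integration and numerical differentiation; a minimal sketch with the fitting step elided, and the synthetic record, sampling step and amplification ratio all placeholder assumptions.

```python
import numpy as np

def velocity_history(acc, dt):
    """Integrate an acceleration time history to velocity
    (cumulative trapezoidal rule)."""
    v = np.zeros_like(acc)
    v[1:] = np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)
    return v

def acceleration_history(vel, dt):
    """Differentiate a (fitted) velocity history back to acceleration."""
    return np.gradient(vel, dt)

dt = 0.01
acc_surface = np.random.randn(2000) * 0.05     # stand-in surface acceleration (g)
vel_surface = velocity_history(acc_surface, dt)
# ...here the paper fits vel_surface so its amplification matches the
# microtremor value; the fitted curve is then differentiated and compared
# with the bedrock acceleration to obtain the amplification factor:
amp = np.abs(acceleration_history(vel_surface, dt)).max() / np.abs(acc_surface).max()
```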
Analysis of the Earthquake Impact towards water-based fire extinguishing system
Lee, J.; Hur, M.; Lee, K.
2015-09-01
Recently, fire-extinguishing systems installed in buildings have been given separate performance requirements for the case of an earthquake: they must maintain their extinguishing function before the building collapses. In particular, automatic sprinkler systems must keep their piping undamaged and sealed even after a massive earthquake. In this study, an experiment was conducted to grasp the impact of an earthquake on the piping of a water-based fire-extinguishing system installed in a building. Experimental structures for water-based fire-extinguishing systems were built step by step with seismic construction, then subjected to seismic experiments, and the earthquake response of the extinguishing pipes in the building was measured. The magnitudes of the acceleration caused by the vibration and of the resulting displacement were measured and compared with the response data of the pipes from the shaking table, thereby providing a seismic analysis of where water-based extinguishing piping needs to be strengthened. Seismic design categories (SDC) are defined for four groups of building structures under the seismic criteria (KBC2009), designed according to the importance of the group and the earthquake seismic intensity. For buildings in seismic design categories A and B, the analysis determined that current fire-fighting facilities already have adequate seismic performance in the event of a real earthquake. For seismic design categories C and D, a seismic retrofit design at the level required to preserve the extinguishing function of installed systems is determined.
Ratio-based lengths of intervals to improve fuzzy time series forecasting.
Huarng, Kunhuang; Yu, Tiffany Hui-Kuang
2006-04-01
The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
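A minimal sketch of ratio-based interval construction: boundaries grow by a fixed percentage rather than a fixed length, so intervals scale with the magnitude of the observations. The 10% ratio and the enrollment-like range are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def ratio_based_intervals(lo, hi, ratio=0.1):
    """Interval boundaries that grow by a fixed ratio instead of a fixed
    length, so larger observations get proportionally larger intervals."""
    bounds = [float(lo)]
    while bounds[-1] < hi:
        bounds.append(bounds[-1] * (1.0 + ratio))
    return np.asarray(bounds)

# e.g. enrollment-like data spanning 13000..20000 with 10% intervals
print(ratio_based_intervals(13000, 20000, ratio=0.10).round())
```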
MOS BASED FORECAST OF 6-HOURLY AREA PRECIPITATION
Czech Academy of Sciences Publication Activity Database
Sokol, Zbyněk
2006-01-01
Vol. 50, No. 1 (2006), pp. 105-120. ISSN 0039-3169. R&D Projects: GA AV ČR IBS3042101. Institutional research plan: CEZ:AV0Z30420517. Keywords: precipitation forecast * regression * statistical postprocessing * MOS. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 0.603, year: 2006
Time series forecasting based on deep extreme learning machine
Guo, Xuqi; Pang, Y.; Yan, Gaowei; Qiao, Tiezhu; Yang, Guang-Hong; Yang, Dan
2017-01-01
Multi-layer Artificial Neural Networks (ANNs) have caught widespread attention as a new method for time series forecasting due to their ability to approximate any nonlinear function. In this paper, a new local time series prediction model is established with the nearest neighbor domain theory, in
Operational flash flood forecasting platform based on grid technology
Thierion, V.; Ayral, P.-A.; Angelini, V.; Sauvagnargues-Lesage, S.; Nativi, S.; Payrastre, O.
2009-04-01
Flash flood events in the south of France, such as those of 8 and 9 September 2002 in the Grand Delta territory, caused severe economic and human damage. Following this catastrophic hydrological situation, a reform of the flood warning services was initiated (put in place in 2006). This political reform transformed the 52 existing flood warning services (SAC) into 22 flood forecasting services (SPC), assigning them territories that are more hydrologically consistent and a new operational hydrological forecasting mission. Furthermore, a national central service (SCHAPI) was created to ease this transformation and support local services in their new objectives. New functional requirements have been identified: - SPC and SCHAPI carry the responsibility to clearly disseminate to public organisms, civil protection actors and the population the crucial hydrological information needed to better anticipate a potentially dramatic flood event; - an effective hydrological forecasting mission for these flood forecasting services seems essential, particularly for flash flood phenomena. Thus, model improvement and optimization was one of the most critical requirements. Initially dedicated to supporting forecasters in their monitoring mission through measuring stations and rainfall radar image analysis, hydrological models have to become more efficient in their capacity to anticipate the hydrological situation. Understanding the natural phenomena occurring during flash floods largely drives present hydrological research. Rather than trying to explain such complex processes, the presented research tries to address the well-known need for computational power and data storage capacity in these services. For some years, Grid technology has appeared as a technological revolution in high performance computing (HPC), allowing large-scale resource sharing, use of computational power and support for collaboration across networks. Nowadays, the EGEE (Enabling Grids for E-science in Europe) project represents the most important
A functional assay-based strategy for nanomaterial risk forecasting
Energy Technology Data Exchange (ETDEWEB)
Hendren, Christine Ogilvie, E-mail: christine.hendren@duke.edu [Center for the Environmental Implications of NanoTechnology, Duke University, Durham, NC 27708 (United States); Lowry, Gregory V., E-mail: glowry@andrew.cmu.edu [Center for the Environmental Implications of NanoTechnology, Duke University, Durham, NC 27708 (United States); Department of Civil and Environmental Engineering, Carnegie Mellon University, 119 Porter Hall, Pittsburgh, PA 15213 (United States); Unrine, Jason M., E-mail: jason.unrine@uky.edu [Center for the Environmental Implications of NanoTechnology, Duke University, Durham, NC 27708 (United States); Department of Plant and Soil Sciences, University of Kentucky, Agricultural Science Center, Lexington, KY 40546 (United States); Wiesner, Mark R., E-mail: wiesner@duke.edu [Center for the Environmental Implications of NanoTechnology, Duke University, Durham, NC 27708 (United States); Department of Civil and Environmental Engineering, Duke University, 121 Hudson Hall PO Box 90287, Durham, NC 27708 (United States)
2015-12-01
The study of nanomaterial impacts on environment, health and safety (nanoEHS) has been largely predicated on the assumption that exposure and hazard can be predicted from physical–chemical properties of nanomaterials. This approach is rooted in the view that nano-objects essentially resemble chemicals with additional particle-based attributes that must be included among their intrinsic physical–chemical descriptors. With the exception of the trivial case of nanomaterials made from toxic or highly reactive materials, this approach has yielded few actionable guidelines for predicting nanomaterial risk. This article addresses inherent problems in structuring a nanoEHS research strategy based on the goal of predicting outcomes directly from nanomaterial properties, and proposes a framework for organizing data and designing integrated experiments based on functional assays (FAs). FAs are intermediary, semi-empirical measures of processes or functions within a specified system that bridge the gap between nanomaterial properties and potential outcomes in complex systems. The three components of a functional assay are standardized protocols for parameter determination and reporting, a theoretical context for parameter application and reference systems. We propose the identification and adoption of reference systems where FAs may be applied to provide parameter estimates for environmental fate and effects models, as well as benchmarks for comparing the results of FAs and experiments conducted in more complex and varied systems. Surface affinity and dissolution rate are identified as two critical FAs for characterizing nanomaterial behavior in a variety of important systems. The use of these FAs to predict bioaccumulation and toxicity for initial and aged nanomaterials is illustrated for the case of silver nanoparticles and Caenorhabditis elegans. - Highlights: • Approaches to predict risk directly from nanomaterial (NM) properties are problematic. • We propose
A functional assay-based strategy for nanomaterial risk forecasting
International Nuclear Information System (INIS)
Hendren, Christine Ogilvie; Lowry, Gregory V.; Unrine, Jason M.; Wiesner, Mark R.
2015-01-01
The study of nanomaterial impacts on environment, health and safety (nanoEHS) has been largely predicated on the assumption that exposure and hazard can be predicted from physical–chemical properties of nanomaterials. This approach is rooted in the view that nano-objects essentially resemble chemicals with additional particle-based attributes that must be included among their intrinsic physical–chemical descriptors. With the exception of the trivial case of nanomaterials made from toxic or highly reactive materials, this approach has yielded few actionable guidelines for predicting nanomaterial risk. This article addresses inherent problems in structuring a nanoEHS research strategy based on the goal of predicting outcomes directly from nanomaterial properties, and proposes a framework for organizing data and designing integrated experiments based on functional assays (FAs). FAs are intermediary, semi-empirical measures of processes or functions within a specified system that bridge the gap between nanomaterial properties and potential outcomes in complex systems. The three components of a functional assay are standardized protocols for parameter determination and reporting, a theoretical context for parameter application and reference systems. We propose the identification and adoption of reference systems where FAs may be applied to provide parameter estimates for environmental fate and effects models, as well as benchmarks for comparing the results of FAs and experiments conducted in more complex and varied systems. Surface affinity and dissolution rate are identified as two critical FAs for characterizing nanomaterial behavior in a variety of important systems. The use of these FAs to predict bioaccumulation and toxicity for initial and aged nanomaterials is illustrated for the case of silver nanoparticles and Caenorhabditis elegans. - Highlights: • Approaches to predict risk directly from nanomaterial (NM) properties are problematic. • We propose
A Condition Based Maintenance Approach to Forecasting B-1 Aircraft Parts
2017-03-23
Air Force Institute of Technology, AFIT Scholar Theses and Dissertations, 3-23-2017. A Condition Based Maintenance Approach to Forecasting B-1 Aircraft ... component's life history where reliability forecasts could be stipulated based on a component's current condition. One of the major issues their report noted ... Engine Condition Monitoring System Specification. Contract Number DOT-CG-80513-A. Grand Prairie, TX. Air Force Materiel Command. (2011) Requirements For
DEFF Research Database (Denmark)
Thorndahl, Søren; Poulsen, Troels Sander; Bøvith, Thomas
2012-01-01
Forecast-based flow prediction in drainage systems can be used to implement real-time control of drainage systems. This study compares two different types of rainfall forecasts – a radar rainfall extrapolation based nowcast model and a numerical weather prediction model. The models are applied … performance of the system is found using the radar nowcast for the short lead times and the weather model for larger lead times.
Development of optimization-based probabilistic earthquake scenarios for the city of Tehran
Zolfaghari, M. R.; Peyghaleh, E.
2016-01-01
This paper presents the methodology and a practical example of applying an optimization process to select earthquake scenarios which best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account the probabilistic seismic hazard for Tehran city as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error-weight is also assessed. The methodology could speed up full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less
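The simulation stage described above can be sketched compactly: magnitudes drawn from a doubly truncated Gutenberg-Richter law by inverse-CDF sampling and epicenters scattered over a source region. The uniform spatial box, b-value and magnitude bounds are placeholder assumptions; only the event count echoes the ~84,000 events quoted above.

```python
import numpy as np

def simulate_catalogue(n_events, b=1.0, m_min=5.0, m_max=8.0,
                       lon=(50.5, 52.5), lat=(35.0, 36.5), seed=0):
    """Synthetic catalogue: magnitudes from a doubly truncated
    Gutenberg-Richter law (inverse-CDF sampling), epicenters uniform in
    a box; a stand-in for the seismogenic-source-based simulation."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_events)
    beta = b * np.log(10.0)
    # inverse CDF of the truncated exponential magnitude distribution
    mags = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
    lons = rng.uniform(*lon, n_events)
    lats = rng.uniform(*lat, n_events)
    return mags, lons, lats

mags, lons, lats = simulate_catalogue(84000)   # ~10,000 years' worth of events
```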
Neural Networks-Based Forecasting Regarding the Convergence Process of CEE Countries to the Eurozone
Directory of Open Access Journals (Sweden)
Magdalena RĂDULESCU
2014-06-01
Full Text Available In the frame of the crisis, many forecasts failed to provide well-determined ratios. What we try to explain in this paper is how selected Central and Eastern European countries - Romania, Bulgaria, Hungary, Poland and the Czech Republic - will perform in the near future, using a neural-network-based forecasting model which we created for the nominal and real convergence ratios. As a methodology, we propose forecasting based on artificial neural networks (ANN), using the well-known software tool GMDH Shell. For each output variable, we obtain a forecast model according to previous values and other related input variables, and we applied the model to all countries. Our forecasts are much closer to the partial results of 2013 in the analyzed countries than the European Commission's or other international organizations' forecasts. The results of the forecast are important both for governments designing their financial strategies and for investors in these selected countries. According to our results, the Czech Republic seems to be closest to achieving its nominal convergence in the next two years, but it faces great difficulties in the real convergence area, because it has not yet overcome the recession.
GMDH-Based Semi-Supervised Feature Selection for Electricity Load Classification Forecasting
Directory of Open Access Journals (Sweden)
Lintao Yang
2018-01-01
Full Text Available With the development of smart power grids, communication network technology and sensor technology, there has been an exponential growth in complex electricity load data. Irregular electricity load fluctuations caused by the weather and holiday factors disrupt the daily operation of the power companies. To deal with these challenges, this paper investigates a day-ahead electricity peak load interval forecasting problem. It transforms the conventional continuous forecasting problem into a novel interval forecasting problem, and then further converts the interval forecasting problem into the classification forecasting problem. In addition, an indicator system influencing the electricity load is established from three dimensions, namely the load series, calendar data, and weather data. A semi-supervised feature selection algorithm is proposed to address an electricity load classification forecasting issue based on the group method of data handling (GMDH) technology. The proposed algorithm consists of three main stages: (1) training the basic classifier; (2) selectively marking the most suitable samples from the unclassified label data, and adding them to an initial training set; and (3) training the classification models on the final training set and classifying the test samples. An empirical analysis of electricity load datasets from four Chinese cities is conducted. Results show that the proposed model can address the electricity load classification forecasting problem more efficiently and effectively than the FW-Semi FS (forward semi-supervised feature selection) and GMDH-U (GMDH-based semi-supervised feature selection for customer classification) models.
Short-Term State Forecasting-Based Optimal Voltage Regulation in Distribution Systems: Preprint
Energy Technology Data Exchange (ETDEWEB)
Yang, Rui; Jiang, Huaiguang; Zhang, Yingchen
2017-05-17
A novel short-term state forecasting-based optimal power flow (OPF) approach for distribution system voltage regulation is proposed in this paper. An extreme learning machine (ELM) based state forecaster is developed to accurately predict system states (voltage magnitudes and angles) in the near future. Based on the forecast system states, a dynamically weighted three-phase AC OPF problem is formulated to minimize the voltage violations with higher penalization on buses which are forecast to have higher voltage violations in the near future. By solving the proposed OPF problem, the controllable resources in the system are optimally coordinated to alleviate the potential severe voltage violations and improve the overall voltage profile. The proposed approach has been tested in a 12-bus distribution system and simulation results are presented to demonstrate the performance of the proposed approach.
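A minimal extreme learning machine of the kind named in the abstract: a fixed random hidden layer with a ridge-regression readout. The hidden size, activation, regularization and toy state data are assumptions, not the paper's configuration.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, ridge=1e-3, seed=0):
    """Extreme learning machine: random, untrained hidden layer plus a
    ridge-regularized least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# e.g. predict near-future voltage magnitudes from a window of past states
X = np.random.rand(200, 6)                        # stand-in measurement windows
y = X @ np.random.rand(6) + 0.01 * np.random.randn(200)
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```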
Rethinking earthquake-related DC-ULF electromagnetic phenomena: towards a physics-based approach
Directory of Open Access Journals (Sweden)
Q. Huang
2011-11-01
Full Text Available Numerous electromagnetic changes possibly related to earthquakes have been independently reported, and attempts have even been made to apply them to short-term prediction of earthquakes. However, there are active debates on the above issue because the seismogenic process is rather complicated and the studies have been mainly empirical (i.e. a kind of experience-based approach). Thus, a physics-based study would be helpful for understanding earthquake-related electromagnetic phenomena and strengthening their applications. As a potential physics-based approach, I present an integrated research scheme, taking into account the interaction among observation, methodology, and physical model. For simplicity, this work focuses only on the earthquake-related DC-ULF electromagnetic phenomena. The main approach includes the following key problems: (1) how to perform a reliable and appropriate observation with some clear physical quantities; (2) how to develop a robust methodology to reveal weak earthquake-related electromagnetic signals from a noisy background; and (3) how to develop plausible physical models based on theoretical analyses and/or laboratory experiments for the explanation of the earthquake-related electromagnetic signals observed in field conditions.
Radar Based Flow and Water Level Forecasting in Sewer Systems:a danisk case study
Thorndahl, Søren; Rasmussen, Michael R.; Grum, M.; Neve, S. L.
2009-01-01
This paper describes the first radar-based forecast of flow and/or water level in sewer systems in Denmark. The rainfall is successfully forecasted with a lead time of 1-2 hours, and flows/levels are forecasted an additional ½-1½ hours using models describing the behaviour of the sewer system. Both the radar data and the flow/water level model are continuously updated using online rain gauges and online in-sewer measurements, in order to make the best possible predictions. The project shows very promis...
Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.
2018-03-01
This paper proposes the combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most of the existing fuzzy forecasting methods based on fuzzy time series use a static length of intervals. Therefore, we apply an artificial intelligence technique, the Firefly Algorithm (FA), to set non-stationary lengths of intervals for each cluster in Chen's method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.
Recognition of underground nuclear explosion and natural earthquake based on neural network
International Nuclear Information System (INIS)
Yang Hong; Jia Weimin
2000-01-01
Many features are extracted to improve the identification rate and reliability of discrimination between underground nuclear explosions and natural earthquakes. But how to synthesize these features is the key question in pattern recognition. Based on the improved Delta algorithm, features of underground nuclear explosions and natural earthquakes are input into a BP neural network, and membership functions are constructed to identify the output values. The identification rate is up to 92.0%, which shows that the approach is feasible
Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev
2016-03-01
Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
Directory of Open Access Journals (Sweden)
T. Schmidt
2016-03-01
Full Text Available Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1–2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
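The forecast-skill bookkeeping used in studies like the two records above is simple to state in code: skill is measured against a persistence reference and is positive when the forecast's RMSE is lower. A sketch under our own assumptions (a plain persistence reference and a trivial stand-in forecast).

```python
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def forecast_skill(forecast, obs, horizon):
    """Skill vs. persistence: forecast[i] is assumed to be the prediction
    valid at time i; persistence reuses the observation from 'horizon'
    steps earlier. Positive values mean the forecast beats persistence."""
    obs = np.asarray(obs, dtype=float)
    persisted = obs[:-horizon]
    return 1.0 - rmse(forecast[horizon:], obs[horizon:]) / rmse(persisted, obs[horizon:])

ghi = np.abs(np.sin(np.linspace(0, 20, 500))) * 800 + np.random.randn(500) * 40
naive = np.roll(ghi, 1)                       # trivial stand-in "forecast"
print(forecast_skill(naive, ghi, horizon=4))
```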
Directory of Open Access Journals (Sweden)
E. Ulutas
2012-06-01
Full Text Available This study analyzes the response of the Global Disaster Alert and Coordination System (GDACS) in relation to a case study: the Kepulauan Mentawai earthquake and related tsunami, which occurred on 25 October 2010. The GDACS, developed by the European Commission Joint Research Centre, combines existing web-based disaster information management systems with the aim of alerting the international community in case of major disasters. The tsunami simulation system is an integral part of the GDACS. In more detail, the study aims to assess the tsunami hazard on the Mentawai and Sumatra coasts: the tsunami heights and arrival times have been estimated employing three propagation models based on the long wave theory. The analysis was performed in three stages: (1) pre-calculated simulations using the tsunami scenario database for that region, used by the GDACS system to estimate the alert level; (2) near-real-time simulated tsunami forecasts, automatically performed by the GDACS system whenever a new earthquake is detected by the seismological data providers; and (3) post-event tsunami calculations using the GCMT (Global Centroid Moment Tensor) fault mechanism solutions proposed by the US Geological Survey (USGS) for this event. The GDACS system estimates the alert level based on the first type of calculations and on that basis sends alert messages to its users; the second type of calculations is available within 30–40 min after the notification of the event but does not change the estimated alert level. The third type of calculations is performed to improve the initial estimations and to have a better understanding of the extent of the possible damage. The automatic alert level for the earthquake was given between Green and Orange Alert, which, in the logic of GDACS, means no or moderate need of international humanitarian assistance; however, the earthquake generated 3 to 9 m tsunami run-up along the southwestern coasts of the Pagai Islands, where 431 people died
Multitask Learning-Based Security Event Forecast Methods for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Hui He
2016-01-01
Full Text Available Wireless sensor networks have strong dynamics and uncertainty, including network topological changes, node disappearance or addition, and facing various threats. First, to strengthen the detection adaptability of wireless sensor networks to various security attacks, a region similarity multitask-based security event forecast method for wireless sensor networks is proposed. This method performs topology partitioning on a large-scale sensor network and calculates the similarity degree among regional subnetworks. The trend of unknown network security events can be predicted through multitask learning of the occurrence and transmission characteristics of known network security events. Second, in case of lacking regional data, the quantitative trend of unknown regional network security events can be calculated. This study introduces a sensor network security event forecast method named Prediction Network Security Incomplete Unmarked Data (PNSIUD method to forecast missing attack data in the target region according to the known partial data in similar regions. Experimental results indicate that for an unknown security event forecast the forecast accuracy and effects of the similarity forecast algorithm are better than those of single-task learning method. At the same time, the forecast accuracy of the PNSIUD method is better than that of the traditional support vector machine method.
Deterministic Echo State Networks Based Stock Price Forecasting
Directory of Open Access Journals (Sweden)
Jingpei Dan
2014-01-01
Full Text Available Echo state networks (ESNs, as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied in financial time series forecasting. Reservoir constructions in standard ESNs rely on trials and errors in real applications due to a series of randomized model building stages. A novel form of ESN with deterministically constructed reservoir is competitive with standard ESN by minimal complexity and possibility of optimizations for ESN specifications. In this paper, forecasting performances of deterministic ESNs are investigated in stock price prediction applications. The experiment results on two benchmark datasets (Shanghai Composite Index and S&P500 demonstrate that deterministic ESNs outperform standard ESN in both accuracy and efficiency, which indicate the prospect of deterministic ESNs for financial prediction.
A travel time forecasting model based on change-point detection method
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model for urban road traffic sensor data is proposed based on a change-point detection method. A first-order differential operation is used for preprocessing the actual loop data; a change-point detection algorithm is designed to classify the sequence of a large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for the adaptive change-point search on the travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
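A toy version of the pipeline's first two steps, under our own assumptions: a running-z-score rule on the first-order differences stands in for the paper's adaptive change-point search, and the travel times are synthetic.

```python
import numpy as np

def change_points(tt, window=12, z=3.0):
    """Flag change points where the first-order difference of the travel
    time series jumps beyond z running standard deviations."""
    d = np.diff(np.asarray(tt, dtype=float))   # first-order differential step
    cps = []
    for i in range(window, len(d)):
        mu, sd = d[i - window:i].mean(), d[i - window:i].std() + 1e-9
        if abs(d[i] - mu) > z * sd:
            cps.append(i + 1)                  # index in the original series
    return cps

tt = np.r_[np.full(60, 120.0), np.full(60, 180.0)] + np.random.randn(120) * 3
print(change_points(tt))   # ~[60]: the switch to a congested pattern
# Each detected segment would then be fitted with its own ARIMA model.
```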
Intuitionistic Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Reasoning
Directory of Open Access Journals (Sweden)
Ya’nan Wang
2016-01-01
Full Text Available Fuzzy set theory cannot describe data comprehensively, which has greatly limited the objectivity of fuzzy time series in uncertain data forecasting. In this regard, an intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to divide the universe of discourse into unequal intervals, and a more objective technique for ascertaining the membership and non-membership functions of the intuitionistic fuzzy set is proposed. On this basis, forecast rules based on intuitionistic fuzzy approximate reasoning are established. Finally, comparative experiments on the enrollments of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index are carried out. The results show that the new model has a clear advantage in improving forecast accuracy.
Stock prices forecasting based on wavelet neural networks with PSO
Wang Kai-Cheng; Yang Chi-I; Chang Kuei-Fang
2017-01-01
This research examines the forecasting performance of a wavelet neural network (WNN) model using published stock data obtained from the Financial Times Stock Exchange (FTSE) Taiwan Stock Exchange (TWSE) 50 index, also known as the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), hereinafter referred to as Taiwan 50. Our WNN model uses particle swarm optimization (PSO) to choose the appropriate initial network values for different companies. The findings come with two advantages. First...
Housing Value Forecasting Based on Machine Learning Methods
Directory of Open Access Journals (Sweden)
Jingyi Mu
2014-01-01
In the era of big data, many urgent issues in all walks of life can be solved via big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether or not to develop real estate in the corresponding regions. In this paper, support vector machine (SVM), least squares support vector machine (LSSVM), and partial least squares (PLS) methods are used to forecast home values, and these algorithms are compared according to the predicted results. Experiments show that although the data set exhibits serious nonlinearity, SVM and LSSVM are superior to PLS in dealing with the nonlinearity. SVM can find the global optimal solution and achieve the best forecasting effect because it solves a quadratic programming problem. The different computational efficiencies of the algorithms are also compared according to their computing times.
Stock price forecasting based on time series analysis
Chi, Wan Le
2018-05-01
Using historical stock price data to set up a sequence model that explains the intrinsic relationships in the data, the future stock price can be forecast. The models used are the autoregressive model, the moving-average model and the autoregressive moving-average model. A unit root test was applied to the original data sequence to judge whether it was stationary. A non-stationary original sequence needed further processing by first-order differencing, after which the stationarity of the differenced sequence was re-inspected. If it was still non-stationary, second-order differencing of the sequence was carried out. Autocorrelation and partial autocorrelation diagrams were used to estimate the parameters of the identified ARMA model, including the model coefficients and model order. Finally, the model was used to forecast the Shanghai Composite Index daily closing price with precision. Results showed that the non-stationary original data series was stationary after the second-order difference, and the forecast value of the Shanghai Composite Index daily closing price was close to the actual value, indicating that the ARMA model in the paper achieved a certain accuracy.
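The workflow this abstract describes (unit root test, repeated differencing, ARMA fitting) maps directly onto standard tooling; below is a minimal sketch using statsmodels, where the differencing loop, the assumed ARMA(1,1) order, and the toy series are illustrative rather than the paper's choices.

```python
# Minimal sketch of the test-difference-fit workflow described above
# (assumed orders; the paper identifies orders from ACF/PACF diagrams).
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

def difference_until_stationary(series, max_d=2, alpha=0.05):
    """Apply the ADF unit root test, differencing until stationary."""
    d = 0
    while d < max_d and adfuller(series)[1] > alpha:
        series = np.diff(series)
        d += 1
    return series, d

prices = np.cumsum(np.cumsum(np.random.default_rng(1).normal(size=300)))  # toy I(2) series
stationary, d = difference_until_stationary(prices)
model = ARIMA(prices, order=(1, d, 1)).fit()   # ARMA(1,1) on the d-times differenced data
print(d, model.forecast(steps=5))
```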
Seasonal Forecasting of Fire Weather Based on a New Global Fire Weather Database
Dowdy, Andrew J.; Field, Robert D.; Spessa, Allan C.
2016-01-01
Seasonal forecasting of fire weather is examined based on a recently produced global database of the Fire Weather Index (FWI) system beginning in 1980. Seasonal average values of the FWI are examined in relation to measures of the El Nino-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD). The results are used to examine seasonal forecasts of fire weather conditions throughout the world.
Short-Term Wind Power Forecasting Using the Enhanced Particle Swarm Optimization Based Hybrid Method
Wen-Yeau Chang
2013-01-01
High penetration of wind power in the electricity system poses many challenges to power system operators, mainly due to the unpredictability and variability of wind power generation. Although wind energy may not be dispatchable, an accurate forecasting method for wind speed and power generation can help power system operators reduce the risk of an unreliable electricity supply. This paper proposes an enhanced particle swarm optimization (EPSO) based hybrid forecasting method for short-term wi...
Cyclone track forecasting based on satellite images using artificial neural networks
Kovordanyi, Rita; Roy, Chandan
2009-01-01
Many places around the world are exposed to tropical cyclones and associated storm surges. In spite of massive efforts, a great number of people die each year as a result of cyclone events. To mitigate this damage, improved forecasting techniques must be developed. The technique presented here uses artificial neural networks to interpret NOAA-AVHRR satellite images. A multi-layer neural network, resembling the human visual system, was trained to forecast the movement of cyclones based on sate...
Sergey Krylov
2012-01-01
The article presents a new methodological approach to target-oriented forecasting of a company's cash flows based on analysis of its financial position. The approach is intended to be universal and presumes application of the following techniques developed by the author: a financial ratio values correction technique and a correcting cash flows technique. The financial ratio values correction technique serves to analyze and forecast the company's financial position, while the correcting cash flows technique i...
Lambda-Based Data Processing Architecture for Two-Level Load Forecasting in Residential Buildings
Directory of Open Access Journals (Sweden)
Gde Dharma Nugraha
2018-03-01
Building energy management systems (BEMS) have been used intensively to manage the electricity consumption of residential buildings more efficiently. However, the dynamic behavior of the occupants introduces uncertainty problems that affect the performance of the BEMS. To address this uncertainty problem, the BEMS may implement load forecasting as one of its modules. Load forecasting utilizes historical load data to compute model predictions for a specific time in the future. Recently, smart meters have been introduced to collect electricity consumption data. Smart meters capture not only aggregated data but also individual data, collected frequently and close to real time. Processing both types of smart meter data for load forecasting can enhance the performance of the BEMS when confronted with uncertainty problems. The collected smart meter data can be processed using a batch approach for short-term load forecasting, while the real-time smart meter data can be processed for very short-term load forecasting, which adjusts the short-term load forecast to adapt to the dynamic behavior of the occupants. This approach requires different data processing techniques for the aggregated and the individual smart meter data. In this paper, we propose a Lambda-based data processing architecture to process the different types of smart meter data and implement a two-level load forecasting approach, which combines short-term and very short-term load forecasting techniques on top of our proposed data processing architecture. The proposed approach is expected to enhance the BEMS to address the uncertainty problem while processing data in less time. Our experiments showed that the proposed approach improved accuracy by 7% compared to a typical BEMS with only one load forecasting technique, and had the lowest computation time when processing the smart meter data.
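A toy sketch of the two-level idea described above: a batch ("short-term") forecast built from historical daily profiles, corrected on the fly by a speed-layer ("very short-term") adjustment as real-time readings arrive. The function names and the simple damped correction are assumptions, not the paper's implementation.

```python
# Illustrative sketch of two-level load forecasting: a batch (short-term)
# model plus a real-time (very short-term) correction. Function names and
# the persistence-style correction are assumptions, not the paper's code.
import numpy as np

def batch_short_term_forecast(daily_history):
    """Batch layer: average profile over past days as next-day forecast."""
    return np.mean(daily_history, axis=0)          # shape: (48,) half-hourly slots

def very_short_term_adjust(forecast, recent_actual, slot):
    """Speed layer: shift the remaining forecast by the latest observed error."""
    error = recent_actual - forecast[slot]
    adjusted = forecast.copy()
    adjusted[slot + 1:] += 0.5 * error             # damped correction factor (assumed)
    return adjusted

history = np.random.default_rng(2).uniform(0.2, 2.0, size=(30, 48))
day_ahead = batch_short_term_forecast(history)
# at slot 10, a smart-meter reading arrives and the speed layer reacts:
print(very_short_term_adjust(day_ahead, recent_actual=1.8, slot=10)[:14])
```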
International Nuclear Information System (INIS)
Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted
2017-01-01
This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for multiscale decomposition of clear sky index K_c data, such as Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and one residual signal presenting different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model). The choice of forecasting method is adapted to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting from the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE error, the forecasting error obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid model, 14.06% with the EEMD-Hybrid model, and 7.86% with the WD-Hybrid model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.
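The decompose-forecast-recombine pattern described above can be sketched compactly; here a crude two-scale split (moving average plus residual) stands in for EMD/EEMD/wavelet decomposition, and each component receives its own autoregressive model. All window sizes and lags are assumptions.

```python
# Minimal decompose-forecast-recombine sketch. A crude two-scale split
# (moving average + residual) stands in for EMD/EEMD/wavelet decomposition;
# each component gets its own AR model, as in the hybrid scheme described above.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def decompose(signal, window=24):
    kernel = np.ones(window) / window
    trend = np.convolve(signal, kernel, mode="same")   # slow component
    return trend, signal - trend                       # fast residual component

def hybrid_forecast(signal, steps=1):
    total = np.zeros(steps)
    for comp in decompose(signal):                     # one model per time scale
        fit = AutoReg(comp, lags=24).fit()
        total += fit.predict(start=len(comp), end=len(comp) + steps - 1)
    return total                                       # recombined forecast

kc = 0.7 + 0.2 * np.sin(np.arange(1000) * 2 * np.pi / 24) \
     + np.random.default_rng(3).normal(0, 0.05, 1000)  # toy clear-sky index
print(hybrid_forecast(kc, steps=3))
```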
Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint
Energy Technology Data Exchange (ETDEWEB)
Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ding, Fei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-07-26
In the traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of load forecasting techniques can provide accurate predictions of the load power that will occur at future times and more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions over a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, it can provide information to the distribution system operator (DSO) to better operate the system reconfiguration and achieve optimal solutions. This paper therefore proposes a short-term load forecasting based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a support vector regression (SVR) based forecaster and parallel parameter optimization. The network reconfiguration problem is then solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.
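A minimal sketch of the forecasting half of this approach: an SVR forecaster on lagged load features with a parallelized hyperparameter grid search. The lag structure and parameter grid are illustrative assumptions.

```python
# Sketch of an SVR-based load forecaster with parallel hyperparameter search
# (scikit-learn; lag features and the parameter grid are illustrative choices).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

def make_lag_features(load, n_lags=24):
    X = np.column_stack([load[i:len(load) - n_lags + i] for i in range(n_lags)])
    y = load[n_lags:]
    return X, y

load = np.sin(np.arange(2000) * 2 * np.pi / 24) + \
       np.random.default_rng(4).normal(0, 0.1, 2000)   # toy hourly load
X, y = make_lag_features(load)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [0.01, 0.1], "epsilon": [0.01, 0.1]},
    cv=TimeSeriesSplit(n_splits=3),
    n_jobs=-1,                    # parallel parameter optimization
)
grid.fit(X, y)
print(grid.best_params_, grid.predict(X[-1:]))  # one-step-ahead forecast
```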
Jin, Sainan; Corradi, Valentina; Swanson, Norman
2015-01-01
Forecast accuracy is typically measured in terms of a given loss function. However, as a consequence of the use of misspecified models in multiple model comparisons, relative forecast rankings are loss function dependent. This paper addresses this issue by using a novel criterion for forecast evaluation which is based on the entire distribution of forecast errors. We introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we establish a ...
Directory of Open Access Journals (Sweden)
Mihaela Simionescu
2014-12-01
There are many types of econometric models used to predict the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, simultaneous equations model, dynamic model and log-linear model. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
Earthquake: Game-based learning for 21st century STEM education
Perkins, Abigail Christine
To play is to learn. A lack of empirical research within the game-based learning literature, however, has hindered educational stakeholders from making informed decisions about game-based learning for 21st century STEM education. In this study, I modified a research and development (R&D) process to create a collaborative-competitive educational board game illuminating elements of earthquake engineering. I oriented instruction- and game-design principles around 21st century science education to adapt the R&D process to develop the educational game, Earthquake. As part of the R&D, I evaluated Earthquake for empirical evidence to support the claim that game-play results in student gains in critical thinking, scientific argumentation, metacognitive abilities, and earthquake engineering content knowledge. I developed Earthquake with the aid of eight focus groups with varying levels of expertise in science education research, teaching, administration, and game design. After developing a functional prototype, I pilot-tested Earthquake with teacher-participants (n=14) who engaged in semi-structured interviews after their game-play. I analyzed the teacher interviews with constant comparison methodology. I used teachers' comments and feedback from content knowledge experts to integrate game modifications, implementing the results to improve Earthquake: I added player roles, simplified phrasing on cards, and produced an introductory video. I then administered the modified Earthquake game to two groups of high school student-participants (n = 6), who played twice. To seek evidence documenting support for my knowledge claim, I analyzed videotapes of students' game-play using a game-based learning checklist. My assessment of learning gains revealed increases in all categories of students' performance: critical thinking, metacognition, scientific argumentation, and earthquake engineering content knowledge acquisition. Players in both student groups improved mostly in critical thinking, having
Short-Term Load Forecasting-Based Automatic Distribution Network Reconfiguration
Energy Technology Data Exchange (ETDEWEB)
Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ding, Fei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-08-23
In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
Spatial Forecast of Landslides in Three Gorges Based On Spatial Data Mining
Directory of Open Access Journals (Sweden)
Xianmin Wang
2009-03-01
The Three Gorges is a region with a very high landslide distribution density and a concentrated population. Landslide disasters are frequent in Three Gorges, and the potential risk of landslides is tremendous. In this paper, focusing on Three Gorges, which has a complicated landform, spatial forecasting of landslides is studied by establishing 20 forecast factors (spectra, texture, vegetation coverage, water level of reservoir, slope structure, engineering rock group, elevation, slope, aspect, etc.). China-Brazil Earth Resources Satellite (CBERS) images were adopted, and a C4.5 decision tree was used to mine spatial landslide forecast criteria in Guojiaba Town (Zhigui County) in Three Gorges; based on this knowledge, intelligent spatial landslide forecasts were performed for Guojiaba Town. All landslides lie in the regions forecast as dangerous and unstable, so the forecast result is good. The method proposed in the paper is compared with seven other methods: IsoData, K-Means, Mahalanobis Distance, Maximum Likelihood, Minimum Distance, Parallelepiped and Information Content Model. The experimental results show that the method proposed in this paper has a high forecast precision, noticeably higher than that of the other seven methods.
Spatial forecast of landslides in three gorges based on spatial data mining.
Wang, Xianmin; Niu, Ruiqing
2009-01-01
The Three Gorges is a region with a very high landslide distribution density and a concentrated population. Landslide disasters are frequent in Three Gorges, and the potential risk of landslides is tremendous. In this paper, focusing on Three Gorges, which has a complicated landform, spatial forecasting of landslides is studied by establishing 20 forecast factors (spectra, texture, vegetation coverage, water level of reservoir, slope structure, engineering rock group, elevation, slope, aspect, etc.). China-Brazil Earth Resources Satellite (CBERS) images were adopted, and a C4.5 decision tree was used to mine spatial landslide forecast criteria in Guojiaba Town (Zhigui County) in Three Gorges; based on this knowledge, intelligent spatial landslide forecasts were performed for Guojiaba Town. All landslides lie in the regions forecast as dangerous and unstable, so the forecast result is good. The method proposed in the paper is compared with seven other methods: IsoData, K-Means, Mahalanobis Distance, Maximum Likelihood, Minimum Distance, Parallelepiped and Information Content Model. The experimental results show that the method proposed in this paper has a high forecast precision, noticeably higher than that of the other seven methods.
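As a compact stand-in for the rule-mining step both records describe, the sketch below trains scikit-learn's CART decision tree (used here in place of C4.5, which scikit-learn does not implement) on a few synthetic forecast factors; the factor set and labels are invented for illustration.

```python
# Decision-tree landslide susceptibility sketch. CART (sklearn) stands in for
# C4.5; the three factors and the synthetic labels are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(11)
n = 500
slope = rng.uniform(0, 60, n)            # degrees
veg = rng.uniform(0, 1, n)               # vegetation coverage fraction
water = rng.uniform(145, 175, n)         # reservoir water level (m)
X = np.column_stack([slope, veg, water])
y = ((slope > 35) & (veg < 0.4)).astype(int)   # toy "unstable" rule

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["slope", "veg_cover", "water_level"]))
```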
Frequency spectrum method-based stress analysis for oil pipelines in earthquake disaster areas.
Directory of Open Access Journals (Sweden)
Xiaonan Wu
When a long-distance oil pipeline crosses an earthquake disaster area, inertial force and strong ground motion can cause the pipeline stress to exceed the failure limit, resulting in bending and deformation failure. To date, researchers have performed limited safety analyses of oil pipelines in earthquake disaster areas that include stress analysis. Therefore, using the spectrum method and the theory of one-dimensional beam units, CAESAR II is used to perform a dynamic earthquake analysis for an oil pipeline in the XX earthquake disaster area. This software is used to determine whether the displacement and stress of the pipeline meet the standards when subjected to a strong earthquake. After the numerical analysis, the primary seismic action in the axial, longitudinal and horizontal displacement directions and the critical section of the pipeline can be located. Feasible project enhancement suggestions based on the analysis results are proposed. Designers can utilize this stress analysis method to perform an ultimate design for oil pipelines in earthquake disaster areas, thereby improving the safe operation of the pipeline.
Frequency spectrum method-based stress analysis for oil pipelines in earthquake disaster areas.
Wu, Xiaonan; Lu, Hongfang; Huang, Kun; Wu, Shijuan; Qiao, Weibiao
2015-01-01
When a long-distance oil pipeline crosses an earthquake disaster area, inertial force and strong ground motion can cause the pipeline stress to exceed the failure limit, resulting in bending and deformation failure. To date, researchers have performed limited safety analyses of oil pipelines in earthquake disaster areas that include stress analysis. Therefore, using the spectrum method and the theory of one-dimensional beam units, CAESAR II is used to perform a dynamic earthquake analysis for an oil pipeline in the XX earthquake disaster area. This software is used to determine whether the displacement and stress of the pipeline meet the standards when subjected to a strong earthquake. After the numerical analysis, the primary seismic action in the axial, longitudinal and horizontal displacement directions and the critical section of the pipeline can be located. Feasible project enhancement suggestions based on the analysis results are proposed. Designers can utilize this stress analysis method to perform an ultimate design for oil pipelines in earthquake disaster areas, thereby improving the safe operation of the pipeline.
Directory of Open Access Journals (Sweden)
Suryanita Reni
2017-01-01
Strong-motion earthquakes can cause building damage when the earthquake is not considered in the seismic design of the building. This study aims to predict the damage level of a building due to earthquakes using the Artificial Neural Network method. The building model is a reinforced concrete building with ten floors and a floor height of 3.6 m. The model building received earthquake loads based on nine earthquake time history records, each scaled to 0.5g, 0.75g, and 1.0g. The Artificial Neural Networks are designed in four architectural models using the MATLAB program. Model 1 used displacement, velocity, and acceleration as inputs; Model 2 used only displacement; Model 3 used only velocity; and Model 4 used only acceleration. The output of the neural networks is the damage level of the building, with the categories Safe (1), Immediate Occupancy (2), Life Safety (3), or Collapse Prevention (4). According to the results, the neural network models predict the damage level at rates between 85% and 95%. Therefore, one solution for analyzing structural responses and damage levels promptly and efficiently when an earthquake occurs is to use an Artificial Neural Network.
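A minimal sketch of the classification task described above, using scikit-learn's multilayer perceptron in place of the paper's MATLAB networks; the synthetic peak-response features and the damage-label rule are assumptions.

```python
# Illustrative sketch of the damage-level classifier described above, using
# scikit-learn's MLP in place of the paper's MATLAB networks; the synthetic
# features (peak displacement, velocity, acceleration) are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n = 400
X = rng.uniform([0.0, 0.0, 0.0], [0.5, 2.0, 10.0], size=(n, 3))  # disp (m), vel (m/s), acc (m/s^2)
severity = X[:, 0] / 0.5 + X[:, 1] / 2.0 + X[:, 2] / 10.0        # toy damage proxy
y = np.digitize(severity, [0.8, 1.5, 2.2]) + 1                   # labels 1..4 (Safe..Collapse Prevention)

clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
clf.fit(X, y)                         # "Model 1": all three response quantities as input
print(clf.predict([[0.4, 1.8, 9.0]])) # expected: a high damage level
```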
Multiscale seismicity analysis and forecasting: examples from the Western Pacific and Iceland
International Nuclear Information System (INIS)
Eberhard, A. J.
2014-01-01
Seismicity forecasting has three major challenges: (1) the development of models or algorithms to forecast seismicity, (2) the evaluation and testing of the forecasts and (3) the need for an appropriate software package. The goal of my thesis is to improve seismicity forecasting. I do this by contributing to a solution for all three challenges, using data from the western Pacific and from Iceland. The thesis is split into three chapters, each focusing on one of the challenges, and each chapter is (or will be) published in a peer-reviewed scientific journal. The first chapter (chapter 2 in this thesis) is about testing of seismic forecasts. The Collaboratory for the Study of Earthquake Predictability (CSEP, www.cseptesting.org; Jordan, 2006) has been conducting an earthquake forecast experiment in the western Pacific. The data and forecasts of this experiment serve as the example for this chapter. For the three participating statistical models, I analyze the first four years of this experiment. I use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes, and I apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. I also estimate the uncertainties of the test results resulting from uncertainties in earthquake location and seismic moment. The estimated uncertainties are relatively small and suggest that the evaluation metrics are relatively robust. In chapter 3 I focus on the forecast itself with an Operational Earthquake Forecasting (OEF) experiment – forecasting seismicity in near real time – for Iceland. The goal is to estimate the feasibility of OEF with state-of-the-art models. I use the period between 2007 and 2010 and a sparsely processed catalogue as the basis for a retrospective forecasting experiment with next-day forecasts. To better understand the effect of the updating cycle on the probability gain I also use a scheme in which the forecasts are
Multiscale seismicity analysis and forecasting: examples from the Western Pacific and Iceland
Energy Technology Data Exchange (ETDEWEB)
Eberhard, A. J.
2014-07-01
Seismicity forecasting has three major challenges: (1) the development of models or algorithms to forecast seismicity, (2) the evaluation and testing of the forecasts and (3) the need for an appropriate software package. The goal of my thesis is to improve seismicity forecasting. I do this by contributing to a solution for all three challenges, using data from the western Pacific and from Iceland. The thesis is split into three chapters, each focusing on one of the challenges, and each chapter is (or will be) published in a peer-reviewed scientific journal. The first chapter (chapter 2 in this thesis) is about testing of seismic forecasts. The Collaboratory for the Study of Earthquake Predictability (CSEP, www.cseptesting.org; Jordan, 2006) has been conducting an earthquake forecast experiment in the western Pacific. The data and forecasts of this experiment serve as the example for this chapter. For the three participating statistical models, I analyze the first four years of this experiment. I use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes, and I apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. I also estimate the uncertainties of the test results resulting from uncertainties in earthquake location and seismic moment. The estimated uncertainties are relatively small and suggest that the evaluation metrics are relatively robust. In chapter 3 I focus on the forecast itself with an Operational Earthquake Forecasting (OEF) experiment – forecasting seismicity in near real time – for Iceland. The goal is to estimate the feasibility of OEF with state-of-the-art models. I use the period between 2007 and 2010 and a sparsely processed catalogue as the basis for a retrospective forecasting experiment with next-day forecasts. To better understand the effect of the updating cycle on the probability gain I also use a scheme in which the forecasts are
Ozcep, T.; Ozcep, F.
2012-04-01
Natural disaster reduction focuses on the urgent need for prevention activities to reduce loss of life, damage to property, infrastructure and environment, and the social and economic disruption caused by natural hazards. One of the most important factors in reducing the potential damage of earthquakes is trained manpower. Understanding the causes of earthquakes and other natural phenomena (landslides, avalanches, floods, volcanoes, etc.) is a pre-condition for conscious behavior. The aim of the study is to analyze and investigate how earthquakes and other natural phenomena are perceived by students, the possible consequences of this perception, and its effects on reducing earthquake damage. One of the crucial questions is whether our education system is fear-based or curiosity-based. The damage caused by earthquakes has made them look like a subject of fear; indeed, because of these effects, earthquakes are perceived as scary phenomena. In the first stage of the project, the learning (or perception) levels of earthquakes and other natural disasters among primary school students are investigated with a survey. The aim of this survey is to determine whether the students take a fear-based or a curiosity-based approach to earthquakes and other natural events. In the second stage of the project, the data obtained from the survey are evaluated statistically. A questionnaire associated with earthquakes and natural disasters was applied to primary school students (approximately 700 pupils in total) to measure curiosity and/or fear levels. The questionnaire consists of 17 questions related to natural disasters, such as: "What is an earthquake?", "What is the power behind an earthquake?", "What is the mental response during an earthquake?", "Did we learn lessons from the earthquake's results?", "Are you afraid of earthquake
Response of base isolated structure during strong ground motions beyond design earthquakes
International Nuclear Information System (INIS)
Yabana, Shuichi; Ishida, Katsuhiko; Shiojiri, Hiroo
1991-01-01
In Japan, designs have been attempted for some base-isolated structures for fast breeder reactors (FBR). When a base-isolated structure is designed, the relative displacement of the isolators is generally limited so as to remain within their linear range during design earthquakes. But to estimate the safety margin of a base-isolated structure, its response up to failure during strong ground motions beyond the design earthquake must be obtained experimentally or analytically. The aim of this paper is to investigate the response of a base-isolated structure when the stiffness of the isolators hardens, and to simulate the response during strong ground motions beyond design earthquakes. The optimum characteristics of isolators, with which the margin of the structure is increased, are discussed. (author)
International Nuclear Information System (INIS)
Chen, Peng; Wu, Jian; Liu, Yaolin; Wang, Jing
2014-01-01
At present, the extraction of earthquake disaster information from remote sensing data relies on visual interpretation. However, this technique cannot effectively and quickly obtain precise and efficient information for earthquake relief and emergency management. Collapsed buildings in the town of Zipingpu after the Wenchuan earthquake were used as a case study to validate two rapid extraction methods for earthquake-collapsed building information, based on pixel-oriented and object-oriented theories. The pixel-oriented method is based on multi-layer regional segments that embody the core layers and segments of the object-oriented method. The key idea is to mask, layer by layer, all image information, including that on the collapsed buildings. Compared with traditional techniques, the pixel-oriented method is innovative because it allows considerably faster computer processing. As for the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. By analyzing the spectrum, texture, shape, location, and context of individual object classes in different layers, a fuzzy decision rule system was established for the extraction of earthquake-collapsed building information. We compared the two sets of results using three criteria: precision assessment, visual effect, and principle. Both methods can extract earthquake-collapsed building information quickly and accurately. The object-oriented method successfully overcomes the salt-and-pepper noise caused by the spectral diversity of high-resolution remote sensing data and solves the problems of "same object, different spectra" and "same spectrum, different objects". With an overall accuracy of 90.38%, the method achieves more scientific and accurate results than the pixel-oriented method (76.84%). The object-oriented image analysis method can be extensively applied in the extraction of earthquake disaster information based on high-resolution remote sensing
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted by independent component analysis from the historical closing prices and 39 technical variables. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features that improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model predicts better than the other two similar models. PMID:24971455
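The three-stage pipeline (ICA features, CCA fusion, SVM regression) can be sketched with scikit-learn as below; all dimensions, lags, and the synthetic data are assumptions rather than the paper's configuration.

```python
# Sketch of the feature-fusion pipeline described above (ICA features fused
# via CCA, then SVR for next-day price); dimensions and lags are assumptions.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

rng = np.random.default_rng(6)
prices = np.cumsum(rng.normal(0, 1, 600)) + 100
lagged = np.column_stack([prices[i:560 + i] for i in range(10)])  # price-history block
technical = rng.normal(size=(560, 12))       # stand-in for technical indicators
y = prices[10:570]                           # next-day closing price targets

f_price = FastICA(n_components=5, random_state=0).fit_transform(lagged)
f_tech = FastICA(n_components=5, random_state=0).fit_transform(technical)

cca = CCA(n_components=3)
z_price, z_tech = cca.fit_transform(f_price, f_tech)   # fused intrinsic features
fused = np.hstack([z_price, z_tech])

model = SVR().fit(fused, y)
print(model.predict(fused[-1:]))             # forecast from the latest fused features
```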
An EMD–SARIMA-Based Modeling Approach for Air Traffic Forecasting
Directory of Open Access Journals (Sweden)
Wei Nai
2017-12-01
The ever-increasing air traffic demand in China has brought huge pressure on the planning and management of, and investment in, air terminals as well as airline companies. In this context, accurate and adequate short-term air traffic forecasting is essential for the operations of those entities. With this problem in mind, a hybrid air traffic forecasting model based on empirical mode decomposition (EMD) and seasonal autoregressive integrated moving average (SARIMA) is proposed in this paper. The proposed model first decomposes the original time series into components, models each component with the SARIMA forecasting model, and then integrates all the component models to form the final combined forecast. Using the monthly air cargo and passenger flow data for the years 2006 to 2014 available on the official website of the Civil Aviation Administration of China (CAAC), the forecasting effectiveness of the proposed model is demonstrated, and a horizontal performance comparison with several other widely used forecasting models proves its advantage.
International Nuclear Information System (INIS)
Halepoto, I.A.; Uqaili, M.A.
2014-01-01
Nowadays, due to the power crisis, electricity demand forecasting is deemed an important area for socioeconomic development, and proper anticipation of the load is considered an essential step towards efficient power system operation, scheduling and planning. In this paper, we present STLF (Short Term Load Forecasting) using multiple regression techniques (i.e. linear, multiple linear, quadratic and exponential) by considering an hour-by-hour load model based on a specific targeted-day approach with temperature as a variant parameter. The proposed work forecasts the future load demand in correlation with linear and non-linear parameters (i.e. considering temperature in our case) through different regression approaches. The overall load forecasting error is 2.98%, which is very much acceptable. Among the proposed regression techniques, the quadratic regression technique performs better than the other techniques because it can optimally fit a broad range of functions and data sets. The work proposed in this paper will pave a path to effectively forecast the specific-day load with multiple variance factors in a way that optimal accuracy can be maintained. (author)
Short-Term Load Forecasting Based on the Analysis of User Electricity Behavior
Directory of Open Access Journals (Sweden)
Yuancheng Li
2016-11-01
The smart meter is an important part of the smart grid, and in order to take full advantage of smart meter data, this paper mines the electricity behaviors of smart meter users to improve the accuracy of load forecasting. First, the typical day loads of users are calculated separately according to different date types (ordinary workdays, days before holidays, holidays). Second, the similarity between user electricity behaviors is mined and the user electricity loads are clustered to classify users with similar behaviors into the same cluster. Finally, a load forecasting model based on the Online Sequential Extreme Learning Machine (OS-ELM) is applied to each cluster to conduct load forecasting, and the cluster forecasts are summed to obtain the system load. To demonstrate the validity of the proposed method, we performed simulation experiments on the MATLAB platform using smart meter data from the Ireland electric power corporation. The experimental results show that the proposed method is able to mine user electricity behaviors deeply, improve the accuracy of load forecasting through reasonable clustering of users, and reveal the relationship between forecasting accuracy and the number of clusters.
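A toy sketch of the cluster-then-forecast structure described above, with KMeans and a ridge autoregression standing in for the paper's similarity mining and OS-ELM forecaster; sizes and lags are illustrative.

```python
# Sketch of behavior-based clustering followed by per-cluster load forecasting.
# KMeans + a ridge autoregression stand in for the paper's clustering and
# OS-ELM forecaster; all sizes and lags here are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
n_users, T, lags = 50, 480, 24
profiles = rng.uniform(0.5, 1.5, (n_users, 1)) * \
           (1 + np.sin(np.arange(T) * 2 * np.pi / 24)) + rng.normal(0, 0.1, (n_users, T))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)

system_forecast = 0.0
for k in range(3):                           # one forecaster per behavior cluster
    cluster_load = profiles[labels == k].sum(axis=0)
    X = np.column_stack([cluster_load[i:T - lags + i] for i in range(lags)])
    y = cluster_load[lags:]
    model = Ridge().fit(X, y)
    system_forecast += model.predict(cluster_load[-lags:][None, :])[0]

print(system_forecast)                       # sum of cluster forecasts = system load
```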
International Nuclear Information System (INIS)
Chai, Soo H.; Lim, Joon S.
2016-01-01
This study presents a forecasting model of cyclical fluctuations of the economy based on the time delay coordinate embedding method. The model uses a neuro-fuzzy network called a neural network with weighted fuzzy membership functions (NEWFM). The time series of the leading composite index, preprocessed using the time delay coordinate embedding method, is used as input data to the NEWFM to forecast the business cycle. A comparative study is conducted using other methods based on the wavelet transform and Principal Component Analysis for performance comparison. The forecasting results are tested using a linear regression analysis to compare the approximation of the input data against the target class, gross domestic product (GDP). The chaos-based model captures nonlinear dynamics and interactions within the system, which the other two models ignore. The test results demonstrate that the chaos-based method significantly improved the prediction capability, thereby showing superior performance relative to the other methods.
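Time delay coordinate embedding, the preprocessing step named above, is easy to state precisely; the sketch below shows the construction, with the delay tau and embedding dimension m chosen arbitrarily rather than taken from the study.

```python
# Minimal sketch of time delay coordinate embedding: each observation is mapped
# to a vector of m delayed values, turning a scalar series into state vectors
# suitable as model inputs. Delay tau and dimension m are assumed here.
import numpy as np

def delay_embed(series, m=3, tau=2):
    """Return embedded vectors [x(t), x(t-tau), ..., x(t-(m-1)*tau)]."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[(m - 1 - j) * tau : (m - 1 - j) * tau + n]
                            for j in range(m)])

index = np.sin(np.arange(60) * 0.3)          # toy leading composite index
states = delay_embed(index, m=3, tau=2)
print(states.shape)                          # (56, 3): one state vector per time step
```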
Multicomponent ensemble models to forecast induced seismicity
Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.
2018-01-01
In recent years, human-induced seismicity has become an increasingly relevant topic due to its economic and social implications. Several models and approaches have been developed to explain the underlying physical processes or to forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether or not models are capable of reasonably accounting for the key physical governing processes. Moreover, operational forecast models are of great interest for on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels
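The abstract does not give its weighting scheme in detail; the following toy sketch shows one common way Bayesian-style ensemble weights can be derived from past per-model likelihood scores and combined into an event probability. All numbers are invented.

```python
# Minimal Bayesian-weighting sketch for combining forecast models: weights
# proportional to each model's exponentiated log-likelihood on past data
# (a simplification; the paper's exact weighting is not reproduced here).
import numpy as np

log_scores = np.array([-120.4, -118.9, -125.0])   # per-model log-likelihoods (toy)
w = np.exp(log_scores - log_scores.max())
w /= w.sum()                                      # normalized ensemble weights

model_rates = np.array([4.2, 5.1, 3.3])           # each model's forecast rate of M>=3 events
ensemble_rate = w @ model_rates
p_event = 1 - np.exp(-ensemble_rate)              # Poisson prob. of at least one event
print(w, ensemble_rate, p_event)
```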
Urban MEMS based seismic network for post-earthquakes rapid disaster assessment
D'Alessandro, Antonino; Luzio, Dario; D'Anna, Giuseppe
2014-05-01
worship. The recorded waveforms could be promptly used to determine ground-shaking parameters, such as peak ground acceleration/velocity/displacement and Arias and Housner intensity, all of which could be used to create, a few seconds after a strong earthquake, shaking maps at the urban scale. These shaking maps would make it possible to quickly identify the areas of the town center that suffered the greatest earthquake effects. When a strong seismic event occurs, the beginning of the ground motion observed at a site could be used to predict the ensuing ground motion at the same site and thus to realize a short-term earthquake early warning system. The data acquired after a moderate-magnitude earthquake would provide valuable information for detailed seismic microzonation of the area based on direct earthquake shaking observations rather than on model-based or indirect methods. In this work, we evaluate the feasibility and effectiveness of such a seismic network, taking into account technological, scientific and economic issues. For this purpose, we have simulated the creation of a MEMS-based urban seismic network in a medium-size city. For the selected town, taking into account the instrumental specifications, the array geometry and the environmental noise, we investigated the ability of the planned network to detect and measure earthquakes of different magnitudes generated from realistic nearby seismogenic sources.
Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2018-04-01
A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of machine learning known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
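In the spirit of the hybrid scheme described above, the sketch below feeds both the current state and an (intentionally imperfect) knowledge-based prediction into a small echo state network whose linear readout is trained on the true next state. The toy dynamics, reservoir size, and the flawed mechanistic model are all assumptions, not the paper's setup.

```python
# Minimal hybrid knowledge-based + reservoir-computing forecaster sketch.
import numpy as np

rng = np.random.default_rng(8)

def true_step(x):        return 3.9 * x * (1 - x)   # "real" system (logistic map)
def knowledge_model(x):  return 3.6 * x * (1 - x)   # imperfect mechanistic model

x = np.empty(2000); x[0] = 0.4                      # trajectory of the true system
for t in range(1999):
    x[t + 1] = true_step(x[t])

N = 200                                             # reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, (N, 2))               # input: [x_t, model prediction]
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius 0.9

def run_reservoir(inputs):
    states, r = [], np.zeros(N)
    for u in inputs:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

inputs = np.column_stack([x[:-1], knowledge_model(x[:-1])])  # hybrid input
R = run_reservoir(inputs)
A, y = R[:-1], x[1:-1]                              # train readout on all but last step
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)  # ridge regression

print(R[-1] @ W_out, x[-1])   # hybrid one-step prediction vs the held-out truth
```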
An improvement of the Earthworm Based Earthquake Alarm Reporting system in Taiwan
Chen, D. Y.; Hsiao, N. C.; Yih-Min, W.
2017-12-01
The Central Weather Bureau of Taiwan (CWB) operates the Earthworm Based Earthquake Alarm Reporting (eBEAR) system for the purpose of earthquake early warning (EEW). The system has been used since 2016 to report EEW messages to the general public through text messages to mobile phones and through television programs. For inland earthquakes the system provides accurate and fast warnings: the average epicenter error is about 5 km and the average processing time is about 15 seconds. The epicenter error is defined as the distance between the epicenter estimated by the EEW system and the epicenter estimated manually; the processing time is defined as the time difference between the occurrence of an earthquake and the time the system issues a warning. The CWB seismic network consists of about 200 seismic stations. In some areas of Taiwan the distance between adjacent seismic stations is about 10 km, which means that when an earthquake occurs the seismic P wave can propagate through 6 stations, the minimum number of stations required by the EEW system, within 20 km. If the data transmission latency is about 1 s, the P-wave velocity about 6 km/s, and a 3-s time window is used to estimate earthquake magnitude, then the processing time should be around 8 s. In practice, however, the average processing time is larger than this figure. Because outliers among the P-wave onset picks may exist at the beginning of an earthquake, the Geiger method used in the EEW system for earthquake location is not stable, and it usually takes more time to wait for enough good picks. In this study we used a grid search method to improve the estimation of earthquake location. The MAXEL algorithm (Sheen et al., 2015, 2016) was tested in the EEW system by simulating historical earthquakes that occurred in Taiwan. The results show that the processing time can be reduced and the location accuracy is acceptable for EEW purposes.
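A toy illustration of grid-search location, the general idea behind replacing Geiger's iterative method (the MAXEL algorithm itself is not reproduced here): scan candidate epicenters and keep the one that minimizes the spread of P-arrival residuals. Station layout, velocity, and picks are synthetic assumptions.

```python
# Toy grid-search epicenter estimate as an alternative to Geiger's method:
# scan a coarse grid and keep the point minimizing P-arrival residual spread.
import numpy as np

vp = 6.0                                           # km/s, assumed P velocity
stations = np.array([[0, 0], [30, 5], [10, 40], [45, 35], [25, 20]])  # km
true_src, t0 = np.array([22.0, 18.0]), 3.0
picks = t0 + np.linalg.norm(stations - true_src, axis=1) / vp  # synthetic picks

best, best_rms = None, np.inf
for gx in np.arange(0, 50, 0.5):
    for gy in np.arange(0, 50, 0.5):
        tt = np.linalg.norm(stations - [gx, gy], axis=1) / vp
        rms = np.std(picks - tt)   # origin time cancels out of the spread
        if rms < best_rms:
            best, best_rms = (gx, gy), rms
print(best)                        # close to the true epicenter (22, 18)
```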
Stock prices forecasting based on wavelet neural networks with PSO
Directory of Open Access Journals (Sweden)
Wang Kai-Cheng
2017-01-01
This research examines the forecasting performance of the wavelet neural network (WNN) model using published stock data obtained from the Financial Times Stock Exchange (FTSE) Taiwan Stock Exchange (TWSE) 50 index, also known as the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), hereinafter referred to as Taiwan 50. Our WNN model uses particle swarm optimization (PSO) to choose appropriate initial network values for different companies. The findings come with two advantages. First, the network initial values are automatically selected instead of being constant. Second, the threshold and training data percentage become constant values, because PSO assists with self-adjustment. We can achieve a success rate over 73% without the need to manually adjust parameters or create another mathematical model.
CAViaR-based forecast for oil price risk
International Nuclear Information System (INIS)
Huang, Dashan; Yu, Baimin; Fabozzi, Frank J.; Fukushima, Masao
2009-01-01
As a benchmark for measuring market risk, value-at-risk (VaR) reduces the risk associated with any kind of asset to just a number (an amount in terms of a currency), which can be well understood by regulators, board members, and other interested parties. This paper employs a new VaR approach due to Engle and Manganelli [Engle, R.F., Manganelli, S., 2004. CAViaR: Conditional Autoregressive Value at Risk by Regression Quantiles. Journal of Business and Economic Statistics 22, 367-381] to forecast oil price risk. In doing so, we provide two original contributions: we introduce a new exponentially weighted moving average CAViaR model, and we develop a mixed data regression model for multi-period VaR prediction. (author)
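For orientation, the CAViaR family referenced above can be illustrated by the symmetric absolute value specification of Engle and Manganelli (2004); the paper's new exponentially weighted moving average variant modifies this recursion, and its exact form is not given in the abstract:

\[ \mathrm{VaR}_t(\beta) = \beta_1 + \beta_2\,\mathrm{VaR}_{t-1}(\beta) + \beta_3\,\lvert r_{t-1} \rvert , \]

where \(r_{t-1}\) is the previous period's return; the autoregressive term \(\beta_2\,\mathrm{VaR}_{t-1}(\beta)\) makes the estimated quantile adapt smoothly over time rather than jump with every new observation.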
Dugar, Sumit; Smith, Paul; Parajuli, Binod; Khanal, Sonu; Brown, Sarah; Gautam, Dilip; Bhandari, Dinanath; Gurung, Gehendra; Shakya, Puja; Kharbuja, RamGopal; Uprety, Madhab
2017-04-01
Operationalising effective Flood Early Warning Systems (EWS) in developing countries like Nepal poses numerous challenges, with complex topography and geology, a sparse network of river and rainfall gauging stations, and diverse socio-economic conditions. Despite these challenges, simple real-time monitoring based EWSs have been in place for the past decade. A key constraint of these simple systems is the very limited lead time for response - as little as 2-3 hours, especially for rivers originating from steep mountainous catchments. Efforts to increase lead time for early warning are focusing on embedding forecasts into the existing early warning systems. In 2016, the Nepal Department of Hydrology and Meteorology (DHM) piloted an operational probabilistic flood forecasting model in major river basins across Nepal. This comprised a low-data approach to forecast water levels, developed jointly through a research/practitioner partnership between Lancaster University, WaterNumbers (UK) and the international NGO Practical Action. Using Data-Based Mechanistic Modelling (DBM) techniques, the model assimilated rainfall and water levels to generate localised hourly flood predictions, presented as probabilistic forecasts, increasing lead times from 2-3 hours to 7-8 hours. The Nepal DHM has simultaneously started utilizing forecasts from the Global Flood Awareness System (GLoFAS), which provides streamflow predictions at the global scale based upon distributed hydrological simulations using numerical ensemble weather forecasts from the ECMWF (European Centre for Medium-Range Weather Forecasts). The aforementioned global and local models have already affected the approach to early warning in Nepal, being operational during the 2016 monsoon in the West Rapti basin in western Nepal. On 24 July 2016, GLoFAS hydrological forecasts for the West Rapti indicated a sharp rise in river discharge above 1500 m3/sec (equivalent to the river warning level at 5 meters) with 53
ENSO-based probabilistic forecasts of March-May U.S. tornado and hail activity
Lepore, Chiara; Tippett, Michael K.; Allen, John T.
2017-09-01
Extended logistic regression is used to predict March-May severe convective storm (SCS) activity based on the preceding December-February (DJF) El Niño-Southern Oscillation (ENSO) state. The spatially resolved probabilistic forecasts are verified against U.S. tornado counts, hail events, and two environmental indices for severe convection. The cross-validated skill is positive for roughly a quarter of the U.S. Overall, indices are predicted with more skill than are storm reports, and hail events are predicted with more skill than tornado counts. Skill is higher in the cool phase of ENSO (La Niña like) when overall SCS activity is higher. SCS forecasts based on the predicted DJF ENSO state from coupled dynamical models initialized in October of the previous year extend the lead time with only a modest reduction in skill compared to forecasts based on the observed DJF ENSO state.
Exploring the applicability of future air quality predictions based on synoptic system forecasts
International Nuclear Information System (INIS)
Yuval; Broday, David M.; Alpert, Pinhas
2012-01-01
For a given emissions inventory, the general levels of air pollutants and the spatial distribution of their concentrations are determined by the physiochemical state of the atmosphere. Apart from the trivial seasonal and daily cycles, most of the variability is associated with the atmospheric synoptic scale. A simple methodology for assessing future levels of air pollutant concentrations based on synoptic forecasts is presented. At short time scales the methodology is comparable to, and slightly better than, persistence and seasonal forecasts at categorical classification of pollution levels. Its utility is shown for air quality studies at the long time scale of a changing climate scenario, where seasonality and persistence cannot be used. It is demonstrated that the air quality variability due to changes in pollution emissions can be expected to be much larger than that associated with the effects of climatic changes. - Highlights: ► A method for short- and long-term air quality forecasts is introduced. ► The method is based on prediction of synoptic systems. ► The method beats simple benchmarks in short-term forecasts. ► Assessment of future air pollution in a changing climate scenario is demonstrated. - Air quality in a changing climate scenario can be studied using air pollution predictions based on synoptic system forecasts.
GPS-based PWV for precipitation forecasting and its application to a typhoon event
Zhao, Qingzhi; Yao, Yibin; Yao, Wanqiang
2018-01-01
The temporal variability of precipitable water vapour (PWV) derived from Global Navigation Satellite System (GNSS) observations can be used to forecast precipitation events. A number of case studies of precipitation events were analysed in Zhejiang Province, and a forecasting method for precipitation events is proposed. The PWV time series retrieved from Global Positioning System (GPS) observations was processed using a least-squares fitting method to obtain the trend lines of PWV ascents and descents. The increment of PWV over a short time (two to six hours) and the PWV slope over a longer time (a few hours to more than ten hours) during the PWV ascending period are used as predictive factors with which to forecast precipitation events. The numerical results show that about 80%-90% of precipitation events and more than 90% of heavy rain events can be forecast two to six hours in advance based on the proposed method. Five-minute PWV data derived from GPS observations based on real-time precise point positioning (RT-PPP) were used for the typhoon event that passed over Zhejiang Province between 10 and 12 July 2015. A good result was acquired using the proposed method, and about 74% of precipitation events were predicted some ten to thirty minutes before their onset with a false alarm rate of 18%. This study shows that GPS-based PWV is promising for short-term and now-casting precipitation forecasting.
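The two predictors described above (short-time PWV increment and longer-window least-squares slope) can be sketched directly; the window lengths and thresholds below are pure assumptions, since the paper calibrates them regionally.

```python
# Illustrative sketch of the PWV-based rain flag: least-squares slope over a
# longer window plus a short-time increment, with assumed thresholds.
import numpy as np

def pwv_alarm(pwv, dt_hours=1/12, slope_win=72, inc_win=36,
              slope_thr=1.0, inc_thr=5.0):
    """pwv: 5-min samples (mm). True if both predictors exceed thresholds."""
    t = np.arange(slope_win) * dt_hours
    slope = np.polyfit(t, pwv[-slope_win:], 1)[0]        # mm per hour, ~6 h window
    increment = pwv[-1] - pwv[-inc_win]                  # mm rise over ~3 hours
    return slope > slope_thr and increment > inc_thr

samples = 40 + np.linspace(0, 12, 72) + np.random.default_rng(9).normal(0, 0.3, 72)
print(pwv_alarm(samples))                                # steadily rising PWV -> alarm
```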
Knowledge base about earthquakes as a tool to minimize strong events consequences
Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Alexander; Kijko, Andrzej
2017-04-01
The paper describes the structure and content of a knowledge base on the physical and socio-economic consequences of damaging earthquakes, which may be used for calibration of near-real-time loss assessment systems based on simulation models for shaking intensity, damage to buildings and casualty estimates. Such calibration makes it possible to compensate for some factors which influence the reliability of expected damage and loss assessment in "emergency" mode. The knowledge base contains descriptions of past earthquakes' consequences for the area under study. It also includes the distribution of the built environment and population at the time of event occurrence. Computer simulation of the events recorded in the knowledge base allows determination of sets of regional calibration coefficients, including ratings of seismological surveys, peculiarities of shaking intensity attenuation and changes in building stock and population distribution, in order to provide minimum error in damaging earthquake loss estimations in "emergency" mode. References: 1. Larionov, V., Frolova, N.: Peculiarities of seismic vulnerability estimations. In: Natural Hazards in Russia, volume 6: Natural Risks Assessment and Management, Publishing House "Kruk", Moscow, 120-131, 2003. 2. Frolova, N., Larionov, V., Bonnin, J.: Data Bases Used in Worldwide Systems for Earthquake Loss Estimation in Emergency Mode: Wenchuan Earthquake. In: Proc. TIEMS2010 Conference, Beijing, China, 2010. 3. Frolova, N.I., Larionov, V.I., Bonnin, J., Sushchev, S.P., Ugarov, A.N., Kozlov, M.A.: Loss Caused by Earthquakes: Rapid Estimates. Natural Hazards, vol. 84, ISSN 0921-030, DOI 10.1007/s11069-016-2653
Robust Building Energy Load Forecasting Using Physically-Based Kernel Models
Directory of Open Access Journals (Sweden)
Anand Krishnan Prakash
2018-04-01
Robust and accurate building energy load forecasting is important for helping building managers and utilities to plan, budget, and strategize energy resources in advance. With the recent widespread adoption of smart meters in buildings, a significant amount of building energy consumption data has become available. Many studies have developed physics-based white-box models and data-driven black-box models to predict building energy consumption; however, they require extensive prior knowledge about the building system, need a large set of training data, or lack robustness to different forecasting scenarios. In this paper, we introduce a new building energy forecasting method based on Gaussian Process Regression (GPR) that incorporates physical insights about load data characteristics to improve accuracy while reducing training requirements. GPR is a non-parametric regression method that models the data as a joint Gaussian distribution with mean and covariance functions and forecasts using Bayesian updating. We model the covariance function of the GPR to reflect the data patterns in different forecasting horizon scenarios, as prior knowledge. Our method takes advantage of the modeling flexibility and computational efficiency of GPR while benefiting from the physical insights to further improve training efficiency and accuracy. We evaluate our method with three field datasets from two university campuses (Carnegie Mellon University and Stanford University) for both short- and long-term load forecasting. The results show that our method performs more accurately than other state-of-the-art forecasting models (up to 2.95 times smaller prediction error), especially when the training dataset is small.
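A minimal sketch of the kind of covariance modeling the abstract describes, assuming scikit-learn and a daily periodic kernel as the "physical insight"; the kernel choice, data, and parameters are illustrative, not the authors':

```python
# Illustrative sketch (not the authors' code): Gaussian Process Regression for
# building load, with a covariance function that encodes a physical insight,
# namely a daily periodic pattern plus a smooth drift, as prior knowledge.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(0, 72, 1.0).reshape(-1, 1)            # 3 days, hourly samples
load = 50 + 10 * np.sin(2 * np.pi * t.ravel() / 24) + rng.normal(0, 1, 72)

# Periodic kernel (24 h cycle) * RBF (slow drift) + white noise.
kernel = ExpSineSquared(length_scale=1.0, periodicity=24.0) \
         * RBF(length_scale=48.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(t[:48], load[:48])                          # train on first 2 days

mean, std = gpr.predict(t[48:], return_std=True)    # forecast day 3
print(f"first forecast hour: {mean[0]:.1f} +/- {2 * std[0]:.1f} kW")
```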
Psychological distress among Bam earthquake survivors in Iran: a population-based study.
Montazeri, Ali; Baradaran, Hamid; Omidvari, Sepideh; Azin, Seyed Ali; Ebadi, Mehdi; Garmaroudi, Gholamreza; Harirchi, Amir Mahmood; Shariati, Mohammad
2005-01-11
An earthquake measuring 6.3 on the Richter scale struck the city of Bam in Iran on 26 December 2003 at 5:26 a.m. It was devastating, leaving over 40,000 dead and around 30,000 injured. The profound tragedy of thousands killed has caused emotional and psychological trauma for the tens of thousands who survived. A study was carried out to assess psychological distress among Bam earthquake survivors and the factors associated with severe distress in those who survived the tragedy. This was a population-based study measuring psychological distress among the survivors of the Bam earthquake in Iran. Using a multi-stage stratified sampling method, a random sample of individuals aged 15 years and over living in Bam were interviewed. Psychological distress was measured using the 12-item General Health Questionnaire (GHQ-12). In all, 916 survivors were interviewed. The mean age of the respondents was 32.9 years (SD = 12.4); most were male (53%), married (66%) and had secondary school education (50%). Forty-one percent reported losing 3 to 5 members of their family in the earthquake. The findings showed that 58% of the respondents suffered from severe psychological distress as measured by the GHQ-12, three times the level reported for the general population. There were significant differences between sub-groups of the study sample with regard to their psychological distress. The results of the logistic regression analysis also indicated that female gender, lower education, unemployment, and loss of family members were associated with severe psychological distress among earthquake victims. The study findings indicated that the amount of psychological distress among earthquake survivors was high and there is an urgent need to deliver mental health care to disaster victims in local medical settings; to reduce negative health impacts of the earthquake, adequate psychological counseling is needed for those who
International Nuclear Information System (INIS)
Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu
2014-01-01
Highlights: • Impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the associated rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated with data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) clustering by using the k-means clustering approach; (II) employing the Apriori algorithm to discover the association rules; (III) forecasting the wind speed according to the chaotic time series forecasting model; and (IV) correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm are powerful in correcting forecasted wind speed values when the forecasted values do not match the classification given by the association rules
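The four-step strategy can be sketched as follows; for brevity the Apriori step is replaced here by a direct count of one-antecedent rules, and the chaotic forecasting model by persistence, so this is a loose illustration rather than the paper's method:

```python
# Hedged sketch of the four-step strategy. The Apriori step is simplified to a
# direct count of one-antecedent rules (meteorological class -> wind-speed
# class); a full implementation could use an Apriori library. All thresholds
# and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from collections import Counter

rng = np.random.default_rng(1)
temp = rng.normal(15, 5, 300)                      # meteorological factor
wind = 0.3 * temp + rng.normal(5, 1, 300)          # wind speed (coupled)

# (I) cluster each variable into discrete classes
temp_cls = KMeans(n_clusters=3, n_init=10).fit_predict(temp.reshape(-1, 1))
wind_cls = KMeans(n_clusters=3, n_init=10).fit_predict(wind.reshape(-1, 1))

# (II) discover association rules: temp-class -> most frequent wind-class
rules = {}
for tc in range(3):
    counts = Counter(wind_cls[temp_cls == tc])
    cls, n = counts.most_common(1)[0]
    if n / max(1, (temp_cls == tc).sum()) > 0.5:   # confidence threshold
        rules[tc] = cls

# (III) a placeholder forecast (persistence stands in for the chaotic model)
forecast = wind[-1]

# (IV) correct the forecast if it contradicts the rule for today's weather
tc_today = temp_cls[-1]
if tc_today in rules and wind_cls[-1] != rules[tc_today]:
    target = wind[wind_cls == rules[tc_today]].mean()
    forecast = 0.5 * forecast + 0.5 * target       # pull toward rule's class
print(f"corrected forecast: {forecast:.2f} m/s")
```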
West, W. L., III (Principal Investigator)
1981-01-01
The content, format, and storage of data bases developed for the Foreign Commodity Production Forecasting project and used to produce normal crop calendars are described. In addition, the data bases may be used for agricultural meteorology, modeling of stage sequences and planting dates, and as indicators of possible drought and famine.
Forecast Based Financing for Managing Weather and Climate Risks to Reduce Potential Disaster Impacts
Arrighi, J.
2017-12-01
There is a critical window of time in which to reduce the potential impacts of a disaster, after a forecast for heightened risk is issued and before an extreme event occurs. The concept of Forecast-based Financing focuses on this window of opportunity. Through advanced preparation during system set-up, tailored methodologies are used to 1) analyze a range of potential extreme event forecasts, 2) identify emergency preparedness measures that can be taken when factoring in forecast lead time and inherent uncertainty, and 3) develop standard operating procedures that are agreed on and tied to guaranteed funding sources to facilitate emergency measures led by the Red Cross or government actors when preparedness measures are triggered. This presentation gives a broad overview of the current state of theory and approaches used in developing forecast-based financing systems, with a specific focus on hydrologic events; case studies of successes and challenges in the various contexts where this approach is being piloted; and what remains to be explored and developed from a research perspective as the application of this approach continues to expand.
Olive Actual "on Year" Yield Forecast Tool Based on the Tree Canopy Geometry Using UAS Imagery.
Sola-Guirado, Rafael R; Castillo-Ruiz, Francisco J; Jiménez-Jiménez, Francisco; Blanco-Roldan, Gregorio L; Castro-Garcia, Sergio; Gil-Ribes, Jesus A
2017-07-30
Olive is notably important in the countries of the Mediterranean basin, and its profitability depends on several factors such as actual yield, production cost and product price. Actual "on year" Yield (AY) is the production (kg tree⁻¹) in "on years", and this research relates it to geometrical parameters of the tree canopy. A regression equation to forecast AY from manually measured canopy volume was determined using data acquired from different orchard categories and cultivars during different harvesting seasons in southern Spain. Orthoimages were acquired with unmanned aerial systems (UAS), and individual crown areas were calculated from them for relating to canopy volume and AY. Yield levels did not vary between orchard categories; however, they did differ between irrigated orchards (7000-17,000 kg ha⁻¹) and rainfed ones (4000-7000 kg ha⁻¹). Manual canopy volume was then related to the individual crown area of the trees calculated from the UAS orthoimages. Finally, AY was forecasted using both manual canopy volume and individual tree crown area as the main factors for olive productivity. Forecasting AY from individual crown area alone yields a simple and cheap forecast tool for a wide range of olive orchards. The acquired information was then introduced into a thematic map describing spatial AY variability obtained from orthoimage analysis, which may be a powerful tool for farmers, insurance systems, market forecasts, or for detecting agronomic problems.
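A toy version of the final regression step, assuming synthetic crown areas and yields; the fitted coefficients stand in for the paper's calibrated equation:

```python
# Toy sketch of the forecast tool's final step: a least-squares regression of
# "on year" yield (kg per tree) on individual crown area (m^2) measured from
# UAS orthoimages. The synthetic data and fitted coefficients are illustrative;
# the paper's calibrated equation is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
crown_area = rng.uniform(5, 40, 60)                  # m^2, from orthoimages
ay = 2.5 * crown_area + rng.normal(0, 8, 60)         # kg per tree

slope, intercept = np.polyfit(crown_area, ay, 1)
area = 20.0                                          # m^2, example tree
print(f"AY = {slope:.2f} * area + {intercept:.2f}; "
      f"{area:.0f} m^2 tree -> {slope * area + intercept:.0f} kg")
```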
A system-theory-based model for monthly river runoff forecasting: model calibration and optimization
Directory of Open Access Journals (Sweden)
Wu Jianhua
2014-03-01
River runoff is not only a crucial part of the global water cycle, but also an important source of hydropower and an essential element of the water balance. This study presents a system-theory-based model for river runoff forecasting, taking the Hailiutu River as a case study. The forecasting model, designed for the Hailiutu watershed, was calibrated and verified with long-term precipitation observations and groundwater exploitation data from the study area. Additionally, frequency analysis, used as an optimization technique, was applied to improve prediction accuracy. Following model optimization, the overall relative prediction errors are below 10%. The system-theory-based prediction model is applicable to river runoff forecasting, and following optimization by frequency analysis, the prediction error is acceptable.
Sojda, Richard S.; Towler, Erin; Roberts, Mike; Rajagopalan, Balaji
2013-01-01
Despite the influence of hydroclimate on river ecosystems, most efforts to date have focused on using climate information to predict streamflow for water supply. However, as water demands intensify and river systems are increasingly stressed, research is needed to explicitly integrate climate into streamflow forecasts that are relevant to river ecosystem management. To this end, we present a five-step risk-based framework: (1) define risk tolerance, (2) develop a streamflow forecast model, (3) generate climate forecast ensembles, (4) estimate streamflow ensembles and associated risk, and (5) manage for climate risk. The framework is successfully demonstrated for an unregulated watershed in southwest Montana, where the combination of recent drought and water withdrawals has made it challenging to maintain flows needed for healthy fisheries. We put forth a generalized linear modeling (GLM) approach to develop a suite of tools that skillfully model decision-relevant low flow characteristics in terms of climate predictors. Probabilistic precipitation forecasts are used in conjunction with the GLMs, resulting in season-ahead prediction ensembles that provide the full risk profile. These tools are embedded in an end-to-end risk management framework that directly supports proactive fish conservation efforts. Results show that the use of forecasts can be beneficial to planning, especially in wet years, but historical precipitation forecasts are quite conservative (i.e., not very "sharp"). Synthetic forecasts show that a modest "sharpening" can strongly impact risk and improve skill. We emphasize that use in management depends on defining relevant environmental flows and risk tolerance, requiring local stakeholder involvement.
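Step (2) might look like the following sketch, assuming statsmodels and a Gamma GLM with log link as one plausible choice for strictly positive low flows; the predictors and data are synthetic placeholders:

```python
# Hedged sketch of step (2): a generalized linear model relating a decision-
# relevant low-flow statistic to season-ahead climate predictors, followed by
# an ensemble risk estimate as in step (4). All data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
precip = rng.normal(0, 1, 40)             # standardized seasonal precipitation
snowpack = rng.normal(0, 1, 40)           # standardized April 1 snowpack
low_flow = np.exp(1.5 + 0.4 * precip + 0.3 * snowpack + rng.normal(0, 0.2, 40))

X = sm.add_constant(np.column_stack([precip, snowpack]))
glm = sm.GLM(low_flow, X, family=sm.families.Gamma(sm.families.links.Log()))
fit = glm.fit()

# Risk step: forecast ensemble from sampled climate predictors.
ens = fit.predict(sm.add_constant(rng.normal(0, 1, (500, 2)), has_constant='add'))
threshold = 3.0                           # example environmental-flow threshold
print(f"P(low flow < {threshold}) = {(ens < threshold).mean():.2f}")
```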
International Nuclear Information System (INIS)
Piltan, Mehdi; Shiri, Hiva; Ghaderi, S.F.
2012-01-01
Highlights: ► Investigating different fitness functions for evolutionary algorithms in energy forecasting. ► Energy forecasting of the Iranian metal industry from value added, energy prices, investment and employees. ► Using a real-coded instead of a binary-coded genetic algorithm decreases energy forecasting error. - Abstract: Developing energy-forecasting models is known as one of the most important steps in long-term planning. In order to achieve sustainable energy supply toward economic development and social welfare, precise forecasting models are required. The application of artificial intelligence models to the estimation of complex economic and social functions has grown considerably in recent research. In this paper, energy consumption in the industrial sector, one of the critical sectors of energy consumption, is investigated. Two linear and three nonlinear functions are used to forecast and analyze energy in the Iranian metal industry; Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) are applied to obtain the parameters of the models. The Real-Coded Genetic Algorithm (RCGA), developed on real numbers, is introduced as a new approach in the field of energy forecasting. In the proposed model, electricity consumption is considered a function of variables such as electricity tariff, manufacturing value added, prevailing fuel prices, the number of employees, the investment in equipment, and consumption in previous years. Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Deviation (MAD) and Mean Absolute Percent Error (MAPE) are the four functions used as fitness functions in the evolutionary algorithms. The results show that the logarithmic nonlinear model using the PSO algorithm, with an error of 1.91 per cent, performs best. Furthermore, the prediction of electricity consumption in industrial sector of Turkey and also Turkish industrial sector
Short-term and long-term earthquake occurrence models for Italy: ETES, ERS and LTST
Directory of Open Access Journals (Sweden)
Maura Murru
2010-11-01
This study describes three earthquake occurrence models as applied to the whole Italian territory, to assess the occurrence probabilities of future (M ≥ 5.0) earthquakes: two short-term (24 hour) models, and one long-term (5 and 10 years) model. The first model for short-term forecasts is a purely stochastic epidemic-type earthquake sequence (ETES) model. The second short-term model is an epidemic rate-state (ERS) forecast based on a model that is physically constrained by applying the Dieterich rate-state constitutive law to earthquake clustering. The third forecast is based on a long-term stress transfer (LTST) model that considers the perturbations of earthquake probability for interacting faults by static Coulomb stress changes. These models have been submitted to the Collaboratory for the Study of Earthquake Predictability (CSEP) for forecast testing for Italy (ETH Zurich), and they were locked down to test their validity on real data in a future setting starting from August 1, 2009.
Remote-sensing based approach to forecast habitat quality under climate change scenarios.
Directory of Open Access Journals (Sweden)
Juan M Requena-Mullor
As climate change is expected to have a significant impact on species distributions, there is an urgent challenge to provide reliable information to guide conservation biodiversity policies. In addressing this challenge, we propose a remote-sensing-based approach to forecast future habitat quality for the European badger, a species that is not abundant and is at risk of local extinction in the arid environments of southeastern Spain, by incorporating environmental variables related to ecosystem functioning and correlated with climate and land use. Using ensemble prediction methods, we designed global spatial distribution models for the distribution range of the badger using presence-only data and climate variables. Then, we constructed regional models for an arid region in southeastern Spain using EVI (Enhanced Vegetation Index)-derived variables and weighting the pseudo-absences with the global model projections applied to this region. Finally, we forecast the badger's potential spatial distribution in the period 2071-2099 based on IPCC scenarios, incorporating the uncertainty derived from the predicted values of the EVI-derived variables. By including remotely sensed descriptors of the temporal dynamics and spatial patterns of ecosystem functioning in spatial distribution models, the results suggest a less favourable future for European badgers than forecasts that exclude them. In addition, the change in the spatial pattern of habitat suitability may be greater than when forecasts are based on climate variables alone. Since the validity of future forecasts based only on climate variables is currently questioned, conservation policies supported by such information could have a biased vision and overestimate or underestimate the potential changes in species distribution derived from climate change. The incorporation of ecosystem functional attributes derived from remote sensing in the modeling of future forecast may contribute to the improvement of the
What Can We Learn from a Simple Physics-Based Earthquake Simulator?
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2018-03-01
Physics-based earthquake simulators are becoming a popular tool for investigating the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the simulator's outcomes; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework, and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
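A toy version of the simulator's three ingredients (tectonic loading, a failure law on each fault, and Coulomb-like stress transfer), with a `noise` knob for the stochastic recurrence discussed above; all parameters are invented for illustration:

```python
# A toy analogue of the simulator's three components (not the authors' code):
# faults loaded at constant tectonic rate, rupture when stress reaches
# strength, and instantaneous Coulomb-like stress transfer to coupled faults.
# The `noise` parameter adds a small stochasticity to the post-event reset.
import numpy as np

rng = np.random.default_rng(4)
n_faults, strength = 10, 1.0
load_rate = rng.uniform(0.01, 0.03, n_faults)       # per time step
coupling = 0.05 * rng.random((n_faults, n_faults))  # Coulomb transfer matrix
np.fill_diagonal(coupling, 0.0)

def run(noise=0.0, steps=20000):
    stress = rng.uniform(0, strength, n_faults)
    catalog = []                                     # (time, fault) pairs
    for t in range(steps):
        stress += load_rate
        failing = stress >= strength
        while failing.any():
            for i in np.where(failing)[0]:
                catalog.append((t, i))
                stress[i] = rng.normal(0, noise)     # reset (noisy if noise>0)
                stress += coupling[:, i]             # transfer to other faults
            failing = stress >= strength             # cascades may follow
    return catalog

events = run(noise=0.05)
print(f"{len(events)} events; first five: {events[:5]}")
```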
Crime Forecasting System (An exploratory web-based approach)
Directory of Open Access Journals (Sweden)
Yaseen Ahmed Meenai
2011-08-01
With the continuous rise in crime in some big cities of the world, like Karachi, and the increasing complexity of these crimes, the difficulties law enforcement agencies face in tracking down and apprehending culprits have increased manifold. To help cut the crime rate, a Crime Forecasting System (CFS) can be used, which draws on historical information maintained by the local police to help predict crime patterns, supported by a large, self-updating database. The system operates to prevent crime, helps in apprehending criminals, and reduces disorder. It is also vital in helping law enforcers form a proactive approach by identifying early warning signs, taking timely and necessary actions, and eventually helping to stop crime before it actually happens. It is also useful for maintaining an up-to-date database of criminal suspects, including information on arrest records, communications with the police department, associations with other known suspects, and membership in gangs or activist groups. After exploratory analysis of the online data acquired from the victims of these crimes, a broad picture of the scenario can be obtained. The degree of vulnerability of an area at a particular moment can be highlighted in different colors aided by Google Maps. Some statistical diagrams have also been incorporated. The future of the CFS can be seen as an information engine for the analysis, study and prediction of crimes.
Visualizing Confidence in Cluster-Based Ensemble Weather Forecast Analyses.
Kumpf, Alexander; Tost, Bianca; Baumgart, Marlene; Riemer, Michael; Westermann, Rudiger; Rautenhaus, Marc
2018-01-01
In meteorology, cluster analysis is frequently used to determine representative trends in ensemble weather predictions in a selected spatio-temporal region, e.g., to reduce a set of ensemble members to simplify and improve their analysis. Identified clusters (i.e., groups of similar members), however, can be very sensitive to small changes of the selected region, so that clustering results can be misleading and bias subsequent analyses. In this article, we, a team of visualization scientists and meteorologists, deliver visual analytics solutions to analyze the sensitivity of clustering results with respect to changes of a selected region. We propose an interactive visual interface that enables simultaneous visualization of (a) the variation in composition of identified clusters (i.e., their robustness), (b) the variability in cluster membership for individual ensemble members, and (c) the uncertainty in the spatial locations of identified trends. We demonstrate that our solution shows meteorologists how representative a clustering result is, and with respect to which changes in the selected region it becomes unstable. Furthermore, our solution helps to identify those ensemble members which stably belong to a given cluster and can thus be considered similar. In a real-world application case we show how our approach is used to analyze the clustering behavior of different regions in a forecast of "Tropical Cyclone Karl", guiding the user towards the cluster robustness information required for subsequent ensemble analysis.
Reliable selection of earthquake ground motions for performance-based design
DEFF Research Database (Denmark)
Katsanos, Evangelos; Sextos, A.G.
2016-01-01
A decision support process is presented to accommodate the selection and scaling of earthquake motions as required for time-domain analysis of structures. Prequalified code-compatible suites of seismic motions are provided through a multi-criterion approach to satisfy prescribed reduced variability … of the method, by being subjected to numerous suites of motions that were highly ranked according to both the proposed approach (δsv-sc) and the conventional index (δconv), already used by most existing code-based earthquake record selection and scaling procedures. The findings reveal the superiority
International Nuclear Information System (INIS)
Joe, Yang Hee; Cho, Sung Gook
2003-01-01
This paper briefly introduces an improved method for evaluating the seismic fragilities of components of nuclear power plants in Korea. The engineering characteristics of small-magnitude earthquake spectra recorded on the Korean peninsula during the last several years are also discussed. To evaluate the effects of the recorded earthquakes on the seismic fragilities of Korean nuclear power plant structures, several comparative case studies have been performed. The results show that seismic fragility analysis based on Newmark's spectra might over-estimate the seismic capacities of Korean facilities. (author)
Dou, Aixia; Wang, Xiaoqing; Ding, Xiang; Du, Zecheng
2010-11-01
Based on a study of enhancement methods for remote sensing images obtained after several earthquakes, an optimized image enhancement model was designed, implemented by combining different single methods. The patterns of elementary model units and the combined types of the model were defined. On top of the enhancement model database, the combinatorial model algorithm was implemented in C++. The combined model was tested by processing the aerial remote sensing images obtained after the 1976 Tangshan earthquake. The tests showed that the definition and implementation of the combined enhancement model can efficiently improve the capability and flexibility of image enhancement algorithms.
Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian
2017-01-01
We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long-lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long-lived aftershock sequence can account for these observations.
Fuzzy time-series based on Fibonacci sequence for stock price forecasting
Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia
2007-07-01
Time-series models have been utilized to make reasonably accurate predictions in areas such as stock price movements, academic enrollments, and weather. To improve the forecasting performance of fuzzy time-series models, this paper proposes a new model that incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model, and the weighted method of Yu's model. The paper employs a 5-year period of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. Comparing our forecasting performance with that of Chen's model (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's model (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's model (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491), we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.
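In the spirit of the proposed model (not a reproduction of it), the sketch below fuzzifies a price series into intervals, collects the fuzzy logical relationships of the current state, and weights their successors with Fibonacci numbers, the most recent relationship receiving the largest weight:

```python
# Compact sketch of a Fibonacci-weighted fuzzy time-series forecast. The
# interval count, data, and weighting details are illustrative assumptions.
import numpy as np

def fib_weights(k):
    w = [1, 1]
    while len(w) < k:
        w.append(w[-1] + w[-2])
    return np.array(w[:k], dtype=float)

prices = np.array([51, 53, 52, 55, 58, 57, 59, 62, 61, 63, 66, 65], float)
n_iv = 5
edges = np.linspace(prices.min(), prices.max(), n_iv + 1)
mids = (edges[:-1] + edges[1:]) / 2
states = np.clip(np.digitize(prices, edges) - 1, 0, n_iv - 1)   # fuzzify

# Fuzzy logical relationships: successors observed after the current state,
# kept in chronological order so the most recent gets the largest weight.
current = states[-1]
succ = [states[i + 1] for i in range(len(states) - 1) if states[i] == current]
if succ:
    w = fib_weights(len(succ))
    w /= w.sum()                              # normalize Fibonacci weights
    forecast = float(np.dot(w, mids[succ]))   # weighted interval midpoints
else:
    forecast = mids[current]                  # no relationship: stay put
print(f"next-step forecast: {forecast:.2f}")
```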
Forecasting the magnitude and onset of El Niño based on climate network
Meng, Jun; Fan, Jingfang; Ashkenazy, Yosef; Bunde, Armin; Havlin, Shlomo
2018-04-01
El Niño is probably the most influential climate phenomenon on inter-annual time scales. It affects the global climate system, is associated with natural disasters, and has serious consequences in many aspects of human life. However, forecasts of the onset and, in particular, the magnitude of El Niño are still not accurate enough more than about half a year ahead. Here, we introduce a new forecasting index based on climate network links representing the similarity of low-frequency temporal temperature anomaly variations between different sites in the Niño 3.4 region. We find that significant upward trends in our index forecast the onset of El Niño approximately 1 year ahead, and the highest peak in our index since the end of the last El Niño forecasts the magnitude of the following event. We study the forecasting capability of the proposed index on several datasets, including ERA-Interim, NCEP Reanalysis I, PCMDI-AMIP 1.1.3 and ERSST.v5.
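A minimal sketch of what such a link-based index could look like: links are defined here as maximal lagged cross-correlations of temperature anomalies between site pairs, and the mean link strength serves as the index; grid size, lag range, and data are illustrative assumptions:

```python
# Minimal sketch of the idea behind the index: climate-network "links" are
# similarities (here, lagged cross-correlations) of temperature anomalies
# between pairs of sites, and an upward trend in mean link strength would be
# read as a warning. All settings and data are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_days, max_lag = 6, 365, 30
common = np.cumsum(rng.normal(0, 0.1, n_days))         # shared low-freq signal
anom = common + rng.normal(0, 0.5, (n_sites, n_days))  # site anomalies

def link_strength(x, y, max_lag):
    """Max |cross-correlation| over lags, one possible link definition."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    cc = [np.mean(x[lag:] * y[:len(y) - lag]) for lag in range(max_lag + 1)]
    return max(abs(c) for c in cc)

links = [link_strength(anom[i], anom[j], max_lag)
         for i in range(n_sites) for j in range(i + 1, n_sites)]
print(f"mean link strength (forecasting index): {np.mean(links):.3f}")
```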
A space weather forecasting system with multiple satellites based on a self-recognizing network.
Tokumitsu, Masahiro; Ishida, Yoshiteru
2014-05-05
This paper proposes a space weather forecasting system at geostationary orbit for high-energy electron flux (>2 MeV). The forecasting model involves multiple sensors on multiple satellites. The sensors interconnect and evaluate each other to predict future conditions at geostationary orbit. The proposed forecasting model is constructed using a dynamic relational network for sensor diagnosis and event monitoring. The sensors of the proposed model are located at different positions in space. The solar-monitoring satellites are equipped with devices for monitoring the interplanetary magnetic field and solar wind speed, while the satellites orbiting near the Earth monitor high-energy electron flux. We investigate forecasting for two typical examples by comparing the performance of two models with different numbers of sensors, and demonstrate the prediction by the proposed model against coronal mass ejections and a coronal hole. This paper aims to investigate the possibility of space weather forecasting based on a satellite network with in-situ sensing.
Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses
Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.
2017-12-01
To explain earthquake generation processes, simulation methods for earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress changes around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost of obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), designed for supercomputers, to solve the problem in realistic time. As in the previous approach, we solve the governing equations including the rate- and state-dependent friction law; however, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of superimposing slip response functions. In the stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results for a normative three-dimensional problem, in which a circular velocity-weakening area is set in a square fault plane, and compare the results with and without nonlinear viscosity in the asthenosphere. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. Acknowledgment: The results were obtained using the K computer at RIKEN (Proposal number
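The fault-plane physics reduces, in the simplest possible setting, to a single spring-slider with rate- and state-dependent friction; the sketch below (aging law, quasi-dynamic radiation damping, illustrative parameters) replaces the paper's FE-computed stress changes with one elastic spring:

```python
# Toy quasi-dynamic spring-slider with rate- and state-dependent friction
# (aging law). In the paper, stress changes come from finite-element crustal
# deformation; here a single elastic spring stands in for them. Parameters
# are illustrative, not those of the study. The reference friction level
# drops out of the rate equation, so only a, b, dc appear below.
import numpy as np
from scipy.integrate import solve_ivp

a, b, dc = 0.010, 0.015, 0.01   # rate-state parameters (velocity weakening)
k = 1e7                         # spring stiffness standing in for elasticity (Pa/m)
vpl = 1e-9                      # tectonic loading (plate) rate (m/s)
sigma = 50e6                    # effective normal stress (Pa)
eta = 5e6                       # radiation damping coefficient (Pa s/m)

def rhs(t, y):
    v, theta = np.exp(y[0]), y[1]
    dtheta = 1.0 - v * theta / dc                    # aging law
    # time derivative of tau = sigma*mu(v, theta) + eta*v, balanced against
    # the spring stressing rate k*(vpl - v):
    dv = (k * (vpl - v) - sigma * (b / theta) * dtheta) / (sigma * a / v + eta)
    return [dv / v, dtheta, v]                       # d(ln v)/dt, dtheta/dt, slip

y0 = [np.log(vpl), dc / vpl, 0.0]                    # start at steady sliding
sol = solve_ivp(rhs, [0, 1.5e9], y0, method='LSODA', rtol=1e-8, atol=1e-10)
print(f"peak slip rate {np.exp(sol.y[0]).max():.2e} m/s "
      f"(stick-slip if many orders above the plate rate)")
```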
Jalali, Mohammad; Ramazi, Hamidreza
2018-04-01
This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning band simulation (TBSIM), was applied to generate synthetic data to improve incomplete earthquake catalogues. The synthetic data were then added to the traditional information to study seismicity homogeneity and to classify areas according to their tectonic and seismic properties in order to update the seismotectonic provinces. In this paper, (i) the different magnitude types in the studied catalogues were homogenized to moment magnitude (Mw), and earthquake declustering was carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain before starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression from 0.4 to 1.4 decimal degrees, with the maximum range identified in the azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied, producing 68,800 synthetic events according to the spatial regressions found in several directions; (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes introduced as very high, high, moderate, and low
Williams, Jack G.; Rosser, Nick J.; Kincey, Mark E.; Benjamin, Jessica; Oven, Katie J.; Densmore, Alexander L.; Milledge, David G.; Robinson, Tom R.; Jordan, Colm A.; Dijkstra, Tom A.
2018-01-01
Landslides triggered by large earthquakes in mountainous regions contribute significantly to overall earthquake losses and pose a major secondary hazard that can persist for months or years. While scientific investigations of coseismic landsliding are increasingly common, there is no protocol for rapid (hours-to-days) humanitarian-facing landslide assessment and no published recognition of what is possible and what is useful to compile immediately after the event. Drawing on the 2015 Mw 7.8 Gorkha earthquake in Nepal, we consider how quickly a landslide assessment based upon manual satellite-based emergency mapping (SEM) can be realistically achieved and review the decisions taken by analysts to ascertain the timeliness and type of useful information that can be generated. We find that, at present, many forms of landslide assessment are too slow to generate relative to the speed of a humanitarian response, despite increasingly rapid access to high-quality imagery. Importantly, the value of information on landslides evolves rapidly as a disaster response develops, so identifying the purpose, timescales, and end users of a post-earthquake landslide assessment is essential to inform the approach taken. It is clear that discussions are needed on the form and timing of landslide assessments, and how best to present and share this information, before rather than after an earthquake strikes. In this paper, we share the lessons learned from the Gorkha earthquake, with the aim of informing the approach taken by scientists to understand the evolving landslide hazard in future events and the expectations of the humanitarian community involved in disaster response.
A Probabilistic Short-Term Water Demand Forecasting Model Based on the Markov Chain
Directory of Open Access Journals (Sweden)
Francesca Gagliardi
2017-07-01
This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. The method provides estimates of future demands by calculating the probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, and is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.
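A minimal sketch of the homogeneous-chain variant: bin historical demand into intervals, estimate the transition matrix, and read off next-hour interval probabilities; the bin count and synthetic data are assumptions:

```python
# Minimal sketch of the homogeneous-Markov-chain idea: bin hourly demand into
# intervals, estimate the transition matrix from history, and report the
# probability of each interval for the next hour. Data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
hours = np.arange(24 * 60)
demand = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

n_bins = 8
edges = np.linspace(demand.min(), demand.max(), n_bins + 1)
states = np.clip(np.digitize(demand, edges) - 1, 0, n_bins - 1)

# Estimate transition matrix P[i, j] = P(next state j | current state i).
P = np.zeros((n_bins, n_bins))
for s, s_next in zip(states[:-1], states[1:]):
    P[s, s_next] += 1
P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)

probs = P[states[-1]]                  # one-hour-ahead interval probabilities
best = probs.argmax()
print(f"most likely next interval: [{edges[best]:.1f}, {edges[best + 1]:.1f}] "
      f"with probability {probs[best]:.2f}")
```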
Trip Travel Time Forecasting Based on Selective Forgetting Extreme Learning Machine
Directory of Open Access Journals (Sweden)
Zhiming Gui
2014-01-01
Travel time estimation on road networks is a valuable traffic metric. In this paper, we propose a machine-learning-based method for trip travel time estimation in road networks. The method uses historical trip information extracted from taxi trace data as the training data. An optimized online sequential extreme learning machine, the selective forgetting extreme learning machine, is adopted to make the prediction. Its selective forgetting ability enables the prediction algorithm to adapt well to changes in trip conditions. Experimental results using real-life taxi trace data show that the forecasting model provides an effective and practical way to forecast travel times.
Forecasting Construction Cost Index based on visibility graph: A network approach
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting the CCI. However, fluctuations and uncertainties of the CCI cause inaccurate estimates from time to time. This paper aims at achieving more accurate predictions of the CCI based on a network approach, in which the time series is first converted into a visibility graph and future values are then forecasted by link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and forecasts the CCI with smaller errors. The proposed method thus provides considerably accurate CCI predictions, which can contribute to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
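The core construction can be sketched compactly: two time points are linked if the straight line between them stays above all intermediate points, and a naive degree-based rule stands in for the paper's link prediction step:

```python
# Sketch of the natural visibility graph construction for a CCI-like series,
# plus a naive degree-based stand-in for the link prediction step. The data
# and the prediction rule are illustrative, not the paper's method.
import numpy as np

cci = np.array([8900, 8950, 8920, 9010, 9080, 9050, 9130, 9200, 9170, 9260], float)
n = len(cci)

adj = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(i + 1, n):
        # visibility: every k between i and j lies below the chord i--j
        visible = all(cci[k] < cci[i] + (cci[j] - cci[i]) * (k - i) / (j - i)
                      for k in range(i + 1, j))
        adj[i, j] = adj[j, i] = visible

degree = adj.sum(axis=0)
# naive link prediction: reuse the step of the past node whose degree is
# most similar to that of the last node
sim = np.abs(degree[:-1] - degree[-1]).argmin()
forecast = cci[-1] + (cci[sim + 1] - cci[sim])
print(f"degree sequence: {degree.tolist()}, forecast: {forecast:.0f}")
```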
International Nuclear Information System (INIS)
Kosmopoulos, P.G.; Kazadzis, S.; Lagouvardos, K.; Kotroni, V.; Bais, A.
2015-01-01
The present study focuses on predictions of solar energy, and their verification, using ground-based solar measurements from the Hellenic Network for Solar Energy and the National Observatory of Athens network, as well as solar radiation operational forecasts provided by the MM5 mesoscale model. The evaluation was carried out independently for the different networks, for two forecast horizons (1 and 2 days ahead), for the seasons of the year, for varying solar elevation, for the indicative energy potential of the area, and for four classes of cloud cover based on the calculated clearness index (k_t): CS (clear sky), SC (scattered clouds), BC (broken clouds) and OC (overcast). The seasonal dependence presented relative RMSE (rRMSE, Root Mean Square Error) values ranging from 15% (summer) to 60% (winter), while the solar elevation dependence revealed high effectiveness and reliability near local noon (rRMSE ∼30%). An increase of the errors with cloudiness was also observed. For CS, with mean GHI (global horizontal irradiance) ∼650 W/m², the errors are 8%; for SC, 20%; and for BC and OC the errors were greater (>40%) but correspond to much lower radiation levels (<120 W/m²) and hence a lower impact on energy potential. The total energy potential for each ground station ranges from 1.5 to 1.9 MWh/m², while the mean monthly forecast error was found to be consistently below 10%. - Highlights: • Long-term measurements under different atmospheric conditions are needed for energy forecasting model evaluations. • The total energy potential at the Greek sites presented ranges from 1.5 to 1.9 MWh/m². • Mean monthly energy forecast errors are within 10% for all cases analyzed. • Cloud presence results in an additional forecast error that varies with the cloud cover.
Comparing the Accuracy of Copula-Based Multivariate Density Forecasts in Selected Regions of Support
C.G.H. Diks (Cees); V. Panchenko (Valentyn); O. Sokolinskiy (Oleg); D.J.C. van Dijk (Dick)
2013-01-01
This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample) conditional
Model-Aided Altimeter-Based Water Level Forecasting System in Mekong River
Chang, C. H.; Lee, H.; Hossain, F.; Okeowo, M. A.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Anderson, E.; Hwang, E.
2017-12-01
The Mekong River, one of the great river systems of the world, has a drainage area of about 795,000 km² covering six countries. People living in its drainage area rely heavily on the river's resources for agriculture, fishery, and hydropower, so monitoring and forecasting its water level in a timely manner is urgently needed. Using TOPEX/Poseidon (T/P) altimetry water level measurements in India, Biancamaria et al. [2011] demonstrated the capability of an altimeter-based flood forecasting system in Bangladesh, with RMSE from 0.6 to 0.8 m for lead times up to 5 days on a 10-day basis set by T/P's repeat period. Hossain et al. [2013] further established a daily water level forecasting system in Bangladesh using observations from Jason-2 in India and the HEC-RAS hydraulic model, with RMSE from 0.5 to 1.5 m and an underestimating mean bias of 0.25 to 1.25 m. However, such a daily forecasting system relies on a collection of Jason-2 virtual stations (VSs) to ensure frequent sampling and data availability. Since the Mekong River is a meridional river with few VSs, the direct application of this system to the Mekong becomes challenging. To address this problem, we propose a model-aided altimeter-based forecasting system. The discharge output by the Variable Infiltration Capacity hydrologic model is used to reconstruct a daily water level product at upstream Jason-2 VSs based on the discharge-to-level rating curve. The reconstructed daily water level is then regressed against downstream in-situ water levels to build regression models, which are used to forecast daily water levels. In the middle reach of the Mekong River from Nakhon Phanom to Kratie, 3-day lead time forecasts reach an RMSE of about 0.7-1.3 m with a correlation coefficient around 0.95. For the lower reach of the Mekong River, the flow becomes more complicated due to the reversal of flow between the Tonle Sap Lake and the Mekong River
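A hedged sketch of the model-aided chain, with a made-up rating curve, a 3-day lag, and synthetic data standing in for the VIC output and gauge records:

```python
# Hedged sketch of the model-aided chain: a rating curve converts modeled
# discharge to a reconstructed upstream water level, and a lagged linear
# regression maps that level to the downstream gauge a few days ahead.
# Rating-curve coefficients, the 3-day lag, and all data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
days = 200
discharge = 5000 + 3000 * np.sin(2 * np.pi * np.arange(days) / 365) \
            + rng.normal(0, 200, days)                # modeled Q (m^3/s)
upstream = 0.002 * discharge ** 0.9                   # rating curve h = a*Q^b
lag = 3                                               # travel time (days)
downstream = 0.8 * upstream[:-lag] + 1.5 + rng.normal(0, 0.05, days - lag)

# Fit the regression on history, then issue a 3-day-ahead forecast.
coef = np.polyfit(upstream[:-lag], downstream, 1)
forecast = np.polyval(coef, upstream[-lag])           # uses today's upstream
rmse = np.sqrt(np.mean((np.polyval(coef, upstream[:-lag]) - downstream) ** 2))
print(f"3-day-ahead downstream level: {forecast:.2f} m (fit RMSE {rmse:.2f} m)")
```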
Development of a hybrid earthquake early warning system based on single sensor technique
International Nuclear Information System (INIS)
Gravirov, V.V.; Kislov, K.V.
2012-01-01
There are two approaches to earthquake early warning: one based on a network of seismic stations and the single-sensor method. Both have advantages and drawbacks. Current systems rely on high-density seismic networks, while attempts to implement techniques based on the single-station principle encounter difficulties in identifying earthquakes in noise, which may be very diverse, from stationary to impulsive. A promising line of research is to develop hybrid warning systems in which single sensors are incorporated into the overall early warning network. This permits exploiting the advantages of both approaches and helps reduce the radius of the hazardous zone in which no earthquake warning can be produced. The main problems are highlighted and their solutions discussed. The system is implemented with three detection processes running in parallel. The first is based on the study of the co-occurrence matrix of the signal's wavelet transform. The second uses change-point detection for a random process and signal detection in a moving time window. The third uses artificial neural networks. Finally, a decision rule is applied to carry out the final earthquake detection and estimate its reliability. (author)
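The second of the three detection processes can be illustrated with a one-sided CUSUM on windowed signal energy; the actual statistic and thresholds of the system are not given in the abstract, so everything below is an assumption:

```python
# Sketch of the change-point detection step: a simple one-sided CUSUM on the
# moving-window energy of the seismic signal. The real system's statistic,
# window, and thresholds are unknown; all values here are assumptions.
import numpy as np

rng = np.random.default_rng(8)
sig = rng.normal(0, 1.0, 2000)
sig[1200:] += 4 * np.sin(0.3 * np.arange(800))        # synthetic "P arrival"

win, drift, threshold = 50, 0.5, 30.0
energy = np.convolve(sig ** 2, np.ones(win) / win, mode='valid')
baseline = energy[:200].mean()                        # quiet-period level

cusum, onset = 0.0, None
for i, e in enumerate(energy):
    cusum = max(0.0, cusum + (e - baseline - drift))  # one-sided CUSUM
    if cusum > threshold:
        onset = i
        break
print(f"change point declared at sample {onset}" if onset else "no detection")
```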
Fang, Yi; Huang, Yahong
2017-12-01
Evaluating sand liquefaction based on codes is an important part of geotechnical design. However, the results sometimes fail to conform to the observed earthquake damage. Based on the damage from the Tangshan earthquake and the engineering geological conditions, three typical sites were chosen. The sand liquefaction probability was evaluated at the three sites using the method in the Code for Seismic Design of Buildings, and the results were compared with the sand liquefaction phenomena observed in the earthquake. The results show that the difference between code-based liquefaction estimates and the actual earthquake damage is mainly attributed to two aspects. The primary reasons include the disparity between seismic fortification intensity and the actual seismic motion, changes in the groundwater level, the thickness of the overlying non-liquefied soil layer, local site effects, and personal error. Meanwhile, although the judgment methods in the codes exhibit a certain universality, they are a further source of the difference, due to the limitations of the underlying data and qualitative anomalies in the judgment formulas.
International Nuclear Information System (INIS)
Satou, Yoshihito; Wani, Masaaki; Wachi, Takamitsu
2017-01-01
Hamaoka Nuclear Power Station set up 'earthquake motion as the basis for remodeling work' by referring to the strong earthquake fault model assumed by the Cabinet Office's 'Study meeting for the Nankai Trough mega-thrust earthquake model'. Based on this earthquake motion, it implemented seismic countermeasure work using ceramic-fixing-type post-construction shear reinforcement bars, targeting the Unit 4 water intake tank screen room. The construction had to be carried out in a short period of about nine months due to a restriction on the drainage period of the water intake tank; thanks to improved process control, such as the adoption of a two-shift (day and night) system, the work was completed in time. The quality of construction was secured by adopting a drilling machine with a resistance sensor for drilling and plastic grout for grout filling, as well as through quality inspection based on Construction Technology Review and Certification. (A.O.)
Earthquake recurrence models fail when earthquakes fail to reset the stress field
Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.
2012-01-01
Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counter-example, which did release a significant portion of the stress along its fault segment and yielded a substantial change in b-values.
Seismic demand evaluation based on actual earthquake records
International Nuclear Information System (INIS)
Jhaveri, D.P.; Czarnecki, R.M.; Kassawara, R.P.; Singh, A.
1990-01-01
Seismic input in the form of floor response spectra (FRS) is needed for the seismic design and evaluation of equipment in nuclear power plants (NPPs). FRS are typically determined by analytical procedures using mathematical models of NPP structures and are known to be very conservative. Recorded earthquake data, in the form of acceleration response spectra computed from recorded acceleration time histories, have been collected from NPP structures located in seismically active areas. Statistics of the ratios, or amplification factors, between the FRS at typical floors and the acceleration response spectra at the basemat or in the free field are obtained for typical NPP structures. These amplification factors are expressed in terms of the peak spectral and zero-period values, as well as a function of frequency. The average +1σ values of these ratios, for those cases where enough data are available, are proposed as limits on analytically calculated FRS, or for the construction of simplified FRS for determining the seismic input or demand in equipment qualification. (orig.)
International Nuclear Information System (INIS)
Arciniegas, Alvaro I.; Arciniegas Rueda, Ismael E.
2008-01-01
The Ontario Electricity Market (OEM), which opened in May 2002, is relatively new and still evolving. In addition, the bidding strategies of the participants are such that the relationships between price and fundamentals are non-linear and dynamic. The lack of market maturity and the high complexity hinder the use of traditional statistical methodologies (e.g., regression analysis) for price forecasting; a flexible model is therefore needed to achieve good forecasting in the OEM. This paper uses a Takagi-Sugeno-Kang (TSK) fuzzy inference system to forecast the one-day-ahead real-time peak price of the OEM. The forecasting results of TSK are compared with those obtained by traditional statistical and neural-network-based forecasting. The comparison suggests that TSK has considerable value in forecasting the one-day-ahead peak price in the OEM. (author)
A hybrid wind power forecasting model based on data mining and wavelets analysis
International Nuclear Information System (INIS)
Azimi, R.; Ghofrani, M.; Ghayekhloo, M.
2016-01-01
Highlights: • An improved version of the K-means algorithm is proposed for clustering wind data. • A persistence-based method is applied to select the best cluster for NN training. • A combination of DWT and HANTS methods is used to provide deeper learning for the NN. • A hybrid of T.S.B K-means, DWT, HANTS and NN is developed for wind forecasting. - Abstract: Accurate forecasting of wind power plays a key role in energy balancing and wind power integration into the grid. This paper proposes a novel time-series-based K-means clustering method, named T.S.B K-means, and a cluster selection algorithm to better extract features of wind time-series data. A hybrid of T.S.B K-means, the discrete wavelet transform (DWT), harmonic analysis time series (HANTS) methods, and a multilayer perceptron neural network (MLPNN) is developed for wind power forecasting. The proposed T.S.B K-means classifies data into separate groups and leads to more appropriate learning for neural networks by identifying anomalies and irregular patterns, which improves the accuracy of the forecast results. A cluster selection method is developed to determine the cluster that provides the best training for the MLPNN. This significantly accelerates the forecast process, as the most appropriate portion of the data, rather than the whole dataset, is used for NN training. The wind power data are decomposed by the Daubechies D4 wavelet transform, filtered by HANTS, and pre-processed to provide the most appropriate inputs for the MLPNN. Time-series analysis is used to pre-process the historical wind power generation data and structure it into input-output series. Wind power datasets with diverse characteristics, from different wind farms located in the United States, are used to evaluate the accuracy of the hybrid forecasting method through various performance measures and different experiments. A comparative analysis with well-established forecasting models shows the superior performance of the proposed
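A loose sketch of the hybrid chain (wavelet pre-processing, cluster-based training-set selection, MLP forecasting), assuming PyWavelets and scikit-learn; it is not the authors' T.S.B K-means or HANTS implementation:

```python
# Loose sketch of the hybrid chain: Daubechies D4 wavelet denoising stands in
# for the DWT+HANTS filtering, plain KMeans for T.S.B K-means, and the latest
# window's cluster is used for training (persistence-based selection). All
# settings and the synthetic data are assumptions.
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
power = np.abs(np.sin(np.arange(2000) / 40)) * 50 + rng.normal(0, 3, 2000)

# Wavelet pre-processing: D4 decomposition, soft-threshold the detail coeffs.
coeffs = pywt.wavedec(power, 'db4', level=4)
coeffs[1:] = [pywt.threshold(c, 2.0, mode='soft') for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, 'db4')[:len(power)]

# Cluster fixed-length windows; keep the cluster containing the latest window.
w = 24
windows = np.array([smooth[i:i + w] for i in range(len(smooth) - w)])
labels = KMeans(n_clusters=4, n_init=10).fit_predict(windows)
sel = labels == labels[-1]

# Train the MLP only on the selected cluster (inputs: first w-1 values of a
# window, target: its last value), then forecast one step ahead.
X, y = windows[sel][:, :-1], windows[sel][:, -1]
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
print(f"one-step forecast: {mlp.predict(smooth[-(w - 1):][None, :])[0]:.1f} MW")
```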
Luo, Li; Luo, Le; Zhang, Xinli; He, Xiaoli
2017-07-10
Accurate forecasting of hospital outpatient visits is beneficial for the reasonable planning and allocation of healthcare resources to meet medical demands. Given the multiple attributes of daily outpatient visits, such as randomness, cyclicity and trend, time series methods such as ARIMA can be a good choice for outpatient visit forecasting. On the other hand, hospital outpatient visits are also affected by doctors' scheduling, and these effects are not purely random. To account for this non-random component, this paper presents a new forecasting model that takes cyclicity and the day-of-the-week effect into consideration. We formulate a seasonal ARIMA (SARIMA) model on the daily time series, and a single exponential smoothing (SES) model on each day-of-the-week time series, and then establish a combinatorial model by modifying them. The models are applied to one year of daily visit data for urban outpatients in two internal medicine departments of a large hospital in Chengdu, to forecast daily outpatient visits about one week ahead. The proposed model is applied to forecast the cross-sectional data for 7 consecutive days of daily outpatient visits over an 8-week period, based on 43 weeks of observation data during one year. The results show that the two single traditional models and the combinatorial model are simple to implement and computationally light, whilst being appropriate for short-term forecast horizons. Furthermore, the combinatorial model captures the comprehensive features of the time series data better, and achieves better prediction performance than either single model, with lower residual variance and a small mean residual error, which remains to be optimized in the next research step.
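The combinatorial idea can be sketched with statsmodels: a SARIMA with weekly seasonality on the daily series, an SES per day-of-week series, and a simple average of the two forecasts; orders, weights, and data are assumptions:

```python
# Hedged sketch of the combinatorial idea: SARIMA on the daily series plus
# simple exponential smoothing on each day-of-week series, with the two
# forecasts averaged. Orders, weights, and data are assumptions, not the
# values fitted to the Chengdu data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(10)
days = pd.date_range('2015-01-05', periods=301, freq='D')   # 43 weeks
weekly = np.tile([220, 210, 205, 200, 195, 120, 100], 43)
visits = pd.Series(weekly + rng.normal(0, 10, 301), index=days)

# SARIMA with a weekly seasonal period on the full daily series.
sarima = SARIMAX(visits, order=(1, 0, 1), seasonal_order=(1, 1, 0, 7)).fit(disp=False)
f_sarima = sarima.forecast(7)

# SES per day-of-week series, one step ahead for each of the next 7 days.
f_ses = [SimpleExpSmoothing(visits[visits.index.dayofweek == d])
         .fit().forecast(1).iloc[0] for d in f_sarima.index.dayofweek]

combined = 0.5 * f_sarima.values + 0.5 * np.array(f_ses)    # equal weights
print(np.round(combined, 1))
```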
A review on remotely sensed land surface temperature anomaly as an earthquake precursor
Bhardwaj, Anshuman; Singh, Shaktiman; Sam, Lydia; Joshi, P. K.; Bhardwaj, Akanksha; Martín-Torres, F. Javier; Kumar, Rajesh
2017-12-01
The low predictability of earthquakes and the high uncertainty associated with their forecasts make earthquakes one of the worst natural calamities, capable of causing instant loss of life and property. Here, we discuss the studies reporting observed anomalies in satellite-derived Land Surface Temperature (LST) before an earthquake. We compile the conclusions of these studies and evaluate the use of remotely sensed LST anomalies as precursors of earthquakes. The arrival times and the amplitudes of the anomalies vary widely, making it difficult to consider them as universal markers for issuing earthquake warnings. Given the randomness in the observations of these precursors, we support employing a global-scale monitoring system to detect statistically robust anomalous geophysical signals prior to earthquakes before considering them definite precursors.
Earthquake risk assessment of building structures
International Nuclear Information System (INIS)
Ellingwood, Bruce R.
2001-01-01
During the past two decades, probabilistic risk analysis tools have been applied to assess the performance of new and existing building structural systems. Structural design and evaluation of buildings and other facilities with regard to their ability to withstand the effects of earthquakes requires special considerations that are not normally a part of such evaluations for other occupancy, service and environmental loads. This paper reviews some of these special considerations, specifically as they pertain to probability-based codified design and reliability-based condition assessment of existing buildings. Difficulties experienced in implementing probability-based limit states design criteria for earthquakes are summarized. Comparisons of predicted and observed building damage highlight the limitations of current deterministic approaches for post-earthquake building condition assessment. The importance of inherent randomness and modeling uncertainty in forecasting building performance is examined through a building fragility assessment of a steel frame with welded connections that was damaged during the Northridge Earthquake of 1994. The prospects for future improvements in earthquake-resistant design procedures based on a more rational probability-based treatment of uncertainty are examined.
Applying Binary Forecasting Approaches to Induced Seismicity in the Western Canada Sedimentary Basin
Kahue, R.; Shcherbakov, R.
2016-12-01
The Western Canada Sedimentary Basin has been chosen as a focus due to a recent increase in observed seismicity there, most likely linked to anthropogenic activities related to unconventional oil and gas exploration. Seismicity caused by these types of activities is called induced seismicity. The occurrence of moderate to large induced earthquakes in areas where critical infrastructure is present can be potentially problematic. Here we use a binary forecast method to analyze past seismicity and well production data in order to identify future areas of increased seismicity. This method splits the given region into spatial cells. The binary forecast method used here has been suggested in the past to retrospectively forecast large earthquakes occurring globally in areas called alarm cells. An alarm cell, or alert zone, is a bin in which there is a higher likelihood for earthquakes to occur based on previous data. The first method utilizes the cumulative Benioff strain, based on earthquakes that occurred in each bin above a given magnitude over a time interval called the training period. The second method utilizes the cumulative well production data within each bin. Earthquakes that occurred within an alert zone in the retrospective forecast period contribute to the hit rate, while alert zones in which no earthquake occurred in the forecast period contribute to the false alarm rate. In the resulting analysis, the hit rate and false alarm rate are determined after optimizing and modifying the initial parameters using the receiver operating characteristic diagram. It is found that when modifying the cell size and threshold magnitude parameters within various training periods, hit and false alarm rates are obtained for specific regions in Western Canada using both recent seismicity and cumulative well production data. Certain areas are thus shown to be more prone to potentially larger earthquakes based on both datasets. This has implications
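A minimal sketch (synthetic grid, invented parameters) of the alarm-cell scoring described above: cells whose training-period statistic exceeds a threshold are alarmed, and hit and false alarm rates are computed on the forecast period:

```python
import numpy as np

rng = np.random.default_rng(2)
benioff = rng.gamma(2.0, 1.0, size=(20, 20))      # cumulative Benioff strain per cell (training)
future_eq = rng.random((20, 20)) < 0.05 * benioff / benioff.max()  # events in forecast period

threshold = np.quantile(benioff, 0.8)             # alarm if strain is in the top 20%
alarm = benioff >= threshold

hits = np.logical_and(alarm, future_eq).sum()
hit_rate = hits / max(future_eq.sum(), 1)         # fraction of events inside alarm cells
false_alarm_rate = np.logical_and(alarm, ~future_eq).sum() / max((~future_eq).sum(), 1)
print(f"hit rate={hit_rate:.2f}, false alarm rate={false_alarm_rate:.2f}")
# Sweeping `threshold` traces out the ROC curve used to optimize the parameters.
```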
Using data-driven agent-based models for forecasting emerging infectious diseases
Directory of Open Access Journals (Sweden)
Srinivasan Venkatramanan
2018-03-01
Full Text Available Producing timely, well-informed and reliable forecasts for an ongoing epidemic of an emerging infectious disease is a huge challenge. Epidemiologists and policy makers have to deal with poor data quality, limited understanding of the disease dynamics, rapidly changing social environment and the uncertainty on effects of various interventions in place. Under this setting, detailed computational models provide a comprehensive framework for integrating diverse data sources into a well-defined model of disease dynamics and social behavior, potentially leading to better understanding and actions. In this paper, we describe one such agent-based model framework developed for forecasting the 2014–2015 Ebola epidemic in Liberia, and subsequently used during the Ebola forecasting challenge. We describe the various components of the model, the calibration process and summarize the forecast performance across scenarios of the challenge. We conclude by highlighting how such a data-driven approach can be refined and adapted for future epidemics, and share the lessons learned over the course of the challenge. Keywords: Emerging infectious diseases, Agent-based models, Simulation optimization, Bayesian calibration, Ebola
Short-term load and wind power forecasting using neural network-based prediction intervals.
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2014-02-01
Electrical power systems are evolving from today's centralized bulk systems to more decentralized systems. Penetrations of renewable energies, such as wind and solar power, significantly increase the level of uncertainty in power systems. Accurate load forecasting becomes more complex, yet more important for management of power systems. Traditional methods for generating point forecasts of load demands cannot properly handle uncertainties in system operations. To quantify potential uncertainties associated with forecasts, this paper implements a neural network (NN)-based method for the construction of prediction intervals (PIs). A newly introduced method, called lower upper bound estimation (LUBE), is applied and extended to develop PIs using NN models. A new problem formulation is proposed, which translates the primary multiobjective problem into a constrained single-objective problem. Compared with the cost function, this new formulation is closer to the primary problem and has fewer parameters. Particle swarm optimization (PSO) integrated with the mutation operator is used to solve the problem. Electrical demands from Singapore and New South Wales (Australia), as well as wind power generation from Capital Wind Farm, are used to validate the PSO-based LUBE method. Comparative results show that the proposed method can construct higher quality PIs for load and wind power generation forecasts in a short time.
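A minimal sketch of the two interval-quality measures that LUBE-type methods trade off when training the NN (names and numbers are illustrative): the PI coverage probability (PICP) and the PI normalized average width (PINAW):

```python
import numpy as np

def pi_quality(y, lower, upper):
    covered = (y >= lower) & (y <= upper)
    picp = covered.mean()                                 # fraction of targets inside the PI
    pinaw = (upper - lower).mean() / (y.max() - y.min())  # width, normalized by target range
    return picp, pinaw

y = np.array([10., 12., 11., 15., 14.])
lower, upper = y - 1.5, y + 2.0   # stand-ins for the NN's lower/upper bound outputs
print(pi_quality(y, lower, upper))
# PSO would adjust the NN weights to raise PICP while keeping PINAW small.
```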
Combined Forecasting of Rainfall Based on Fuzzy Clustering and Cross Entropy
Directory of Open Access Journals (Sweden)
Baohui Men
2017-12-01
Full Text Available Rainfall is an essential index for measuring drought, and it depends on various parameters including the geographical environment, air temperature and pressure. The nonlinear nature of climatic variables leads to problems such as poor accuracy and instability in traditional forecasting methods. In this paper, a combined forecasting method based on data mining technology and cross entropy is proposed to forecast rainfall with full consideration of the time-effectiveness of historical data. In view of the flaws of the fuzzy clustering method, which easily falls into local optima and operates slowly, the ant colony algorithm is adopted to overcome these shortcomings and, as a result, refine the model. The method for determining weights is also improved by using the cross entropy. Besides, the forecast is conducted by analyzing the weighted average rainfall based on Thiessen polygons in the Beijing–Tianjin–Hebei region. After the predictive errors are calculated, the results show that improved ant colony fuzzy clustering can effectively select historical data and enhance the accuracy of prediction, so that the damage caused by extreme weather events like droughts and floods can be greatly lessened and even kept at bay.
Mutual Information-Based Inputs Selection for Electric Load Time Series Forecasting
Directory of Open Access Journals (Sweden)
Nenad Floranović
2013-02-01
Full Text Available Providing accurate load forecasts to electric utility corporations is essential in order to reduce their operational costs and increase profits. Hence, training set selection is an important preprocessing step which has to be considered in practice in order to increase the accuracy of load forecasts. The usage of mutual information (MI) has recently been proposed in regression tasks, mostly for feature selection and for identifying the real instances from training sets that contain noise and outliers. This paper proposes a methodology for training set selection in a least squares support vector machines (LS-SVMs) load forecasting model. A new application of the concept of MI is presented for the selection of a training set based on MI computation between initial training set instances and testing set instances. Accordingly, several LS-SVM models have been trained, based on the proposed methodology, for hourly prediction of electric load for one day ahead. The results obtained from a real-world data set indicate that the proposed method increases the accuracy of load forecasting as well as reduces the size of the initial training set needed for model training.
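A minimal sketch of the underlying MI machinery using scikit-learn, on synthetic data. Note this scores features against the target, whereas the paper's method scores training instances against the testing set; the MI estimator is the common ingredient:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(3)
X_train = rng.normal(size=(500, 6))                    # lagged-load feature vectors
y_train = X_train[:, 0] * 2 + rng.normal(0, .1, 500)   # hourly load targets (synthetic)

# MI between each feature and the target identifies the informative inputs.
mi = mutual_info_regression(X_train, y_train)
keep = np.argsort(mi)[::-1][:3]                        # top-3 most informative
print("selected feature indices:", keep)
```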
New watershed-based climate forecast products for hydrologists and water managers
Baker, S. A.; Wood, A.; Rajagopalan, B.; Lehner, F.; Peng, P.; Ray, A. J.; Barsugli, J. J.; Werner, K.
2017-12-01
Operational sub-seasonal to seasonal (S2S) climate predictions have advanced in skill in recent years but are yet to be broadly utilized by stakeholders in the water management sector. While some of the challenges that relate to fundamental predictability are difficult or impossible to surmount, other hurdles related to forecast product formulation, translation, relevance, and accessibility can be directly addressed. These include products being misaligned with users' space-time needs, products disseminated in formats users cannot easily process, and products based on raw model outputs that are biased relative to user climatologies. In each of these areas, more can be done to bridge the gap by enhancing the usability, quality, and relevance of water-oriented predictions. In addition, water stakeholders can benefit from short-range extremes predictions (such as 2-3 day storms or 1-week heat waves) at S2S time-scales, for which few products exist. We present interim results of a Research to Operations (R2O) effort sponsored by the NOAA MAPP Climate Testbed to (1) formulate climate prediction products so as to reduce hurdles to water stakeholder adoption, and (2) explore opportunities for extremes prediction at S2S time scales. The project is currently using CFSv2 and National Multi-Model Ensemble (NMME) reforecasts and forecasts to develop real-time watershed-based climate forecast products, and to train post-processing approaches to enhance the skill and reliability of raw real-time S2S forecasts. Prototype S2S climate data products (forecasts and associated skill analyses) are now being operationally staged at NCAR on a public website to facilitate further product development through interactions with water managers. Initial demonstration products include CFSv2-based bi-weekly climate forecasts (weeks 1-2, 2-3, and 3-4) for sub-regional scale hydrologic units, and NMME-based monthly and seasonal prediction products. Raw model mean skill at these time
International Nuclear Information System (INIS)
Azimi, R.; Ghayekhloo, M.; Ghofrani, M.
2016-01-01
Highlights: • A novel clustering approach is proposed based on the data transformation approach. • A novel cluster selection method based on correlation analysis is presented. • The proposed hybrid clustering approach leads to deep learning for MLPNN. • A hybrid forecasting method is developed to predict solar radiations. • The evaluation results show superior performance of the proposed forecasting model. - Abstract: Accurate forecasting of renewable energy sources plays a key role in their integration into the grid. This paper proposes a hybrid solar irradiance forecasting framework using a Transformation based K-means algorithm, named TB K-means, to increase the forecast accuracy. The proposed clustering method is a combination of a new initialization technique, K-means algorithm and a new gradual data transformation approach. Unlike the other K-means based clustering methods which are not capable of providing a fixed and definitive answer due to the selection of different cluster centroids for each run, the proposed clustering provides constant results for different runs of the algorithm. The proposed clustering is combined with a time-series analysis, a novel cluster selection algorithm and a multilayer perceptron neural network (MLPNN) to develop the hybrid solar radiation forecasting method for different time horizons (1 h ahead, 2 h ahead, …, 48 h ahead). The performance of the proposed TB K-means clustering is evaluated using several different datasets and compared with different variants of K-means algorithm. Solar datasets with different solar radiation characteristics are also used to determine the accuracy and processing speed of the developed forecasting method with the proposed TB K-means and other clustering techniques. The results of direct comparison with other well-established forecasting models demonstrate the superior performance of the proposed hybrid forecasting method. Furthermore, a comparative analysis with the benchmark solar
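A minimal sketch of the key property claimed above: a fixed, data-driven initialization makes K-means results identical across runs. The TB K-means initialization and gradual data transformation themselves are not reproduced here; the seeding rule below is an invented illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 24))                 # e.g., daily solar irradiance profiles

k = 4
# Deterministic seeds: evenly spaced samples after sorting by daily mean.
order = np.argsort(X.mean(axis=1))
init = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]

km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
labels = km.labels_                            # identical on every run of this script
```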
Maruya, Hiroaki
For most Japanese companies and organizations, the enormous damage of the Great East Japan Earthquake exceeded expectations. In addition to the great tsunami and earthquake motion, the lack of electricity and fuel seriously disrupted business activities, and these should be considered important constraint factors in future earthquakes. Furthermore, disruption of supply chains also led to a considerable decline in production in many industries across Japan and foreign countries. It has therefore become urgent for the Japanese government and industries to utilize the lessons of the Great Earthquake and execute effective countermeasures in anticipation of great earthquakes such as the Tonankai & Nankai earthquakes and Tokyo inland earthquakes. Obviously the most basic step is improving the earthquake resistance of buildings and facilities. In addition, the spread of BCP (business continuity planning) and BCM (business continuity management) to enterprises and organizations is indispensable. Based on the lessons, BCM should include the point of view of supply chain management more clearly, and emphasize a "substitute strategy" more explicitly, because a company should survive even if it completely loses its present production base. The central and local governments are requested, in addition to developing their own BCPs, to improve the related systemic conditions for BCM in the private sector.
Development of damage probability matrices based on Greek earthquake damage data
Eleftheriadou, Anastasia K.; Karabinis, Athanasios I.
2011-03-01
A comprehensive study is presented for the empirical seismic vulnerability assessment of typical structural types, representative of the building stock of Southern Europe, based on a large set of damage statistics. The observational database was obtained from post-earthquake surveys carried out in the area struck by the September 7, 1999 Athens earthquake. After analysis of the collected observational data, a unified damage database has been created which comprises 180,945 damaged buildings from the near-field area of the earthquake. The damaged buildings are classified into specific structural types, according to the materials, seismic codes and construction techniques in Southern Europe. The seismic demand is described in terms of both the regional macroseismic intensity and the ratio αg/αo, where αg is the maximum peak ground acceleration (PGA) of the earthquake event and αo is the unique PGA value that characterizes each municipality on the Greek hazard map. The relative and cumulative frequencies of the different damage states for each structural type and each intensity level are computed in terms of damage ratio. Damage probability matrices (DPMs) and vulnerability curves are obtained for specific structural types. A comparative analysis is carried out between the produced and the existing vulnerability models.
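A minimal sketch (invented counts) of assembling a damage probability matrix from survey data: each row of counts is one intensity level, each column one damage state, and row-normalization gives P(damage state | intensity):

```python
import numpy as np

# counts[i, j] = buildings of one structural type observed at intensity level i
# that reached damage state j (e.g., DS0..DS3); values are illustrative only.
counts = np.array([[800, 150, 40, 10],
                   [500, 300, 150, 50],
                   [200, 300, 300, 200]], dtype=float)

dpm = counts / counts.sum(axis=1, keepdims=True)   # P(damage state | intensity)
exceed = dpm[:, ::-1].cumsum(axis=1)[:, ::-1]      # P(damage >= state | intensity), for fragility curves
print(np.round(dpm, 3))
```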
Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model
Thomas, Marion Y.; Bhat, Harsha S.
2018-05-01
Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for the dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage markedly affects the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with the supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.
Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter
Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.
2018-04-01
Precise identification of the onset time of an earthquake is imperative for correctly determining the earthquake's location and the other parameters that are utilized for building seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely detected due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very weak signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise: the seismic data are filtered with the MLoG mask. Afterward, a dual-threshold comparator is applied to detect the onset time of the event. The results show that the proposed algorithm can accurately detect the onset time for micro-earthquakes, down to an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average algorithm (STA/LTA) and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
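A minimal sketch of the general idea on a synthetic trace, using SciPy's standard Laplacian-of-Gaussian filter (not the paper's modified mask) and a dual-threshold pick; all thresholds and window lengths are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(5)
n = 2000
trace = rng.normal(0, 1.0, n)                            # background noise
trace[1200:] += 6 * np.sin(0.3 * np.arange(n - 1200))    # synthetic event from sample 1200

log_out = np.abs(gaussian_laplace(trace, sigma=5))       # LoG-filtered characteristic function

hi, lo = 4 * log_out[:500].std(), 2 * log_out[:500].std()  # dual thresholds from a noise window
above_hi = np.flatnonzero(log_out > hi)
if above_hi.size:
    i = above_hi[0]
    # walk back while the low threshold is still exceeded to refine the onset
    while i > 0 and log_out[i - 1] > lo:
        i -= 1
    print("picked onset sample:", i)
```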
Bossu, Remy; Landes, Matthieu; Roussel, Frederic; Steed, Robert; Mazet-Roux, Gilles; Martin, Stacey S.; Hough, Susan E.
2017-01-01
The collection of earthquake testimonies (i.e., qualitative descriptions of felt shaking) is essential for macroseismic studies (i.e., studies gathering information on how strongly an earthquake was felt in different places), and when done rapidly and systematically, improves situational awareness and in turn can contribute to efficient emergency response. In this study, we present advances made in the collection of testimonies following earthquakes around the world using a thumbnail‐based questionnaire implemented on the European‐Mediterranean Seismological Centre (EMSC) smartphone app and its website compatible for mobile devices. In both instances, the questionnaire consists of a selection of thumbnails, each representing an intensity level of the European Macroseismic Scale 1998. We find that testimonies are collected faster, and in larger numbers, by way of thumbnail‐based questionnaires than by more traditional online questionnaires. Responses were received from all seismically active regions of our planet, suggesting that thumbnails overcome language barriers. We also observed that the app is not sufficient on its own, because the websites are the main source of testimonies when an earthquake strikes a region for the first time in a while; it is only for subsequent shocks that the app is widely used. Notably though, the speed of the collection of testimonies increases significantly when the app is used. We find that automated EMSC intensities as assigned by user‐specified thumbnails are, on average, well correlated with “Did You Feel It?” (DYFI) responses and with the three independently and manually derived macroseismic datasets, but there is a tendency for EMSC to be biased low with respect to DYFI at moderate and large intensities. We address this by proposing a simple adjustment that will be verified in future earthquakes.
An Automated Weather Research and Forecasting (WRF)-Based Nowcasting System: Software Description
Kirby, Stephen F.; Reen, Brian P.; Dumais, Robert E., Jr.
2013-10-01
A Web service/Web interface software package has been engineered to address the need for an automated means to run the Weather Research and Forecasting (WRF) model for nowcasting applications.
GIS-Based System for Post-Earthquake Crisis Management Using Cellular Network
Raeesi, M.; Sadeghi-Niaraki, A.
2013-01-01
Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after disasters such as earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the location of the d...
A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.
Araújo, Ricardo de A
2012-04-01
Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation arises in all these techniques, known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in the reconstruction of stock market phenomena. In this paper, we propose a suitable model inspired by concepts in mathematical morphology (MM) and lattice theory (LT), generically called the increasing morphological perceptron (IMP). We also present a gradient steepest descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the non-differentiability of morphological operations. The learning process includes a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear time series forecasting problems from the Brazilian stock market. Additionally, two natural-phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial time series. Finally, the obtained results are discussed and compared to results found with models recently proposed in the literature.
Long-term flow forecasts based on climate and hydrologic modeling: Uruguay River basin
Tucci, Carlos Eduardo Morelli; Clarke, Robin Thomas; Collischonn, Walter; da Silva Dias, Pedro Leite; de Oliveira, Gilvan Sampaio
2003-07-01
This paper describes a procedure for predicting seasonal flow in the Rio Uruguay drainage basin (area 75,000 km2, lying in Brazilian territory), using sequences of future daily rainfall given by the global climate model (GCM) of the Brazilian agency for climate prediction (Centro de Previsão de Tempo e Clima, or CPTEC). Sequences of future daily rainfall given by this model were used as input to a rainfall-runoff model appropriate for large drainage basins. Forecasts of flow in the Rio Uruguay were made for the period 1995-2001 of the full record, which began in 1940. Analysis showed that GCM forecasts underestimated rainfall over almost all the basin, particularly in winter, although interannual variability in regional rainfall was reproduced relatively well. A statistical procedure was used to correct for the underestimation of rainfall. When the corrected rainfall sequences were transformed to flow by the hydrologic model, forecasts of flow in the Rio Uruguay basin were better than forecasts based on historic mean or median flows by 37% for monthly flows and by 54% for 3-monthly flows.
Dynamic Critical Rainfall-Based Flash Flood Early Warning and Forecasting for Medium-Small Rivers
Liu, Z.; Yang, D.; Hu, J.
2012-04-01
China is hit by flood disasters extremely frequently; every flood season, flash floods triggered by rainfall, mudslides and landslides cause heavy casualties and property losses. They not only seriously threaten lives, but also severely restrict the economic and social development of mountainous and hilly areas and the goal of building a moderately prosperous society. In the next few years, China will build a mitigation system in key flash flood prevention and control areas that relies primarily on non-engineering measures such as surveillance, communications, forecasting and early warning, combined with engineering measures. The latest progress in flash flood early warning and forecasting techniques worldwide is reviewed in this paper, and an early warning and forecasting approach is then proposed on the basis of a distributed hydrological model and a dynamic critical rainfall index. This approach has been applied in the Suichuanjiang River basin in Jiangxi province and is expected to provide a valuable reference for building a national flash flood early warning and forecasting system as well as for the control of such flooding.
Forecasting Fossil Fuel Energy Consumption for Power Generation Using QHSA-Based LSSVM Model
Directory of Open Access Journals (Sweden)
Wei Sun
2015-01-01
Full Text Available Accurate forecasting of fossil fuel energy consumption for power generation is important and fundamental for rational power energy planning in the electricity industry. The least squares support vector machine (LSSVM) is a powerful methodology for solving nonlinear forecasting issues with small samples. The key point is how to determine the appropriate parameters, which have a great effect on the performance of the LSSVM model. In this paper, a novel hybrid quantum harmony search algorithm-based LSSVM (QHSA-LSSVM) energy forecasting model is proposed. The QHSA, which combines quantum computation theory and the harmony search algorithm, is applied to search for the optimal values of the kernel parameter and the regularization parameter C in the LSSVM model to enhance its learning and generalization ability. The case study on annual fossil fuel energy consumption for power generation in China shows that the proposed model outperforms four comparative models, namely regression, the grey model GM(1,1), back propagation (BP) and LSSVM, in terms of prediction accuracy and forecasting risk.
Short-term electricity price forecast based on the improved hybrid model
International Nuclear Information System (INIS)
Dong Yao; Wang Jianzhou; Jiang He; Wu Jie
2011-01-01
Highlights: → The proposed models can detach the high volatility and daily seasonality of electricity prices. → The improved hybrid forecast models make full use of the advantages of individual models. → The proposed models create commendable improvements that are relatively satisfactory for current research. → The proposed models do not require making complicated decisions about the explicit form. - Abstract: Half-hourly electricity prices in power systems are volatile, and electricity price forecasts are significant information which can help market managers and participants involved in the electricity market to prepare their corresponding bidding strategies to maximize their benefits and utilities. However, the fluctuation of electricity prices depends on the common effect of many factors, and there is very complicated randomness in the price evolution process. Therefore, it is difficult to forecast half-hourly prices with a single traditional model given the different behaviors of half-hourly prices. This paper proposes an improved forecasting model that detaches the high volatility and daily seasonality of electricity prices in New South Wales, Australia, based on Empirical Mode Decomposition, Seasonal Adjustment and the Autoregressive Integrated Moving Average. The prediction errors are analyzed and compared with the ones obtained from the traditional Seasonal Autoregressive Integrated Moving Average model. The comparisons demonstrate that the proposed model improves the prediction accuracy noticeably.
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2015-09-01
Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
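A minimal sketch (invented quantile values) of the scenario-generation step described above: fit an empirical CDF through the forecast quantiles and draw Monte Carlo scenarios by inverse-transform sampling:

```python
import numpy as np

rng = np.random.default_rng(6)
probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])     # nominal quantile levels
quants = np.array([12.0, 20.0, 26.0, 33.0, 45.0])    # forecast wind power (MW) at one hour

# Inverse ECDF by linear interpolation between the quantile points;
# draws outside [0.05, 0.95] clamp to the outer quantiles.
u = rng.random(1000)                                  # uniform draws
scenarios = np.interp(u, probs, quants)               # Monte Carlo wind power scenarios
print(scenarios.mean(), np.percentile(scenarios, [5, 95]))
# Each scenario would then enter the stochastic SCUC as one wind power realization.
```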
Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR
Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng
2017-06-01
The purpose of this study is to build a hypertension prediction model by examining the meteorological factors for hypertension incidence. The research method selects standardized data on relative humidity, air temperature, visibility, wind speed and air pressure in Lanzhou from 2010 to 2012 (calculating the maximum, minimum and average values over 5-day units) as the input variables of Support Vector Regression (SVR), and the standardized data on hypertension incidence over the same period as the output variables; the optimal prediction parameters are obtained by a cross-validation algorithm, and then, through SVR learning and training, an SVR forecast model for hypertension incidence is built. The result shows that the hypertension prediction model is composed of 15 input variables, the training accuracy is 0.005, and the final error is 0.0026389. The forecast accuracy based on the SVR model is 97.1429%, which is higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, and good historical sample fitting and independent sample forecast capability.
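A minimal sketch (synthetic data, hypothetical hyper-parameter grid) of fitting an SVR with cross-validation, mirroring the modelling steps described above:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 15))                      # 15 meteorological input variables
y = X[:, :3].sum(axis=1) + rng.normal(0, .2, 200)   # hypertension incidence (synthetic)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(model, {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1]}, cv=5)
grid.fit(X, y)                                      # cross-validation picks the parameters
print(grid.best_params_, grid.best_score_)
```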
Artificial intelligence based models for stream-flow forecasting: 2000-2015
Yaseen, Zaher Mundher; El-shafie, Ahmed; Jaafar, Othman; Afan, Haitham Abdulmohsin; Sayl, Khamis Naba
2015-11-01
The use of Artificial Intelligence (AI) has increased since the middle of the 20th century, as seen in its application to a wide range of engineering and science problems. The last two decades, for example, have seen a dramatic increase in the development and application of various types of AI approaches for stream-flow forecasting. Generally speaking, AI has exhibited significant progress in forecasting and modeling non-linear hydrological applications and in capturing the noise complexity in the dataset. This paper explores the state-of-the-art application of AI in stream-flow forecasting, focusing on defining data-driven AI methods, the advantages of complementary models, the existing literature, and their possible future application in modeling and forecasting stream-flow. The review also identifies the major challenges and opportunities for prospective research, including a new scheme for modeling inflow, a novel method for preprocessing time series frequency based on Fast Orthogonal Search (FOS) techniques, and Swarm Intelligence (SI) as an optimization approach.
Use of Vertically Integrated Ice in WRF-Based Forecasts of Lightning Threat
McCaul, E. W., jr.; Goodman, S. J.
2008-01-01
Previously reported methods of forecasting lightning threat using fields of graupel flux from WRF simulations are extended to include the simulated field of vertically integrated ice within storms. Although the ice integral shows less temporal variability than graupel flux, it provides more areal coverage, and can thus be used to create a lightning forecast that better matches the areal coverage of the lightning threat found in observations of flash extent density. A blended lightning forecast threat can be constructed that retains much of the desirable temporal sensitivity of the graupel flux method, while also incorporating the coverage benefits of the ice integral method. The graupel flux and ice integral fields contributing to the blended forecast are calibrated against observed lightning flash origin density data, based on Lightning Mapping Array observations from a series of case studies chosen to cover a wide range of flash rate conditions. Linear curve fits that pass through the origin are found to be statistically robust for the calibration procedures.
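A minimal sketch (invented numbers) of the calibration step named above: a least-squares linear fit constrained through the origin, mapping a simulated proxy to observed flash origin density:

```python
import numpy as np

proxy = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # simulated graupel flux values (illustrative)
flashes = np.array([1.1, 1.9, 4.2, 7.8, 16.5])  # observed flash origin density (illustrative)

# Through-origin least squares has the closed form sum(xy)/sum(x^2).
slope = (proxy * flashes).sum() / (proxy ** 2).sum()
threat = slope * proxy                           # calibrated lightning threat
print(f"calibration slope = {slope:.2f}")
```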
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models often fail to predict it accurately. On this basis, a hybrid method is proposed in this paper which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows: first, applying VMD to the original signal (the crude oil price), the mode functions are decomposed adaptively; second, independent components are separated by ICA, and how they affect the crude oil price is analyzed; finally, the crude oil price is forecast with the ARIMA model, and the forecast trend demonstrates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, and it faces several challenges, including the effect of the model's spatial resolution on its performance and accuracy. To cope with the resolution effect, different model resolutions (1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m and 200 m × 200 m) were each used to build the Liuxihe model, a physically based distributed hydrological model, in order to find the best resolution for large-watershed flood simulation and forecasting. This study sets up the model for flood forecasting of the Liujiang River basin in south China. Terrain data, i.e., the digital elevation model (DEM), soil type and land use type, are downloaded from the web freely. The model parameters are optimized by using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists when model parameters are derived physically. The best spatial resolution for flood simulation and forecasting is found to be 200 m × 200 m, and as the spatial resolution coarsens, model performance and accuracy worsen. At a resolution of 1000 m × 1000 m, the flood simulation and forecasting results are the worst, and the river channel network derived at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum model spatial resolution is needed. The suggested threshold spatial resolution for modeling floods in the Liujiang River basin is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
Cloud-based systems for monitoring earthquakes and other environmental quantities
Clayton, R. W.; Olson, M.; Liu, A.; Chandy, M.; Bunn, J.; Guy, R.
2013-12-01
There are many advantages to using a cloud-based system to record and analyze environmental quantities such as earthquakes, radiation, various gases, dust and meteorological parameters. These advantages include robustness, dynamic scalability, and reduced costs. In this paper, we present our experiences over the last three years in developing a cloud-based earthquake monitoring system (the Community Seismic Network). This network consists of over 600 sensors (accelerometers) in the Southern California region that send data directly to the Google App Engine, where they are analyzed. The system is capable of handling many other types of sensor data and of generating a situation-awareness analysis as a product. Other advantages of the cloud-based system are integration with other peer networks and the ability to deploy anywhere in the world without having to build additional computing infrastructure.
Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks
Directory of Open Access Journals (Sweden)
Zhisheng Zhang
2016-01-01
Full Text Available A short-term load forecasting model based on quantum Elman neural networks is constructed in this paper. Quantum computation and the Elman feedback mechanism are integrated into quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only the feedforward connection but also a feedback connection. The feedback connection between the hidden nodes and the context nodes is a state feedback within the internal system, which creates a specific dynamic memory. Phase space reconstruction theory is the theoretical basis for constructing the forecasting model, and the training samples are formed by means of a K-nearest neighbor approach. In the example simulation, the testing results show that the model based on quantum Elman neural networks is better than models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network, so the proposed model can effectively improve the prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of the short-term load forecasting model based on quantum Elman neural networks.
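A minimal sketch of the two data-preparation steps named above, with illustrative parameters: phase-space reconstruction by time-delay embedding, and K-nearest-neighbor selection of training samples:

```python
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding: row t is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

rng = np.random.default_rng(8)
load = np.sin(0.1 * np.arange(500)) + 0.1 * rng.normal(size=500)  # synthetic load series

X = embed(load, dim=3, tau=2)
query = X[-1]                                    # current reconstructed state
dists = np.linalg.norm(X[:-1] - query, axis=1)
knn = np.argsort(dists)[:20]                     # indices of the K most similar past states
# The states at knn (and their successors) would form the network's training samples.
```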
Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.
2012-04-01
In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), in which a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, using streamflow sub-samples by quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
International Nuclear Information System (INIS)
1996-01-01
The Nuclear Safety Commission received the report on the results of the examination from the ad hoc examination committee. There was no particular effect on atomic energy facilities from the Southern Hyogo Prefecture Earthquake; however, from the viewpoint of perfecting the safety confirmation for atomic energy facilities, the Nuclear Safety Commission set up the aseismatic safety examination committee to investigate the validity of the guidelines related to aseismatic design used for safety examination. The basic plan of the investigation, the outline of the guidelines related to aseismatic design, the state of the Southern Hyogo Prefecture Earthquake and the knowledge obtained, and the investigation of the validity of the guidelines related to aseismatic design based on the state of the Southern Hyogo Prefecture Earthquake are reported. The extraction of the items to be investigated, the evaluation of earthquakes and earthquake motion, vertical earthquake force and active faults, and the treatment of earthquakes occurring directly beneath sites in the guideline for aseismatic design examination are reported. It was confirmed that the validity of the guidelines is not impaired by the earthquake. (K.I.)
Psychological distress among Bam earthquake survivors in Iran: a population-based study
Directory of Open Access Journals (Sweden)
Garmaroudi Gholamreza
2005-01-01
Full Text Available Abstract Background An earthquake measuring 6.3 on the Richter scale struck the city of Bam in Iran on the 26th of December 2003 at 5.26 A.M. It was devastating, and left over 40,000 dead and around 30,000 injured. The profound tragedy of thousands killed has caused emotional and psychological trauma for tens of thousands of people who have survived. A study was carried out to assess psychological distress among Bam earthquake survivors and the factors associated with severe psychological distress in those who survived the tragedy. Methods This was a population-based study measuring psychological distress among the survivors of the Bam earthquake in Iran. Using a multi-stage stratified sampling method, a random sample of individuals aged 15 years and over living in Bam were interviewed. Psychological distress was measured using the 12-item General Health Questionnaire (GHQ-12). Results In all, 916 survivors were interviewed. The mean age of the respondents was 32.9 years (SD = 12.4); most were male (53%), married (66%) and had secondary school education (50%). Forty-one percent reported losing 3 to 5 members of their family in the earthquake. In addition, the findings showed that 58% of the respondents suffered from severe psychological distress as measured by the GHQ-12, which was three times higher than the psychological distress reported in the general population. There were significant differences between sub-groups of the study sample with regard to their psychological distress. The results of the logistic regression analysis also indicated that female gender, lower education, unemployment, and loss of family members were associated with severe psychological distress among earthquake victims. Conclusion The study findings indicated that the amount of psychological distress among earthquake survivors was high, and there is an urgent need to deliver mental health care to disaster victims in local medical settings and to reduce the negative health impacts of the earthquake.
Development of a remote sensing-based rice yield forecasting model
Energy Technology Data Exchange (ETDEWEB)
Mosleh, M.K.; Hassan, Q.K.; Chowdhury, E.H.
2016-11-01
This study aimed to develop a remote sensing-based method for forecasting rice yield by considering vegetation greenness conditions during the initial and peak greenness stages of the crop, implemented for "boro" rice in the Bangladeshi context. In this research, we used two Moderate Resolution Imaging Spectroradiometer (MODIS)-derived 16-day composites of normalized difference vegetation index (NDVI) images at 250 m spatial resolution, acquired during the initial (January 1 to January 16) and peak greenness (March 23/24 to April 6/7, depending on leap year) stages, in conjunction with secondary datasets (i.e., a boro suitability map and ground-based information) during the 2007-2012 period. The method consisted of two components: (i) developing a model for delineating the area under rice cultivation before harvesting; and (ii) forecasting rice yield as a function of NDVI. Our results demonstrated strong agreement between the model-based (i.e., MODIS-based) and ground-based area estimates during the 2010-2012 period: the coefficient of determination (R2) was between 0.93 and 0.95, the root mean square error (RMSE) between 30,519 and 37,451 ha, and the relative error (RE) within ±10%, at the level of the 23 districts. We also found good agreement between the forecasted (i.e., MODIS-based) and ground-based yields during the 2010-2012 period (R2 between 0.76 and 0.86; RMSE between 0.21 and 0.29 Mton/ha; and RE between -5.45% and 6.65%) at the 23 district levels. We believe that our developments in forecasting the boro rice yield will be useful for decision makers in addressing food security in Bangladesh. (Author)
Water availability forecasting for Naryn River using ground-based and satellite snow cover data
Directory of Open Access Journals (Sweden)
O. Y. Kalashnikova
2017-01-01
Full Text Available The main source of river nourishment in arid regions of Central Asia is the melting of seasonal snow accumulated in the mountains during the cold period. In this study, we analyzed data on seasonal snow cover from ground-based observations of the Kyrgyzhydromet network, as well as from MODIS satellite imagery, for the period 2000–2015. This information was used to compile forecast methods for the vegetation-period water availability of snow-ice and ice-snow fed rivers. The Naryn river basin, the main tributary of the Syrdarya River, which belongs to the Aral Sea basin, was chosen as the study area. Representative meteorological stations with ground-based observations of snow cover were identified, and a regression analysis was conducted between the mean discharge for the vegetation period and the number of snow-covered days and the maximum snow depth from in-situ data, as well as the snow-covered area from MODIS images. Based on this information, equations are derived for seasonal water availability forecasting using multiple linear regression analysis. The proposed equations have high correlation coefficients (R = 0.89–0.92) and good forecasting accuracy. The methodology was implemented at Kyrgyzhydromet and is used for forecasting water availability in the Naryn basin and water inflow into the Toktogul Reservoir.
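A minimal sketch (invented predictor and discharge values) of deriving a seasonal forecast equation by multiple linear regression on snow-cover predictors, as described above:

```python
import numpy as np

snow_days = np.array([110, 95, 130, 120, 100, 125, 90, 115])      # snow-covered days
max_depth = np.array([60, 45, 80, 70, 50, 75, 40, 65])            # max snow depth, cm
sca = np.array([0.55, 0.48, 0.70, 0.62, 0.50, 0.66, 0.45, 0.60])  # MODIS snow-covered area fraction
q = np.array([420, 360, 510, 470, 380, 490, 340, 450])            # mean vegetation-period discharge

A = np.column_stack([np.ones_like(q, dtype=float), snow_days, max_depth, sca])
coef, *_ = np.linalg.lstsq(A, q.astype(float), rcond=None)  # forecast equation coefficients
pred = A @ coef
r = np.corrcoef(pred, q)[0, 1]
print(f"multiple R = {r:.2f}")   # cf. the reported R = 0.89-0.92
```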
Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting
Jungbacker, B.M.J.P.; Koopman, S.J.
2015-01-01
We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to
International Nuclear Information System (INIS)
Zhang, Wenyu; Qu, Zongxi; Zhang, Kequan; Mao, Wenqian; Ma, Yining; Fan, Xu
2017-01-01
Highlights: • A CEEMDAN-CLSFPA combined model is proposed for short-term wind speed forecasting. • The CEEMDAN technique is used to decompose the original wind speed series. • A modified optimization algorithm, CLSFPA, is proposed to optimize the weights of the combined model. • The no negative constraint theory is applied to the combined model. • Robustness of the proposed model is validated with data sampled from four different wind farms. - Abstract: Wind energy, which is stochastic and intermittent by nature, has a significant influence on power system operation, power grid security and market economics. Precise and reliable wind speed prediction is vital for wind farm planning and operational planning for power grids. To improve wind speed forecasting accuracy, a large number of forecasting approaches have been proposed; however, these models typically do not account for the importance of data preprocessing and are limited by the use of individual models. In this paper, a novel combined model – combining complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), the flower pollination algorithm with chaotic local search (CLSFPA), five neural networks and no negative constraint theory (NNCT) – is proposed for short-term wind speed forecasting. First, the recent CEEMDAN technique is employed to divide the original wind speed data into a finite set of IMF components, and then a combined model, based on NNCT, is proposed for forecasting each decomposed signal. To improve the forecasting capacity of the combined model, a modified flower pollination algorithm (FPA) with chaotic local search (CLS) is proposed and employed to determine the optimal weight coefficients of the combined model, and the final prediction values are obtained by reconstructing the refined series. To evaluate the forecasting ability of the proposed combined model, 15-min wind speed data from four wind farms in the eastern coastal areas of China are used. The experimental results of
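A minimal sketch of the combination step with the no-negative-constraint idea: non-negative weights summing to one are found for several component forecasts, here by constrained least squares rather than by the paper's CLSFPA optimizer (data are synthetic):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
truth = rng.normal(8, 2, 200)                                   # observed wind speed
preds = truth + rng.normal(0, [[0.4], [0.6], [0.8]], (3, 200))  # three component forecasts

def mse(w):
    return np.mean((w @ preds - truth) ** 2)

res = minimize(mse, x0=np.full(3, 1 / 3),
               bounds=[(0, None)] * 3,                          # no negative weights
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("weights:", np.round(res.x, 3))
```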
Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel
International Nuclear Information System (INIS)
Zhang, Yao; Wang, Jianxue; Luo, Xu
2015-01-01
Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations with the increasing penetration of wind power generation. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on forecast accuracy, namely, the heavily skewed and double-bounded nature of wind power density. A logarithmic transformation is used to reduce the skewness of the wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation partially relieves the boundary effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that serious density leakage remains after the transformation. To solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on data from an actual wind farm. Then, a detailed comparison is carried out between the proposed method and some existing probabilistic forecasting methods
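A minimal sketch of the two ideas in this record: a log transformation to reduce skewness, and a boundary correction for the double-bounded wind power density. Reflection at the bounds is used here as a simple stand-in for the paper's boundary kernel; the data and the offset constant are synthetic assumptions.

```python
# Log-transformed KDE with reflection at the support bounds as a simple
# boundary correction. Data are synthetic, normalized power in [0, 1].
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
p = rng.beta(0.7, 3.0, 2000)                 # heavily skewed power sample

c = 1e-3                                      # small offset so log(0) is defined
z = np.log(p + c)                             # log transform reduces skewness
kde = gaussian_kde(z)

def density(grid):
    zg = np.log(grid + c)
    lo, hi = np.log(c), np.log(1 + c)
    # reflect mass that would leak past the bounds back into the support
    f = kde(zg) + kde(2 * lo - zg) + kde(2 * hi - zg)
    return f / (grid + c)                     # Jacobian of the log transform

print(density(np.linspace(0.0, 1.0, 11)))
```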
BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.
2013-09-01
The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren
2016-01-01
Enhancing sales and operations planning through forecasting analysis and business intelligence is in demand in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have been used successfully to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of the Autoregressive Integrated Moving Average (ARIMA) time series forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparison with conventional sales prediction techniques. The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words.
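A hedged sketch of the hybrid scheme: an ARIMA one-step prediction and an auxiliary popularity signal are fed, together with the lagged sales value, to a small back-propagation network. The library calls are standard statsmodels/scikit-learn; the data, lag structure and network size are illustrative assumptions, not the paper's configuration.

```python
# ARIMA + neural network hybrid: the network learns sales from its own lag,
# the ARIMA prediction, and a title-popularity index. All data are synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
sales = 100 + np.cumsum(rng.normal(0, 3, 120))           # monthly sales
popularity = rng.uniform(0, 1, 120)                       # title-popularity index

arima_fit = ARIMA(sales, order=(1, 1, 1)).fit()
arima_pred = arima_fit.predict(start=1, end=len(sales) - 1)

# features at t: previous sale, ARIMA one-step prediction, title popularity
X = np.column_stack([sales[:-1], arima_pred, popularity[1:]])
y = sales[1:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)
print("in-sample RMSE:", np.sqrt(np.mean((nn.predict(X) - y) ** 2)))
```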
Microscale cavitation as a mechanism for nucleating earthquakes at the base of the seismogenic zone.
Verberne, Berend A; Chen, Jianye; Niemeijer, André R; de Bresser, Johannes H P; Pennock, Gillian M; Drury, Martyn R; Spiers, Christopher J
2017-11-21
Major earthquakes frequently nucleate near the base of the seismogenic zone, close to the brittle-ductile transition. Fault zone rupture at greater depths is inhibited by ductile flow of rock. However, the microphysical mechanisms responsible for the transition from ductile flow to seismogenic brittle/frictional behaviour at shallower depths remain unclear. Here we show that the flow-to-friction transition in experimentally simulated calcite faults is characterized by a transition from dislocation and diffusion creep to dilatant deformation, involving incompletely accommodated grain boundary sliding. With increasing shear rate or decreasing temperature, dislocation and diffusion creep become too slow to accommodate the imposed shear strain rate, leading to intergranular cavitation, weakening, strain localization, and a switch from stable flow to runaway fault rupture. The observed shear instability, triggered by the onset of microscale cavitation, provides a key mechanism for bringing about the brittle-ductile transition and for nucleating earthquakes at the base of the seismogenic zone.
Earthquake stand-by instruction device for nuclear power plant
International Nuclear Information System (INIS)
Nakagawa, Masaki; Ijima, Tadashi
1998-01-01
The magnitude of earthquakes is forecast with high accuracy by deploying seismic detectors at a plurality of points distant from the target plant. The accuracy of judging the magnitude of an earthquake is improved by processing the seismic motions observed as time elapses. The measured seismic waveforms are always stored, even during processing. With this procedure, when one processing pass is completed, the next can be conducted using the stored data, so that processing draws on all the data accumulated since the onset of the earthquake. The seismic motion can thus be estimated from an early stage of the earthquake and, since the judgement is based on a growing amount of data over time, an appropriate stand-by instruction can be provided. (N.H.)
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.
2015-12-01
Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being
Directory of Open Access Journals (Sweden)
O. V. Russkov
2015-01-01
Full Text Available The article addresses the topical problem of forecasting electric power demand volumes and prices for wholesale electricity market (WEM) entities acting as large consumers, whose production technology requirements prevail over hourly energy planning. The electric power demand of such entities follows an irregular schedule. The article analyses the mathematical models currently applied to forecast demand volumes and prices, and describes the limits of time-series and fundamental models when hourly forecasting an irregular demand schedule of an electricity market entity. The features of electricity trading at the WEM are analysed in detail, and the factors that make the demand schedule of a metallurgical plant irregular are shown. The article proposes a method of qualitative forecasting of market price ratios as a tool to reduce dependence on the accuracy of forecasting an irregular demand schedule. It describes how the proposed method differs from similar ones considered in research studies and scholarly works. The correlation between price ratios and the relaxation of requirements on the forecast accuracy of electric power consumption is analysed, and the efficiency function of the forecast method is derived. Particular attention is paid to the mathematical model based on the qualitative forecast method; the main model parameters and the restrictions the electricity market imposes on them are shown. The model prototype is implemented as a programme module, and methods to assess the effectiveness of the proposed forecast model are examined. Positive test results of the model using data of JSC «Volzhsky Pipe Plant» are given. A conclusion is drawn about the possibility of reducing dependence on the forecast accuracy of an entity's irregular demand schedule at the WEM. An effective trading tool has been found for entities with irregular demand schedules at the WEM. The tool application allows minimizing cost
Using ensemble weather forecast in a risk based real time optimization of urban drainage systems
DEFF Research Database (Denmark)
Courdent, Vianney Augustin Thomas; Vezzaro, Luca; Mikkelsen, Peter Steen
2015-01-01
Global Real Time Control (RTC) of urban drainage systems is increasingly seen as a cost-effective solution for responding to increasing performance demands (e.g. reduction of combined sewer overflow, protection of sensitive areas such as bathing waters, etc.). The Dynamic Overflow Risk Assessment (DORA) strategy was developed to operate Urban Drainage Systems (UDS) so as to minimize the expected overflow risk by considering the water volume presently stored in the drainage network, the expected runoff volume based on a 2-hour radar forecast model and an estimated uncertainty of the runoff forecast. However, such a temporal horizon (1-2 hours) is relatively short when used for the operation of large storage facilities, which may require a few days to be emptied. This limits the performance of the optimization and control in reducing combined sewer overflow and in preparing for possible flooding. Based
Response of base-isolated nuclear structures to extreme earthquake shaking
International Nuclear Information System (INIS)
Kumar, Manish; Whittaker, Andrew S.; Constantinou, Michael C.
2015-01-01
Highlights: • Response-history analysis of nuclear structures base-isolated using lead–rubber bearings is performed. • Advanced numerical model of lead–rubber bearing is used to capture behavior under extreme earthquake shaking. • Results of response-history analysis obtained using simplified and advanced model of lead–rubber bearings are compared. • Heating of the lead core and variation in buckling load and axial stiffness affect the response. - Abstract: Seismic isolation using low damping rubber and lead–rubber bearings is a viable strategy for mitigating the effects of extreme earthquake shaking on safety-related nuclear structures. The mechanical properties of these bearings are not expected to change substantially in design basis shaking. However, under shaking more intense than design basis, the properties of the lead cores in lead–rubber bearings may degrade due to heating associated with energy dissipation, some bearings in an isolation system may experience net tension, and the compression and tension stiffness may be affected by the lateral displacement of the isolation system. The effects of intra-earthquake changes in mechanical properties on the response of base-isolated nuclear power plants (NPPs) are investigated using an advanced numerical model of a lead–rubber bearing that has been verified and validated, and implemented in OpenSees. A macro-model is used for response-history analysis of base-isolated NPPs. Ground motions are selected and scaled to be consistent with response spectra for design basis and beyond design basis earthquake shaking at the site of the Diablo Canyon Nuclear Generating Station. Ten isolation systems of two periods and five characteristic strengths are analyzed. The responses obtained using simplified and advanced isolator models are compared. Strength degradation due to heating of lead cores and changes in buckling load most significantly affect the response of the base-isolated NPP.
Directory of Open Access Journals (Sweden)
Yunxuan Dong
2017-04-01
Full Text Available The modernization of the smart grid markedly increases the complexity and uncertainty in the scheduling and operation of power systems and, in developing a more reliable, flexible, efficient and resilient grid, electrical load forecasting remains a key yet difficult and challenging task. In this paper, a short-term electrical load forecasting model with a feature-learning unit named Pyramid System and recurrent neural networks has been developed; it can effectively promote the stability and security of the power grid. Nine types of feature-learning methods are compared in this work to select the best one for the learning target, and two criteria are employed to evaluate the accuracy of the prediction intervals. Furthermore, an electrical load forecasting method based on recurrent neural networks has been formed to capture the relational structure of the historical data; specifically, the proposed techniques are applied to electrical load forecasting using data collected from New South Wales, Australia. The simulation results show that the proposed hybrid models can not only satisfactorily approximate the actual values but can also serve as effective tools in the planning of smart grids.
Investigating market efficiency through a forecasting model based on differential equations
de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco
2017-05-01
A new differential-equation-based model for stock price trend forecasting is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions under which accuracy can be enhanced are investigated, and the application of the model as part of a trading strategy is discussed.
Simple nuclear norm based algorithms for imputing missing data and forecasting in time series
Butcher, Holly Louise; Gillard, Jonathan William
2017-01-01
There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
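A minimal sketch of an alternating-projections scheme of the kind this record describes: embed the series in a Hankel matrix, then alternate between soft-thresholding the singular values (the nuclear norm proximal step) and restoring the observed entries. The window size, threshold and iteration count are illustrative assumptions, not the authors' algorithm settings.

```python
# Nuclear-norm style imputation of a time series via its Hankel embedding.
import numpy as np

def hankel(x, rows):
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

rng = np.random.default_rng(3)
series = np.sin(np.arange(80) * 0.3)
mask = rng.uniform(size=80) > 0.2            # True = observed (~20% missing)
x = np.where(mask, series, 0.0)

for _ in range(100):
    H = hankel(x, rows=20)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H = U @ np.diag(np.maximum(s - 1.0, 0.0)) @ Vt       # singular value soft threshold
    # average anti-diagonals back into a series, then restore observed samples
    est = np.array([np.mean(H[::-1, :].diagonal(k))
                    for k in range(-H.shape[0] + 1, H.shape[1])])
    x = np.where(mask, series, est)

print("max error on missing points:", np.abs((x - series)[~mask]).max())
```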
Workplace Electric Vehicle Solar Smart Charging based on Solar Irradiance Forecasting
Almquist, Isabelle; Lindblom, Ellen; Birging, Alfred
2017-01-01
The purpose of this bachelor thesis is to investigate different outcomes of the usage of photovoltaic (PV) power for electric vehicle (EV) charging adjacent to workplaces. In the investigated case, EV charging stations are assumed to be connected to photovoltaic systems as well as the electricity grid. The model used to simulate different scenarios is based on a goal of achieving constant power exchange with the grid by adjusting EV charging to a solar irradiance forecast. The model is implem...
Research on classified real-time flood forecasting framework based on K-means cluster and rough set.
Xu, Wei; Peng, Yong
2015-01-01
This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.
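A minimal sketch of the classification idea: historical flood events are clustered on precipitation descriptors, and each cluster carries its own calibrated parameter set, selected when a new event arrives. The features and parameter values are hypothetical; the paper additionally extracts identification rules with a rough set and calibrates the conceptual model with a genetic algorithm.

```python
# K-means classification of flood events with per-class model parameters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# per-event features: total precip (mm), peak intensity (mm/h), storm duration (h)
events = rng.uniform([40, 5, 4], [220, 60, 48], size=(60, 3))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(events)
# one calibrated parameter set per class (placeholder values)
params_by_class = {0: {"cn": 72}, 1: {"cn": 81}, 2: {"cn": 90}}

incoming = np.array([[150.0, 35.0, 12.0]])     # features of the new event
cls = int(km.predict(incoming)[0])
print("use parameter set:", params_by_class[cls])
```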
Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources
Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.
2017-09-01
We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures has unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is greatest. The basis of a physics-based approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper, and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction.
Fuzzy rule-based forecast of meteorological drought in western Niger
Abdourahamane, Zakari Seybou; Acar, Reşat
2018-01-01
Understanding the causes of rainfall anomalies in the West African Sahel in order to effectively predict drought events remains a challenge. The physical mechanisms that influence precipitation in this region are complex, uncertain, and imprecise in nature. Fuzzy logic techniques are renowned for being highly efficient in modeling such dynamics. This paper attempts to forecast meteorological drought in western Niger using fuzzy rule-based modeling techniques. The 3-month scale standardized precipitation index (SPI-3) of four rainfall stations was used as predictand. Monthly data of the southern oscillation index (SOI), South Atlantic sea surface temperature (SST), relative humidity (RH), and Atlantic sea level pressure (SLP), sourced from the National Oceanic and Atmospheric Administration (NOAA), were used as predictors. Fuzzy rules and membership functions were generated using a fuzzy c-means clustering approach, expert decision, and literature review. For a minimum lead time of 1 month, the model has a coefficient of determination R² between 0.80 and 0.88, mean square error (MSE) below 0.17, and Nash-Sutcliffe efficiency (NSE) ranging between 0.79 and 0.87. The empirical frequency distributions of the predicted and the observed drought classes are equal at the 99% confidence level based on a two-sample t test. Results also revealed a discrepancy in the influence of SOI and SLP on drought occurrence at the four stations, while the effects of SST and RH are space independent, both being significantly correlated with the predictand. The fuzzy rule-based forecast model shows better forecast skills.
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm based on a recursive Kalman filter that employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load is kept within a predefined range and channel congestion is thereby prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey collecting floating car data along a major traffic road in Changchun City was employed. By comparing the forecasts with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of an eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network.
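A minimal sketch of the forecasting core: a scalar recursive Kalman filter with a random-walk state model tracking channel load, whose one-step-ahead prediction would drive the power adjustment. The noise variances and the synthetic load sequence are assumptions; the paper's filter additionally incorporates a multiple regression equation over traffic factors.

```python
# Scalar Kalman filter producing one-step-ahead channel load forecasts.
import numpy as np

def kalman_forecast(measurements, q=1e-3, r=1e-2):
    x, p = measurements[0], 1.0          # state estimate and its variance
    preds = []
    for z in measurements[1:]:
        x_pred, p_pred = x, p + q        # random-walk state model
        preds.append(x_pred)             # one-step-ahead load forecast
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (z - x_pred)    # measurement update
        p = (1 - k) * p_pred
    return np.array(preds)

load = 0.5 + 0.1 * np.sin(np.linspace(0, 8, 200)) \
       + np.random.default_rng(5).normal(0, 0.03, 200)
pred = kalman_forecast(load)
print("mean abs forecast error:", np.mean(np.abs(pred - load[1:])))
```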
Directory of Open Access Journals (Sweden)
Yu Fu
2018-02-01
Full Text Available Recently, bio-inspired artificial muscles based on ionic polymers have shown a bright perspective in engineering and medical research, but their inherent tremor behavior can cause instability of the output response. In this paper, dynamic additional grey catastrophe prediction (DAGCP) is proposed to forecast the occurrence time of tremor behavior, providing adequate preparation time for the suppression of the chitosan-based artificial muscles. DAGCP constructs time-subsequence models of various dimensions under different starting points, based on the threshold of tremor occurrence times and peak-to-peak values in unit time. Next, the appropriate subsequence is selected according to grey correlation degree and prediction accuracy, and it is then updated with the newly generated values to achieve a real-time forecast of the forthcoming tremor time. Compared with conventional grey catastrophe prediction (GCP), the proposed method has the following advantages: (1) the degradation of prediction accuracy caused by the immobilization of the original parameters is prevented; (2) dynamic input, real-time updating and gradual forecasting of the time sequence are incorporated into the model. The experiment results show that the novel DAGCP can predict the forthcoming tremor time earlier and more accurately than the conventional GCP. The generation mechanism of the tremor behavior is illustrated as well.
Directory of Open Access Journals (Sweden)
Ning-bo Zhao
2014-01-01
Full Text Available Performance degradation forecasting for quantitatively assessing the degradation state of an aeroengine using exhaust gas temperature is an important technology in aeroengine health management. In this paper, a GM(1,1) Markov chain-based approach is introduced to forecast exhaust gas temperature, taking advantage of the GM(1,1) model for time series and of the Markov chain model for handling highly nonlinear and stochastic data caused by uncertain factors. In this approach, the GM(1,1) model is first used to forecast the trend from limited data samples. Then, the Markov chain model is integrated into the GM(1,1) model to enhance the forecast performance, which resolves the influence of randomly fluctuating data on forecasting accuracy and achieves an accurate estimate of the nonlinear forecast. As an example, historical monitoring data of exhaust gas temperature from a CFM56 aeroengine of China Southern are used to verify the forecast performance of the GM(1,1) Markov chain model. The results show that the GM(1,1) Markov chain model is able to forecast exhaust gas temperature accurately and can effectively reflect the random fluctuation characteristics of exhaust gas temperature changes over time.
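A minimal sketch of the GM(1,1) step this record describes (the Markov-chain correction of residuals is omitted for brevity). The exhaust gas temperature series is a hypothetical example.

```python
# GM(1,1) grey forecast of a short exhaust gas temperature (EGT) series.
import numpy as np

def gm11_forecast(x0, horizon=1):
    x1 = np.cumsum(x0)                                # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development/grey coefficients
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # accumulated-series response
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
    return x0_hat[-horizon:]

egt = np.array([612.0, 615.5, 619.2, 624.0, 628.8, 634.1])  # hypothetical EGT (deg C)
print("next-step EGT forecast:", gm11_forecast(egt, horizon=1))
```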
Singh, Sanjeev Kumar; Prasad, V. S.
2018-02-01
This paper presents a systematic investigation of medium-range rainfall forecasts from two versions of the National Centre for Medium Range Weather Forecasting (NCMRWF) Global Forecast System, based on a three-dimensional variational (3D-Var) and a hybrid analysis system, namely NGFS and HNGFS, respectively, during the Indian summer monsoon (June-September) of 2015. The NGFS uses the gridpoint statistical interpolation (GSI) 3D-Var data assimilation system, whereas the HNGFS uses a hybrid 3D ensemble-variational scheme. The analysis includes the evaluation of rainfall fields and comparisons of rainfall using statistical scores such as mean precipitation, bias, correlation coefficient, root mean square error and forecast improvement factor. In addition, categorical scores such as the Peirce skill score and bias score are computed to describe particular aspects of forecast performance. The comparison of mean precipitation reveals that both versions of the model produce similar large-scale features of Indian summer monsoon rainfall for day-1 through day-5 forecasts. The inclusion of a fully flow-dependent background error covariance significantly reduced the wet biases of the HNGFS over the Indian Ocean. The forecast improvement factor and Peirce skill score of the HNGFS are also found to be better than those of the NGFS for day-1 through day-5 forecasts.
Forecasting method in multilateration accuracy based on laser tracker measurement
International Nuclear Information System (INIS)
Aguado, Sergio; Santolaria, Jorge; Samper, David; José Aguilar, Juan
2017-01-01
Multilateration based on a laser tracker (LT) requires the measurement of a set of points from three or more positions. Although the LT's angular information is not used, multilateration still produces a volume of measurement uncertainty. This paper presents two new coefficients from which to determine, before performing the necessary measurements, whether the measurement of a set of points will improve or worsen the accuracy of the multilateration results, avoiding unnecessary measurements and reducing the time and economic cost required. The first, the specific measurement coefficient (MC_LT), is unique for each laser tracker and determines the relationship between the radial and angular laser tracker measurement noise. The second coefficient, β, is related to the specific conditions of measurement: the spatial angle α between the laser tracker positions and its effect on error reduction. Both parameters, MC_LT and β, are linked in the error reduction limits. Besides these, a new methodology is presented to determine the multilateration error-reduction limit for an ideal laser tracker distribution and for a random one. It provides general rules and advice from synthetic tests, validated through a real test carried out on a coordinate measuring machine. (paper)
Directory of Open Access Journals (Sweden)
Y. Dzierma
2010-10-01
Full Text Available A probabilistic eruption forecast is provided for ten volcanoes of the Chilean Southern Volcanic Zone (SVZ). Since 70% of the Chilean population lives in this area, the estimation of future eruption likelihood is an important part of hazard assessment. After investigating the completeness and stationarity of the historical eruption time series, the exponential, Weibull, and log-logistic distribution functions are fit to the repose time distributions for the individual volcanoes and the models are evaluated. This procedure has been implemented in two different ways to methodologically compare details in the fitting process. With regard to the probability of at least one VEI ≥ 2 eruption in the next decade, Llaima, Villarrica and Nevados de Chillán are most likely to erupt, while Osorno shows the lowest eruption probability among the volcanoes analysed. In addition to compiling statistical eruption forecasts for the historically most active volcanoes of the SVZ, this paper aims to give "typical" eruption probabilities, which may in the future permit the identification of possibly enhanced activity in the aftermath of the large 2010 Concepción earthquake.
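A minimal sketch of the forecasting step: fit candidate distributions to repose times and read the probability of at least one eruption within the next decade from the fitted CDF at 10 years. The repose-time values are hypothetical, and only two of the record's three candidate distributions are shown.

```python
# Fit repose-time distributions and evaluate a 10-year eruption probability.
import numpy as np
from scipy.stats import weibull_min, expon

reposes = np.array([3.1, 5.7, 2.4, 8.9, 4.2, 6.6, 1.8, 12.3, 5.0, 7.4])  # years

for name, dist in [("weibull", weibull_min), ("exponential", expon)]:
    params = dist.fit(reposes, floc=0)     # fix location at 0; fit shape/scale
    p10 = dist.cdf(10.0, *params)          # P(repose time <= 10 yr)
    print(f"{name}: P(eruption within 10 yr) = {p10:.2f}")
```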
Interseismic Coupling-Based Earthquake and Tsunami Scenarios for the Nankai Trough
Baranes, H.; Woodruff, J. D.; Loveless, J. P.; Hyodo, M.
2018-04-01
Theoretical modeling and investigations of recent subduction zone earthquakes show that geodetic estimates of interseismic coupling and the spatial distribution of coseismic rupture are correlated. However, the utility of contemporary coupling in guiding construction of rupture scenarios has not been evaluated on the world's most hazardous faults. Here we demonstrate methods for scaling coupling to slip to create rupture models for southwestern Japan's Nankai Trough. Results show that coupling-based models produce distributions of ground surface deformation and tsunami inundation that are similar to historical and geologic records of the largest known Nankai earthquake in CE 1707 and to an independent, quasi-dynamic rupture model. Notably, these models and records all support focused subsidence around western Shikoku that makes the region particularly vulnerable to flooding. Results imply that contemporary coupling mirrors the slip distribution of a full-margin, 1707-type rupture, and Global Positioning System measurements of surface motion are connected with the trough's physical characteristics.
Asencio-Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez-Álvarez, F.
2018-06-01
Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results. In recent years, powerful computational techniques to analyze big data have emerged, making the analysis of massive datasets possible. These new methods make use of physical resources such as cloud-based architectures. California is known as one of the regions with the highest seismic activity in the world, and many data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in the context of big data (a 1 GB catalog is used) in order to predict earthquake magnitude within the next seven days. The Apache Spark framework, the H2O library in the R language and Amazon cloud infrastructure were used, with very promising results.
International Nuclear Information System (INIS)
Cho, Sung Gook; Joe, Yang Hee
2005-01-01
By nature, seismic fragility analysis results are considerably affected by the statistical data of the design information and site-dependent ground motions. The engineering characteristics of small magnitude earthquake spectra recorded in the Korean peninsula during the last several years are analyzed in this paper. An improved method of seismic fragility analysis is evaluated by comparative analyses to verify its efficiency for practical application to nuclear power plant structures. The effects of the recorded earthquakes on the seismic fragilities of Korean nuclear power plant structures are also evaluated in the comparative studies. The results show that the proposed method is more efficient for multi-mode structures. The case study results show that seismic fragility analysis based on Newmark's spectra in Korea might over-estimate the seismic capacities of Korean facilities.
A forecast-based STDP rule suitable for neuromorphic implementation.
Davies, S; Galluppi, F; Rast, A D; Furber, S B
2012-08-01
Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP) which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike) relates the membrane potential with the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements here presented are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that
Sakellariou, J. S.; Fassois, S. D.
2006-11-01
A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.
Comparison of SISEC code simulations with earthquake data of ordinary and base-isolated buildings
International Nuclear Information System (INIS)
Wang, C.Y.; Gvildys, J.
1991-01-01
At Argonne National Laboratory (ANL), a 3-D computer program, SISEC (Seismic Isolation System Evaluation Code), is being developed for simulating the system response of isolated and ordinary structures (Wang et al. 1991). This paper describes a comparison of SISEC code simulations with building response data from actual earthquakes. To ensure the accuracy of the analytical simulations, recorded data from full-size reinforced concrete structures located in Sendai, Japan are used in this benchmark comparison. The test structures consist of two three-story buildings, one base-isolated and the other conventionally founded. They were constructed side by side to investigate the effect of base isolation on the acceleration response. Among the 20 earthquakes observed since April 1989, complete records of three representative earthquakes, no. 2, no. 6, and no. 17, are used for the code validation presented in this paper. Correlations of observed and calculated accelerations at all instrument locations are made, and the relative response characteristics of the ordinary and isolated building structures are investigated. (J.P.N.)
Full base isolation for earthquake protection by helical springs and viscodampers
International Nuclear Information System (INIS)
Hueffmann, G.K.
1985-01-01
GERB, a company specializing in vibration isolation, has developed a new system for the three-dimensional earthquake protection of whole structures, based on helical springs with comparable linear flexibility in all three dimensions and on velocity-proportional viscodampers that are likewise highly effective in all degrees of freedom. This system has already been used successfully for quite a long time for the installation of large diesel and turbo generators in seismic zones, where earthquake protection has been combined with conventional vibration control concepts. Tests on the shaking table of the Earthquake Research Institute in Skopje, Yugoslavia, with a model of a five-storey steel-frame building, comparing a fixed-base and a spring/viscodamper-supported installation, have shown high stress relief in the structure at limited amplitudes. This system gives not only more protection for buildings and the people inside; the extra cost is offset by savings in the structure. Some unique advantages of this system are: no creep, deterioration or fatigue over time; easy inspection; simple replacement of elements if necessary; simple modification of the system, for example in case of load changes; static uncoupling from the subfoundation (independence of settlements); and low influence of travelling-wave effects. (orig.)
The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network
Chen, M.; Wang, X.; Dou, A.; Wu, X.
2018-04-01
The seismic damage information of buildings extracted from remote sensing (RS) imagery is meaningful for supporting relief and the effective reduction of losses caused by earthquakes. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information: the pixel-based method cannot make full use of the contextual information of objects, while the object-oriented method faces the problems that image segmentation is not ideal and the choice of feature space is difficult. In this paper, a new strategy is proposed which combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea of this method comprises two steps: first, the CNN is used to predict the damage probability of each pixel, and then the probabilities are integrated within each segmentation region. The method is tested by extracting collapsed and uncollapsed buildings from an aerial image acquired over Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province earthquake. The results show that the proposed method is effective in extracting building damage information after an earthquake.
Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu
2018-02-01
Traditional power forecasting models can neither efficiently take various factors into account nor identify the relevant factors. In this paper, mutual information from information theory and the artificial-intelligence random forests algorithm are introduced into medium- and long-term electricity demand prediction. Mutual information can identify highly related factors based on the average mutual information between a variety of variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm is used to build separate forecasting models for the different industries according to their correlated factors. Electricity consumption data from Jiangsu Province are taken as a practical example, and the above methods are compared with methods that disregard mutual information and industry grouping. The simulation results show that the above method is scientific, effective, and provides higher prediction accuracy.
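A minimal sketch of the two-stage scheme: rank candidate drivers by mutual information with demand, then fit a random forest on the top-ranked drivers. The driver names, data and the median cutoff are hypothetical assumptions.

```python
# Mutual-information driver selection followed by a random forest model.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n = 120                                              # months of history
drivers = {
    "gdp_index": rng.normal(100, 10, n),
    "temperature": rng.normal(16, 8, n),
    "industrial_output": rng.normal(50, 5, n),
}
X = np.column_stack(list(drivers.values()))
demand = 2.0 * drivers["gdp_index"] + 0.5 * drivers["temperature"] \
         + rng.normal(0, 5, n)

mi = mutual_info_regression(X, demand, random_state=0)
keep = mi >= np.median(mi)                           # keep high-MI drivers
print(dict(zip(drivers, mi.round(2))), "->", keep)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:, keep], demand)
print("in-sample R^2:", rf.score(X[:, keep], demand))
```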
Prediction of the area affected by earthquake-induced landsliding based on seismological parameters
Marc, Odin; Meunier, Patrick; Hovius, Niels
2017-07-01
We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.
A neutral network based technique for short-term forecasting of anomalous load periods
Energy Technology Data Exchange (ETDEWEB)
Sforna, M [ENEL S.p.A., Italian Power Company (Italy)]; Lamedica, R; Prudenzi, A [Rome Univ. 'La Sapienza', Rome (Italy)]; Caciotta, M; Orsolini Cencelli, V [Rome Univ. III, Rome (Italy)]
1995-01-01
The paper illustrates part of the research activity conducted by the authors in the field of electric Short Term Load Forecasting (STLF) based on Artificial Neural Network (ANN) architectures. Previous experience with basic ANN architectures has shown that, even though these architectures provide results comparable with those obtained by human operators for most normal days, they show some accuracy deficiencies when applied to 'anomalous' load conditions occurring during holidays and long weekends. For these periods a specific procedure based upon a combined (unsupervised/supervised) approach has been proposed. The unsupervised stage provides a preventive classification of the historical load data by means of a Kohonen's Self Organizing Map (SOM). The supervised stage, performing the proper forecasting activity, is obtained by using a multi-layer perceptron with a back-propagation learning algorithm similar to the ones mentioned above. The unconventional use of information deriving from the classification stage permits the proposed procedure to obtain a relevant enhancement of forecast accuracy for anomalous load situations.
Medium-term electric power demand forecasting based on economic-electricity transmission model
Li, Wenfeng; Bao, Fangmin; Bai, Hongkun; Liu, Wei; Liu, Yongmin; Mao, Yubin; Wang, Jiangbo; Liu, Junhui
2018-06-01
Electricity demand forecasting is basic work for ensuring the safe operation of a power system. Based on the theories of experimental economics and econometrics, this paper introduces the Prognoz Platform 7.2 intelligent adaptive modeling platform and constructs an economic-electricity transmission model (EETM) that considers economic development scenarios and the dynamic adjustment of industrial structure to predict the region's annual electricity demand, realizing accurate prediction of the whole society's electricity consumption. Firstly, based on the theories of experimental economics and econometrics, the paper seeks the economic indicator variables that most strongly drive the growth of electricity consumption, and builds an annual regional macroeconomic forecast model that takes into account the dynamic adjustment of industrial structure. Secondly, it puts forward the economic-electricity directed conduction theory and constructs an economic-power transfer function to realize grouped forecasts of electricity consumption for the primary industry plus rural residential use, urban residential use, the secondary industry, and the tertiary industry. By comparison with the actual economic and electricity values of Henan Province in 2016, the validity of the EETM model is demonstrated, and the electricity consumption of the whole province for 2017-2018 is finally predicted.
Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data
Directory of Open Access Journals (Sweden)
Wei-Kuang Lai
2016-02-01
Full Text Available Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow in accordance with the amounts of HOs and NLUs and to estimate the traffic density in accordance with the amounts of CAs and PLUs. Then, the vehicle speeds can be estimated in accordance with the estimated traffic flows and estimated traffic densities. For vehicle speed forecasting, a back-propagation neural network algorithm is considered to predict the future vehicle speed in accordance with the current traffic information (i.e., the estimated vehicle speeds from CFVD). In the experimental environment, this study adopted the practical traffic information (i.e., traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program, and referred to the mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results illustrate that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
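A minimal sketch of the two stages: speed estimated from the fundamental relation speed = flow/density (flow and density would themselves come from HO/NLU and CA/PLU counts), and a small back-propagation network forecasting the next speed from recent estimates. The synthetic series, lag depth and network size are assumptions.

```python
# Speed estimation from flow/density plus a neural-network speed forecast.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
flow = 900 + 200 * np.sin(np.linspace(0, 12, 300)) + rng.normal(0, 30, 300)      # veh/h
density = 15 + 5 * np.sin(np.linspace(0.5, 12.5, 300)) + rng.normal(0, 1, 300)   # veh/km
speed = flow / density                               # km/h, fundamental relation

lags = 4                                             # predict from the last 4 estimates
X = np.column_stack([speed[i:len(speed) - lags + i] for i in range(lags)])
y = speed[lags:]
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)
acc = 100 * (1 - np.mean(np.abs(nn.predict(X) - y) / y))
print(f"mean forecast accuracy: {acc:.1f}%")
```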
Balavalikar, Supreetha; Nayak, Prabhakar; Shenoy, Narayan; Nayak, Krishnamurthy
2018-04-01
The decline in groundwater is a global problem due to increases in population and industry, and environmental factors such as rising temperature, decreasing overall rainfall, and loss of forests. In Udupi district, India, the water supply for drinking and agriculture depends entirely on the River Swarna. Since the water storage in the Bajae dam is declining day by day, the people of Udupi district, under immense pressure from the scarcity of drinking water, increasingly depend on groundwater. As groundwater is heavily used for drinking and agricultural purposes, its water table is declining. Groundwater resources must therefore be identified and preserved for human survival. This research proposes a data-driven approach for forecasting the groundwater level. The monthly variations in groundwater level and rainfall in three observation wells located in Brahmavar, Kundapur and Hebri were investigated, and scenarios were examined for 2000-2013. The focus of this work is to develop an ANN-based groundwater level forecasting model and compare it with a hybrid ANN-PSO forecasting model. The model parameters are tested using different combinations of the data. The results reveal that the hybrid PSO-ANN model gives better prediction accuracy than ANN alone.
Prototypes of risk-based flood forecasting systems in the Netherlands and Italy
Directory of Open Access Journals (Sweden)
Bachmann D.
2016-01-01
Flood forecasting, warning and emergency response are important components of flood management. Currently, the model-based prediction of discharge and/or water level in a river is common practice in operational flood forecasting, and decisions about specific emergency measures are made based on these predicted values. However, the information provided for decision support is often restricted to purely hydrological or hydraulic aspects of a flood. Information about weak sections within the flood defences, flood-prone areas and assets at risk in the protected areas is rarely used in current early warning and response systems. This information is often available for strategic planning, but not in an appropriate format for operational purposes. This paper presents the extension of existing flood forecasting systems with elements of strategic flood risk analysis, such as probabilistic failure analysis, two-dimensional flood spreading simulation and the analysis of flood impacts and consequences. It presents the first results from two prototype applications of the newly developed concept: the first prototype is applied to the Rotterdam area in the western part of the Netherlands; the second pilot study focuses on a rural area between the cities of Mantua and Ferrara along the Po river (Italy).
A Novel Clustering Model Based on Set Pair Analysis for the Energy Consumption Forecast in China
Directory of Open Access Journals (Sweden)
Mingwu Wang
2014-01-01
The energy consumption forecast is important for the decision-making of national economic and energy policies. But it is a complex and uncertain system problem affected by the external environment and various uncertainty factors. Herein, a novel clustering model based on set pair analysis (SPA) was introduced to analyze and predict energy consumption. The annual dynamic relative indicator (DRI) of historical energy consumption was adopted to conduct a cluster analysis with Fisher's optimal partition method. Combined with indicator weights, group centroids of the DRIs for influence factors were transformed into aggregated connection numbers in order to interpret uncertainty by identity-discrepancy-contrary (IDC) analysis. Moreover, a forecasting model based on similarity to the group centroid was discussed to forecast the energy consumption of a given year on the basis of measured values of influence factors. Finally, a case study predicting China's future energy consumption, together with a comparison against the grey method, was conducted to confirm the reliability and validity of the model. The results indicate that the method presented here is feasible and easy to use, and can interpret the certainty and uncertainty of the development speed of energy consumption and its influence factors as a whole.
A train dispatching model based on fuzzy passenger demand forecasting during holidays
Directory of Open Access Journals (Sweden)
Fei Dou Dou
2013-03-01
Purpose: Train dispatching is a crucial issue in train operation adjustment when passenger flow surges. During holidays, train dispatching must meet passenger demand to the greatest extent while ensuring safety, speed and punctuality of train operation. In this paper, a fuzzy passenger demand forecasting model is put forward, and a train dispatching optimization model is then established based on passenger demand so as to evacuate stranded passengers effectively during holidays. Design/methodology/approach: First, the complex features and regularity of passenger flow during holidays are analyzed, and a fuzzy passenger demand forecasting model is put forward based on fuzzy set theory and time series theory. Next, the bi-objective train dispatching optimization model minimizes the total operation cost of train dispatching and the unserved passenger volume during holidays. Finally, the validity of this model is illustrated with a case concerning the Beijing-Shanghai high-speed railway in China. Findings: The case study shows that the fuzzy passenger demand forecasting model predicts outcomes more precisely than the ARIMA model, and the resulting train dispatching optimization plan shows that a small number of trains can serve otherwise unserved passengers reasonably and effectively. Originality/value: On the basis of the predicted passenger demand values, the train dispatching optimization model is established, which enables train dispatching to meet passenger demand when passenger flow surges, so as to satisfy passenger demand to the greatest extent by offering the optimal operation plan.
Ding, Chuan; Wang, Kaihong; Huang, Xiaoying
2014-01-01
In a distribution channel, channel members are not always self-interested, but can be altruistic under some conditions. Based on this assumption, this paper adopts a behavior game method to analyze and forecast channel members' decision behavior under result-fairness and reciprocal-fairness preferences, by embedding fairness preference theory in the study of channel coordination. The behavior game forecasts that a channel can achieve coordination if channel members consider behavior elemen...
A new Bayesian Inference-based Phase Associator for Earthquake Early Warning
Meier, Men-Andrin; Heaton, Thomas; Clinton, John; Wiemer, Stefan
2013-04-01
State-of-the-art network-based Earthquake Early Warning (EEW) systems can provide warnings for large magnitude 7+ earthquakes. Although regions in the direct vicinity of the epicenter will not receive warnings prior to damaging shaking, real-time event characterization is available before the destructive S-wave arrival across much of the strongly affected region. In contrast, for the more frequent medium-size events, such as the devastating 1994 Mw6.7 Northridge, California, earthquake, providing timely warning to the smaller damage zone is more difficult. For such events the "blind zone" of current systems (e.g., the CISN ShakeAlert system in California) is similar in size to the area over which severe damage occurs. We propose a faster and more robust Bayesian inference-based event associator which, in contrast to current standard associators (e.g., Earthworm Binder), is tailored to EEW and exploits information beyond phase arrival times alone. In particular, the associator potentially allows reliable automated event association with as few as two observations, which, compared to the ShakeAlert system, would speed up the real-time characterizations by about ten seconds and thus reduce the blind zone area by up to 80%. We compile an extensive data set of regional and teleseismic earthquake and noise waveforms spanning a wide range of earthquake magnitudes and tectonic regimes. We pass these waveforms through a causal real-time filterbank with passband filters between 0.1 and 50 Hz and, updating every second from the event detection, extract the maximum amplitudes in each frequency band. Using this dataset, we define distributions of amplitude maxima in each passband as a function of epicentral distance and magnitude. For the real-time data, we pass incoming broadband and strong motion waveforms through the same filterbank and extract an evolving set of maximum amplitudes in each passband. We use the maximum amplitude distributions to check
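A minimal sketch of the filterbank feature extraction follows, on synthetic data and with assumed band edges and a fourth-order Butterworth design rather than the authors' exact configuration.

```python
# Causal band-pass filterbank: each incoming waveform is filtered into several bands
# between 0.1 and 50 Hz and the maximum absolute amplitude per band is extracted,
# mimicking the real-time feature described in the abstract. Edges are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def filterbank_maxima(waveform, fs=200.0, edges=(0.1, 0.39, 1.56, 6.25, 25.0, 50.0)):
    maxima = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        maxima.append(np.max(np.abs(sosfilt(sos, waveform))))  # causal, as in real time
    return np.array(maxima)

x = np.random.default_rng(1).standard_normal(3000)             # placeholder 15 s record
print(filterbank_maxima(x))
```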
A simulated-based neural network algorithm for forecasting electrical energy consumption in Iran
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Sohrabkhani, S.
2008-01-01
This study presents an integrated algorithm for forecasting monthly electrical energy consumption based on an artificial neural network (ANN), computer simulation and design of experiments using stochastic procedures. First, an ANN approach based on a supervised multi-layer perceptron (MLP) network is presented for electrical consumption forecasting; the chosen model can then be compared with one estimated by a time series model. Computer simulation is developed to generate random variables for monthly electricity consumption, which makes it possible to foresee the effects of probabilistic distributions on monthly electricity consumption. The simulated-based ANN model is then developed. There are therefore four treatments to be considered in the analysis of variance (ANOVA): actual data, time series, ANN and simulated-based ANN. ANOVA is used to test the null hypothesis that the above four alternatives are statistically equal. If the null hypothesis is accepted, then the lowest mean absolute percentage error (MAPE) value is used to select the best model; otherwise Duncan's multiple range test (DMRT) of paired comparison is used to select the optimum model, which could be the time series, ANN or simulated-based ANN model. In case of ties the lowest MAPE value is taken as the benchmark. The integrated algorithm has several unique features. First, it is flexible and identifies the best model based on the results of ANOVA and MAPE, whereas previous studies choose the best-fitted ANN model based on MAPE or relative error results alone. Second, the proposed algorithm may identify a conventional time series model as the best model for future electricity consumption forecasting because of its dynamic structure, whereas previous studies assume that the ANN always provides the best solution and estimation. To show the applicability and superiority of the proposed algorithm, the monthly electricity consumption in Iran from March 1994 to February 2005 (131 months) is used and applied to
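The selection logic can be sketched as follows on placeholder data; scipy's one-way ANOVA stands in for the full ANOVA design, and MAPE is used as a simple stand-in for Duncan's test in the rejection branch.

```python
# Sketch of the algorithm's model-selection step: test whether competing forecasts and
# the actual data are statistically equal, then pick the lowest-MAPE model. Placeholder data.
import numpy as np
from scipy.stats import f_oneway

actual = np.array([100., 105., 98., 110., 107.])
forecasts = {"time_series": np.array([102., 103., 99., 108., 109.]),
             "ann":         np.array([101., 106., 97., 111., 105.]),
             "sim_ann":     np.array([100., 104., 98., 110., 108.])}

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y)) * 100

stat, p = f_oneway(actual, *forecasts.values())
# If the treatments are statistically equal, pick the lowest-MAPE model directly;
# otherwise the paper applies Duncan's multiple range test (MAPE used as a stand-in here).
best = min(forecasts, key=lambda k: mape(actual, forecasts[k]))
print(f"p={p:.3f}, best={best}",
      {k: round(mape(actual, v), 2) for k, v in forecasts.items()})
```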
Directory of Open Access Journals (Sweden)
Jianzhou Wang
2014-01-01
Electricity price forecasting holds a very important position in the electricity market. Inaccurate price forecasting may cause energy waste and management chaos in the electricity market. However, electricity price forecasting has always been regarded as one of the greatest challenges in the electricity market because of its high volatility, which makes electricity prices difficult to forecast. This paper proposes artificial intelligence optimization combination forecasting models based on preprocessed data, called "chaos particle swarm optimization (CPSO) weight-determined combination models." These models allow the weights of the combined model to take values in [-1, 1]. In the proposed models, the density-based spatial clustering of applications with noise (DBSCAN) algorithm is used to identify outliers, and the outliers are replaced with new data produced by a linear interpolation function. The proposed CPSO weight-determined combination models are then used to forecast the future electricity price. In this case study, the electricity price data of South Australia are simulated. The results indicate that, while the weights of the combined model take values in [-1, 1], the proposed combination model can consistently provide adaptive, reliable, and comparatively accurate forecast results in comparison with traditional combination models.
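The preprocessing step can be sketched as below with illustrative parameters (eps, min_samples) and a toy spiky series; scikit-learn's DBSCAN flags outliers, and numpy's interp supplies the linear interpolation replacement.

```python
# Outlier treatment sketch: DBSCAN marks isolated price spikes with label -1, and each
# flagged point is replaced by linear interpolation over its non-outlier neighbours.
import numpy as np
from sklearn.cluster import DBSCAN

prices = np.array([30., 31., 29., 250., 32., 30., 28., 300., 31.])   # spiky series
labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(prices.reshape(-1, 1))
outliers = labels == -1

t = np.arange(len(prices))
cleaned = prices.copy()
cleaned[outliers] = np.interp(t[outliers], t[~outliers], prices[~outliers])
print(cleaned)
```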
Effects of Breathing-Based Meditation on Earthquake-Affected Health Professionals.
Iwakuma, Miho; Oshita, Daien; Yamamoto, Akihiro; Urushibara-Miyachi, Yuka
On March 11, 2011, the Great East Japan Earthquake (magnitude 9) hit the northern part of Japan (Tohoku), killing more than 15,000 people and leaving long-lasting scars, including psychological damage among evacuees, some of whom were health professionals. Little is known about the efficacy of meditation for disaster-affected health professionals. The present study investigated the effects of breathing-based meditation on seminar participants who were health professionals who had survived the earthquake. This study employed a mixed methods approach, using both survey data and handwritten qualitative data. Quantitative results of pre- and post-meditation practice indicated that all mood scales (anger, confusion, depression, fatigue, strain, and vigor) significantly improved (N = 17). Qualitative results revealed several common themes (emancipation from chronic and bodily senses; holistic sense: transcending mind-body; re-turning an axis in life through reflection, self-control, and/or gratitude; carrying meditation into mundane, everyday life; and coming out of pain in the aftermath of the earthquake) that emerged as expressions of participants' meditation experiences. Following the 45-minute meditation session, participants reported improvements in all psychological states (anger, confusion, depression, fatigue, strain, and vigor) in the quantitative portion, which indicates the efficacy of the meditation. Our analysis of the qualitative portion revealed what and how participants felt while meditating.
Karasozen, E.; Nissen, E.; Bergman, E. A.; Walters, R. J.
2013-12-01
Western Turkey is a rapidly deforming region with a long history of high-magnitude normal faulting earthquakes. However, the locations and slip rates of the responsible faults are poorly constrained. Here, we reassess a series of large instrumental earthquakes in the Simav-Gediz region, an area exhibiting a strong E-W gradient in N-S extension rates, from low rates bordering the Anatolian Plateau to much higher rates in the west. We start by investigating a recent Mw 5.9 earthquake at Simav (19 May 2011) using InSAR, teleseismic body-wave modeling and field observations. Next, we exploit the small but clear InSAR signal to relocate a series of older, larger earthquakes, using a calibrated earthquake relocation method based on the hypocentroidal decomposition (HDC) method for multiple event relocation. These improved locations in turn provide an opportunity to reassess the regional style of deformation. One interesting aspect of these earthquakes is that the largest (the Mw 7.2 Gediz earthquake, March 1970) occurred in an area of slow extension and indistinct surface faulting, whilst the well-defined and more rapidly extending Simav graben has ruptured in several smaller, Mw 6 events. However, our relocations highlight the existence of a significant gap in instrumental earthquakes along the central Simav graben, which, if it ruptured in a single event, could equal ~Mw 7. We were unable to identify fault scarps along this section due to dense vegetation and human modification, and we suggest that acquiring LiDAR data in this area should be a high priority in order to properly investigate earthquake hazard in the Simav graben.
MEMS-based sensors for post-earthquake damage assessment
Energy Technology Data Exchange (ETDEWEB)
Pozzi, M; Zonta, D; Trapani, D [DIMS, University of Trento, Via Mesiano 77, 38123, Trento (Italy); Athanasopoulos, N; Garetsos, A; Stratakos, Y E [Advanced Microwave Systems Ltd, 2, 25th Martiou Street, 17778 Athens (Greece); Amditis, A J; Bimpas, M [ICCS, National Technical University of Athens, 9 Iroon Polytechniou Street, 15773 Zografou (Greece); Ulieru, D, E-mail: daniele.zonta@unitn.it [SITEX 45 SRL, 114 Ghica Tei Blvd, 72235 Bucharest (Romania)
2011-07-19
The evaluation of seismic damage is today almost exclusively based on visual inspection, as building owners are generally reluctant to install permanent sensing systems, due to their high installation, management and maintenance costs. To overcome this limitation, the EU-funded MEMSCON project aims to produce small size sensing nodes for measurement of strain and acceleration, integrating Micro-Electro-Mechanical Systems (MEMS) based sensors and Radio Frequency Identification (RFID) tags in a single package that will be attached to reinforced concrete buildings and will transmit data using a wireless interface. During the first phase of the project completed so far, sensor prototypes were produced by assembling preexisting components. This paper outlines the device operating principles, production scheme and operation at both unit and network levels. It also reports on validation campaigns conducted in the laboratory to assess system performance. Accelerometer sensors were tested on a reduced scale metal frame mounted on a shaking table, while strain sensors were embedded in both reduced and full-scale reinforced concrete specimens undergoing increasing deformation cycles up to extensive damage and collapse. The performance of the sensors developed for the project and their applicability to long-term seismic monitoring are discussed.
Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain
Dai, Yonghui; Han, Dongmei; Dai, Weihui
2014-01-01
The stock index reflects the fluctuation of the stock market. For a long time, there has been much research on stock index forecasting. However, traditional methods are limited in achieving ideal precision in the dynamic market due to the influence of many factors such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted new attention from researchers. This paper presents a new forecasting method combining an improved back-propagation (BP) neural network and a Markov chain, together with its modeling and computing technology. The method comprises initial forecasting by the improved BP neural network, division of Markov state regions, computation of the state transition probability matrix, and adjustment of the prediction. Results of the empirical study show that this method can achieve high accuracy in stock index prediction and could provide a good reference for investment in the stock market. PMID:24782659
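A compact sketch of the Markov correction stage follows, under assumed relative-error states and a placeholder BP output; the state boundaries and data are illustrative.

```python
# BP + Markov chain sketch: relative forecast errors are discretized into states, a
# transition matrix is estimated from the state sequence, and the next BP forecast is
# adjusted by the expected error of the predicted next state.
import numpy as np

errors = np.array([-0.03, 0.01, 0.04, -0.02, 0.02, 0.05, -0.01, 0.03])  # (actual-forecast)/actual
edges = [-np.inf, -0.015, 0.015, np.inf]                # three states: low / middle / high
states = np.digitize(errors, edges) - 1

P = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P = P / P.sum(axis=1, keepdims=True)                    # row-normalized transition matrix

centers = np.array([-0.025, 0.0, 0.04])                 # representative error per state
bp_forecast = 3100.0                                    # placeholder BP network output
adjusted = bp_forecast * (1 + P[states[-1]] @ centers)  # expected-error adjustment
print(adjusted)
```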
Directory of Open Access Journals (Sweden)
Shuping Cai
2018-03-01
Weather information is an important factor in short-term load forecasting (STLF). However, for a long time, more importance has been attached to forecasting models than to other steps such as the introduction of weather factors or feature selection for STLF. The main aim of this paper is to develop a novel methodology based on Fisher information for introducing meteorological variables and selecting variables in STLF. Fisher information computation for one-dimensional and multidimensional weather variables is first described, and the introduction of meteorological factors and variable selection for STLF models are then discussed in detail. On this basis, different forecasting models with the proposed methodology are established. The proposed methodology is applied to real data obtained from the Electric Power Utility of Zhenjiang, Jiangsu Province, in southeast China. The results show the advantages of the proposed methodology over traditional ones regarding prediction accuracy, and it has good practical significance. It can therefore be used as a unified method for introducing weather variables into STLF models and selecting their features.
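As a pointer to what a Fisher information computation involves, a minimal Monte Carlo check for a one-dimensional Gaussian location parameter is sketched below (the closed-form value is 1/sigma^2); the paper's construction for real weather variables is more general.

```python
# Fisher information as the expected squared score: for N(mu, sigma^2) the information
# about mu per observation is 1/sigma^2, which the Monte Carlo estimate should match.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0
x = rng.normal(mu, sigma, 100_000)
score = (x - mu) / sigma**2                 # d/dmu of log N(x; mu, sigma^2)
print(np.mean(score**2), 1 / sigma**2)      # estimated vs. analytical Fisher information
```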
Directory of Open Access Journals (Sweden)
Idris Khan
2017-01-01
High concentrations of greenhouse gases in the atmosphere have increased dependency on photovoltaic (PV) power, but its random nature poses a challenge for system operators trying to precisely predict and forecast PV power. Conventional forecasting methods are accurate in clear weather, but when PV plants operate under heavy haze the radiation is negatively impacted, reducing PV power; therefore, to deal with haze weather, the Air Quality Index (AQI) is introduced as a parameter to predict PV power. The AQI, an indication of how polluted the air is, is known to have a strong correlation with the power generated by PV panels. In this paper, a hybrid method based on a conventional back-propagation (BP) neural network model for clear weather and a BP-AQI model for haze weather is used to forecast PV power, with conventional parameters such as temperature, wind speed, humidity and solar radiation, plus the extra parameter of AQI, as input. The results show that the proposed method has less error under haze conditions than the conventional neural network model.
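A hedged sketch of the hybrid switching idea on synthetic data: one network is trained without AQI for clear samples and one with AQI for haze samples, using an assumed routing threshold.

```python
# Hybrid clear/haze model sketch: the feature layout, toy target and threshold of 0.5
# (a stand-in for "AQI above the haze level") are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 5))          # temperature, wind speed, humidity, radiation, AQI (scaled)
y = X[:, 3] * (1 - 0.5 * X[:, 4]) # placeholder PV power: radiation attenuated by haze

haze = X[:, 4] > 0.5
clear_model = MLPRegressor((16,), max_iter=3000, random_state=0).fit(X[~haze, :4], y[~haze])
haze_model = MLPRegressor((16,), max_iter=3000, random_state=0).fit(X[haze], y[haze])

def predict(x):                   # route a sample to the matching model
    return haze_model.predict([x])[0] if x[4] > 0.5 else clear_model.predict([x[:4]])[0]

print(predict(X[0]))
```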
Corzo, Gerald; Solomatine, Dimitri
2007-05-01
Natural phenomena are multistationary and are composed of a number of interacting processes, so a single model handling all processes often suffers from inaccuracies. A solution is to partition the data in relation to such processes using the available domain knowledge or expert judgment, to train separate models for each of the processes, and to merge them in a modular model (committee). In this paper a problem of water flow forecasting in watershed hydrology is considered, where the flow process can be treated as consisting of two subprocesses, base flow and excess flow, so that these two processes can be separated. Several approaches to data separation are studied. Two case studies with different forecast horizons are considered. Parameters of the algorithms responsible for data partitioning are optimized using genetic algorithms and global pattern search. It was found that modularization of ANN models using domain knowledge makes models more accurate, compared with a global model trained on the whole data set, especially when the forecast horizon (and hence the complexity of the modelled processes) is increased.
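One common way to realize the base flow / excess flow partition is the Lyne-Hollick recursive digital filter, shown below as a hedged stand-in; the paper itself studies several separation techniques and optimizes their parameters with genetic algorithms.

```python
# Lyne-Hollick one-parameter baseflow filter sketch: quickflow is filtered out
# recursively and constrained to be non-negative; baseflow is the remainder.
# The alpha value and the series are illustrative.
import numpy as np

def lyne_hollick(q, alpha=0.925):
    quick = np.zeros_like(q)
    for k in range(1, len(q)):
        quick[k] = alpha * quick[k - 1] + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
        quick[k] = max(quick[k], 0.0)          # quickflow cannot be negative
    return q - quick, quick                    # (baseflow, excess flow)

q = np.array([5., 5.2, 9., 14., 11., 8., 6.5, 5.8, 5.4])
base, quick = lyne_hollick(q)
# Separate ANN models would then be trained on the base-flow and excess-flow series.
print(base, quick)
```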
Weather Research and Forecasting Model Wind Sensitivity Study at Edwards Air Force Base, CA
Watson, Leela R.; Bauman, William H., III; Hoeth, Brian
2009-01-01
This abstract describes work to be done by the Applied Meteorology Unit (AMU) in assessing the success of different model configurations in predicting "wind cycling" cases at Edwards Air Force Base, CA (EAFB), in which the wind speeds and directions oscillate among towers near the EAFB runway. The Weather Research and Forecasting (WRF) model allows users to choose between two dynamical cores: the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also data assimilation analysis packages available for the initialization of the WRF model: the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). Having a series of initialization options and WRF cores, as well as many options within each core, creates challenges for local forecasters, such as determining which configuration options best address specific forecast concerns. The goal of this project is to assess the different configurations available and determine which will best predict surface wind speed and direction at EAFB.
Methods and tools to support real time risk-based flood forecasting - a UK pilot application
Directory of Open Access Journals (Sweden)
Brown Emma
2016-01-01
Flood managers have traditionally used probabilistic models to assess potential flood risk for strategic planning and non-operational applications. Computational restrictions on data volumes and simulation times have meant that information on the risk of flooding has not been available for operational flood forecasting purposes. In practice, however, the operational flood manager has probabilistic questions to answer which are not fully supported by the outputs of traditional, deterministic flood forecasting systems. In a collaborative approach, HR Wallingford and Deltares have developed methods, tools and techniques to extend existing flood forecasting systems with elements of strategic flood risk analysis, including probabilistic failure analysis, two-dimensional flood spreading simulation and the analysis of flood impacts and consequences. This paper presents the results of applying these new operational flood risk management tools to a pilot catchment in the UK. It discusses the problems of performing probabilistic flood risk assessment in real time and how these have been addressed in this study. It also describes the challenges of communicating risk to operational flood managers and to the general public, and how these new methods and tools can provide risk-based supporting evidence to assist with this process.
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
Energy Technology Data Exchange (ETDEWEB)
Huang, Shengzhi; Ming, Bo; Huang, Qiang; Leng, Guoyong; Hou, Beibei
2017-05-05
It is critically meaningful to accurately predict the NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability of the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR performs poorly in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performance of all forecasting models is better in dense vegetation areas than in sparse vegetation areas.
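The entropy weight step can be sketched as follows on placeholder performance data: each model's normalized performance series yields a Shannon entropy, and lower-entropy (more informative) models receive larger combination weights.

```python
# Entropy weight method sketch: normalize each model's performance column, compute the
# normalized Shannon entropy per model, and convert (1 - entropy) into weights.
import numpy as np

perf = np.array([[0.90, 0.85, 0.88],     # rows: time steps; columns: MLR, ANN, SVM
                 [0.80, 0.90, 0.86],
                 [0.85, 0.92, 0.84],
                 [0.78, 0.88, 0.90]])

p = perf / perf.sum(axis=0)                              # normalize each column
n = perf.shape[0]
entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)       # entropy per individual model
weights = (1 - entropy) / (1 - entropy).sum()            # entropy weights
combined = perf @ weights                                # weighted combination forecast
print(weights, combined)
```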
Microcontroller-based network for meteorological sensing and weather forecast calculations
Directory of Open Access Journals (Sweden)
A. Vas
2012-06-01
Weather forecasting needs a lot of computing power. It is generally accomplished using supercomputers, which are expensive to rent and maintain. In addition, weather services also have to maintain radars and balloons, and pay for worldwide weather data measured by stations and satellites. Weather forecasting computations usually consist of solving differential equations based on the measured parameters; to do that, the computer uses the data of close and distant neighbor points. Accordingly, if small-sized weather stations, which are capable of making measurements, calculations and communication, are connected through the Internet, then they can be used to run weather forecasting calculations like a supercomputer does. No central server is needed to achieve this, because the network operates as a distributed system. We chose Microchip's PIC18 microcontroller (μC) platform for the hardware implementation, and the embedded software uses the TCP/IP Stack v5.41 provided by Microchip.
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
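The GMR step can be sketched independently of the HMM fitting (which could be done, e.g., with the hmmlearn package): given per-state Gaussians and state probabilities, the forecast conditions each state's joint Gaussian on the observed covariates and mixes the results. All numbers below are placeholders.

```python
# GMR sketch: condition each hypothetical HMM state's joint Gaussian over
# [antecedent flow, upstream flow, target flow] on the observed covariates.
import numpy as np

def conditional_gaussian(mean, cov, x, ix, iy):
    """Condition a joint Gaussian on components ix = x; return mean/variance of iy."""
    mxx = cov[np.ix_(ix, ix)]
    myx = cov[np.ix_(iy, ix)]
    mu = mean[iy] + myx @ np.linalg.solve(mxx, x - mean[ix])
    var = cov[np.ix_(iy, iy)] - myx @ np.linalg.solve(mxx, cov[np.ix_(ix, iy)])
    return mu, var

means = [np.array([10., 8., 9.]), np.array([30., 25., 28.])]   # two assumed HMM states
covs = [np.eye(3) * 2.0 + 0.5, np.eye(3) * 5.0 + 1.0]
state_probs = np.array([0.3, 0.7])            # forecast-time state probabilities (from HMM)

x = np.array([26., 22.])                      # observed covariates
parts = [conditional_gaussian(m, c, x, [0, 1], [2]) for m, c in zip(means, covs)]
forecast = sum(w * mu for w, (mu, _) in zip(state_probs, parts))
print(forecast)                               # mixture-mean point forecast
```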
DEFF Research Database (Denmark)
Nielsen, Søren R.K.; Skjærbæk, P. S.; Köylüoglu, H. U.
The paper deals with the prediction of global damage and future structural reliability, with special emphasis on the sensitivity, bias and uncertainty of these predictions dependent on the statistically equivalent realizations of the future earthquake. The predictions are based on a modified Clough-Johnston single-degree-of-freedom (SDOF) oscillator with three parameters, which are calibrated to fit the displacement response and the damage development in the past earthquake.
Directory of Open Access Journals (Sweden)
Yi Wang
2016-12-01
With appropriate levels of confidence and measures of system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval, and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices, and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical error estimation method achieves higher accuracy and better interval estimates than the non-hierarchical method in a stable system.
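As an illustration of the validation step, the unconditional coverage (Kupiec) test can be sketched as follows; the simulated violation series is a placeholder.

```python
# Kupiec unconditional coverage test: under correct interval coverage with violation
# probability p, the likelihood ratio below is asymptotically chi-square with 1 dof.
import numpy as np
from scipy.stats import chi2

def kupiec_test(hits, p):
    """hits: boolean array, True where the observation fell outside the interval."""
    n1 = int(np.sum(hits))                    # number of violations
    n0 = len(hits) - n1
    pi = n1 / len(hits)                       # empirical violation rate
    lr = -2 * ((n0 * np.log(1 - p) + n1 * np.log(p))
               - (n0 * np.log(1 - pi) + n1 * np.log(pi)))
    return lr, 1 - chi2.cdf(lr, df=1)

rng = np.random.default_rng(0)
hits = rng.random(500) < 0.06                 # simulated violations for a 95% interval
print(kupiec_test(hits, p=0.05))              # small p-value rejects correct coverage
```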
An interdisciplinary approach to study Pre-Earthquake processes
Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.
2017-12-01
We will summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data, we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these various studies in the search for earthquake precursors, either for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and analyses of seismic records (foreshocks/aftershocks), geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could make an impact on our further understanding of the physics of earthquakes and of the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will subsequently be published in a new AGU/Wiley volume. This book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical and historical) involved in this important field of research, and to bring this knowledge and awareness to a broader geosciences community.
Directory of Open Access Journals (Sweden)
Shuai Xie
2016-09-01
Remote sensing (RS) images play a significant role in disaster emergency response. Web 2.0 changes the way data are created, making it possible for the public to participate in scientific issues. In this paper, an experiment is designed to evaluate the reliability of crowdsourced building collapse assessment, based on aerial remote sensing images, in the early period after an earthquake. The procedure for RS data pre-processing and crowdsourced data collection is presented. A probabilistic model combining maximum likelihood estimation (MLE), Bayes' theorem and the expectation-maximization (EM) algorithm is applied to quantitatively estimate individual error-rates and the "ground truth" from multiple participants' assessment results. An experimental area from the Yushu earthquake is used to present the results contributed by participants. Following the results, some discussion is provided regarding accuracy and variation among participants. The features of buildings labeled as the same damage type are found to be highly consistent. This suggests that building damage assessments contributed by crowdsourcing can be treated as reliable samples. This study shows the potential for rapid building collapse assessment through crowdsourcing, quantitatively inferring the "ground truth" from crowdsourced data in the early period after an earthquake based on aerial remote sensing images.
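A simplified sketch of the MLE/EM machinery, in the spirit of a one-coin Dawid-Skene model with binary labels; the label matrix and the implicit uniform prior are assumptions for illustration.

```python
# EM over crowdsourced labels: alternately estimate each participant's accuracy from the
# current soft ground truth (M-step) and the posterior collapse probability per building
# from the accuracies (E-step). Data are hypothetical.
import numpy as np

labels = np.array([[1, 1, 0],        # rows: buildings; columns: participants
                   [0, 0, 0],        # 1 = collapsed, 0 = intact
                   [1, 0, 1],
                   [1, 1, 1],
                   [0, 1, 0]])

truth = labels.mean(axis=1)          # initialize with the majority-vote posterior
for _ in range(50):
    acc = (truth[:, None] * labels + (1 - truth[:, None]) * (1 - labels)).mean(axis=0)
    like1 = np.prod(np.where(labels == 1, acc, 1 - acc), axis=1)   # P(labels | collapsed)
    like0 = np.prod(np.where(labels == 0, acc, 1 - acc), axis=1)   # P(labels | intact)
    truth = like1 / (like1 + like0)  # posterior P(collapsed), uniform prior assumed

print(np.round(truth, 3), np.round(acc, 3))   # inferred "ground truth" and error-rates
```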
Arellano-Baeza, A. A.; Garcia, R. V.; Trejo-Soto, M.
2007-01-01
Over the last decades strong efforts have been made to apply new spaceborne technologies to the study and possible forecasting of strong earthquakes. In this study we use ASTER/TERRA multispectral satellite images for the detection and analysis of changes in the system of lineaments prior to a strong earthquake. A lineament is a straight or somewhat curved feature in an image, which can be detected by special processing of images based on directional filtering and/or the Hough transform. ...
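A hedged sketch of Hough-based lineament extraction on a synthetic single-band image, using scikit-image; the edge detector and thresholds are illustrative, not the authors' processing chain.

```python
# Hough-transform lineament sketch: edge detection followed by the straight-line Hough
# transform recovers linear features as (angle, offset) pairs.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

image = np.zeros((200, 200))
rr = np.arange(200)
image[rr, rr] = 1.0                                   # a synthetic diagonal "lineament"

edges = canny(image, sigma=1.0)
h, angles, dists = hough_line(edges)
_, best_angles, best_dists = hough_line_peaks(h, angles, dists)
for a, d in zip(best_angles, best_dists):
    print(f"lineament: angle={np.degrees(a):.1f} deg, offset={d:.1f} px")
```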
Directory of Open Access Journals (Sweden)
Murat Luy
2018-05-01
The estimation of hourly electricity load consumption is highly important for planning short-term supply-demand equilibrium in sources and facilities. Studies of short-term load forecasting in the literature are categorized into two groups: classical conventional and artificial intelligence-based methods. Artificial intelligence-based models, especially those using fuzzy logic techniques, give more accurate load estimates when datasets include high uncertainty. However, as the knowledge base, which is defined by expert insights and decisions, gets larger, the load forecasting performance decreases. This study handles the problem caused by the growing knowledge base, and improves the load forecasting performance of fuzzy models through nature-inspired methods. The proposed models have been optimized using ant colony optimization and genetic algorithm (GA) techniques. The training and testing processes of the proposed systems were performed on historical hourly load consumption and temperature data collected between 2011 and 2014. The results show that the proposed models can sufficiently improve the performance of hourly short-term load forecasting. The minimum monthly mean absolute percentage error (MAPE) of the forecasting model, in terms of forecasting accuracy, is 3.9% (February 2014). The results show that the proposed methods make it possible to work with large-scale rule bases in a more flexible estimation environment.
Energy Technology Data Exchange (ETDEWEB)
Jiang, Huaiguang; Zhang, Yingchen
2016-11-14
This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and high-speed state forecasting with an optimal synchrophasor sensor placement (OSSP)-based state estimator and an extreme learning machine (ELM)-based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors while keeping the whole distribution system numerically and topologically observable. Then, the weighted least squares (WLS)-based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN involves a heavy computational load, and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. Testing results show that the proposed approach is effective and accurate.
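A minimal numpy sketch of the ELM forecaster on placeholder state data: the random hidden weights stay fixed and only the output weights are solved in closed form, which is what avoids the ANN's heavy training load.

```python
# ELM sketch: random input weights, tanh hidden layer, closed-form output weights via
# the Moore-Penrose pseudo-inverse. Data shapes and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))     # historical system states (features)
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(500)  # future states

n_hidden = 64
W = rng.standard_normal((10, n_hidden))          # random input weights, never trained
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                           # hidden-layer activations
beta = np.linalg.pinv(H) @ y                     # output weights in closed form

y_hat = np.tanh(X @ W + b) @ beta                # one-step state forecast
print(np.mean((y - y_hat) ** 2))
```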
Global Drought Monitoring and Forecasting based on Satellite Data and Land Surface Modeling
Sheffield, J.; Lobell, D. B.; Wood, E. F.
2010-12-01
objective quantification and tracking of their spatial-temporal characteristics. Further, we present strategies for merging various sources of information, including bias correction of satellite precipitation and assimilation of remotely sensed soil moisture, which can augment the monitoring in regions where satellite precipitation is most uncertain. Ongoing work is adding a drought forecast component, based on a successful implementation over the U.S., and agricultural productivity estimates based on output from crop yield models. The forecast component uses seasonal global climate forecasts from the NCEP Climate Forecast System (CFS). These are merged with observed climatology in a Bayesian framework to produce ensemble atmospheric forcings that better capture the uncertainties. At the same time, the system bias-corrects and downscales the monthly CFS data. We show some initial seasonal (up to 6-month lead) hydrologic forecast results for the African system. Agricultural monitoring is based on the precipitation, temperature and soil moisture from the system, used to force statistical and process-based crop yield models. We demonstrate the feasibility of monitoring major crop types across the world and show a strategy for providing predictions of yields within our drought forecast mode.
Indah, F. P.; Syafriani, S.; Andiyansyah, Z. S.
2018-04-01
Sumatra lies in an active subduction zone between the Indo-Australian plate and the Eurasian plate and is located along the Sumatran fault, so Sumatra is vulnerable to earthquakes. One way to investigate the cause of an earthquake is to identify the type of earthquake-causing fault from the earthquake focal mechanism. The data used to identify the fault types are moment tensor data sourced from the Global CMT catalogue for the period 1976-2016, restricted to events of magnitude M ≥ 6. This research uses the GMT (Generic Mapping Tools) software to depict the fault geometry. From the results, based on processing of the 1976-2016 earthquake history for every region of Sumatra island, the fault type along the Sumatran fault is strike-slip, the fault type along the Mentawai fault is reverse (thrust) and dip-slip faulting, while the fault type in the subduction zone is dip-slip.
Zhao, Ning-bo; Yang, Jia-long; Li, Shu-ying; Sun, Yue-wu
2014-01-01
Performance degradation forecasting for quantitatively assessing the degradation state of an aeroengine using exhaust gas temperature is an important technology in aeroengine health management. In this paper, a GM (1, 1) Markov chain-based approach is introduced to forecast exhaust gas temperature, combining the advantages of the GM (1, 1) model for time series with the advantages of the Markov chain model in dealing with highly nonlinear and stochastic data caused by uncertain factors. In this ap...
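The GM(1,1) part of such an approach can be sketched as follows on placeholder temperature-like data; the Markov-chain error correction would then follow on the residuals.

```python
# GM(1,1) sketch: fit a first-order grey differential equation to the accumulated
# series and invert the fit to forecast the raw series.
import numpy as np

x0 = np.array([540., 543., 547., 552., 558., 565.])  # raw series (e.g., exhaust gas temps)
x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values

B = np.column_stack([-z1, np.ones(len(z1))])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey development/driving coefficients

def forecast(k):                                     # k = 0 reproduces x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x0[0] if k == 0 else x1_hat - x1_prev     # inverse AGO

print([round(forecast(k), 1) for k in range(len(x0) + 2)])  # fit plus two steps ahead
```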
Qingyou Yan; Chao Qin; Mingjian Nie; Le Yang
2018-01-01
Due to the deregulation of the retail electricity market, consumers can choose retail electricity suppliers freely, and market entities face fierce competition because of the increasing number of new entrants. Under these circumstances, forecasting the changes in all market entities as market shares stabilize is important for suppliers making marketing decisions. In this paper, a market share forecasting model was established based on a Markov chain, and a system dynamics model was construct...
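The Markov-chain core of such a model can be sketched with an illustrative supplier-switching matrix: the stabilized market shares are the stationary distribution pi solving pi P = pi.

```python
# Stationary market shares of three suppliers from a customer-switching matrix,
# obtained as the eigenvector of P.T for eigenvalue 1. Matrix values are illustrative.
import numpy as np

P = np.array([[0.80, 0.15, 0.05],     # row i -> column j: switching probabilities
              [0.10, 0.85, 0.05],
              [0.20, 0.20, 0.60]])

vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                    # stationary market shares
print(np.round(pi, 4))
```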
Seismic hazard assessment based on the Unified Scaling Law for Earthquakes: the Greater Caucasus
Nekrasova, A.; Kossobokov, V. G.
2015-12-01
Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers and the public, of the three components of Risk: Hazard, Exposure, and Vulnerability. Contemporary Science is responsible for not coping with the challenging changes of Exposure and Vulnerability inflicted by the growing population, its concentration, etc., which result in a steady increase of Losses from Natural Hazards. Scientists owe Society for this lack of knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such knowledge in advance of catastrophic events. We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A - B·(M-6) + C·log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. The parameters A, B, and C of the USLE are used to estimate, first, the expected maximum magnitude in a time interval at each seismically prone cell of a uniform grid covering the region of interest, and then the corresponding expected ground shaking parameters, including macroseismic intensity. After rigorous testing against the available seismic evidence from the past (e.g., the historically reported macroseismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks (e.g., those based on the density of the exposed population). The methodology of seismic hazard and risk assessment based on the USLE is illustrated by application to the seismic region of the Greater Caucasus.
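The USLE formula translates directly into code; the coefficient values below are illustrative, not the mapped values for the Greater Caucasus.

```python
# Expected annual earthquake rate from the USLE: N(M, L) = 10**(A - B*(M-6) + C*log10(L)).
import numpy as np

def usle_rate(M, L, A=0.5, B=0.9, C=1.2):
    return 10 ** (A - B * (M - 6) + C * np.log10(L))

for M in (5.0, 6.0, 7.0):
    print(M, usle_rate(M, L=50.0))    # recurrence drops by a factor of 10**B per unit M
```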
The NLS-Based Nonlinear Grey Multivariate Model for Forecasting Pollutant Emissions in China
Directory of Open Access Journals (Sweden)
Ling-Ling Pei
2018-03-01
The relationship between pollutant discharge and economic growth has been a major research focus in environmental economics. To accurately estimate the nonlinear law of change of China's pollutant discharge with economic growth, this study establishes a transformed nonlinear grey multivariable TNGM (1, N) model based on the nonlinear least squares (NLS) method. The Gauss-Seidel iterative algorithm is used to solve the parameters of the TNGM (1, N) model following the basic principle of NLS; the algorithm improves the precision of the model by continuous iteration, constantly approximating the optimal regression coefficients of the nonlinear model. In our empirical analysis, the traditional grey multivariate GM (1, N) model and the NLS-based TNGM (1, N) model were adopted to forecast and analyze the relationship among wastewater discharge per capita (WDPC) and per capita emissions of SO2 and dust, alongside GDP per capita in China during the period 1996-2015. Results indicate that the NLS algorithm effectively helps the grey multivariable model identify the nonlinear relationship between pollutant discharge and economic growth. The NLS-based TNGM (1, N) model achieves greater precision when forecasting WDPC and per capita SO2 and dust emissions than the traditional GM (1, N) model; WDPC shows a growing tendency aligned with the growth of GDP, while per capita emissions of SO2 and dust decrease accordingly.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
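For reference, a software sketch of the RC/ESN scheme that such hardware implements: a fixed random reservoir driven by the input, with only a linear readout trained (here by ridge regression); sizes and scalings are illustrative.

```python
# Echo state network sketch on a one-step-ahead forecasting task: the reservoir weights
# are random and fixed (spectral radius < 1 for the echo state property) and only the
# readout Wout is solved, which mirrors the simple learning that makes RC hardware-friendly.
import numpy as np

rng = np.random.default_rng(0)
u = np.sin(np.linspace(0, 60, 600))                  # input time series (placeholder)
target = np.roll(u, -1)                              # next-step prediction target

n = 100
Win = rng.uniform(-0.5, 0.5, n)
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # scale spectral radius to 0.9

states = np.zeros((len(u), n))
x = np.zeros(n)
for t in range(len(u)):
    x = np.tanh(Win * u[t] + W @ x)                  # reservoir update
    states[t] = x

ridge = 1e-6
Wout = np.linalg.solve(states.T @ states + ridge * np.eye(n), states.T @ target)
print(np.mean((states @ Wout - target) ** 2))        # training error of the readout
```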
Forecasting in Complex Systems
Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.
2014-12-01
Complex nonlinear systems are typically characterized by many degrees of freedom, as well as interactions between the elements. Interesting examples can be found in the areas of earthquakes and finance. In these two systems, fat tails play an important role in the statistical dynamics. For earthquake systems, the Gutenberg-Richter magnitude-frequency relation applies, whereas daily returns for the securities in the financial markets are known to be characterized by leptokurtic statistics in which the tails are power law. Very large fluctuations are present in both systems. In earthquake systems, one has the example of great earthquakes such as the M9.1, March 11, 2011 Tohoku event. In financial systems, one has the example of the market crash of October 19, 1987. Both were largely unexpected events that severely impacted the earth and financial systems systemically. Other examples include the M9.3 Andaman earthquake of December 26, 2004, and the Great Recession, which began with the fall of the Lehman Brothers investment bank in September 2008. Forecasting the occurrence of these damaging events has great societal importance. In recent years, national funding agencies in a variety of countries have emphasized the importance of societal relevance in research, and in particular the goal of improved forecasting technology. Previous work has shown that both earthquakes and financial crashes can be described by a common Landau-Ginzburg-type free energy model. These metastable systems are characterized by fat-tail statistics near the classical spinodal. Correlations in these systems can grow and recede, but do not imply causation, a common source of misunderstanding. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In this talk, we describe the basic phenomenology of these systems and emphasize their similarities and differences. We also consider the problem of forecast validation and verification.
An, L.; Zhang, J.; Gong, L.
2018-04-01
Playing an important role in gathering information on damage to social infrastructure, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method, comparing post-seismic to pre-seismic data, has become common. However, multi-temporal SAR processing is not always achievable. Developing a method for building damage detection that uses post-seismic data only is therefore of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based feature-analysis classification method for building damage recognition.
International Nuclear Information System (INIS)
Han Shaoqing; Li Xihai; Song Zibiao; Liu Daizhi
2007-01-01
Synergetic pattern recognition is a new approach to pattern recognition with many excellent features, such as noise resistance and deformity resistance. But when it is used to discriminate between nuclear explosions and earthquakes with existing methods of prototype selection, the results are not satisfactory. A new method of prototype selection based on FCM is proposed in this paper. First, each group of training samples is clustered into c groups using FCM; then c barycenters or centers are chosen as prototypes. Experimental results show that, compared with existing methods of prototype selection, this new method is effective and greatly increases the recognition ratio. (authors)
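A minimal numpy sketch of the FCM step used for prototype selection follows; the data, the number of clusters c and the fuzzifier m are illustrative.

```python
# Fuzzy c-means sketch: memberships U and centers are updated alternately; the final
# c centers would serve as prototypes for one group of training samples.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])  # one sample group
c, m = 2, 2.0
U = rng.random((len(X), c))
U /= U.sum(axis=1, keepdims=True)                       # random initial memberships

for _ in range(100):
    centers = (U.T ** m @ X) / (U.T ** m).sum(axis=1, keepdims=True)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2 / (m - 1))
    U = inv / inv.sum(axis=1, keepdims=True)            # standard FCM membership update

print(centers)                                          # c prototypes for this group
```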
The Northridge earthquake: community-based approaches to unmet recovery needs.
Bolin, R; Stanford, L
1998-03-01
The 1994 Northridge, California earthquake has proven to be one of the most costly disasters in United States history. Federal and state assistance programmes received some 681,000 applications from victims for various forms of relief. In spite of the flow of US$11 billion in federal assistance into Los Angeles and Ventura counties, many victims have failed to obtain adequate relief. These unmet needs relate to the vulnerability of particular class and ethnic groups. In response to unmet needs, a number of non-governmental organisations (NGOs) have become involved in the recovery process. This paper, based on evidence collected from hundreds of in-depth interviews with the people involved, examines the activities of several community-based organisations (CBOs) and other NGOs as they have attempted to assist vulnerable people with unmet post-disaster needs. We discuss two small ethnically diverse communities in Ventura County, on the periphery of the Los Angeles metropolitan region. The earthquake and resultant disaster declaration provided an opportunity for local government and NGOs to acquire federal resources not normally available for economic development. At the same time the earthquake created political openings in which longer-term issues of community development could be addressed by various local stakeholders. A key issue in recovery has been the availability of affordable housing for those on low incomes, particularly Latinos, the elderly and farm workers. We discuss the successes and limitations of CBOs and NGOs as mechanisms for dealing with vulnerable populations, unmet needs and recovery issues in the two communities.
The regional geological hazard forecast based on rainfall and WebGIS in Hubei, China
Zheng, Guizhou; Chao, Yi; Xu, Hongwen
2008-10-01
Various disasters have been a serious threat to humans and are increasing over time. The reduction and prevention of hazards is the largest problem faced by local governments. The study of disasters has drawn more and more attention, mainly due to increasing awareness of their socio-economic impact. Hubei province, one of the fastest economically developing provinces in China, has suffered big economic losses from geo-hazards in recent years due to frequent geo-hazard events, with estimated damage of approximately 3000 million RMB. It is therefore important to establish an efficient way to mitigate potential damage and reduce losses of property and life caused by disasters. This paper presents the procedure for setting up a regional geological hazard forecast and information release system for Hubei province, combining advanced techniques such as the World Wide Web (WWW), online databases and ASP, based on the WebGIS platform (MAPGIS-IMS) and rainfall information. A Web-based interface was developed using a three-tiered architecture based on client-server technology. The study focused on the upload of rainfall data, the definition of rainfall threshold values, the creation of geological disaster warning maps and the forecasting of geohazards related to rainfall. Its purposes are to contribute to the management of individual and regional geological disaster spatial data, to help forecast the conditional probabilities of occurrence of various disasters that might be posed by rainfall, and to release forecast information for Hubei province in a timely manner via the Internet to all levels of government, the private and nonprofit sectors, and the academic community. This system has worked efficiently and stably in the Internet environment and is strongly connected with the meteorological observatory. The Environment Station of Hubei Province is making increased use of our Web tool to assist in the decision-making process to analyze geo
International Nuclear Information System (INIS)
Shayeghi, H.; Ghasemi, A.
2013-01-01
Highlights: • Presenting a hybrid CGSA-LSSVM scheme for price forecasting. • Considering uncertainties for filtering input data and feature selection to improve efficiency. • Using a DWT input-featured LSSVM approach to classify next-week prices. • Using three real markets to illustrate the performance of the proposed price forecasting model. - Abstract: At the present time, the day-ahead electricity market is closely associated with other commodity markets such as the fuel market and emission market. Under such an environment, day-ahead electricity price forecasting has become necessary for power producers and consumers in current deregulated electricity markets. Seeking more accurate price forecasting techniques, this paper proposes a new combination of a feature selection (FS) technique based on mutual information (MI) and the wavelet transform (WT). Moreover, a new modified version of the Gravitational Search Algorithm (GSA) based on chaos theory, namely the Chaotic Gravitational Search Algorithm (CGSA), is developed to find the optimal parameters of the Least Squares Support Vector Machine (LSSVM) for predicting electricity prices. The performance and price forecast accuracy of the proposed technique are assessed by means of real data from Iran's, Ontario's and Spain's price markets. The simulation results in the numerical tables and figures for different cases show that the proposed technique increases electricity price forecasting accuracy compared with other classical and heuristic methods in the literature.
Short-Term Wind Power Forecasting Based on Clustering Pre-Calculated CFD Method
Directory of Open Access Journals (Sweden)
Yimei Wang
2018-04-01
Full Text Available To meet the increasing wind power forecasting (WPF) demands of newly built wind farms without historical data, physical WPF methods are widely used. The computational fluid dynamics (CFD) pre-calculated flow fields (CPFF) based WPF is a promising physical approach, which balances well the competing demands of computational efficiency and accuracy. To enhance its adaptability for wind farms in complex terrain, a WPF method combining wind turbine clustering with CPFF is first proposed, in which the wind turbines in the wind farm are clustered and a forecast is made for each cluster. K-means, hierarchical agglomerative and spectral clustering methods are used to establish the wind turbine clustering models. The Silhouette Coefficient, Calinski-Harabasz index and within-between index are proposed as criteria to evaluate the effectiveness of the established clustering models (a comparison along these lines is sketched below). Based on different clustering methods and schemes, various clustering databases are built for clustering pre-calculated CFD (CPCC) based short-term WPF. For the wind farm case studied, the clustering evaluation criteria show that hierarchical agglomerative clustering gives reasonable results, spectral clustering is better, and K-means gives the best performance. The WPF results produced by the different clustering databases in turn confirm the effectiveness of the three evaluation criteria. The newly developed CPCC model has a much higher WPF accuracy than the CPFF model without clustering, on both temporal and spatial scales. The research supports both the development and improvement of short-term physical WPF systems.
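A hedged sketch of the clustering comparison on synthetic turbine coordinates; the real inputs would come from the wind farm layout and CFD database, and the within-between index is omitted here in favour of the two scikit-learn criteria:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.metrics import silhouette_score, calinski_harabasz_score

# Cluster synthetic turbine coordinates with the three methods and score each
# partition with two of the paper's evaluation criteria.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in ((0, 0), (3, 0), (0, 3))])

models = {
    "kmeans": KMeans(n_clusters=3, n_init=10, random_state=1),
    "agglomerative": AgglomerativeClustering(n_clusters=3),
    "spectral": SpectralClustering(n_clusters=3, random_state=1),
}
for name, model in models.items():
    labels = model.fit_predict(X)
    print(name, silhouette_score(X, labels), calinski_harabasz_score(X, labels))
```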
Bayesian inference of earthquake parameters from buoy data using a polynomial chaos-based surrogate
Giraldi, Loic
2017-04-07
This work addresses the estimation of the parameters of an earthquake model from the consequent tsunami, with an application to the Chile 2010 event. We are particularly interested in the Bayesian inference of the location, orientation and slip of an Okada-based model of the earthquake ocean floor displacement. The tsunami numerical model is based on the GeoClaw software, while the observational data are provided by a single DART® buoy. We propose a methodology based on polynomial chaos expansion to construct a surrogate model of the wave height at the buoy location (a one-dimensional analogue is sketched below). A correlated noise model is first proposed in order to represent the discrepancy between the computational model and the data. This step is necessary, as a classical independent Gaussian noise is shown to be unsuitable for modeling the error, preventing convergence of the Markov Chain Monte Carlo sampler. Second, the polynomial chaos model is subsequently improved to handle the variability of the arrival time of the wave, using a preconditioned non-intrusive spectral method. Finally, the construction of a reduced model dedicated to Bayesian inference is proposed. Numerical results are presented and discussed.
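The following is a one-dimensional analogue of the non-intrusive polynomial chaos construction: a Hermite expansion fitted by least squares to samples of a stand-in model. The target function, degree and sample count are assumptions; the paper works in several dimensions with a preconditioned spectral method:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Fit a degree-5 Hermite (probabilists') expansion of a stand-in model f(xi)
# for a standard-normal germ xi by least squares on sampled evaluations.
f = lambda xi: np.exp(0.3 * xi) * np.sin(xi)     # placeholder for wave height
xi = np.random.default_rng(2).normal(size=200)
Psi = hermevander(xi, 5)                         # basis evaluations, (200, 6)
coef, *_ = np.linalg.lstsq(Psi, f(xi), rcond=None)

xi_test = np.linspace(-2.0, 2.0, 5)
print(hermevander(xi_test, 5) @ coef - f(xi_test))  # surrogate error at test points
```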
Energy Technology Data Exchange (ETDEWEB)
Kostylev, Vladimir; Kostylev, Andrey; Carter, Chris; Mahoney, Chad; Pavlovski, Alexandre; Daye, Tony [Green Power Labs Inc., Dartmouth, NS (Canada); Cormier, Dallas Eugene; Fotland, Lena [San Diego Gas and Electric Co., San Diego, CA (United States)
2012-07-01
The marine atmospheric boundary layer is a layer of cool, moist maritime air, a few thousand feet thick, immediately below a temperature inversion. In coastal areas, as moist air rises from the ocean surface it becomes trapped and is often compressed into fog, above which a layer of stratus clouds often forms. This phenomenon is a common challenge for satellite-based solar radiation monitoring and forecasting: hour-ahead satellite-based solar radiation forecasts commonly use visible-spectrum satellite images, from which it is difficult to automatically differentiate low stratus clouds and fog from high-altitude clouds, complicating cloud motion tracking and cloud cover forecasting. The San Diego Gas and Electric® (SDG&E®) Marine Layer Project was undertaken to obtain information for integration with PV forecasts, and to develop a detailed understanding of the long-term benefits of forecasting Marine Layer (ML) events and their effects on PV production. In order to establish climatological ML patterns and the spatial extent and distribution of the marine layer, we analyzed an archive of visible and IR spectrum satellite images (GOES WEST) for a period of eleven years (2000-2010). Historical boundaries of marine layer impact were established based on the cross-classification of visible spectrum (VIS) and infrared (IR) images (sketched below). This approach is successfully used by us and elsewhere for evaluating cloud albedo in common satellite-based techniques for solar radiation monitoring and forecasting. It allows differentiation of cloud cover and helps distinguish the low-lying fog that is the main consequence of marine layer formation. ML occurrence probability and maximum inland extent were established for each hour and day of the analyzed period, and seasonal patterns were described. The SDG&E service area is the region most affected by ML events, with the highest extent and probability of ML occurrence. Influence of ML was the
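A minimal sketch of the VIS/IR cross-classification idea: pixels that are bright in the visible but warm in the IR are flagged as low stratus/fog. The threshold values are illustrative assumptions, not those used in the project:

```python
import numpy as np

# Flag pixels that are bright in the visible (cloudy) but warm in the IR
# (low cloud top) as candidate marine-layer stratus/fog.
def marine_layer_mask(vis_albedo, ir_temp_k):
    cloudy = vis_albedo > 0.3      # illustrative reflectance threshold
    low_top = ir_temp_k > 280.0    # illustrative warm-top threshold (K)
    return cloudy & low_top

vis = np.array([[0.6, 0.5], [0.1, 0.7]])
ir = np.array([[285.0, 250.0], [290.0, 283.0]])
print(marine_layer_mask(vis, ir))   # True only where both conditions hold
```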
Temporal variation of soil gas compositions for earthquake surveillance in Taiwan
International Nuclear Information System (INIS)
Walia, Vivek; Yang, Tsanyao Frank; Lin, Shih-Jung; Kumar, Arvind; Fu, Ching-Chou; Chiu, Jun-Ming; Chang, Hsaio-Hsien; Wen, Kuo-Liang; Chen, Cheng-Hong
2013-01-01
The present study investigates temporal variations of soil–gas composition in the vicinity of different fault zones in Taiwan. To carry out the investigations, variations of soil–gas composition were measured at continuous earthquake monitoring stations along the Hsincheng and Hsinhua faults in the Hsinchu and Tainan areas, respectively. Before a monitoring site was selected, the occurrence of deeper gas emanation was investigated by soil–gas surveys, followed by continuous monitoring of selected sites with respect to tectonic activity to check their sensitivity. Based on the results of long-term geochemical monitoring at the established monitoring stations, we can divide the studied area into two different tectonic zones. We proposed a tectonics-based model for earthquake forecasting in Taiwan and tested it for some large earthquakes that occurred during the observation period, i.e. 2009–2010. Based on anomalous signatures from particular monitoring stations we are able to identify the areas of impending earthquakes of magnitude ≥5, and we have tested this for some earthquakes which struck the country during that period. It can be concluded from these results that the stress/strain transmission for a particular earthquake is hindered by the different tectonic settings of the region under study. - Highlights: ► Variations of soil–gas composition are studied at two different faults in Taiwan. ► A tectonics-based model for earthquake forecasting in Taiwan was proposed and tested. ► Selection criteria to identify threshold earthquakes have been defined. ► Stress/strain transmission for an earthquake may be hindered by tectonic settings
Directory of Open Access Journals (Sweden)
Amir Hakimhashemi
2010-11-01
Full Text Available We apply a forecasting model for the spatio-temporal distribution of seismicity to the Italian region, based on a smoothing kernel function, Coulomb stress variations, and a rate-and-state friction law. We tested the feasibility of this approach and analyzed the importance of introducing time-dependency in forecasting future events. The change in seismicity rate as a function of time was estimated by calculating the Coulomb stress change imparted by large earthquakes (the rate-and-state response to a stress step is sketched below). We applied our approach to the region of Italy and used all of the cataloged earthquakes that occurred up to 2006 to generate the reference seismicity rate. For the calculation of the time-dependent seismicity rate changes, we estimated the rate-and-state stress transfer imparted by all of the ML≥4.0 earthquakes that occurred during 2007 and 2008. To validate the results, we first compared the reference seismicity rate with the distribution of ML≥1.8 earthquakes since 2007, using both a non-declustered and a declustered catalog. A positive correlation was found: all of the forecast earthquakes were located within the 82% and 87% of the study area with the highest seismicity rate, respectively, and 95% of the forecast earthquakes were located within the 27% and 47% of the study area with the highest seismicity rate, respectively. For the time-dependent seismicity rate changes, the number of events located in regions with a seismicity rate increase was 11% greater than in regions with a seismicity rate decrease.
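For reference, a sketch of the Dieterich (1994) rate-and-state response of seismicity rate to a Coulomb stress step, which is the mechanism this kind of forecast relies on; the parameter values below are illustrative assumptions:

```python
import numpy as np

# Seismicity rate after a Coulomb stress step dCFF (Dieterich, 1994):
# R(t) = r / (1 + (exp(-dCFF/(A*sigma)) - 1) * exp(-t/t_a)), t_a = A*sigma/tau_dot
def seismicity_rate(t_yr, dcff_mpa, r_ref=1.0, a_sigma=0.04, tau_dot=0.005):
    t_a = a_sigma / tau_dot                    # relaxation time (years)
    gamma = (np.exp(-dcff_mpa / a_sigma) - 1.0) * np.exp(-t_yr / t_a)
    return r_ref / (1.0 + gamma)

# A +0.1 MPa step raises the rate sharply, then it decays back to r_ref.
print(seismicity_rate(np.array([0.1, 1.0, 10.0, 100.0]), dcff_mpa=0.1))
```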
Barodka, Siarhei; Kliutko, Yauhenia; Krasouski, Alexander; Papko, Iryna; Svetashev, Alexander; Turishev, Leonid
2013-04-01
Numerical simulation of thundercloud formation processes is currently of great practical interest. Thunderclouds significantly affect airplane flights, and mesoscale weather forecasting has much to contribute to aviation forecast procedures; an accurate forecast can help to avoid weather-related aviation accidents. The present study focuses on modelling convective cloud development and thundercloud detection on the basis of mesoscale atmospheric process simulation, aiming to significantly improve aeronautical forecasts. In the analysis, primary weather radar information has been used and adapted for mesoscale forecast systems. Two types of domain have been selected for modelling: an internal one (with a radius of 8 km) and an external one (with a radius of 300 km). The internal domain has been applied directly to study local cloud development, and the external domain data have been treated as initial and boundary conditions for cloud cover formation. The domain height has been chosen according to civil aviation forecast requirements (i.e. not exceeding 14 km). Simulations of weather conditions and local cloud development have been made within the selected domains using the WRF modelling system. In several cases, thunderclouds are detected within the convective clouds. To identify this category of clouds, we employ a simulation technique for solid-phase formation processes in the atmosphere. Based on the modelling results, we construct vertical profiles indicating the amount of solid phase in the atmosphere, as well as profiles of the amounts of ice particles and large particles (hailstones). While simulating the processes of solid-phase formation, we investigate vertical and horizontal air flows. Consequently, we attempt to separate the total amount of solid phase into categories of small ice particles, large ice particles and hailstones. Also, we
Directory of Open Access Journals (Sweden)
Hong-Juan Li
2013-04-01
Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto-regression (AR) for electric load forecasting (the EMD-SVR core is sketched below). Electric load data from the New South Wales (Australia) market are employed to compare the forecasting performance of different models. The results confirm that the proposed model can simultaneously provide forecasts with good accuracy and interpretability.
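A hedged sketch of the EMD-then-SVR core on a synthetic load series, assuming the PyEMD package is available; the paper's AR component and model tuning are not reproduced here:

```python
import numpy as np
from PyEMD import EMD               # pip install EMD-signal (assumed available)
from sklearn.svm import SVR

# Decompose the load series into IMFs, fit one SVR per component on lagged
# values, and sum the one-step-ahead component forecasts.
rng = np.random.default_rng(3)
t = np.arange(500.0)
load = 10 + np.sin(2 * np.pi * t / 48) + 0.3 * rng.normal(size=t.size)

components = EMD().emd(load)        # IMFs; the last row is the residue
lag, forecast = 24, 0.0
for comp in components:
    X = np.column_stack([comp[i:i - lag] for i in range(lag)])  # lagged inputs
    y = comp[lag:]
    forecast += SVR(kernel="rbf").fit(X, y).predict(comp[-lag:][None, :])[0]
print("next-step load forecast:", forecast)
```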
Flood forecasting and uncertainty of precipitation forecasts
International Nuclear Information System (INIS)
Kobold, Mira; Suselj, Kay
2004-01-01
Timely and accurate flood forecasting is essential for reliable flood warning. The effectiveness of flood warning depends on the forecast accuracy of certain physical parameters, such as the peak magnitude of the flood, its timing, location and duration. Conceptual rainfall-runoff models enable the estimation of these parameters and lead to useful operational forecasts. Accurate rainfall is the most important input into hydrological models; the rainfall input can be real-time rain-gauge data, weather radar data, or meteorologically forecasted precipitation. A torrential nature of streams and fast runoff are characteristic of most Slovenian rivers, and extensive damage is caused almost every year by rainstorms affecting different regions of Slovenia. The lag time between rainfall and runoff is very short for Slovenian territory and on-line data are used only for nowcasting, so forecasted precipitation is necessary for hydrological forecasts some days ahead. ECMWF (European Centre for Medium-Range Weather Forecasts) gives a general forecast for several days ahead, while more detailed precipitation data from the limited-area ALADIN/SI model are available for two days ahead. There is a certain degree of uncertainty in using such precipitation forecasts based on meteorological models. The variability of precipitation is very high in Slovenia, and the uncertainty of ECMWF-predicted precipitation is very large for Slovenian territory. The ECMWF model can predict precipitation events correctly but generally underestimates the amount of precipitation; the average underestimation is about 60% for the Slovenian region. The predictions of the limited-area ALADIN/SI model up to 48 hours ahead show greater applicability in hydrological forecasting. Hydrological models are sensitive to precipitation input: the deviation of runoff is much bigger than the rainfall deviation, with a runoff-to-rainfall error fraction of about 1.6 (a worked example follows below). If spatial and time distribution
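A worked example of the reported sensitivity, using only the two figures quoted above (a 60% average rainfall underestimation and a runoff-to-rainfall error fraction of about 1.6):

```python
# The ~60% average rainfall underestimate, amplified by the ~1.6 error
# fraction, implies a runoff error of roughly 96%.
rain_error = 0.60
runoff_error = 1.6 * rain_error
print(f"expected runoff error ~ {runoff_error:.0%}")
```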
Blink Number Forecasting Based on Improved Bayesian Fusion Algorithm for Fatigue Driving Detection
Directory of Open Access Journals (Sweden)
Wei Sun
2015-01-01
Full Text Available An improved Bayesian fusion algorithm (BFA) is proposed for forecasting the blink number in a continuous video. It assumes that, at a given prediction interval, the blink number is correlated with the blink numbers of only a few previous intervals. With this assumption, the weights of the component predictors in the improved BFA are calculated from their prediction performance over only a few recent intervals rather than over all intervals. Therefore, compared with the conventional BFA, the improved BFA is more sensitive to disturbed conditions of the component predictors and adjusts their weights more rapidly (a simplified weighting scheme is sketched below). To determine the most relevant intervals, a grey relation entropy-based analysis (GREBA) method is proposed, which can be used to analyze the relevancy between the historical data flows of the blink number and the data flow at the current interval. Three single predictors, namely the autoregressive integrated moving average (ARIMA), radial basis function neural network (RBFNN), and Kalman filter (KF), are designed and incorporated linearly into the BFA. Experimental results demonstrate that the improved BFA clearly outperforms the conventional BFA in both accuracy and stability, and that fatigue driving can be accurately warned against in advance based on the blink number forecasted by the improved BFA.
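A simplified stand-in for the short-memory weighting idea: component forecasts are combined with weights from their errors over only the last few intervals. Inverse-mean-error weighting is used here for brevity and is an assumption; the paper derives the weights in a Bayesian way:

```python
import numpy as np

# Combine component forecasts (e.g., ARIMA, RBFNN, KF) with weights derived
# from their absolute errors over the last k intervals only.
def fused_forecast(preds_next, recent_abs_errors, eps=1e-9):
    """preds_next: (m,) forecasts; recent_abs_errors: (m, k) recent errors."""
    inv_err = 1.0 / (recent_abs_errors.mean(axis=1) + eps)
    weights = inv_err / inv_err.sum()          # larger weight = smaller error
    return float(weights @ np.asarray(preds_next)), weights

errors = np.abs(np.random.default_rng(4).normal(size=(3, 5)))
pred, w = fused_forecast(np.array([12.0, 14.0, 13.0]), errors)
print(pred, w)
```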
Ionospheric forecasting model using fuzzy logic-based gradient descent method
Directory of Open Access Journals (Sweden)
D. Venkata Ratnam
2017-09-01
Full Text Available Space weather phenomena cause satellite-to-ground or satellite-to-aircraft transmission outages over the VHF to L-band frequency range, particularly in the low-latitude region. The Global Positioning System (GPS) is primarily susceptible to this form of space weather. Faulty GPS signals are attributed to ionospheric error, which is a function of Total Electron Content (TEC). Importantly, precise forecasts of space weather conditions and the appropriate hazard warnings required for ionospheric space weather observations are limited. In this paper, a fuzzy logic-based gradient descent method is proposed to forecast ionospheric TEC values; in this technique, membership functions are tuned based on gradient-descent-estimated values (a toy version of this tuning is sketched below). The proposed algorithm has been tested with TEC data from two geomagnetic storms at the low-latitude station of KL University, Guntur, India (16.44°N, 80.62°E). It is found that the gradient descent method performs well and that the predicted TEC values are close to the original TEC measurements.
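A toy version of the gradient-descent tuning of a single Gaussian membership function against target membership values; the data, learning rate and single-function setting are assumptions, whereas the paper tunes full rule bases on TEC data:

```python
import numpy as np

# Fit centre c and width s of mu(x) = exp(-(x - c)^2 / (2 s^2)) by gradient
# descent on the mean squared membership error.
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 200)
target = np.exp(-(x - 4.0) ** 2 / (2 * 1.5 ** 2))   # "true" memberships

c, s, lr = 2.0, 1.0, 0.05
for _ in range(2000):
    mu = np.exp(-(x - c) ** 2 / (2 * s ** 2))
    err = mu - target
    c -= lr * np.mean(err * mu * (x - c) / s ** 2)       # dE/dc
    s -= lr * np.mean(err * mu * (x - c) ** 2 / s ** 3)  # dE/ds
print(c, s)  # should move toward 4.0 and 1.5
```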
A new solar power output prediction based on hybrid forecast engine and decomposition model.
Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando
2018-06-12
Regarding the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks, and its uncertain nature, PV energy prediction has been studied by researchers in recent decades. This problem directly affects operation of the power network, and due to the high volatility of the signal an accurate prediction model is required. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool; to increase the accuracy of the proposed method, a new interpolation method is used instead of cubic spline curve (CSC) fitting in EMD. The obtained output is then entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified on a number of real-world engineering test cases in comparison with other well-known models, and the obtained results confirm the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo
2015-05-01
Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with ensemble empirical mode decomposition (EEMD) is presented for forecasting medium- and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique, attaining deeper insight into the data characteristics. All IMF components and the residue are then predicted through appropriate ANN models. Finally, the forecasts of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series, from Biuliuhe and Mopanshan in China, are investigated using the developed model and four performance evaluation measures (RMSE, MAPE, R and NSEC). The results indicate that EEMD can effectively enhance forecasting accuracy and that the proposed EEMD-ANN model attains significant improvement over the ANN approach in medium- and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Li, Yanting; He, Yong; Su, Yan; Shu, Lianjie
2016-01-01
Highlights: • Suggests a nonparametric model based on MARS for output power prediction. • Compares the MARS model with a wide variety of prediction models. • Shows that the MARS model provides an overall good performance in both the training and testing stages. - Abstract: Both linear and nonlinear models have been proposed for forecasting the power output of photovoltaic systems. Linear models are simple to implement but less flexible. Due to the stochastic nature of the power output of PV systems, nonlinear models tend to provide better forecasts than linear models. Motivated by this, this paper suggests a fairly simple nonlinear regression model known as multivariate adaptive regression splines (MARS) as an alternative for forecasting solar power output. The MARS model is a data-driven modeling approach with no assumption about the relationship between the power output and predictors. It maintains the simplicity of the classical multiple linear regression (MLR) model while possessing the capability of handling nonlinearity. It is simpler in format than other nonlinear models such as ANN, k-nearest neighbors (KNN), classification and regression trees (CART), and support vector machines (SVM). The MARS model was applied to the daily output of a grid-connected 2.1 kW PV system to provide the 1-day-ahead mean daily forecast of the power output (a minimal fit is sketched below). Comparisons with a wide variety of forecast models show that the MARS model is able to provide reliable forecast performance.
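A minimal MARS fit on synthetic daily data, assuming the py-earth package (pyearth) as the MARS implementation; the predictors and their relationship to the output are invented for illustration:

```python
import numpy as np
from pyearth import Earth   # py-earth package, assumed installed

# Fit MARS to synthetic daily predictors; hinge functions capture the
# nonlinearity a plain multiple linear regression would miss.
rng = np.random.default_rng(6)
X = rng.uniform(0.0, 1.0, size=(365, 3))    # e.g., irradiance, temp, humidity
y = 2.0 * np.maximum(0.0, X[:, 0] - 0.4) + 0.5 * X[:, 1] \
    + 0.05 * rng.normal(size=365)

model = Earth(max_degree=2).fit(X, y)
print(model.summary())                      # selected basis functions
print("1-day-ahead forecast:", model.predict(X[-1:]))
```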
Energy Technology Data Exchange (ETDEWEB)
Ali, Ahmer; Hayah, Nadin Abu; Kim, Doo Kie [Dept. of Civil and Environmental Engineering, Kunsan National University, Kunsan (Korea, Republic of); Cho, Sung Gook [R and D Center, JACE KOREA Company, Gyeonggido (Korea, Republic of)
2014-10-15
The probabilistic seismic performance of a standard Korean nuclear power plant (NPP) with an idealized isolation is investigated in the present work. A probabilistic seismic hazard analysis (PSHA) of the Wolsong site on the Korean peninsula is performed by considering peak ground acceleration (PGA) as an earthquake intensity measure. A procedure is reported on the categorization and selection of two sets of ground motions of the Tohoku earthquake, i.e. long-period and common as Set A and Set B respectively, for the nonlinear time history response analysis of the base-isolated NPP. Limit state values as multiples of the displacement responses of the NPP base isolation are considered for the fragility estimation. The seismic risk of the NPP is further assessed by incorporation of the rate of frequency exceedance and conditional failure probability curves. Furthermore, this framework attempts to show the unacceptable performance of the isolated NPP in terms of the probabilistic distribution and annual probability of limit states. The comparative results for long and common ground motions are discussed to contribute to the future safety of nuclear facilities against drastic events like Tohoku.
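A sketch of how a hazard curve and a fragility curve combine into an annual limit-state probability, mirroring the incorporation of frequency-of-exceedance and conditional failure probability curves described above; the hazard and fragility parameters are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Annual P(limit state) = integral over intensity of |d(hazard)/da| * P(fail|a).
pga = np.linspace(0.01, 2.0, 400)              # intensity measure, PGA in g
haz_exceed = 1e-3 * (pga / 0.1) ** -2.0        # illustrative annual P(PGA > a)
haz_density = -np.gradient(haz_exceed, pga)    # occurrence density |d lambda/da|
fragility = norm.cdf(np.log(pga / 0.6) / 0.4)  # lognormal fragility, median 0.6 g

annual_pf = np.trapz(haz_density * fragility, pga)
print(f"annual limit-state probability ~ {annual_pf:.2e}")
```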
Skill prediction of local weather forecasts based on the ECMWF ensemble
Directory of Open Access Journals (Sweden)
C. Ziehmann
2001-01-01
Full Text Available Ensemble prediction has become an essential part of numerical weather forecasting. In this paper we investigate the ability of ensemble forecasts to provide an a priori estimate of the expected forecast skill. Several quantities derived from the local ensemble distribution are investigated for a two-year data set of European Centre for Medium-Range Weather Forecasts (ECMWF) temperature and wind speed ensemble forecasts at 30 German stations. The results indicate that the population of the ensemble mode provides useful information on the uncertainty in temperature forecasts, and the ensemble entropy is a similarly good measure (both are sketched below). This is not true for the spread if it is simply calculated as the variance of the ensemble members with respect to the ensemble mean. The number of clusters in the C regions is almost unrelated to the local skill. For wind forecasts, the results are less promising.
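A sketch of the two skill predictors highlighted above, computed from a synthetic 51-member temperature ensemble; the bin width used to define the mode population is an assumption:

```python
import numpy as np

# Two predictors of forecast skill from an ensemble: the fraction of members
# in the most occupied bin (mode population) and the entropy of the binned
# ensemble distribution.
def mode_population_and_entropy(members, bin_width=1.0):
    bins = np.arange(members.min(), members.max() + bin_width, bin_width)
    counts, _ = np.histogram(members, bins=bins)
    p = counts / counts.sum()
    mode_pop = p.max()
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return mode_pop, entropy

ens = np.random.default_rng(7).normal(12.0, 2.0, size=51)  # 51-member ensemble
print(mode_population_and_entropy(ens))
```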
QuakeUp: An advanced tool for a network-based Earthquake Early Warning system
Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo
2017-04-01
The currently developed and operational regional Earthquake Early Warning systems are grounded on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method which allows an alert to be issued based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently gemmated from the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanding P-wave time window (the two basic observables are sketched below). The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM), and by mapping the measured and
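A sketch of the two basic P-wave observables on a toy record, using the Kanamori-type definition tau_c = 2*pi*sqrt(int u^2 dt / int u'^2 dt) together with the peak displacement P_d; the synthetic waveform and window length are assumptions:

```python
import numpy as np

# Peak P-wave displacement P_d and characteristic period tau_c from a P window.
def p_wave_features(disp, vel, dt):
    p_d = np.max(np.abs(disp))
    tau_c = 2 * np.pi * np.sqrt(np.trapz(disp ** 2, dx=dt)
                                / np.trapz(vel ** 2, dx=dt))
    return p_d, tau_c

t = np.arange(0.0, 3.0, 0.01)                # 3-s expanding-window snapshot
disp = 1e-4 * np.sin(2 * np.pi * t / 1.2)    # toy displacement, 1.2-s period
vel = np.gradient(disp, t)
print(p_wave_features(disp, vel, 0.01))      # tau_c should come out near 1.2 s
```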
A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting
Directory of Open Access Journals (Sweden)
Chaolong Jia
2015-01-01
Full Text Available The wavelet transform adapts automatically to the requirements of time-frequency signal analysis: it can focus on any detail of a signal and decompose the function into a representation over a series of simple basis functions, which is of both theoretical and practical significance. This paper therefore subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction, and seeks the best-fitting forecast models for the detail signals and the approximation signal obtained through wavelet decomposition of the track irregularity time series, respectively (the decomposition step is sketched below). On this basis, a piecewise grey-ARMA recursive model based on wavelet decomposition and reconstruction (PG-ARMARWDR) and a piecewise ANN-ARMA recursive model based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis of the two models have shown that both can achieve higher accuracy.
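A sketch of the decomposition-reconstruction step with PyWavelets: the series is split into an approximation signal and the complementary detail content, which the paper then models separately; the wavelet family, level and synthetic series are assumptions:

```python
import numpy as np
import pywt  # PyWavelets

# Decompose a synthetic track-irregularity-like series, then reconstruct the
# approximation signal alone; details are the complement, modelled separately.
rng = np.random.default_rng(8)
x = np.cumsum(rng.normal(size=512)) + 0.2 * rng.normal(size=512)

coeffs = pywt.wavedec(x, "db4", level=3)     # [cA3, cD3, cD2, cD1]
approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
detail = x - approx                          # remaining detail content
print(approx.shape, detail.shape)
```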
Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke
2016-05-10
We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 event is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, the coseismic slip distribution, the afterslip distribution, and the largest foreshock and largest aftershock of the 2011 earthquake. These results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.