WorldWideScience

Sample records for market time series

  1. What marketing scholars should know about time series analysis : time series applications in marketing

    NARCIS (Netherlands)

    Horváth, Csilla; Kornelis, Marcel; Leeflang, Peter S.H.

    2002-01-01

    In this review, we give a comprehensive summary of time series techniques in marketing, and discuss a variety of time series analysis (TSA) techniques and models. We classify them into the sets (i) univariate TSA, (ii) multivariate TSA, and (iii) multiple TSA. We provide relevant marketing

  2. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples the spins of one system to those of other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we find non-zero cross-correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
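
    A minimal sketch of the coupled-Ising idea in this record: two spin systems whose local field includes a term from the other system's magnetization, with mean magnetization used as a return proxy. The ring topology and the parameters (J, g, beta) are illustrative assumptions, not the paper's specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 200, 2000            # spins per system, time steps
    J, g, beta = 1.0, 0.3, 0.9  # internal coupling, cross coupling, inverse temperature

    spins = rng.choice([-1, 1], size=(2, N))
    returns = np.zeros((2, T))

    for t in range(T):
        for s in range(2):                      # update each of the two systems
            other_mag = spins[1 - s].mean()     # coupling to the other system's spins
            for _ in range(N):                  # one Metropolis sweep
                i = rng.integers(N)
                # nearest neighbours on a ring plus a cross-system field
                h = J * (spins[s, (i - 1) % N] + spins[s, (i + 1) % N]) + g * other_mag
                dE = 2 * spins[s, i] * h
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[s, i] *= -1
            returns[s, t] = spins[s].mean()     # magnetization as a return proxy

    # volatility clustering appears as autocorrelated |returns|; the cross
    # coupling should produce a non-zero volatility cross-correlation
    print(np.corrcoef(np.abs(returns[0]), np.abs(returns[1]))[0, 1])
    ```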

  3. Time series momentum and contrarian effects in the Chinese stock market

    Science.gov (United States)

    Shi, Huai-Long; Zhou, Wei-Xing

    2017-10-01

    This paper concentrates on the time series momentum or contrarian effects in the Chinese stock market. We evaluate the performance of the time series momentum strategy applied to major stock indices in mainland China and explore the relation between the performance of time series momentum strategies and some firm-specific characteristics. Our findings indicate that there is a time series momentum effect in the short run and a contrarian effect in the long run in the Chinese stock market. The performances of the time series momentum and contrarian strategies are highly dependent on the look-back and holding periods and firm-specific characteristics.
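
    A toy illustration of a time series momentum rule of the kind evaluated above, assuming a pandas Series of periodic returns; `lookback` and `hold` correspond to the look-back and holding periods the abstract says drive performance. This is a hypothetical sketch, not the authors' code.

    ```python
    import numpy as np
    import pandas as pd

    def tsmom(returns: pd.Series, lookback: int = 12, hold: int = 1) -> pd.Series:
        """Strategy return: long (short) for `hold` periods after a positive
        (negative) trailing `lookback`-period cumulative return."""
        trailing = (1 + returns).rolling(lookback).apply(np.prod, raw=True) - 1
        signal = np.sign(trailing)
        # average overlapping positions, trading one period after the signal
        position = signal.shift(1).rolling(hold).mean()
        return position * returns

    rng = np.random.default_rng(1)
    r = pd.Series(rng.normal(0.005, 0.05, 240))   # stand-in for monthly returns
    print(tsmom(r, lookback=12, hold=3).dropna().mean())
    ```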

  4. Time series analysis of the developed financial markets' integration using visibility graphs

    Science.gov (United States)

    Zhuang, Enyu; Small, Michael; Feng, Gang

    2014-09-01

    A time series representing the developed financial markets' segmentation from 1973 to 2012 is studied. The time series reveals an obvious market integration trend. To further uncover the features of this time series, we divide it into seven windows and generate seven visibility graphs. The measuring capabilities of the visibility graphs provide means to quantitatively analyze the original time series. It is found that the important historical incidents that influenced market integration coincide with variations in the measured graphical node degree. Through the measure of neighborhood span, the frequencies of the historical incidents are disclosed. Moreover, it is also found that large "cycles" and significant noise in the time series are linked to large and small communities in the generated visibility graphs. For large cycles, how historical incidents significantly affected market integration is distinguished by density and compactness of the corresponding communities.
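
    For reference, a brute-force sketch of the natural visibility graph mapping that underlies this kind of analysis: two points are linked when no intermediate point blocks the straight line between them. The networkx package is assumed available, and the degree statistic mirrors the node-degree measure discussed above.

    ```python
    import networkx as nx
    import numpy as np

    def visibility_graph(y):
        g = nx.Graph()
        n = len(y)
        g.add_nodes_from(range(n))
        for a in range(n):
            for b in range(a + 1, n):
                # visibility: y_c < y_b + (y_a - y_b) * (b - c) / (b - a) for all a < c < b
                if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                       for c in range(a + 1, b)):
                    g.add_edge(a, b)
        return g

    series = np.random.default_rng(2).random(100)
    g = visibility_graph(series)
    degrees = [d for _, d in g.degree()]   # node degrees, as analyzed above
    print(np.mean(degrees))
    ```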

  5. A Time Series Analysis to Asymmetric Marketing Competition Within a Market Structure

    OpenAIRE

    Francisco F. R. Ramos

    1996-01-01

    As a complement to existing studies of competitive market structure analysis, the present paper proposes a time series methodology to provide a more detailed picture of marketing competition in relation to competitive market structure. Two major hypotheses were tested as part of this project. First, it was found that significant cross-lead and lag effects of marketing variables on sales between brands existed even between different submarkets. Second, it was found that high qual...

  6. Two-fractal overlap time series: Earthquakes and market crashes

    Indian Academy of Sciences (India)

    ...velocity over the other and time series of stock prices. An anticipation method for some of the crashes has been proposed here, based on these observations. Keywords: Cantor set; time series; earthquake; market crash. PACS Nos: 05.00; 02.50.-r; 64.60; 89.65.Gh; 95.75.Wx.

  7. Arbitrage, market definition and monitoring a time series approach

    OpenAIRE

    Burke, S; Hunter, J

    2012-01-01

    This article considers the application of time series methods to regional price data to test stationarity, multivariate cointegration and exogeneity. The discovery of stationary price differentials in a bivariate setting implies that the series are rendered stationary by capturing a common trend, and through this mechanism we observe long-run arbitrage. This is indicative of a broader market definition and of efficiency. The problem is considered in relation to more than 700 weekly data points on...

  8. Analysis of cyclical behavior in time series of stock market returns

    Science.gov (United States)

    Stratimirović, Djordje; Sarvan, Darko; Miljković, Vladimir; Blesić, Suzana

    2018-01-01

    In this paper we have analyzed scaling properties and cyclical behavior of three types of stock market index (SMI) time series: data belonging to stock markets of developed economies, emerging economies, and underdeveloped or transitional economies. We have used two techniques of data analysis to obtain and verify our findings: wavelet transform (WT) spectral analysis to identify cycles in the SMI returns data, and time-dependent detrended moving average (tdDMA) analysis to investigate local behavior around market cycles and trends. We found cyclical behavior in all SMI data sets that we analyzed. Moreover, the positions and boundaries of the cyclical intervals that we found seem to be common for all markets in our dataset. We list and illustrate the presence of nine such periods in our SMI data. We report on the possibility of differentiating between the levels of growth of the analyzed markets by way of statistical analysis of the properties of the wavelet spectra that characterize particular peak behaviors. Our results show that measures like the relative WT energy content and the relative WT amplitude of the peaks in the small-scales region could be used to partially differentiate between market economies. Finally, we propose a way to quantify the level of development of a stock market based on estimation of the local complexity of the market's SMI series. From the local scaling exponents calculated for our nine peak regions we have defined what we named the Development Index, which proved, at least in the case of our dataset, suitable for ranking the SMI series that we analyzed into three distinct groups.

  9. A hybrid approach EMD-HW for short-term forecasting of daily stock market time series data

    Science.gov (United States)

    Awajan, Ahmad Mohd; Ismail, Mohd Tahir

    2017-08-01

    Recently, forecasting time series has attracted considerable attention in the field of analyzing financial time series data, specifically within the stock market index. Moreover, stock market forecasting is a challenging area of financial time-series forecasting. In this study, a hybrid methodology combining Empirical Mode Decomposition with the Holt-Winters method (EMD-HW) is used to improve forecasting performance on financial time series. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need to use any transformation method. Moreover, EMD-HW has relatively high accuracy and offers a new forecasting method for time series. Daily stock market time series data from 11 countries are used to show the forecasting performance of the proposed EMD-HW. Based on three forecast accuracy measures, the results indicate that EMD-HW forecasting performance is superior to the traditional Holt-Winters forecasting method.
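
    A rough reconstruction of the EMD-HW pipeline as described: decompose the series into intrinsic mode functions, forecast each component with Holt-Winters, and sum the component forecasts. It assumes the third-party PyEMD (EMD-signal) and statsmodels packages and is a sketch of the idea, not the authors' implementation.

    ```python
    import numpy as np
    from PyEMD import EMD
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    def emd_hw_forecast(y: np.ndarray, horizon: int = 5) -> np.ndarray:
        components = EMD().emd(y)          # IMFs plus the residual trend
        forecast = np.zeros(horizon)
        for c in components:
            # Holt-Winters (additive trend) fit per component; parameters
            # here are defaults, not the paper's tuned settings
            model = ExponentialSmoothing(c, trend="add").fit()
            forecast += model.forecast(horizon)
        return forecast

    rng = np.random.default_rng(3)
    prices = np.cumsum(rng.normal(0, 1, 500)) + 100   # stand-in daily index
    print(emd_hw_forecast(prices))
    ```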

  10. TIME SERIES ANALYSIS ON STOCK MARKET FOR TEXT MINING CORRELATION OF ECONOMY NEWS

    Directory of Open Access Journals (Sweden)

    Sadi Evren SEKER

    2014-01-01

    Full Text Available This paper proposes an information retrieval method for economy news. The effect of economy news is researched at the word level, and stock market values are considered as the ground truth. The correlation between stock market prices and economy news is an already addressed problem for most countries. The most well-known approach is applying text mining approaches to the news, and some time series analysis techniques over stock market closing values, in order to apply classification or clustering algorithms over the features extracted. This study goes further and asks the question: what are the available time series analysis techniques for stock market closing values, and which one is the most suitable? In this study, the news and their dates are collected into a database and text mining is applied over the news; the text mining part has been kept simple, with only the term frequency-inverse document frequency method. For the time series analysis part, we have studied 10 different methods, such as random walk, moving average, acceleration, Bollinger bands, price rate of change, periodic average, difference, momentum, relative strength index, and their variations. In this study we have also explained these techniques in a comparative way, and we have applied the methods over Turkish Stock Market closing values for a period of more than 2 years. On the other hand, we have applied the term frequency-inverse document frequency method on the economy news of one of the highest-circulation newspapers in Turkey.
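
    The two ingredients the study combines can be sketched as follows: TF-IDF features from news text (via scikit-learn) alongside a few of the price indicators the abstract lists (moving average, momentum, price rate of change). The sample headlines and window sizes are placeholders, not data from the study.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    news = ["central bank cuts rates", "inflation fears hit markets"]
    tfidf = TfidfVectorizer().fit_transform(news)   # term frequency-inverse document frequency

    def moving_average(close, window=10):
        return np.convolve(close, np.ones(window) / window, mode="valid")

    def momentum(close, lag=10):
        return close[lag:] - close[:-lag]

    def rate_of_change(close, lag=10):
        return (close[lag:] - close[:-lag]) / close[:-lag]

    close = np.cumsum(np.random.default_rng(4).normal(0, 1, 300)) + 100
    print(tfidf.shape, moving_average(close)[:3], rate_of_change(close)[:3])
    ```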

  11. Predicting the Market Potential Using Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Halmet Bradosti

    2015-12-01

    Full Text Available The aim of this analysis is to forecast a mini-market's sales volume for the twelve-month period from August 2015 to August 2016. The study is based on the monthly sales in Iraqi Dinar of a private local mini-market from April 2014 to July 2015. As revealed by the graph, and assuming the stagnant economic conditions continue, the trend of future sales is downward. Based on the time series analysis, the business may continue to operate and generate small revenues until August 2016. However, due to low sales volume, low profit margins and operating expenses, the revenues may not be adequate to produce positive net income, and the business may not be able to operate afterward. The principal question raised by this is that forecasting sales in the region will be difficult where the business cycle is so dynamic and turbulent due to systematic risks and an unforeseeable future.

  12. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    OpenAIRE

    Yanhui Xi; Hui Peng; Yemei Qin

    2016-01-01

    The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumed a negative correlation in the errors between the price/return innovation and the volatility innovation....

  13. High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong

    2008-02-01

    Stock investors usually make their short-term investment decisions according to recent stock information such as the latest market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock prices, this paper proposes a comprehensive fuzzy time-series model, which factors linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time series into the forecasting processes. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen’s (1996), Yu’s (2005), Cheng’s (2006) and Chen’s (2007), are used as comparison models. Besides, to compare with a conventional statistical method, the method of least squares is utilized to estimate the auto-regressive models of the testing periods within the databases. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models which only factor fuzzy logical relationships into the forecasting processes. From the empirical study, both the traditional statistical method and the proposed model reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.

  14. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    Directory of Open Access Journals (Sweden)

    Yanhui Xi

    2016-01-01

    Full Text Available The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumes a negative correlation in the errors between the price/return innovation and the volatility innovation. With the new representation, a theoretical explanation of the leverage effect is provided. Simulated data and daily stock market indices (Shanghai composite index, Shenzhen component index, and Standard and Poor’s 500 Composite index) via the Bayesian Markov Chain Monte Carlo (MCMC) method are used to estimate the leverage market microstructure model. The results verify the effectiveness of the model and its estimation approach proposed in the paper and also indicate that the stock markets have strong leverage effects. Compared with the classical leverage stochastic volatility (SV) model in terms of DIC (Deviance Information Criterion), the leverage market microstructure model fits the data better.

  15. Time Series Momentum

    DEFF Research Database (Denmark)

    Moskowitz, Tobias J.; Ooi, Yao Hua; Heje Pedersen, Lasse

    2012-01-01

    We document significant “time series momentum” in equity index, currency, commodity, and bond futures for each of the 58 liquid instruments we consider. We find persistence in returns for one to 12 months that partially reverses over longer horizons, consistent with sentiment theories of initial under-reaction and delayed over-reaction. A diversified portfolio of time series momentum strategies across all asset classes delivers substantial abnormal returns with little exposure to standard asset pricing factors and performs best during extreme markets. Examining the trading activities...

  16. Time series regression-based pairs trading in the Korean equities market

    Science.gov (United States)

    Kim, Saejoon; Heo, Jun

    2017-07-01

    Pairs trading is an instance of statistical arbitrage that relies on heavy quantitative data analysis to profit by capitalising low-risk trading opportunities provided by anomalies of related assets. A key element in pairs trading is the rule by which open and close trading triggers are defined. This paper investigates the use of time series regression to define the rule which has previously been identified with fixed threshold-based approaches. Empirical results indicate that our approach may yield significantly increased excess returns compared to ones obtained by previous approaches on large capitalisation stocks in the Korean equities market.
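
    For contrast with the paper's regression-defined triggers, here is the conventional fixed-threshold z-score rule it improves upon, using a rolling hedge ratio; the thresholds, window length, and simulated pair are illustrative assumptions.

    ```python
    import numpy as np
    import pandas as pd

    def pairs_signals(x: pd.Series, y: pd.Series, window=60, z_open=2.0, z_close=0.5):
        beta = y.rolling(window).cov(x) / x.rolling(window).var()  # rolling hedge ratio
        spread = y - beta * x
        z = (spread - spread.rolling(window).mean()) / spread.rolling(window).std()
        signal = pd.Series(0, index=y.index)
        signal[z > z_open] = -1    # open short-the-spread when it is rich
        signal[z < -z_open] = 1    # open long-the-spread when it is cheap
        signal[z.abs() < z_close] = 0   # close near the mean
        return signal

    rng = np.random.default_rng(5)
    x = pd.Series(np.cumsum(rng.normal(0, 1, 500)) + 100)
    y = 0.8 * x + pd.Series(rng.normal(0, 2, 500))   # cointegrated partner
    print(pairs_signals(x, y).value_counts())
    ```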

  17. R-evolution in Time Series Analysis Software Applied on R-omanian Capital Market

    Directory of Open Access Journals (Sweden)

    Ciprian ALEXANDRU

    2014-06-01

    Full Text Available Worldwide and during the last decade, R has developed in a balanced way and nowadays it represents the most powerful tool for computational statistics, data science and visualization. Millions of data scientists use R to face their most challenging problems in topics ranging from economics to engineering and genetics. In this study, R was used to compute data on stock market prices in order to build trading models and to estimate the evolution of the quantitative financial market. These models have already been applied on international capital markets. In Romania, quantitative modeling of the capital market is available only to clients of trading brokers because the time series data are collected for commercial purposes; in these circumstances, statistical computing tools meet inertia to change. This paper aims to expose a small part of the capability of R to use mix-and-match models and cutting-edge methods in statistics and quantitative modeling in order to build an alternative way to analyze the capital market in Romania beyond the commercial threshold.

  18. Empirical method to measure stochasticity and multifractality in nonlinear time series

    Science.gov (United States)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of the time series from a Wiener process so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences about the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.

  19. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper mainly applies the information categorization method to analyze financial time series. The method is used to examine the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and we report the results of similarity in US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show the difference of similarity between different stock markets in different time periods, and that the similarity of the two stock markets becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three areas; this means the method can distinguish different areas' markets from the phylogenetic trees. The results show that we can get satisfactory information from financial markets by this method. The information categorization method can be used not only for physiologic time series, but also for financial time series.

  20. The string prediction models as an invariants of time series in forex market

    OpenAIRE

    Richard Pincak; Marian Repasan

    2011-01-01

    In this paper we apply a new approach of string theory to the real financial market. It is a direct extension and application of the work [1] to the prediction of prices. The models are constructed with the idea of prediction models based on string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series. A brief overview of the results and analysis is given. The first model is ...

  1. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    Science.gov (United States)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.
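
    A simplified cousin of this segmentation idea: recursively split the series at the point with the largest two-sample t statistic and keep the cut when it is significant, in the spirit of the Bernaola-Galván algorithm the paper generalizes. The compound-Poisson specifics of the new method are not reproduced here; the minimum segment length and p-value threshold are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def segment(x, start=0, min_len=20, p_threshold=0.01, cuts=None):
        if cuts is None:
            cuts = []
        n = len(x)
        if n < 2 * min_len:
            return cuts
        # candidate cut with the largest two-sample t statistic
        ts = [abs(stats.ttest_ind(x[:i], x[i:], equal_var=False).statistic)
              for i in range(min_len, n - min_len)]
        i_best = int(np.argmax(ts)) + min_len
        p = stats.ttest_ind(x[:i_best], x[i_best:], equal_var=False).pvalue
        if p < p_threshold:                      # accept only significant regime changes
            cuts.append(start + i_best)
            segment(x[:i_best], start, min_len, p_threshold, cuts)
            segment(x[i_best:], start + i_best, min_len, p_threshold, cuts)
        return sorted(cuts)

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
    print(segment(x))   # should place a cut near index 300
    ```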

  2. Visibility Graph Based Time Series Analysis.

    Science.gov (United States)

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide us rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.

  3. Visibility Graph Based Time Series Analysis.

    Directory of Open Access Journals (Sweden)

    Mutua Stephen

    Full Text Available Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide us rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.

  4. Wavelet transform approach for fitting financial time series data

    Science.gov (United States)

    Ahmed, Amel Abdoullah; Ismail, Mohd Tahir

    2015-10-01

    This study investigates a newly developed technique, a combined wavelet filtering and VEC model, to study the dynamic relationships among financial time series. A wavelet filter has been used to remove noise from the daily data sets of the US NASDAQ stock market and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. After that, the returns of the series generated by the wavelet filter and of the original series are analyzed by a cointegration test and the VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model) in fitting the financial stock market series well, and reveals real information about the relationships among these stock markets.
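
    The wavelet filtering step can be sketched with the PyWavelets package: decompose, soft-threshold the detail coefficients with a universal threshold, and reconstruct; the cointegration/VEC stage would then run on the filtered series. The wavelet choice and decomposition level are assumptions, not the study's settings.

    ```python
    import numpy as np
    import pywt

    def wavelet_filter(x, wavelet="db4", level=3):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        # universal threshold, with noise scale estimated from the finest details
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    rng = np.random.default_rng(7)
    returns = rng.normal(0, 0.01, 1024) + 0.005 * np.sin(np.arange(1024) / 20)
    print(np.std(returns), np.std(wavelet_filter(returns)))  # noise is reduced
    ```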

  5. Analysis of Heavy-Tailed Time Series

    DEFF Research Database (Denmark)

    Xie, Xiaolei

    This thesis is about analysis of heavy-tailed time series. We discuss tail properties of real-world equity return series and investigate the possibility that a single tail index is shared by all return series of actively traded equities in a market. Conditions for this hypothesis to be true are identified. We study the eigenvalues and eigenvectors of sample covariance and sample auto-covariance matrices of multivariate heavy-tailed time series, particularly for time series with very high dimensions. Asymptotic approximations of the eigenvalues and eigenvectors of such matrices are found and expressed in terms of the parameters of the dependence structure, among others. Furthermore, we study an importance sampling method for estimating rare-event probabilities of multivariate heavy-tailed time series generated by matrix recursion. We show that the proposed algorithm is efficient in the sense...

  6. Non-linear forecasting in high-frequency financial time series

    Science.gov (United States)

    Strozzi, F.; Zaldívar, J. M.

    2005-08-01

    A new methodology based on state space reconstruction techniques has been developed for trading in financial markets. The methodology has been tested using 18 high-frequency foreign exchange time series. The results are in apparent contradiction with the efficient market hypothesis which states that no profitable information about future movements can be obtained by studying the past prices series. In our (off-line) analysis positive gain may be obtained in all those series. The trading methodology is quite general and may be adapted to other financial time series. Finally, the steps for its on-line application are discussed.

  7. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
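
    The central observable above, the time averaged MSD, is straightforward to compute; a minimal version for a log-price series follows, with the lag values as illustrative choices (for geometric Brownian motion the log-price TAMSD grows roughly linearly in the lag).

    ```python
    import numpy as np

    def tamsd(x: np.ndarray, lag: int) -> float:
        """Time averaged mean squared displacement of trajectory x at a given lag."""
        disp = x[lag:] - x[:-lag]
        return float(np.mean(disp ** 2))

    rng = np.random.default_rng(8)
    log_price = np.cumsum(rng.normal(0, 0.01, 5000))  # GBM log-price proxy
    print([tamsd(log_price, lag) for lag in (1, 10, 100, 1000)])
    ```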

  8. The string prediction models as invariants of time series in the forex market

    Science.gov (United States)

    Pincak, R.

    2013-12-01

    In this paper we apply a new approach of string theory to the real financial market. The models are constructed with an idea of prediction models based on the string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series. A brief overview of the results and analysis is given. The first model is based on the correlation function as invariant and the second one is an application based on the deviations from the closed string/pattern form (PMBCS). We found the difference between these two approaches. The first model cannot predict the behavior of the forex market with good efficiency in comparison with the second one which is, in addition, able to make relevant profit per year. The presented string models could be useful for portfolio creation and financial risk management in the banking sector as well as for a nonlinear statistical approach to data optimization.

  9. Refined composite multiscale weighted-permutation entropy of financial time series

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values show large fluctuations for slight variations of the data locations, and a significant distinction only for different lengths of time series. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with other methods' results on both synthetic data and financial time series, the RCMWPE method shows not only the advantages inherited from MWPE but also lower sensitivity to the data locations, more stability, and much less dependence on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on the daily price return series from Asian and European stock markets. There are significant differences between Asian markets and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method has been supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.

  10. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.

  11. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  12. Anomaly on Superspace of Time Series Data

    Science.gov (United States)

    Capozziello, Salvatore; Pincak, Richard; Kanjamapornkul, Kabin

    2017-11-01

    We apply the G-theory and anomaly of ghost and antighost fields in the theory of supersymmetry to study a superspace over time series data for the detection of hidden general supply and demand equilibrium in the financial market. We provide proof of the existence of a general equilibrium point over 14 extradimensions of the new G-theory compared with the M-theory of the 11 dimensions model of Edward Witten. We found that the process of coupling between nonequilibrium and equilibrium spinor fields of expectation ghost fields in the superspace of time series data induces an infinitely long exact sequence of cohomology from a short exact sequence of moduli state space model. If we assume that the financial market is separated into two topological spaces of supply and demand as the D-brane and anti-D-brane model, then we can use a cohomology group to compute the stability of the market as a stable point of the general equilibrium of the interaction between D-branes of the market. We obtain the result that the general equilibrium will exist if and only if the 14th Batalin-Vilkovisky cohomology group with the negative dimensions underlying 14 major hidden factors influencing the market is zero.

  13. Using time series structural characteristics to analyze grain prices in food insecure countries

    Science.gov (United States)

    Davenport, Frank; Funk, Chris

    2015-01-01

    Two components of food security monitoring are accurate forecasts of local grain prices and the ability to identify unusual price behavior. We evaluated a method that can both facilitate forecasts of cross-country grain price data and identify dissimilarities in price behavior across multiple markets. This method, characteristic based clustering (CBC), identifies similarities in multiple time series based on structural characteristics in the data. Here, we conducted a simulation experiment to determine if CBC can be used to improve the accuracy of maize price forecasts. We then compared forecast accuracies among clustered and non-clustered price series over a rolling time horizon. We found that the accuracy of forecasts on clusters of time series were equal to or worse than forecasts based on individual time series. However, in the following experiment we found that CBC was still useful for price analysis. We used the clusters to explore the similarity of price behavior among Kenyan maize markets. We found that price behavior in the isolated markets of Mandera and Marsabit has become increasingly dissimilar from markets in other Kenyan cities, and that these dissimilarities could not be explained solely by geographic distance. The structural isolation of Mandera and Marsabit that we find in this paper is supported by field studies on food security and market integration in Kenya. Our results suggest that a market with a unique price series (as measured by structural characteristics that differ from neighboring markets) may lack market integration and food security.

  14. The detection of local irreversibility in time series based on segmentation

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-06-01

    We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection is useful for evaluating the displacement of irreversibility toward local skewness. By means of this method, we can effectively discuss the local irreversible fluctuations of a time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to the increase of the scale. The method was also applied to financial market series, i.e., American, Chinese and European markets. The local irreversibility of the different markets demonstrates distinct characteristics. Simulations and real data support the need of exploring local irreversibility.

  15. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    A recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and...

  16. Time Series Analysis of Wheat Futures Reward in China

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Unlike existing research, which has focused mainly on single futures contracts and lacks comparisons across periods, this paper describes the statistical characteristics of the wheat futures reward time series of the Zhengzhou Commodity Exchange over the recent three years. Besides basic statistical analysis, the paper uses GARCH and EGARCH models to describe the series that exhibit the ARCH effect, and analyzes the persistence of volatility shocks and the leverage effect. The results show that, compared with a normal distribution, the wheat futures reward series are abnormal, leptokurtic and thick-tailed. The study also found that two parts of the reward series had no autocorrelation. Among the six correlated series, three presented the ARCH effect. Using the Auto-regressive Distributed Lag Model, the GARCH model and the EGARCH model, the paper demonstrates the persistence of volatility shocks and the leverage effect in the wheat futures reward time series. The results reveal that, on the one hand, the statistical characteristics of the wheat futures reward are similar to those of mature futures markets abroad as a whole; but on the other hand, the results reflect some shortcomings, such as immaturity and over-control by the government, in the Chinese futures market.
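
    A sketch of the GARCH(1,1)/EGARCH(1,1) fits the paper relies on, assuming the third-party `arch` package; the simulated heavy-tailed series stands in for the wheat futures reward data, which is not reproduced here.

    ```python
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(9)
    rewards = rng.standard_t(df=4, size=1000) * 0.01   # stand-in heavy-tailed returns

    garch = arch_model(rewards, vol="GARCH", p=1, q=1).fit(disp="off")
    egarch = arch_model(rewards, vol="EGARCH", p=1, o=1, q=1).fit(disp="off")

    # persistence of volatility shocks: alpha + beta near 1 in the GARCH fit;
    # the asymmetry (`o`) term in the EGARCH fit captures the leverage effect
    print(garch.params)
    print(egarch.params)
    ```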

  17. Characteristics of the transmission of autoregressive sub-patterns in financial time series

    Science.gov (United States)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong

    2014-09-01

    There are many types of autoregressive patterns in financial time series, and they form a transmission process. Here, we define autoregressive patterns quantitatively through an econometrical regression model. We present a computational algorithm that sets the autoregressive patterns as nodes and transmissions between patterns as edges, and then converts the transmission process of autoregressive patterns in a time series into a network. We utilised daily Shanghai (securities) composite index time series to study the transmission characteristics of autoregressive patterns. We found statistically significant evidence that the financial market is not random and that there are similar characteristics between parts of the time series and the whole. A few types of autoregressive sub-patterns and transmission patterns drive the oscillations of the financial market. A clustering effect on fluctuations appears in the transmission process, and certain non-major autoregressive sub-patterns have high mediating capabilities in the financial time series. Different stock indexes exhibit similar characteristics in the transmission of fluctuation information. This work not only proposes a distinctive perspective for analysing financial time series but also provides important information for investors.

  18. Econophysics — complex correlations and trend switchings in financial time series

    Science.gov (United States)

    Preis, T.

    2011-03-01

    This article focuses on the analysis of financial time series and their correlations. A method is used for quantifying pattern based correlations of a time series. With this methodology, evidence is found that typical behavioral patterns of financial market participants manifest over short time scales, i.e., that reactions to given price patterns are not entirely random, but that similar price patterns also cause similar reactions. Based on the investigation of the complex correlations in financial time series, the question arises, which properties change when switching from a positive trend to a negative trend. An empirical quantification by rescaling provides the result that new price extrema coincide with a significant increase in transaction volume and a significant decrease in the length of corresponding time intervals between transactions. These findings are independent of the time scale over 9 orders of magnitude, and they exhibit characteristics which one can also find in other complex systems in nature (and in physical systems in particular). These properties are independent of the markets analyzed. Trends that exist only for a few seconds show the same characteristics as trends on time scales of several months. Thus, it is possible to study financial bubbles and their collapses in more detail, because trend switching processes occur with higher frequency on small time scales. In addition, a Monte Carlo based simulation of financial markets is analyzed and extended in order to reproduce empirical features and to gain insight into their causes. These causes include both financial market microstructure and the risk aversion of market participants.

  19. DIY Solar Market Analysis Webinar Series: Solar Resource and Technical Potential

    Science.gov (United States)

    As part of a Do-It-Yourself Solar Market Analysis webinar series, NREL presented "Solar Resource and Technical Potential" (Wednesday, June 11, 2014) for state, local, and tribal governments.

  20. Quantifying complexity of financial short-term time series by composite multiscale entropy measure

    Science.gov (United States)

    Niu, Hongli; Wang, Jun

    2015-05-01

    It is significant to study the complexity of financial time series since the financial market is a complex evolving dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series. Due to the low reliability of its entropy estimation for short-term time series at large time scales, a modified method, the composite multiscale entropy (CMSE), is applied to the financial market. To verify its effectiveness, its applications to synthetic white noise and 1/f noise with different data lengths are reproduced first in the present paper. Then it is introduced for the first time to make a reliability test with two Chinese stock indices. When conducted on short-term return series, the CMSE method shows advantages in reducing the deviation of the entropy estimation and demonstrates more stable and reliable results compared with the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
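
    A compact sketch of the composite multiscale entropy idea: at each scale, sample entropy is averaged over all coarse-graining offsets, which is what reduces the estimation deviation for short series. The embedding dimension and tolerance are conventional choices, and this simplified version fixes the tolerance per coarse-grained series rather than from the original record.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        if r is None:
            r = 0.15 * np.std(x)   # tolerance; convention, not the paper's value
        n = len(x)
        def count(mm):
            templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
            c = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
                c += np.sum(d <= r) - 1        # exclude the self-match
            return c
        b, a = count(m), count(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.nan

    def cmse(x, scale):
        ents = []
        for k in range(scale):                 # every coarse-graining offset
            tail = x[k:]
            n = len(tail) // scale
            coarse = tail[:n * scale].reshape(n, scale).mean(axis=1)
            ents.append(sample_entropy(coarse))
        return np.nanmean(ents)

    rng = np.random.default_rng(13)
    print([round(cmse(rng.normal(size=1000), s), 3) for s in (1, 2, 5)])
    ```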

  1. Synthetic river flow time series generator for dispatch and spot price forecast

    International Nuclear Information System (INIS)

    Flores, R.A.

    2007-01-01

    Decision-making in electricity markets is complicated by uncertainties in demand growth, power supplies and fuel prices. In Peru, where the electrical power system is highly dependent on water resources at dams and river flows, hydrological uncertainties play a primary role in planning, price and dispatch forecasts. This paper proposes a signal processing method for generating new synthetic river flow time series as a support for planning and spot market price forecasting. River flow time series are natural phenomena representing a continuous-time domain process. As an alternative synthetic representation of the original river flow time series, the proposed signal processing method preserves correlations, basic statistics and seasonality. It takes into account deterministic, periodic and non-periodic components such as those due to the El Nino Southern Oscillation phenomenon. The new synthetic time series has many correlations with the original river flow time series, rendering it suitable as a possible replacement for the classical method of sorting historical river flow time series. As a dispatch and planning approach to spot pricing, the proposed method offers higher modeling accuracy by decomposing the signal into deterministic, periodic, non-periodic and stochastic sub-signals. 4 refs., 4 tabs., 13 figs

  2. Estimation of Hurst Exponent for the Financial Time Series

    Science.gov (United States)

    Kumar, J.; Manchanda, P.

    2009-07-01

    Till recently, statistical methods and Fourier analysis were employed to study fluctuations in stock markets in general and the Indian stock market in particular. However, the current trend is to apply the concepts of wavelet methodology and the Hurst exponent; see for example the work of Manchanda, J. Kumar and Siddiqi, Journal of the Franklin Institute 144 (2007), 613-636, and the paper of Cajueiro and Tabak, Physica A, 2003, who checked the efficiency of emerging markets by computing the Hurst exponent over a time window of 4 years of data. Our goal in the present paper is to understand the dynamics of the Indian stock market. We look for persistency in the stock market through the Hurst exponent and the fractal dimension of time series data of the BSE 100 and NIFTY 50.
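
    Persistence of the kind the study measures is commonly estimated with the rescaled-range (R/S) method; a minimal version follows, with the window sizes as illustrative assumptions (H near 0.5 indicates no persistence, H above 0.5 indicates persistent behavior).

    ```python
    import numpy as np

    def hurst_rs(x: np.ndarray, windows=(16, 32, 64, 128, 256)) -> float:
        rs = []
        for w in windows:
            vals = []
            for start in range(0, len(x) - w + 1, w):
                chunk = x[start:start + w]
                dev = np.cumsum(chunk - chunk.mean())
                r = dev.max() - dev.min()          # range of cumulative deviations
                s = chunk.std()
                if s > 0:
                    vals.append(r / s)
            rs.append(np.mean(vals))
        # slope of log(R/S) against log(window size) estimates H
        h, _ = np.polyfit(np.log(windows), np.log(rs), 1)
        return float(h)

    rng = np.random.default_rng(10)
    print(hurst_rs(rng.normal(size=4096)))   # close to 0.5 for uncorrelated noise
    ```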

  3. Quantifying the behavior of price dynamics at opening time in stock market

    Science.gov (United States)

    Ochiai, Tomoshiro; Takada, Hideyuki; Nacher, Jose C.

    2014-11-01

    The availability of huge volumes of financial data has offered the possibility of understanding the markets as a complex system characterized by several stylized facts. Here we first show that the time evolution of Japan’s Nikkei stock average index (Nikkei 225) futures follows the resistance and breaking-acceleration effects when the complete time series data is analyzed. However, in stock markets there are periods where no regular trades occur between the close of the market on one day and the next day’s open. To examine these time gaps we decompose the time series data into opening time and intermediate time. Our analysis indicates that for the intermediate time, both the resistance and the breaking-acceleration effects are still observed. However, for the opening time there are almost no resistance and breaking-acceleration effects, and volatility is always constantly high. These findings highlight unique dynamic differences between stock markets and the forex market and suggest that current risk management strategies may need to be revised to address the absence of these dynamic effects at the opening time.

  4. Multiresolution analysis of Bursa Malaysia KLCI time series

    Science.gov (United States)

    Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed

    2017-05-01

    In general, a time series is simply a sequence of numbers collected at regular intervals over a period of time. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using time domain as well as frequency domain analysis. After that, prediction can be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis with the aid of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT) will be used to pinpoint special characteristics of Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.

  5. Minimum entropy density method for the time series analysis

    Science.gov (United States)

    Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae

    2009-01-01

    The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale in which the uncertainty is minimized, hence the pattern is revealed most. The MEDM is applied to the financial time series of Standard and Poor’s 500 index from February 1983 to April 2006. Then the temporal behavior of structure scale is obtained and analyzed in relation to the information delivery time and efficient market hypothesis.

  6. Grammar-based feature generation for time-series prediction

    CERN Document Server

    De Silva, Anthony Mihirana

    2015-01-01

    This book proposes a novel approach for time-series prediction using machine learning techniques with automatic feature generation. Application of machine learning techniques to predict time-series continues to attract considerable attention due to the difficulty of the prediction problems compounded by the non-linear and non-stationary nature of the real world time-series. The performance of machine learning techniques, among other things, depends on suitable engineering of features. This book proposes a systematic way for generating suitable features using context-free grammar. A number of feature selection criteria are investigated and a hybrid feature generation and selection algorithm using grammatical evolution is proposed. The book contains graphical illustrations to explain the feature generation process. The proposed approaches are demonstrated by predicting the closing price of major stock market indices, peak electricity load and net hourly foreign exchange client trade volume. The proposed method ...

  7. Tropical Forest Monitoring in Southeast Asia Using Remotely Sensed Optical Time Series

    DEFF Research Database (Denmark)

    Grogan, Kenneth Joseph

    ...of forest cover using satellite remote sensing technology. Recently, there has been a shift in data protection policy where rich archives of satellite imagery are now freely available. This has spurred a new era in satellite-based forest monitoring leading to advancements in optical time series processing... markets. At the Landsat 30-m resolution, annual time series coupled with linear segmentation using LandTrendr was found to be an effective approach for monitoring forest disturbance, with moderate to high accuracies, depending on forest type. At the MODIS 250-m resolution, intra-annual time series... global rubber markets can be linked to forest cover change, the effects of land policy in Cambodia, and beyond, have also had a major influence. It remains to be seen if intervention initiatives such as REDD+ can materialise over the coming years to make a meaningful contribution to tropical forest...

  8. Topological data analysis of financial time series: Landscapes of crashes

    Science.gov (United States)

    Gidea, Marian; Katz, Yuri

    2018-02-01

    We explore the evolution of daily returns of four major US stock market indices during the technology crash of 2000, and the financial crisis of 2007-2009. Our methodology is based on topological data analysis (TDA). We use persistent homology to detect and quantify topological patterns that appear in multidimensional time series. Using a sliding window, we extract time-dependent point cloud data sets, to which we associate a topological space. We detect transient loops that appear in this space, and we measure their persistence. This is encoded in real-valued functions referred to as 'persistence landscapes'. We quantify the temporal changes in persistence landscapes via their Lp-norms. We test this procedure on multidimensional time series generated by various non-linear and non-equilibrium models. We find that, in the vicinity of financial meltdowns, the Lp-norms exhibit strong growth prior to the primary peak, which ascends during a crash. Remarkably, the average spectral density at low frequencies of the time series of Lp-norms of the persistence landscapes demonstrates a strong rising trend for 250 trading days prior to either the dotcom crash on 03/10/2000, or the Lehman bankruptcy on 09/15/2008. Our study suggests that TDA provides a new type of econometric analysis, which complements the standard statistical measures. The method can be used to detect early warning signals of imminent market crashes. We believe that this approach can be used beyond the analysis of financial time series presented here.
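
    A compressed sketch of the sliding-window TDA pipeline, assuming the third-party `ripser` package for persistent homology. Where the paper builds persistence landscapes and takes their Lp-norms, this sketch uses total H1 persistence as a crude stand-in, and the window length and simulated four-index returns are placeholders.

    ```python
    import numpy as np
    from ripser import ripser

    def persistence_norm(returns: np.ndarray, window: int = 50) -> np.ndarray:
        """Total H1 persistence of a sliding-window point cloud of returns."""
        norms = []
        for t in range(window, returns.shape[0]):
            cloud = returns[t - window:t]          # window x n_indices point cloud
            h1 = ripser(cloud)["dgms"][1]          # 1-dimensional (loop) diagram
            norms.append(np.sum(h1[:, 1] - h1[:, 0]) if len(h1) else 0.0)
        return np.array(norms)

    rng = np.random.default_rng(11)
    four_indices = rng.normal(0, 0.01, size=(300, 4))  # stand-in for daily returns
    print(persistence_norm(four_indices)[:5])
    ```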

  9. Conditional CAPM: Time-varying Betas in the Brazilian Market

    Directory of Open Access Journals (Sweden)

    Frances Fischberg Blank

    2014-10-01

    Full Text Available The conditional CAPM is characterized by a time-varying market beta. Based on the state-space model approach, beta behavior can be modeled as a stochastic process dependent on conditioning variables related to the business cycle and estimated using the Kalman filter. This paper studies alternative models for portfolios sorted by size and book-to-market ratio in the Brazilian stock market and compares their adjustment to the data. Asset pricing tests based on time-series and cross-sectional approaches are also implemented. A random walk process combined with conditioning variables is the preferred model, reducing pricing errors compared to the unconditional CAPM, but the errors are still significant. Cross-sectional tests show that the book-to-market ratio becomes less relevant, but past returns still capture cross-sectional variation.
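
    The random-walk-beta state-space model can be filtered with a few lines of hand-rolled Kalman recursion. The observation equation here drops the intercept for brevity, and the noise variances q and r are illustrative assumptions rather than estimated values.

    ```python
    import numpy as np

    def kalman_beta(asset: np.ndarray, market: np.ndarray, q=1e-4, r=1e-2):
        """Filter r_t = beta_t * r_m,t + eps_t with beta_t = beta_{t-1} + eta_t."""
        beta, p = 1.0, 1.0            # state estimate and its variance
        betas = np.empty(len(asset))
        for t in range(len(asset)):
            p = p + q                                        # predict: random walk
            k = p * market[t] / (market[t] ** 2 * p + r)     # Kalman gain
            beta = beta + k * (asset[t] - beta * market[t])  # update with innovation
            p = (1 - k * market[t]) * p
            betas[t] = beta
        return betas

    rng = np.random.default_rng(12)
    m = rng.normal(0, 0.02, 1000)
    true_beta = 1 + 0.5 * np.sin(np.arange(1000) / 100)   # slowly varying beta
    a = true_beta * m + rng.normal(0, 0.01, 1000)
    print(kalman_beta(a, m)[::200])   # tracks the slow drift in beta
    ```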

  10. The grounds for time dependent market potentials from dealers' dynamics

    Science.gov (United States)

    Yamada, K.; Takayasu, H.; Takayasu, M.

    2008-06-01

    We apply the potential force estimation method to artificial time series of market price produced by a deterministic dealer model. We find that dealers’ feedback of linear prediction of market price based on the latest mean price changes plays the central role in the market’s potential force. When markets are dominated by dealers with positive feedback the resulting potential force is repulsive, while the effect of negative feedback enhances the attractive potential force.

  11. Volatility behavior of visibility graph EMD financial time series from Ising interacting system

    Science.gov (United States)

    Zhang, Bo; Wang, Jun; Fang, Wen

    2015-08-01

    A financial market dynamics model is developed and investigated via a stochastic Ising system, the most popular ferromagnetic model in statistical physics. Applying two graph-based analyses and the multiscale entropy method, we investigate and compare the statistical volatility behavior of the return time series and the corresponding IMF series derived from the empirical mode decomposition (EMD) method. Real stock market indices are comparatively studied alongside the simulation data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulated series has power law tails, and the assortative network exhibits the mixing pattern property. All these features are in agreement with the real market data; the research confirms that the financial model established by the Ising system is reasonable.

  12. Nonlinear Fluctuation Behavior of Financial Time Series Model by Statistical Physics System

    Directory of Open Access Journals (Sweden)

    Wuyang Cheng

    2014-01-01

    Full Text Available We develop a random financial time series model of the stock market based on a statistical physics system, the stochastic contact interacting system. The contact process is a continuous-time Markov process; one interpretation of this model is as a model for the spread of an infection, where the epidemic spreading mimics the interplay of local infections and recovery of individuals. From this financial model, we study the statistical behaviors of the return time series, and the corresponding behaviors of the returns of the Shanghai Stock Exchange Composite Index (SSECI) and the Hang Seng Index (HSI) are also comparatively studied. Further, we investigate the Zipf distribution and the multifractal phenomenon of returns and price changes. Zipf analysis and MF-DFA analysis are applied to investigate the nature of the fluctuations of the stock market.

  13. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    Science.gov (United States)

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on both the MDL (minimum description length) and MML (minimum message length) principles, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmentation speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network data stream.

  14. Effects of series compensation on spot price power markets

    International Nuclear Information System (INIS)

    Shrestha, G.B.; Wang Feng

    2005-01-01

    The operation of a deregulated power market becomes more complex as generation scheduling depends on suppliers' and consumers' bids. With a large number of transactions in the power market changing over time, it is more likely that some transmission lines will face congestion. Series compensation, such as TCSC, with its ability to directly control power flow, can be very helpful in improving the operation of transmission networks. The effects of TCSC on the operation of a spot price power market are studied in this paper using the modified IEEE 14-bus system. Optimal power flow incorporating TCSC is used to implement the spot price market. Linear bids are used to model suppliers' and consumers' bids. Issues of the location and cost of TCSC are discussed. The effects of the level of TCSC compensation on a wide range of system quantities are studied, including the effects on the total social benefit, the spot prices, transmission congestion, total generation and consumption, and the benefits to individual suppliers and consumers. It is demonstrated that although the use of TCSC makes the system more efficient and augments competition in the market, it is not easy to establish general relationships between the level of compensation and the various market quantities. Simulation studies like these can be used to assess the effects of TCSC in specific systems. (Author)

  15. Welfare States, Labor Markets, Political Dynamics, and Population Health: A Time-Series Cross-Sectional Analysis Among East and Southeast Asian Nations.

    Science.gov (United States)

    Ng, Edwin; Muntaner, Carles; Chung, Haejoo

    2016-04-01

    Recent scholarship offers different theories on how macrosocial determinants affect the population health of East and Southeast Asian nations. Dominant theories emphasize the effects of welfare regimes, welfare generosity, and labor market institutions. In this article, we conduct exploratory time-series cross-sectional analyses to generate new evidence on these theories while advancing a political explanation. Using unbalanced data on 7 East Asian countries and 11 Southeast Asian nations from 1960 to 2012, the primary findings are threefold. First, welfare generosity, measured as education and health spending, has a positive impact on life expectancy, net of GDP. Second, life expectancy varies significantly by labor markets; however, these differences are explained by differences in welfare generosity. Third, as East and Southeast Asian countries become more democratic, welfare generosity increases, and population health improves. This study provides new evidence on the value of considering politics, welfare states, and labor markets within the same conceptual framework. © 2016 APJPH.

  16. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Directory of Open Access Journals (Sweden)

    Jie Wang

    2016-01-01

    A stock price forecasting model based on Elman recurrent random neural networks (ERNN) is proposed; the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive performance on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices.
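    A minimal one-step-ahead forecaster in the spirit of an Elman network is sketched below, assuming PyTorch and a 1-D tensor `series` of returns; `torch.nn.RNN` with tanh units is the classic Elman architecture, though the random-connection aspect of the paper's ERNN variant is not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    class ElmanForecaster(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                 # x: (batch, seq_len, 1)
            out, _ = self.rnn(x)
            return self.head(out[:, -1, :])   # predict the next value

    def fit(series, w=20, epochs=200, lr=1e-3):
        # windows of the last w returns map to the next return
        xs = torch.stack([series[i:i + w] for i in range(len(series) - w)])
        ys = series[w:].unsqueeze(1)
        model = ElmanForecaster()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(xs.unsqueeze(-1)), ys)
            loss.backward()
            opt.step()
        return model
    ```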

  17. Detecting macroeconomic phases in the Dow Jones Industrial Average time series

    Science.gov (United States)

    Wong, Jian Cheng; Lian, Heng; Cheong, Siew Ann

    2009-11-01

    In this paper, we perform statistical segmentation and clustering analysis of the Dow Jones Industrial Average (DJI) time series between January 1997 and August 2008. Modeling the index movements and log-index movements as stationary Gaussian processes, we find a total of 116 and 119 statistically stationary segments respectively. These can then be grouped into between five and seven clusters, each representing a different macroeconomic phase. The macroeconomic phases are distinguished primarily by their volatilities. We find that the US economy, as measured by the DJI, spends most of its time in a low-volatility phase and a high-volatility phase. The former can be roughly associated with economic expansion, while the latter contains the economic contraction phase in the standard economic cycle. Both phases are interrupted by a moderate-volatility market correction phase, but extremely-high-volatility market crashes are found mostly within the high-volatility phase. From the temporal distribution of various phases, we see a high-volatility phase from mid-1998 to mid-2003, and another starting mid-2007 (the current global financial crisis). Transitions from the low-volatility phase to the high-volatility phase are preceded by a series of precursor shocks, whereas the transition from the high-volatility phase to the low-volatility phase is preceded by a series of inverted shocks. The time scale for both types of transitions is about a year. We also identify the July 1997 Asian Financial Crisis to be the trigger for the mid-1998 transition, and an unnamed May 2006 market event related to corrections in the Chinese markets to be the trigger for the mid-2007 transition.

  18. Detrended fluctuation analysis based on higher-order moments of financial time series

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-01-01

    In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as a stock market. We extend DFA and local scaling DFA (LSDFA) to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA), so as to investigate the volatility scaling property of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the results of volatility behaviors in three American, three Chinese, and three European stock markets using the DFA and LSDFA methods based on higher moments. They demonstrate the dynamic behaviors of the time series in different aspects, which can quantify the changes in complexity of stock market data and provide more meaningful information than a single exponent. The results also reveal higher-moment volatility and higher-moment multiscale volatility details that cannot be obtained using the traditional DFA method.
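    For reference, a standard second-moment DFA implementation is sketched below, assuming a 1-D return series `x`; the paper's SMDFA/KMDFA variants would replace the mean-square fluctuation in the inner loop with skewness- or kurtosis-based statistics of the detrended windows.

    ```python
    import numpy as np

    def dfa(x, scales=(16, 32, 64, 128, 256), order=1):
        y = np.cumsum(x - np.mean(x))               # profile of the series
        fluct = []
        for s in scales:
            n_seg = len(y) // s
            f2 = []
            for k in range(n_seg):
                seg = y[k * s:(k + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                f2.append(np.mean((seg - trend) ** 2))   # detrended variance
            fluct.append(np.sqrt(np.mean(f2)))
        # scaling exponent: slope of log F(s) versus log s
        alpha = np.polyfit(np.log(scales), np.log(fluct), 1)[0]
        return alpha, np.array(fluct)
    ```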

  19. A comparison between MS-VECM and MS-VECMX on economic time series data

    Science.gov (United States)

    Phoong, Seuk-Wai; Ismail, Mohd Tahir; Sek, Siok-Kun

    2014-07-01

    Multivariate Markov switching models are able to provide useful information in the study of structural change, since regime-switching models can analyze time-varying data and capture the mean and variance of the dependence structure in the series. This paper investigates the effects of the oil price and the gold price on the stock market returns of Malaysia, Singapore, Thailand, and Indonesia. Two forms of multivariate Markov switching models are used, namely the mean-adjusted heteroskedastic Markov switching vector error correction model (MSMH-VECM) and the mean-adjusted heteroskedastic Markov switching vector error correction model with an exogenous variable (MSMH-VECMX). The reason for using these two models is to capture the transition probabilities of the data, since real financial time series always exhibit nonlinear properties such as regime switching, cointegrating relations, and jumps or breaks over time. A comparison between the two models indicates that the MSMH-VECM model fits the time series data better than the MSMH-VECMX model. In addition, it was found that the oil price and the gold price affected the stock market changes in the four selected countries.

  20. Multiscale multifractal multiproperty analysis of financial time series based on Rényi entropy

    Science.gov (United States)

    Yujun, Yang; Jianping, Li; Yimei, Yang

    This paper introduces a multiscale multifractal multiproperty analysis based on Rényi entropy (3MPAR) to analyze the short-range and long-range characteristics of financial time series, and then applies this method to five time series of five properties of four stock indices. Combining the two analysis techniques of Rényi entropy and multifractal detrended fluctuation analysis (MFDFA), the 3MPAR method focuses on the curves of the Rényi entropy and the generalized Hurst exponent of five properties of the four stock time series, which allows us to study more universal and subtle fluctuation characteristics of financial time series. By analyzing the curves of the Rényi entropy and the profiles of the logarithmic distribution of MFDFA for five properties of the four stock indices, the 3MPAR method reveals fluctuation characteristics of the financial time series and the stock markets. It also extracts richer information from the financial time series by comparing the profiles of the five properties of the four stock indices. In this paper, we focus not only on the multifractality of the time series but also on the fluctuation characteristics of the financial time series and the subtle differences between the time series of different properties. We find that financial time series are far more complex than reported in studies that use only one property of the time series.
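    The Rényi-entropy half of the 3MPAR analysis reduces to a short computation. The sketch below assumes returns binned into `nbins` discrete states; the full method pairs curves like this, traced over q, with MFDFA-based generalized Hurst exponents.

    ```python
    import numpy as np

    def renyi_entropy(x, q_values, nbins=30):
        counts, _ = np.histogram(x, bins=nbins)
        p = counts[counts > 0] / counts.sum()     # empirical state probabilities
        out = []
        for q in q_values:
            if np.isclose(q, 1.0):
                out.append(-np.sum(p * np.log(p)))        # Shannon limit at q = 1
            else:
                out.append(np.log(np.sum(p ** q)) / (1.0 - q))
        return np.array(out)
    ```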

  1. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on empirical data for DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD, and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every pair of exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every pair of the exchange rate time series are calculated for the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis). The results show that the cross-SampEn of every pair of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before it. Comparison with the correlation coefficient shows that cross-SampEn is superior in describing the correlation between time series.
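    A direct cross-sample-entropy sketch for two equal-length return series follows, using the common parameter choices m = 2 and r = 0.15 of the standardized series (the paper's exact settings may differ). Higher values indicate greater asynchrony.

    ```python
    import numpy as np

    def cross_sampen(u, v, m=2, r=0.15):
        u = (u - u.mean()) / u.std()
        v = (v - v.mean()) / v.std()
        def matches(mm):
            n = len(u) - mm + 1
            ut = np.array([u[i:i + mm] for i in range(n)])
            vt = np.array([v[i:i + mm] for i in range(n)])
            # Chebyshev distance between every template pair across the two series
            d = np.max(np.abs(ut[:, None, :] - vt[None, :, :]), axis=2)
            return np.sum(d <= r)
        b, a = matches(m), matches(m + 1)
        return np.inf if a == 0 else -np.log(a / b)
    ```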

  2. GPS Position Time Series @ JPL

    Science.gov (United States)

    Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen

    2013-01-01

    Different flavors of GPS time series analysis at JPL all use the same GPS Precise Point Positioning raw time series; the variations in time series analysis and post-processing are driven by different users: (1) JPL Global Time Series/Velocities, for researchers studying the reference frame and combining GPS with VLBI/SLR/DORIS; (2) JPL/SOPAC Combined Time Series/Velocities, for crustal deformation in tectonic, volcanic, and ground water studies; (3) ARIA Time Series/Coseismic Data Products, focused on hazard monitoring and response. The ARIA data system is designed to integrate GPS and InSAR: the GPS tropospheric delay is used for correcting InSAR, and Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR.

  3. Multiscale synchrony behaviors of paired financial time series by 3D multi-continuum percolation

    Science.gov (United States)

    Wang, M.; Wang, J.; Wang, B. T.

    2018-02-01

    Multiscale synchrony behaviors and nonlinear dynamics of paired financial time series are investigated, in an attempt to study the cross-correlation relationships between two stock markets. A random stock price model is developed from a new system called the three-dimensional (3D) multi-continuum percolation system, which is utilized to imitate the formation mechanism of price dynamics and to explain the nonlinear behaviors found in financial time series. We assume that price fluctuations are caused by the spread of investment information. A cluster of the 3D multi-continuum percolation represents a cluster of investors who share the same investment attitude. In this paper, we focus on the paired return series, the paired volatility series, and the paired intrinsic mode functions decomposed by empirical mode decomposition. A new cross recurrence quantification analysis is put forward, combined with multiscale cross-sample entropy, to investigate the multiscale synchrony of these paired series from the proposed model. The corresponding analysis is also carried out for two Chinese stock markets for comparison.

  4. Time series analysis time series analysis methods and applications

    CERN Document Server

    Rao, Tata Subba; Rao, C R

    2012-01-01

    The field of statistics not only affects all areas of scientific activity, but also many other matters such as public policy. It is branching rapidly into so many different subjects that a series of handbooks is the only way of comprehensively presenting the various aspects of statistical methodology, applications, and recent developments. The Handbook of Statistics is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, with Volume 30 dealing with time series. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. Comprehensively presents the various aspects of statistical methodology. Discusses a wide variety of diverse applications and recent developments. Contributors are internationally renowned experts in their respective fields.

  5. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    Science.gov (United States)

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.

  6. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  7. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension of the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented, which leads to an interesting segmentation of the Austrian labour market.

  8. Analysis of financial time series using multiscale entropy based on skewness and kurtosis

    Science.gov (United States)

    Xu, Meng; Shang, Pengjian

    2018-01-01

    There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multiscale entropy (MSE) is effective mainly in quantifying the complexity of time series on different time scales. This paper applies a new method for assessing financial stability from the perspective of MSE based on skewness and kurtosis. To better understand which coarse-graining method is superior for different kinds of stock indexes, we take into account the developmental characteristics of stock markets on the three continents of Asia, North America, and Europe. We study the volatility of different financial time series and analyze the similarities and differences of the coarse-grained time series from the perspective of skewness and kurtosis. A correspondence between the entropy value of stock sequences and the degree of stability of financial markets was observed. Three stocks with particular characteristics among the eight stock sequences are discussed, and their behavior matches the graphical results of applying the MSE method. A comparative study is conducted over synthetic and real-world data. The results show that the modified method is more sensitive to changes in dynamics and carries more valuable information; at the same time, the discrimination based on skewness and kurtosis is both clear and more stable.

  9. Time-clustering behavior of sharp fluctuation sequences in Chinese stock markets

    International Nuclear Information System (INIS)

    Yuan Ying; Zhuang Xintian; Liu Zhiying; Huang Weiqiang

    2012-01-01

    Sharp fluctuations (in particular, extreme fluctuations) of asset prices have a great impact on financial markets and risk management. Therefore, investigating the time dynamics of sharp fluctuations is a challenge in the financial field. Using two different representations of the sharp fluctuations (inter-event times and series of counts), the time-clustering behavior in the sharp fluctuation sequences of stock markets in China is studied with several statistical tools, including the coefficient of variation, the Allan factor, the Fano factor, and R/S (rescaled range) analysis. All of the empirical results indicate that the time dynamics of the sharp fluctuation sequences can be considered a fractal process with a high degree of time-clustering of the events. This can help us to better understand the nature and dynamics of sharp fluctuations of stock prices in stock markets.
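    The counting-statistics side of this analysis is compact. The sketch below assumes an array `event_times` holding the (trading-day) times of sharp fluctuations and computes the Fano factor of window counts across window sizes; power-law growth FF(w) ~ w^alpha is the clustering signature, and the Allan factor is the analogous statistic built on differences of adjacent counts.

    ```python
    import numpy as np

    def fano_factor(event_times, window_sizes):
        t0, t1 = np.min(event_times), np.max(event_times)
        out = []
        for w in window_sizes:
            edges = np.arange(t0, t1, w)          # partition the period into windows
            counts, _ = np.histogram(event_times, bins=edges)
            out.append(counts.var() / counts.mean())   # FF(w) = Var[N] / E[N]
        return np.array(out)
    ```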

  10. Time series analysis of the behavior of brazilian natural rubber

    Directory of Open Access Journals (Sweden)

    Antônio Donizette de Oliveira

    2009-03-01

    Full Text Available Natural rubber is a non-wood product obtained from the coagulation of the latex of some forest species, Hevea brasiliensis being the main one. Native to the Amazon Region, this species was already known by the Indians before the discovery of America. Natural rubber became a globally valued product due to its multiple applications in the economy, its almost perfect substitute being the synthetic rubber derived from petroleum. As happens with countless other products, the forecasting of future prices of natural rubber has been the object of many studies. The use of univariate time series forecasting models stands out as the most accurate and useful way to reduce uncertainty in the economic decision-making process. This study analyzed the historical series of Brazilian natural rubber prices (R$/kg) over the Jan/1999-Jun/2006 period in order to characterize rubber price behavior in the domestic market; estimated a model for the time series of monthly natural rubber prices; and forecast the domestic prices of natural rubber for the Jul/2006-Jun/2007 period based on the estimated models. The models studied belong to the ARIMA family. The main results were: the domestic market for natural rubber is expanding due to the growth of the world economy; among the adjusted models, the ARIMA(1,1,1) model provided the best fit to the time series of natural rubber prices (R$/kg); and the forecasts produced for the series supplied statistically adequate fits.
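    The selected specification is straightforward to reproduce. A brief sketch, assuming monthly prices in a pandas Series `prices` and using the statsmodels ARIMA estimator:

    ```python
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    def fit_and_forecast(prices: pd.Series, horizon: int = 12):
        model = ARIMA(prices, order=(1, 1, 1))    # the ARIMA(1,1,1) selected above
        fitted = model.fit()
        return fitted.forecast(steps=horizon)     # e.g. 12 months ahead
    ```

    For a series ending in Jun/2006, `fit_and_forecast(prices, 12)` would produce Jul/2006-Jun/2007 forecasts analogous to those reported in the study.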

  11. Characteristics of the co-fluctuation matrix transmission network based on financial multi-time series

    OpenAIRE

    Huajiao Li; Haizhong An; Xiangyun Gao; Wei Fang

    2015-01-01

    The co-fluctuation of two time series has often been studied by analysing the correlation coefficient over a selected period. However, in both domestic and global financial markets, there are more than two active time series that fluctuate constantly as a result of various factors, including geographic locations, information communications and so on. In addition to correlation relationships over longer periods, daily co-fluctuation relationships and their transmission features are also important.

  12. Relationship between crimes and economic conditions in Pakistan: a time series approach

    NARCIS (Netherlands)

    Raja, Mohsin Gulzar; Ullah, Kafait

    2013-01-01

    Using time series data from 1990-2011, this paper attempts to explore the relationship between economic conditions and criminal activities in Pakistan. Three variables are used to capture economic conditions: increasing female employment in the labor market, the CPI (which denotes inflation), and

  13. International Work-Conference on Time Series

    CERN Document Server

    Pomares, Héctor; Valenzuela, Olga

    2017-01-01

    This volume of selected and peer-reviewed contributions on the latest developments in time series analysis and forecasting updates the reader on topics such as analysis of irregularly sampled time series, multi-scale analysis of univariate and multivariate time series, linear and non-linear time series models, advanced time series forecasting methods, applications in time series analysis and forecasting, advanced methods and online learning in time series, and high-dimensional and complex/big data time series. The contributions were originally presented at the International Work-Conference on Time Series, ITISE 2016, held in Granada, Spain, June 27-29, 2016. The series of ITISE conferences provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of comput...

  14. Time Series Modeling of Army Mission Command Communication Networks: An Event-Driven Analysis

    Science.gov (United States)

    2013-06-01

    Lehmann, D. R. (1984). How advertising affects sales: Meta-analysis of econometric results. Journal of Marketing Research, 21, 65-74. Barabási, A. L..., 317-357. Leone, R. P. (1983). Modeling sales-advertising relationships: An integrated time series-econometric approach. Journal of Marketing Research, 20, 291-295. McGrath, J. E., & Kravitz, D. A. (1982). Group research. Annual Review of Psychology, 33, 195-230. Monge, P. R., & Contractor

  15. Current Market Top Business Scopes Trend—A Concurrent Text and Time Series Active Learning Study of NASDAQ and NYSE Stocks from 2012 to 2017

    Directory of Open Access Journals (Sweden)

    Xiaoping Du

    2018-05-01

    Full Text Available As information technologies evolve, it has become necessary to examine the changes that have taken place in the top business scopes for both investors and entrepreneurs. To provide an understanding of the trends in the top business scopes in the current market, this article applies a concurrent text and time series methodology to analyze the stocks listed on the New York Stock Exchange (NYSE) and the National Association of Securities Dealers Automated Quotations (NASDAQ) from 2012 to 2017. There is evidence that artificial intelligence and blockchains gained increasing importance for companies during that period. The authors contend that their findings in this paper question the status quo of promising business scopes for companies in the U.S. market.

  16. 77 FR 22282 - Milk in the Northeast and Other Marketing Areas; Determination of Equivalent Price Series

    Science.gov (United States)

    2012-04-13

    ... DEPARTMENT OF AGRICULTURE Agricultural Marketing Service [Doc. No. AMS-DA-10-0089; DA-11-01] Milk in the Northeast and Other Marketing Areas; Determination of Equivalent Price Series AGENCY: Agricultural Marketing Service, USDA. ACTION: Determination of equivalent price series. SUMMARY: It has been...

  17. From Networks to Time Series

    Science.gov (United States)

    Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi

    2012-10-01

    In this Letter, we propose a framework to transform a complex network into a time series. The transformation from complex networks to time series is realized by classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed into periodic time series, small-world networks into noisy periodic time series, and random networks into random time series. We also show that these relationships hold analytically, using circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
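    A compact rendering of the transformation follows, assuming a connected undirected networkx graph `G`: classical multidimensional scaling (MDS) of the shortest-path distance matrix yields node coordinates, whose leading component, read node by node, serves as the derived series. Using only the first MDS coordinate is a simplification for illustration, not the paper's full construction.

    ```python
    import numpy as np
    import networkx as nx

    def network_to_series(G):
        nodes = list(G.nodes())
        n = len(nodes)
        D = np.asarray(nx.floyd_warshall_numpy(G, nodelist=nodes), dtype=float)
        # classical MDS: double-center the squared distance matrix
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J
        vals, vecs = np.linalg.eigh(B)
        lead = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))  # largest eigenpair
        return lead                                       # one value per node
    ```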

  18. Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets

    Science.gov (United States)

    Kumagai, Yoshiaki

    2010-04-01

    In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations of the three time scales, real time (physical time), tick time, and C-fluctuation time, in foreign exchange markets are analyzed. The data set used consists of trading prices of foreign exchange rates: US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The accuracy of the data is one minute, and data within a minute are recorded in order of transaction. The series of instantaneous velocities of C-fluctuation time are exponentially distributed for small C when measured in real time, and for tiny C when measured in tick time. When the market is volatile, the series of instantaneous velocities are exponentially distributed for larger C as well.

  19. The influence of labor market changes on first-time medical school applicant pools.

    Science.gov (United States)

    Cort, David A; Morrison, Emory

    2014-12-01

    To explore whether the number and composition of first-time applicants to U.S. MD-granting medical schools, which have fluctuated over the past 30 years, are related to changes in labor market strength, specifically the unemployment rate and wages. The authors merged time series data from 1980 through 2010 (inclusive) from five sources and used multivariate time series models to determine whether changes in labor market strength (and several other macro-level factors) were related to the number of medical school applicants as reported by the American Medical College Application Service. Analyses were replicated across specific sex and race/ethnicity applicant pools. Two results surfaced in the analyses. First, the strength of the labor market was not influential in explaining changes in the overall applicant pool size, but was strongly influential in explaining changes for black and Hispanic males. Increases of $1,000 in prevailing median wages produced a 1.6% decrease in the white male applicant pool, while 1% increases in the unemployment rate were associated with 4.5% and 3.1% increases in, respectively, the black and Hispanic male applicant pools. Second, labor market strength was a more important determinant of applications from males than of applications from females. Although stakeholders cannot directly influence the overall economic market, they can plan and prepare for fewer applications from males, especially those who are black and Hispanic, when the labor market is strong.

  20. New Results on Gain-Loss Asymmetry for Stock Markets Time Series

    Science.gov (United States)

    Grudziecki, M.; Gnatowska, E.; Karpio, K.; Orłowski, A.; Załuska-Kotur, M.

    2008-09-01

    A method called the investment horizon approach has been successfully used to analyze stock markets of many different countries. Here we apply a version of this method to study the characteristics of the Polish Pioneer mutual funds. We chose Pioneer because it has the longest record of investing in the Polish market and apparently manages the largest amount of money among all similar institutions in Poland. We compare various types of Pioneer mutual funds, characterized by the different financial instruments they invest in. Previously, the investment horizon approach produced different characteristics for emerging markets as opposed to mature ones, providing a possible way to quantify stock market maturity. Here we generalize those results to mutual funds of various types.
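    A sketch of the underlying inverse-statistics calculation follows, assuming daily log-prices `logp` and a target log-return `rho`: the gain and loss horizons are the waiting times until the cumulative return first reaches +rho or -rho. The gain-loss asymmetry shows up as a difference between the two resulting distributions, for example in the locations of their maxima.

    ```python
    import numpy as np

    def investment_horizons(logp, rho=0.05):
        gains, losses = [], []
        n = len(logp)
        for t in range(n - 1):
            path = logp[t + 1:] - logp[t]       # cumulative log-return from day t
            up = np.argmax(path >= rho) if np.any(path >= rho) else None
            dn = np.argmax(path <= -rho) if np.any(path <= -rho) else None
            if up is not None:
                gains.append(up + 1)            # days until the gain target is hit
            if dn is not None:
                losses.append(dn + 1)           # days until the loss target is hit
        return np.array(gains), np.array(losses)
    ```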

  1. Duality between Time Series and Networks

    Science.gov (United States)

    Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.

    2011-01-01

    Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093

  2. Long time series

    DEFF Research Database (Denmark)

    Hisdal, H.; Holmqvist, E.; Hyvärinen, V.

    Awareness that emission of greenhouse gases will raise the global temperature and change the climate has led to studies trying to identify such changes in long-term climate and hydrologic time series. This report, written by the...

  3. A Course in Time Series Analysis

    CERN Document Server

    Peña, Daniel; Tsay, Ruey S

    2011-01-01

    New statistical methods and future directions of research in time series A Course in Time Series Analysis demonstrates how to build time series models for univariate and multivariate time series data. It brings together material previously available only in the professional literature and presents a unified view of the most advanced procedures available for time series model building. The authors begin with basic concepts in univariate time series, providing an up-to-date presentation of ARIMA models, including the Kalman filter, outlier analysis, automatic methods for building ARIMA models, a

  4. Kolmogorov Space in Time Series Data

    OpenAIRE

    Kanjamapornkul, K.; Pinčák, R.

    2016-01-01

    We provide the proof that the space of time series data is a Kolmogorov space with the $T_{0}$-separation axiom, using the loop space of time series data. In our approach we define a cyclic coordinate of the intrinsic time scale of time series data after empirical mode decomposition. A spinor field of time series data comes from the rotation of data around the price and time axes, obtained by defining a new extra dimension of the time series data. We show that there exist eight hidden dimensions in the Kolmogorov space for ...

  5. Multiscale Symbolic Phase Transfer Entropy in Financial Time Series Classification

    Science.gov (United States)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    We address the challenge of classifying financial time series via a newly proposed multiscale symbolic phase transfer entropy (MSPTE). Using the MSPTE method, we succeed in quantifying the strength and direction of information flow between financial systems and in classifying financial time series, namely the stock indices from Europe, America, and China for the period from 2006 to 2016 and the stocks of the banking, aviation, and pharmaceutical industries for the period from 2007 to 2016. The MSPTE analysis shows that the value of the symbolic phase transfer entropy (SPTE) among stocks decreases with increasing scale factor. It is demonstrated that the MSPTE method can divide stocks well into groups by area and industry. In addition, the MSPTE analysis quantifies the similarity among the stock markets: the SPTE between two stocks from the same area is far less than the SPTE between stocks from different areas. The results also indicate that four stocks from America and Europe have a relatively high degree of similarity and that the stocks of the banking and pharmaceutical industries have higher similarity for CA. It is worth mentioning that the pharmaceutical industry has a weaker market-specific mechanism than the banking and aviation industries.

  6. On the Use of Running Trends as Summary Statistics for Univariate Time Series and Time Series Association

    OpenAIRE

    Trottini, Mario; Vigo, Isabel; Belda, Santiago

    2015-01-01

    Given a time series, running trends analysis (RTA) involves evaluating least squares trends over overlapping time windows of L consecutive time points, with windows overlapping by all but one observation. This produces a new series called the "running trends series," which is used as a summary statistic of the original series for further analysis. In recent years, RTA has been widely used in applied climate research as a summary statistic for time series and time series association. There is no doubt that ...
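    The running-trends construction itself is a few lines, assuming a series `y` observed at unit time steps and a window length `L`:

    ```python
    import numpy as np

    def running_trends(y, L):
        t = np.arange(L)
        # least-squares slope over each window of L consecutive points,
        # with windows overlapping by all but one observation
        return np.array([np.polyfit(t, y[i:i + L], 1)[0]
                         for i in range(len(y) - L + 1)])
    ```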

  7. The Market Dynamics of Generic Medicines in the Private Sector of 19 Low and Middle Income Countries between 2001 and 2011: A Descriptive Time Series Analysis

    Science.gov (United States)

    Kaplan, Warren A.; Wirtz, Veronika J.; Stephens, Peter

    2013-01-01

    This observational study investigates the private sector, retail pharmaceutical market of 19 low and middle income countries (LMICs) in Latin America, Asia and the Middle East/South Africa analyzing the relationships between volume market share of generic and originator medicines over a time series from 2001 to 2011. Over 5000 individual pharmaceutical substances were divided into generic (unbranded generic, branded generic medicines) and originator categories for each country, including the United States as a comparator. In 9 selected LMICs, the market share of those originator substances with the largest decrease over time was compared to the market share of their counterpart generic versions. Generic medicines (branded generic plus unbranded generic) represent between 70 and 80% of market share in the private sector of these LMICs which exceeds that of most European countries. Branded generic medicine market share is higher than that of unbranded generics in all three regions and this is in contrast to the U.S. Although switching from an originator to its generic counterpart can save money, this narrative in reality is complex at the level of individual medicines. In some countries, the market behavior of some originator medicines that showed the most temporal decrease, showed switching to their generic counterpart. In other countries such as in the Middle East/South Africa and Asia, the loss of these originators was not accompanied by any change at all in market share of the equivalent generic version. For those countries with a significant increase in generic medicines market share and/or with evidence of comprehensive "switching" to generic versions, notably in Latin America, it would be worthwhile to establish cause-effect relationships between pharmaceutical policies and uptake of generic medicines. The absence of change in the generic medicines market share in other countries suggests that, at a minimum, generic medicines have not been strongly promoted.

  8. The market dynamics of generic medicines in the private sector of 19 low and middle income countries between 2001 and 2011: a descriptive time series analysis.

    Science.gov (United States)

    Kaplan, Warren A; Wirtz, Veronika J; Stephens, Peter

    2013-01-01

    This observational study investigates the private sector, retail pharmaceutical market of 19 low and middle income countries (LMICs) in Latin America, Asia and the Middle East/South Africa analyzing the relationships between volume market share of generic and originator medicines over a time series from 2001 to 2011. Over 5000 individual pharmaceutical substances were divided into generic (unbranded generic, branded generic medicines) and originator categories for each country, including the United States as a comparator. In 9 selected LMICs, the market share of those originator substances with the largest decrease over time was compared to the market share of their counterpart generic versions. Generic medicines (branded generic plus unbranded generic) represent between 70 and 80% of market share in the private sector of these LMICs which exceeds that of most European countries. Branded generic medicine market share is higher than that of unbranded generics in all three regions and this is in contrast to the U.S. Although switching from an originator to its generic counterpart can save money, this narrative in reality is complex at the level of individual medicines. In some countries, the market behavior of some originator medicines that showed the most temporal decrease, showed switching to their generic counterpart. In other countries such as in the Middle East/South Africa and Asia, the loss of these originators was not accompanied by any change at all in market share of the equivalent generic version. For those countries with a significant increase in generic medicines market share and/or with evidence of comprehensive "switching" to generic versions, notably in Latin America, it would be worthwhile to establish cause-effect relationships between pharmaceutical policies and uptake of generic medicines. The absence of change in the generic medicines market share in other countries suggests that, at a minimum, generic medicines have not been strongly promoted.

  9. The market dynamics of generic medicines in the private sector of 19 low and middle income countries between 2001 and 2011: a descriptive time series analysis.

    Directory of Open Access Journals (Sweden)

    Warren A Kaplan

    Full Text Available This observational study investigates the private sector, retail pharmaceutical market of 19 low and middle income countries (LMICs) in Latin America, Asia and the Middle East/South Africa analyzing the relationships between volume market share of generic and originator medicines over a time series from 2001 to 2011. Over 5000 individual pharmaceutical substances were divided into generic (unbranded generic, branded generic medicines) and originator categories for each country, including the United States as a comparator. In 9 selected LMICs, the market share of those originator substances with the largest decrease over time was compared to the market share of their counterpart generic versions. Generic medicines (branded generic plus unbranded generic) represent between 70 and 80% of market share in the private sector of these LMICs which exceeds that of most European countries. Branded generic medicine market share is higher than that of unbranded generics in all three regions and this is in contrast to the U.S. Although switching from an originator to its generic counterpart can save money, this narrative in reality is complex at the level of individual medicines. In some countries, the market behavior of some originator medicines that showed the most temporal decrease, showed switching to their generic counterpart. In other countries such as in the Middle East/South Africa and Asia, the loss of these originators was not accompanied by any change at all in market share of the equivalent generic version. For those countries with a significant increase in generic medicines market share and/or with evidence of comprehensive "switching" to generic versions, notably in Latin America, it would be worthwhile to establish cause-effect relationships between pharmaceutical policies and uptake of generic medicines. The absence of change in the generic medicines market share in other countries suggests that, at a minimum, generic medicines have not been strongly promoted.

  10. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  11. Analysis of the impact of crude oil price fluctuations on China's stock market in different periods-Based on time series network model

    Science.gov (United States)

    An, Yang; Sun, Mei; Gao, Cuixia; Han, Dun; Li, Xiuming

    2018-02-01

    This paper studies the influence of Brent oil price fluctuations on the stock prices of two distinct Chinese blocks, namely the petrochemical block and the electric equipment and new energy block, applying the Shannon entropy of information theory. The co-movement trend of the crude oil price and the stock prices is divided into different fluctuation patterns with the coarse-graining method. Then, a bivariate time series network model is established for the two blocks' stocks in five different periods. By joint analysis of the network-oriented metrics, the key modes and underlying evolutionary mechanisms are identified. The results show that both networks have different fluctuation characteristics in different periods. Their co-movement patterns are clustered in some key modes and conversion intermediaries. The study not only reveals the lag effect of crude oil price fluctuations on the stocks in Chinese industry blocks but also verifies the necessity of studying special periods, and suggests that the government should use different energy policies to stabilize market volatility in different periods. A new way is provided to study the unidirectional influence between multiple variables or complex time series.

  12. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    Science.gov (United States)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.

  13. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    International Nuclear Information System (INIS)

    Almog, Assaf; Garlaschelli, Diego

    2014-01-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information. (paper)

  14. Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory

    Directory of Open Access Journals (Sweden)

    Haimin Yang

    2017-01-01

    Full Text Available Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, the data often contain many outliers as the length of the time series increases in the real world. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient method algorithm for training deep neural networks. In our algorithm, a large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM.

  15. Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory.

    Science.gov (United States)

    Yang, Haimin; Pan, Zhisong; Tao, Qing

    2017-01-01

    Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, the data often contains many outliers with the increasing length of time series in real world. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient method algorithm for training deep neural networks. In our algorithm, the large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM.

  16. International Work-Conference on Time Series

    CERN Document Server

    Pomares, Héctor

    2016-01-01

    This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.

  17. Future mission studies: Forecasting solar flux directly from its chaotic time series

    Science.gov (United States)

    Ashrafi, S.

    1991-01-01

    The mathematical structure of the programs written to construct a nonlinear predictive model to forecast solar flux directly from its time series, without reference to any underlying solar physics, is presented. The method and the programs are written so that one could apply the same technique to forecast other chaotic time series, such as geomagnetic data, attitude and orbit data, and even financial indexes and stock market data. Perhaps the most important application of this technique to flight dynamics is to model the Goddard Trajectory Determination System (GTDS) output of residuals between the observed position of a spacecraft and the calculated position with no drag (drag flag = off). This would result in a new model of drag working directly from observed data.
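    A generic sketch of the kind of model the text describes, a delay-coordinate embedding with nearest-neighbour prediction, is given below; the names and parameters are illustrative and are not those of the original programs.

    ```python
    import numpy as np

    def embed(x, dim=5, tau=1):
        """Delay-coordinate embedding of a 1-D series x."""
        n = len(x) - (dim - 1) * tau
        return np.array([x[i:i + dim * tau:tau] for i in range(n)])

    def predict_next(x, dim=5, tau=1, k=5):
        vecs = embed(x, dim, tau)
        query, library = vecs[-1], vecs[:-1]
        # forecast = average of where the k nearest past states went next
        dists = np.linalg.norm(library - query, axis=1)
        nn = np.argsort(dists)[:k]
        targets = x[nn + (dim - 1) * tau + 1]
        return targets.mean()
    ```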

  18. Phase correlation of foreign exchange time series

    Science.gov (United States)

    Wu, Ming-Chya

    2007-03-01

    Correlation of foreign exchange rates in currency markets is investigated based on empirical data of the USD/DEM and USD/JPY exchange rates for the period from February 1, 1986 to December 31, 1996. The return time series of each exchange rate is first decomposed into a number of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The instantaneous phases of the resultant IMFs, calculated by the Hilbert transform, are then used to characterize the behaviors of pricing transmissions, and the correlation is probed by measuring the phase differences between two IMFs of the same order. From the distribution of phase differences, our results show explicitly that the correlations are stronger at the daily time scale than at longer time scales. A comparison of the periods 1986-1989 and 1990-1993 indicates that the two exchange rates were more correlated in the former period than in the latter. This result is consistent with the observations from the cross-correlation calculation.
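
    The phase-difference measurement at the heart of this analysis is easy to reproduce in outline. The following Python sketch assumes two oscillatory components are already in hand (in the paper these would be same-order IMFs from empirical mode decomposition, replaced here by synthetic components for brevity); it uses scipy's Hilbert transform to obtain instantaneous phases and histograms their difference.

        import numpy as np
        from scipy.signal import hilbert

        def phase_difference(c1, c2):
            # Instantaneous phases from the analytic signals of the components.
            phi1 = np.unwrap(np.angle(hilbert(c1)))
            phi2 = np.unwrap(np.angle(hilbert(c2)))
            return phi1 - phi2

        # Toy components standing in for same-order IMFs of two return series:
        t = np.linspace(0, 50, 2000)
        c1 = np.sin(2 * np.pi * t)
        c2 = np.sin(2 * np.pi * t + 0.3 + 0.1 * np.random.randn(t.size))
        dphi = phase_difference(c1, c2)
        # A sharply peaked histogram of the (wrapped) differences signals strong
        # phase correlation; a flat one signals weak correlation.
        hist, edges = np.histogram(np.mod(dphi + np.pi, 2 * np.pi) - np.pi, bins=50)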

  19. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shift time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long- or short-range dependence (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  20. Graphical Data Analysis on the Circle: Wrap-Around Time Series Plots for (Interrupted) Time Series Designs.

    Science.gov (United States)

    Rodgers, Joseph Lee; Beasley, William Howard; Schuelke, Matthew

    2014-01-01

    Many data structures, particularly time series data, are naturally seasonal, cyclical, or otherwise circular. Past graphical methods for time series have focused on linear plots. In this article, we move graphical analysis onto the circle. We focus on 2 particular methods, one old and one new. Rose diagrams are circular histograms and can be produced in several different forms using the RRose software system. In addition, we propose, develop, illustrate, and provide software support for a new circular graphical method, called Wrap-Around Time Series Plots (WATS Plots), which is a graphical method useful to support time series analyses in general but in particular in relation to interrupted time series designs. We illustrate the use of WATS Plots with an interrupted time series design evaluating the effect of the Oklahoma City bombing on birthrates in Oklahoma County during the 10 years surrounding the bombing of the Murrah Building in Oklahoma City. We compare WATS Plots with linear time series representations and overlay them with smoothing and error bands. Each method is shown to have advantages in relation to the other; in our example, the WATS Plots more clearly show the existence and effect size of the fertility differential.
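
    A rose diagram of the kind discussed is straightforward to draw with matplotlib's polar axes. The sketch below is a generic circular histogram, not the authors' RRose or WATS software; the monthly counts are made-up values for illustration.

        import numpy as np
        import matplotlib.pyplot as plt

        counts = np.array([310, 295, 330, 340, 360, 355,   # hypothetical monthly
                           370, 365, 350, 345, 320, 315])  # counts, Jan..Dec
        theta = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)

        ax = plt.subplot(projection='polar')
        ax.bar(theta, counts, width=2 * np.pi / 12, align='edge')
        ax.set_theta_zero_location('N')     # January at the top
        ax.set_theta_direction(-1)          # months run clockwise
        ax.set_xticks(theta)
        ax.set_xticklabels(list('JFMAMJJASOND'))
        plt.show()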

  1. Correlation filtering in financial time series (Invited Paper)

    Science.gov (United States)

    Aste, T.; Di Matteo, Tiziana; Tumminello, M.; Mantegna, R. N.

    2005-05-01

    We apply a method to filter relevant information from the correlation coefficient matrix by extracting a network of relevant interactions. This method succeeds in generating networks with the same hierarchical structure as the Minimum Spanning Tree but containing a larger number of links, resulting in a richer network topology that allows loops and cliques. In Tumminello et al. [1], we have shown that this method, applied to a financial portfolio of 100 stocks in the US equity markets, is quite efficient in filtering relevant information about the clustering of the system and its hierarchical structure, both for the whole system and within each cluster. In particular, we have found that triangular loops and 4-element cliques have important and significant relations with the market structure and properties. Here we apply this filtering procedure to the analysis of correlation in two different kinds of interest rate time series (16 Eurodollar and 34 US interest rates).
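
    For readers who want to experiment with correlation filtering, the Minimum Spanning Tree baseline mentioned in the abstract can be built in a few lines with networkx; the richer filtered graph of the paper keeps more links (loops and cliques) but shares the same hierarchical backbone. The distance transform d = sqrt(2(1 - rho)) is the standard choice in this literature; the random returns below are placeholders.

        import numpy as np
        import networkx as nx

        def correlation_mst(returns):
            rho = np.corrcoef(returns.T)            # columns = assets
            d = np.sqrt(2.0 * (1.0 - rho))          # correlation -> distance
            g = nx.Graph()
            n = d.shape[0]
            for i in range(n):
                for j in range(i + 1, n):
                    g.add_edge(i, j, weight=d[i, j])
            return nx.minimum_spanning_tree(g)      # the n - 1 strongest links

        returns = np.random.randn(500, 20)          # toy: 500 days x 20 assets
        mst = correlation_mst(returns)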

  2. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge of developing models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies, bringing together ideas, algorithms, and numeric studies which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  3. Time Series with Long Memory

    OpenAIRE

    西埜, 晴久

    2004-01-01

    The paper investigates the application of long-memory processes to economic time series. We show properties of long-memory processes that motivate their use in modelling long-memory phenomena in economic time series. A FARIMA model is described as an example of a long-memory model in statistical terms. The paper explains basic limit theorems and estimation methods for long-memory processes in order to apply long-memory models to economic time series.

  4. Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality.

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2014-07-01

    Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs that are constructed out of the sampling of the solution of a time-delay differential equation, and show their good performance in forecasting the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in predicting actual daily market realized volatilities computed with intraday quotes, using daily log-return series of moderate size as training input. We tackle some problems associated with the lack of task-universality of individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs.

  5. Normalizing the causality between time series

    Science.gov (United States)

    Liang, X. San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market.
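
    A bivariate sketch of the underlying information-flow estimator is given below, following the maximum-likelihood formula of Liang (2014) for the unnormalized flow from series 2 to series 1; the normalization step described in the abstract is omitted for brevity, and the toy data are placeholders. Treat this as an assumption-laden illustration rather than a reference implementation.

        import numpy as np

        def liang_flow_2_to_1(x1, x2, dt=1.0):
            # Forward-difference estimate of dx1/dt.
            dx1 = (x1[1:] - x1[:-1]) / dt
            a, b = x1[:-1], x2[:-1]
            C = np.cov(a, b)
            c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
            c1d1 = np.cov(a, dx1)[0, 1]
            c2d1 = np.cov(b, dx1)[0, 1]
            # Unnormalized information flow T(2 -> 1); a nonzero value
            # suggests that x2 is causal to x1.
            return (c11 * c12 * c2d1 - c12 ** 2 * c1d1) / (c11 ** 2 * c22 - c11 * c12 ** 2)

        x2 = np.random.randn(5000).cumsum()
        x1 = 0.8 * np.roll(x2, 1) + np.random.randn(5000)   # x2 drives x1
        print(liang_flow_2_to_1(x1, x2))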

  6. Network structure of multivariate time series.

    Science.gov (United States)

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-21

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
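
    One concrete way to realize such a mapping is to build one visibility-graph layer per data channel. The sketch below uses the horizontal visibility criterion (two samples are linked when every sample strictly between them is lower than both); it is a plausible single-layer building block under our assumptions, not the paper's full multiplex pipeline.

        import numpy as np

        def horizontal_visibility_edges(x):
            edges, n = [], len(x)
            for i in range(n - 1):
                blocker = -np.inf                  # running max between i and j
                for j in range(i + 1, n):
                    if blocker < min(x[i], x[j]):  # nothing in between blocks the view
                        edges.append((i, j))
                    blocker = max(blocker, x[j])
                    if blocker >= x[i]:            # i can no longer see past j
                        break
            return edges

        series = np.random.randn(3, 200)           # toy: 3 channels, 200 samples
        layers = [horizontal_visibility_edges(ch) for ch in series]  # one layer each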

  7. Forecasting the electric power market in the short term: a time series approach

    International Nuclear Information System (INIS)

    Costa, Roberio Neves Pelinca da.

    1994-01-01

    This dissertation analyses three different time series approaches in the context of the Brazilian electricity market. The aim is to compare the predictive performance of these approaches in a simulated exercise using the main series of Brazilian electricity consumption: total, industrial, residential and commercial consumption. One concludes that these approaches offer enormous potential for the short-term planning system of the electric sector. Among the univariate models, the results for the analysed period point out that the forecasts produced by Holt-Winters models are more accurate than those produced by ARIMA and structural models. When explanatory variables are introduced into the latter models, one notices, in general, an improvement in their predictive performance, although there is no sufficient evidence to consider them superior to Holt-Winters models. The models with explanatory variables can be particularly useful, however, when one intends either to build scenarios or to study the effects of some variables on the consumption of electricity. (author). 73 refs., 19 figs., 13 tabs

  8. Data mining in time series databases

    CERN Document Server

    Kandel, Abraham; Bunke, Horst

    2004-01-01

    Adding the time dimension to real-world databases produces Time Series Databases (TSDB) and introduces new aspects and difficulties to data mining and knowledge discovery. This book covers the state-of-the-art methodology for mining time series databases. The novel data mining methods presented in the book include techniques for efficient segmentation, indexing, and classification of noisy and dynamic time series. A graph-based method for anomaly detection in time series is described, and the book also studies the implications of a novel and potentially useful representation of time series as strings. The problem of detecting changes in data mining models that are induced from temporal databases is additionally discussed.

  9. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data.The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  10. Visual time series analysis

    DEFF Research Database (Denmark)

    Fischer, Paul; Hilbert, Astrid

    2012-01-01

    We introduce a platform which supplies an easy-to-handle, interactive, extendable, and fast analysis tool for time series analysis. In contrast to other software suites like Maple, Matlab, or R, which use a command-line-like interface and where the user has to memorize/look up the appropriate commands, our application is select-and-click-driven. It allows one to derive many different sequences of deviations for a given time series and to visualize them in different ways in order to judge their expressive power and to reuse the procedure found. For many transformations or model fits, the user may choose between manual and automated parameter selection. The user can define new transformations and add them to the system. The application contains efficient implementations of advanced and recent techniques for time series analysis, including techniques related to extreme value analysis and filtering...

  11. A Review of Subsequence Time Series Clustering

    Directory of Open Access Journals (Sweden)

    Seyedjamal Zolhavarieh

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies.

  12. A review of subsequence time series clustering.

    Science.gov (United States)

    Zolhavarieh, Seyedjamal; Aghabozorgi, Saeed; Teh, Ying Wah

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies.

  13. A Review of Subsequence Time Series Clustering

    Science.gov (United States)

    Teh, Ying Wah

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies. PMID:25140332
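
    As a minimal starting point for experimenting with subsequence clustering, the sketch below extracts z-normalized sliding windows and clusters them with k-means. One caveat flagged in this literature: with maximally overlapping windows (step = 1), k-means centers tend toward smooth, input-independent shapes, so a larger step or a whole-series method is often preferable; the window length and cluster count here are arbitrary.

        import numpy as np
        from sklearn.cluster import KMeans

        def subsequences(x, w, step=1):
            subs = np.array([x[i:i + w] for i in range(0, len(x) - w + 1, step)])
            mu = subs.mean(axis=1, keepdims=True)
            sd = subs.std(axis=1, keepdims=True) + 1e-12
            return (subs - mu) / sd                 # z-normalize each window

        x = np.sin(np.linspace(0, 40, 1000)) + 0.2 * np.random.randn(1000)
        S = subsequences(x, w=50, step=5)           # step > 1 limits window overlap
        labels = KMeans(n_clusters=4, n_init=10).fit_predict(S)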

  14. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of the universe of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  15. Time Series Analysis and Forecasting by Example

    CERN Document Server

    Bisgaard, Soren

    2011-01-01

    An intuition-based approach enables you to master time series analysis with ease. Time Series Analysis and Forecasting by Example provides the fundamental techniques in time series analysis using various examples. By introducing necessary theory through examples that showcase the discussed topics, the authors successfully help readers develop an intuitive understanding of seemingly complicated time series models and their implications. The book presents methodologies for time series analysis in a simplified, example-based approach. Using graphics, the authors discuss each presented example in

  16. Time series with tailored nonlinearities

    Science.gov (United States)

    Räth, C.; Laut, I.

    2015-10-01

    It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.

  17. Clustering of financial time series

    Science.gov (United States)

    D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo

    2013-05-01

    This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models, both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, requires the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning-around-medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning-around-medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances, which takes into account the information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.

  18. A study of real-time content marketing : formulating real-time content marketing based on content, search and social media

    OpenAIRE

    Nguyen, Thi Kim Duyen

    2015-01-01

    The primary objective of this research is to develop a profound understanding of a new concept in content marketing, namely real-time content marketing, from the perspective of digital marketing experts. In particular, the research focuses on real-time content marketing theories and on how to build a real-time content marketing strategy based on content, search and social media. It also examines how marketers measure and track the conversion rates of their real-time content marketing plans. Practically, th...

  19. Multifractal analysis of the Korean agricultural market

    Science.gov (United States)

    Kim, Hongseok; Oh, Gabjin; Kim, Seunghwan

    2011-11-01

    We have studied the long-term memory effects of the Korean agricultural market using the detrended fluctuation analysis (DFA) method. In general, the return time series of various financial data, including stock indices, foreign exchange rates, and commodity prices, are uncorrelated in time, while the volatility time series are strongly correlated. However, we found that the return time series of Korean agricultural commodity prices are anti-correlated in time, while the volatility time series are correlated. The n-point correlations of time series were also examined, and it was found that a multifractal structure exists in Korean agricultural market prices.
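
    The DFA exponent used in this study can be estimated with a short, self-contained routine: integrate the demeaned series into a profile, detrend it window by window, and regress the log fluctuation on the log scale. This is a generic textbook implementation (the scale grid and polynomial order are our choices), not the authors' code.

        import numpy as np

        def dfa(x, order=1):
            x = np.asarray(x, float)
            y = np.cumsum(x - x.mean())                       # the profile
            scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
            f = []
            for s in scales:
                n = len(y) // s
                segs = y[:n * s].reshape(n, s)
                t = np.arange(s)
                res = [seg - np.polyval(np.polyfit(t, seg, order), t) for seg in segs]
                f.append(np.sqrt(np.mean(np.square(res))))    # fluctuation F(s)
            # F(s) ~ s**alpha; alpha ~ 0.5 uncorrelated, > 0.5 persistent,
            # < 0.5 anti-correlated (as reported here for agricultural returns).
            return np.polyfit(np.log(scales), np.log(f), 1)[0]

        print(dfa(np.random.randn(10000)))                    # approximately 0.5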

  20. Danish Drama Series: An Export Success Cradled on the Domestic Market

    DEFF Research Database (Denmark)

    Degn, Hans-Peter; Jensen, Pia Majbritt; Krogager, Stinne Gunder Strøm

    Despite the last twenty years’ intense competition on the Danish TV market and the resulting channel proliferation and dispersal of audiences, the license fee-funded public service broadcaster DR has managed to create and sustain a Sunday evening slot that attracts “the entire nation”. With an average audience share of no less than 60 (peaking at almost 90) per cent of viewers, drama series broadcast in the slot between 8pm and 9pm beckons a substantial part of the nation, Sunday after Sunday. Subsequently, many of this slot’s recent series, such as Forbrydelsen [The Killing], Borgen and Bron... market over the past 20 years. We shall do so by investigating the historical development of DR’s Drama Division and the series’ domestic viewing profiles and settings. According to theories on media economy, media geography and media reception, non-Anglophone audio-visual content rarely exports outside

  1. Data Mining Smart Energy Time Series

    Directory of Open Access Journals (Sweden)

    Janina POPEANGA

    2015-07-01

    With the advent of smart metering technology, the amount of energy data will increase significantly, and the utilities industry will have to face another big challenge: to find relationships within time-series data and, even more, to analyze huge numbers of time series to find useful patterns and trends with fast or even real-time response. This study gives a short review of the literature in the field, trying to demonstrate how essential the application of data mining techniques to time series is for making the best use of this large quantity of data, despite all the difficulties. The most important time series data mining techniques are also presented, highlighting their applicability in the energy domain.

  2. Complex dynamic behaviors of oriented percolation-based financial time series and Hang Seng index

    International Nuclear Information System (INIS)

    Niu, Hongli; Wang, Jun

    2013-01-01

    Highlights:
    • We develop a financial time series model by a two-dimensional oriented percolation system.
    • We investigate the statistical behaviors of returns for the HSI and the financial model by chaos-exploring methods.
    • We forecast the phase points of the reconstructed phase space by an RBF neural network.
    Abstract: We develop a financial price model based on the two-dimensional oriented (directed) percolation system. The oriented percolation model is a directed variant of ordinary (isotropic) percolation, and it is applied to describe the fluctuations of stock prices. In this work, we assume that the price fluctuations result from the participants’ investment attitudes toward the market, and we investigate the information spreading among the traders and the corresponding effect on the price fluctuations. We study the complex dynamic behaviors of the return time series of the model using multiaspect chaos-exploring methods, and we also explore the corresponding behaviors of an actual market index (the Hang Seng Index) for comparison. Further, we introduce the radial basis function (RBF) neural network to train on and forecast the phase points of the reconstructed phase space

  3. Predicting chaotic time series

    International Nuclear Information System (INIS)

    Farmer, J.D.; Sidorowich, J.J.

    1987-01-01

    We present a forecasting technique for chaotic data. After embedding a time series in a state space using delay coordinates, we "learn" the induced nonlinear mapping using local approximation. This allows us to make short-term predictions of the future behavior of a time series, using information based only on past values. We present an error estimate for this technique, and demonstrate its effectiveness by applying it to several examples, including data from the Mackey-Glass delay differential equation, Rayleigh-Benard convection, and Taylor-Couette flow
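
    The essence of this technique, delay-coordinate embedding followed by local approximation, fits in a dozen lines. The sketch below makes a one-step forecast by averaging the successors of the k nearest neighbours of the current delay vector; the embedding dimension, delay, and k are arbitrary illustrative choices, and a logistic map stands in for the chaotic data.

        import numpy as np

        def local_forecast(x, m=3, tau=1, k=5):
            # Rows of emb are delay vectors (x_t, x_{t-tau}, ..., x_{t-(m-1)tau}).
            idx = np.arange((m - 1) * tau, len(x))
            emb = np.column_stack([x[idx - j * tau] for j in range(m)])
            query, history = emb[-1], emb[:-1]
            dist = np.linalg.norm(history - query, axis=1)
            nbrs = np.argsort(dist)[:k]             # k nearest past states
            return x[idx[nbrs] + 1].mean()          # average of their successors

        x = np.empty(2000)
        x[0] = 0.4
        for t in range(1999):                       # chaotic logistic map as toy data
            x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
        print(local_forecast(x), 3.9 * x[-1] * (1.0 - x[-1]))  # forecast vs truth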

  4. Measuring multiscaling in financial time-series

    International Nuclear Information System (INIS)

    Buonocore, R.J.; Aste, T.; Di Matteo, T.

    2016-01-01

    We discuss the origin of multiscaling in financial time-series and investigate how to best quantify it. Our methodology consists in separating the different sources of measured multifractality by analyzing the multi/uni-scaling behavior of synthetic time-series with known properties. We use the results from the synthetic time-series to interpret the measure of multifractality of real log-returns time-series. The main finding is that the aggregation horizon of the returns can introduce a strong bias effect on the measure of multifractality. This effect can become especially important when returns distributions have power law tails with exponents in the range (2, 5). We discuss the right aggregation horizon to mitigate this bias.

  5. Applied time series analysis

    CERN Document Server

    Woodward, Wayne A; Elliott, Alan C

    2011-01-01

    ""There is scarcely a standard technique that the reader will find left out … this book is highly recommended for those requiring a ready introduction to applicable methods in time series and serves as a useful resource for pedagogical purposes.""-International Statistical Review (2014), 82""Current time series theory for practice is well summarized in this book.""-Emmanuel Parzen, Texas A&M University""What an extraordinary range of topics covered, all very insightfully. I like [the authors'] innovations very much, such as the AR factor table.""-David Findley, U.S. Census Bureau (retired)""…

  6. Volatility Behaviors of Financial Time Series by Percolation System on Sierpinski Carpet Lattice

    Science.gov (United States)

    Pei, Anqi; Wang, Jun

    2015-01-01

    The financial time series is simulated and investigated by the percolation system on the Sierpinski carpet lattice, where percolation is usually employed to describe the behavior of connected clusters in a random graph, and the Sierpinski carpet lattice is a graph corresponding to the Sierpinski carpet fractal. To study the fluctuation behavior of returns for the financial model and the Shanghai Composite Index, we establish a daily volatility measure, the multifractal volatility (MFV) measure, and obtain MFV series, which have long-range cross-correlations with the squared daily return series. The autoregressive fractionally integrated moving average (ARFIMA) model is used to analyze the MFV series, which perform better compared to other volatility series. In a comparative study of the multifractality and volatility of the data, the simulated data of the proposed model exhibit behaviors very similar to those of the real stock index, which indicates a degree of rationality of the model for market applications.

  7. Entropic Analysis of Electromyography Time Series

    Science.gov (United States)

    Kaufman, Miron; Sung, Paul

    2005-03-01

    We are in the process of assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. Surface electromyography is used to assess patients with low back pain. In a typical EMG measurement, the voltage is measured every millisecond. We observed back muscle fatigue during one minute, which results in a time series with 60,000 entries. We characterize the complexity of a time series by computing the time dependence of its Shannon entropy. The analysis of time series from the relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the level of variability of back muscle activity is much larger for healthy individuals than for individuals with LBP. In general, the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long-time correlations (self-organization) at about 0.01 s.
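
    The windowed-entropy computation described here is simple to reproduce. The sketch below estimates the Shannon entropy of the amplitude distribution in successive one-second windows of a synthetic EMG-like trace; the bin count and window length are arbitrary stand-ins for the study's settings.

        import numpy as np

        def shannon_entropy(window, bins=32):
            counts, _ = np.histogram(window, bins=bins)
            p = counts[counts > 0] / counts.sum()   # empirical probabilities
            return -np.sum(p * np.log(p))

        # Toy trace: 60 s at 1 kHz with slowly decreasing variability,
        # mimicking a fatiguing muscle.
        sig = np.random.randn(60000) * np.linspace(1.0, 0.3, 60000)
        win = 1000
        h = [shannon_entropy(sig[i:i + win]) for i in range(0, len(sig) - win + 1, win)]
        # A flatter, lower entropy profile corresponds to reduced variability.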

  8. Multivariate multiscale entropy of financial markets

    Science.gov (United States)

    Lu, Yunfan; Wang, Jun

    2017-11-01

    In the current process of quantifying the dynamical properties of complex phenomena in the financial market system, multivariate financial time series are of wide concern. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages are demonstrated in numerical simulations with two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in the China stock markets is quantified thanks to the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as the trading day progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of the global stock markets (Asia, Europe and America) is quantified by analyzing their multivariate returns. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.

  9. Quantifying memory in complex physiological time-series.

    Science.gov (United States)

    Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.

  10. Effective Feature Preprocessing for Time Series Forecasting

    DEFF Research Database (Denmark)

    Zhao, Junhua; Dong, Zhaoyang; Xu, Zhao

    2006-01-01

    Time series forecasting is an important area in data mining research. Feature preprocessing techniques have significant influence on forecasting accuracy, and are therefore essential in a forecasting model. Although several feature preprocessing techniques have been applied in time series forecasting... performance in time series forecasting. It is demonstrated in our experiment that effective feature preprocessing can significantly enhance forecasting accuracy. This research can be a useful guide for researchers on effectively selecting feature preprocessing techniques and integrating them with time series forecasting models.

  11. Transportation Energy Futures Series: Vehicle Technology Deployment Pathways: An Examination of Timing and Investment Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Plotkin, S.; Stephens, T.; McManus, W.

    2013-03-01

    Scenarios of new vehicle technology deployment serve various purposes; some will seek to establish plausibility. This report proposes two reality checks for scenarios: (1) implications of manufacturing constraints on timing of vehicle deployment and (2) investment decisions required to bring new vehicle technologies to market. An estimated timeline of 12 to more than 22 years from initial market introduction to saturation is supported by historical examples and based on the product development process. Researchers also consider the series of investment decisions to develop and build the vehicles and their associated fueling infrastructure. A proposed decision tree analysis structure could be used to systematically examine investors' decisions and the potential outcomes, including consideration of cash flow and return on investment. This method requires data or assumptions about capital cost, variable cost, revenue, timing, and probability of success/failure, and would result in a detailed consideration of the value proposition of large investments and long lead times. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency effort to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  12. Transportation Energy Futures Series. Vehicle Technology Deployment Pathways. An Examination of Timing and Investment Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Plotkin, Steve [Argonne National Lab. (ANL), Argonne, IL (United States); Stephens, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); McManus, Walter [Oakland Univ., Rochester, MI (United States)

    2013-03-01

    Scenarios of new vehicle technology deployment serve various purposes; some will seek to establish plausibility. This report proposes two reality checks for scenarios: (1) implications of manufacturing constraints on timing of vehicle deployment and (2) investment decisions required to bring new vehicle technologies to market. An estimated timeline of 12 to more than 22 years from initial market introduction to saturation is supported by historical examples and based on the product development process. Researchers also consider the series of investment decisions to develop and build the vehicles and their associated fueling infrastructure. A proposed decision tree analysis structure could be used to systematically examine investors' decisions and the potential outcomes, including consideration of cash flow and return on investment. This method requires data or assumptions about capital cost, variable cost, revenue, timing, and probability of success/failure, and would result in a detailed consideration of the value proposition of large investments and long lead times. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency effort to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  13. DOES MARKET TIMING DRIVE CAPITAL STRUCTURE? EMPIRICAL EVIDENCE FROM AN EMERGING MARKET

    Directory of Open Access Journals (Sweden)

    Sibel Çelik

    2013-01-01

    The purpose of this study is to test how equity market timing affects capital structure, from the perspective of the IPO (Initial Public Offering) event, on the ISE for the period 1999-2008. Our dataset comprises all firms (75 firms) that went public between January 1999 and December 2008 in Turkey and that are available in the ISE database. We analyse the market timing theory by applying a cross-sectional regression method. For this purpose, we first test the impact of market timing on the amount of equity issued by IPO firms. Second, we examine the impact of market timing on capital structure. We conclude that the market timing theory is not valid for Turkey.

  14. Statistical criteria for characterizing irradiance time series.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.

  15. The Hog Cycle of Law Professors: An Econometric Time Series Analysis of the Entry-Level Job Market in Legal Academia.

    Science.gov (United States)

    Engel, Christoph; Hamann, Hanjo

    2016-01-01

    The (German) market for law professors fulfils the conditions for a hog cycle: In the short run, supply cannot be extended or limited; future law professors must be hired soon after they first present themselves, or leave the market; demand is inelastic. Using a comprehensive German dataset, we show that the number of market entries today is negatively correlated with the number of market entries eight years ago. This suggests short-sighted behavior of young scholars at the time when they decide to prepare for the market. Using our statistical model, we make out-of-sample predictions for the German academic market in law until 2020.

  16. Homogenising time series: beliefs, dogmas and facts

    Science.gov (United States)

    Domonkos, P.

    2011-06-01

    In recent decades various homogenisation methods have been developed, but the real effects of their application on time series are still not sufficiently known. The ongoing COST action HOME (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics closely approximate those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. Empirical results show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities often have statistical characteristics similar to natural changes caused by climatic variability; thus the pure application of the classic theory, that change-points of observed time series can be found and corrected one by one, is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in the raw time series. Some problems around detecting multiple structures of inhomogeneities, as well as that of time series comparisons within homogenisation procedures, are discussed briefly in the study.

  17. Ocean time-series near Bermuda: Hydrostation S and the US JGOFS Bermuda Atlantic time-series study

    Science.gov (United States)

    Michaels, Anthony F.; Knap, Anthony H.

    1992-01-01

    Bermuda is the site of two ocean time-series programs. At Hydrostation S, the ongoing biweekly profiles of temperature, salinity and oxygen now span 37 years. This is one of the longest open-ocean time-series data sets and provides a view of decadal scale variability in ocean processes. In 1988, the U.S. JGOFS Bermuda Atlantic Time-series Study began a wide range of measurements at a frequency of 14-18 cruises each year to understand temporal variability in ocean biogeochemistry. On each cruise, the data range from chemical analyses of discrete water samples to data from electronic packages of hydrographic and optics sensors. In addition, a range of biological and geochemical rate measurements are conducted that integrate over time-periods of minutes to days. This sampling strategy yields a reasonable resolution of the major seasonal patterns and of decadal scale variability. The Sargasso Sea also has a variety of episodic production events on scales of days to weeks and these are only poorly resolved. In addition, there is a substantial amount of mesoscale variability in this region and some of the perceived temporal patterns are caused by the intersection of the biweekly sampling with the natural spatial variability. In the Bermuda time-series programs, we have added a series of additional cruises to begin to assess these other sources of variation and their impacts on the interpretation of the main time-series record. However, the adequate resolution of higher frequency temporal patterns will probably require the introduction of new sampling strategies and some emerging technologies such as biogeochemical moorings and autonomous underwater vehicles.

  18. Multivariate Time Series Decomposition into Oscillation Components.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-08-01

    Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop Gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.

  19. Forecasting Enrollments with Fuzzy Time Series.

    Science.gov (United States)

    Song, Qiang; Chissom, Brad S.

    The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…

  20. Weighted multiscale Rényi permutation entropy of nonlinear time series

    Science.gov (United States)

    Chen, Shijian; Shang, Pengjian; Wu, Yue

    2018-04-01

    In this paper, based on Rényi permutation entropy (RPE), which has recently been suggested as a relative measure of complexity in nonlinear systems, we propose multiscale Rényi permutation entropy (MRPE) and weighted multiscale Rényi permutation entropy (WMRPE) to quantify the complexity of nonlinear time series over multiple time scales. First, we apply MRPE and WMRPE to synthetic data and compare the modified methods with RPE, discussing the influence of parameter changes. We also explain the necessity of considering not only multiple scales but also weights, which take the amplitude into account. MRPE and WMRPE are then applied to the closing prices of financial stock markets from different areas. By observing the WMRPE curves and analyzing common statistics, the stock markets are divided into four groups: (1) DJI, S&P500, and HSI; (2) NASDAQ and FTSE100; (3) DAX40 and CAC40; and (4) ShangZheng and ShenCheng. Results show that the standard deviations of the weighted methods are smaller, indicating that WMRPE yields more robust results. Moreover, WMRPE can provide abundant information on the dynamical properties of complex systems and reveal their intrinsic mechanisms.
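
    A single-scale version of the weighted Rényi permutation entropy is compact enough to sketch; the multiscale step (coarse-graining the series before the computation) is omitted here for brevity, and the weighting by window variance follows the usual weighted-permutation-entropy recipe, which may differ in detail from the authors' definition.

        import numpy as np
        from itertools import permutations
        from math import factorial

        def weighted_rpe(x, m=3, q=2.0):
            # Accumulate variance-weighted frequencies of ordinal patterns.
            freq = {p: 0.0 for p in permutations(range(m))}
            for i in range(len(x) - m + 1):
                w = x[i:i + m]
                freq[tuple(np.argsort(w))] += np.var(w) + 1e-12
            p = np.array(list(freq.values()))
            p = p[p > 0] / p.sum()
            h = np.log(np.sum(p ** q)) / (1.0 - q)   # Renyi entropy of order q
            return h / np.log(factorial(m))          # normalized to [0, 1]

        prices = np.cumsum(np.random.randn(5000))    # toy closing-price series
        print(weighted_rpe(np.diff(prices)))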

  1. Nonlinear Analysis on Cross-Correlation of Financial Time Series by Continuum Percolation System

    Science.gov (United States)

    Niu, Hongli; Wang, Jun

    We establish a financial price process based on a continuum percolation system, in which we attribute price fluctuations to the investors’ attitudes towards the financial market and treat the clusters in continuum percolation as groups of investors sharing the same investment opinion. We investigate the cross-correlations between two return time series and analyze the multifractal behaviors in this relationship. Further, we study the corresponding behaviors for the real stock indexes SSE and HSI, as well as the liquid stock pair SPD and PAB, by comparison. To quantify the multifractality of the cross-correlation relationship, we employ the multifractal detrended cross-correlation analysis method to perform an empirical study on the simulation data and the real market data.

  2. Forecasting Cryptocurrencies Financial Time Series

    DEFF Research Database (Denmark)

    Catania, Leopoldo; Grassi, Stefano; Ravazzolo, Francesco

    2018-01-01

    This paper studies the predictability of cryptocurrency time series. We compare several alternative univariate and multivariate models in point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto-predictors and rely on Dynamic Model Averaging to combine a large set of univariate Dynamic Linear Models and several multivariate Vector Autoregressive models with different forms of time variation.

  3. Forecasting Cryptocurrencies Financial Time Series

    OpenAIRE

    Catania, Leopoldo; Grassi, Stefano; Ravazzolo, Francesco

    2018-01-01

    This paper studies the predictability of cryptocurrencies time series. We compare several alternative univariate and multivariate models in point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto–predictors and rely on Dynamic Model Averaging to combine a large set of univariate Dynamic Linear Models and several multivariate Vector Autoregressive models with different forms of time variation. We find statistical si...

  4. Fractality of profit landscapes and validation of time series models for stock prices

    Science.gov (United States)

    Yi, Il Gu; Oh, Gabjin; Kim, Beom Jun

    2013-08-01

    We apply a simple trading strategy to various time series of real and artificial stock prices in order to understand the origin of the fractality observed in the resulting profit landscapes. The strategy contains only two parameters p and q, and the sell (buy) decision is made when the log return is larger (smaller) than p (-q). We discretize the unit square (p,q) ∈ [0,1] × [0,1] into an N × N grid and calculate the profit Π(p,q) at the center of each cell. We confirm the previous finding that local maxima in profit landscapes are scattered in a fractal-like fashion: the number M of local maxima follows the power-law form M ~ N^a, but the scaling exponent a is found to differ for different time series. From comparisons of real and artificial stock prices, we find that the fat-tailed return distribution is closely related to the exponent a ≈ 1.6 observed for real stock markets. We suggest that the fractality of the profit landscape characterized by a ≈ 1.6 can be a useful measure to validate time series models for stock prices.
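
    The strategy and the local-maxima count are easy to replicate. In the sketch below the thresholds are scaled to the typical size of the toy returns so that trades actually trigger (the paper's grid spans the unit square), and the position logic is one simple reading of the buy/sell rule; both are our assumptions.

        import numpy as np

        def profit(logret, p, q):
            position, pnl = 0, 0.0
            for r in logret:
                pnl += position * r
                if r > p:                      # sell signal
                    position = 0
                elif r < -q:                   # buy signal
                    position = 1
            return pnl

        logret = 0.01 * np.random.standard_t(df=3, size=5000)   # fat-tailed toy returns
        N = 50
        scale = 0.05                           # map the grid onto a realistic range
        grid = np.array([[profit(logret, scale * (i + 0.5) / N, scale * (j + 0.5) / N)
                          for j in range(N)] for i in range(N)])
        inner = grid[1:-1, 1:-1]               # count strict local maxima M; M ~ N**a
        M = np.sum((inner > grid[:-2, 1:-1]) & (inner > grid[2:, 1:-1]) &
                   (inner > grid[1:-1, :-2]) & (inner > grid[1:-1, 2:]))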

  5. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. "... this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book." (Hsun-Hsien Chang, Computing Reviews, March 2012) "My favorite chapters were on dynamic linear models and vector AR and vector ARMA models." (William Seaver, Technometrics, August 2011) "... a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  6. Time Series Analysis Forecasting and Control

    CERN Document Server

    Box, George E P; Reinsel, Gregory C

    2011-01-01

    A modernized new edition of one of the most trusted books on time series analysis. Since publication of the first edition in 1970, Time Series Analysis has served as one of the most influential and prominent works on the subject. This new edition maintains its balanced presentation of the tools for modeling and analyzing time series and also introduces the latest developments that have occurred in the field over the past decade, through applications from areas such as business, finance, and engineering. The Fourth Edition provides a clearly written exploration of the key methods for building, cl

  7. Hierarchical Meta-Learning in Time Series Forecasting for Improved Interference-Less Machine Learning

    Directory of Open Access Journals (Sweden)

    David Afolabi

    2017-11-01

    An interference-less machine learning scheme is crucial in time series prediction, as an oversight can have a negative cumulative effect, especially when predicting many steps ahead of the currently available data. Ongoing research on noise elimination in time series forecasting has led to a successful approach of decomposing the data sequence into component trends in order to identify noise-inducing information. The empirical mode decomposition method separates the time series/signal into a set of intrinsic mode functions ranging from high to low frequencies, which can be summed up to reconstruct the original data. The usual assumption that random noise is only contained in the high-frequency component has been shown not to be the case, as observed in our previous findings. The results from that experiment reveal that noise can be present in a low-frequency component, and this motivates the newly proposed algorithm. Additionally, to prevent the erosion of periodic trends and patterns within the series, we perform the learning of local and global trends separately, in a hierarchical manner, which succeeds in detecting and eliminating short- and long-term noise. The algorithm is tested on four datasets from financial market data and physical science data. The simulation results are compared with conventional and state-of-the-art approaches for time series machine learning, such as the non-linear autoregressive neural network and the long short-term memory recurrent neural network, respectively. Statistically significant performance gains are recorded when the meta-learning algorithm for noise reduction is used in combination with these artificial neural networks. For time series data which cannot be decomposed into meaningful trends, applying the moving average method to create meta-information for guiding the learning process is still better than the traditional approach. Therefore, this new approach is applicable to the forecasting

  8. Costationarity of Locally Stationary Time Series Using costat

    OpenAIRE

    Cardinali, Alessandro; Nason, Guy P.

    2013-01-01

    This article describes the R package costat. This package enables a user to (i) perform a test for time series stationarity; (ii) compute and plot time-localized autocovariances; and (iii) determine and explore any costationary relationship between two locally stationary time series. Two locally stationary time series are said to be costationary if there exist two time-varying combination functions such that the linear combination of the two series with those functions produces another time series which is stationary.

  9. Modelling tourism demand in Madeira since 1946: an historical overview based on a time series approach

    Directory of Open Access Journals (Sweden)

    António Manuel Martins de Almeida

    2016-06-01

    Tourism is the leading economic sector on most islands, and for that reason market trends are closely monitored, owing to the huge impacts of relatively minor changes in demand patterns. An interesting line of research regarding the analysis of market trends concerns the examination of time series to obtain an historical overview of the data patterns. The modelling of demand patterns is obviously dependent on data availability, and the measurement of changes in demand patterns is quite often focused on a few decades. In this paper, we use long-term time-series data to analyse the evolution of the main markets in Madeira, by country of origin, in order to re-examine the Butler life cycle model, based on data available from 1946 onwards. This study is an opportunity to document the historical development of the industry in Madeira and to open the discussion on the rejuvenation of a mature destination. Tourism development in Madeira experienced rapid growth until the late 1990s, making it one of the leading destinations in the European context. However, annual growth rates are no longer within acceptable ranges, which has led policy-makers and experts to recommend a thorough assessment of the industry's prospects.

  10. The economic analysis of power market architectures: application to real-time market design

    International Nuclear Information System (INIS)

    Saguan, M.

    2007-04-01

    This work contributes to the economic analysis of power market architectures. A modular framework is used to separate problems of market design into different modules. The work's goal is to study real-time market design. A two-stage market equilibrium model is used to analyse the two main real-time designs: the 'market' and the 'mechanism' (with penalty). Numerical simulations show that the design applied in real time is not neutral with respect to the sequence of energy markets and the dynamics of competition. Designs using penalties (mechanisms) cause distortions and inefficiencies, and can create barriers to entry. The size of the distortions is determined by the temporal position of the gate that closes the forward markets. The model has also allowed us to show the key role of real-time integration between zones and the importance of good harmonization between the real-time designs of each zone. (author)

  11. An Alternative Framework for Time Series Decomposition and Forecasting and its Relevance for Portfolio Choice – A Comparative Study of the Indian Consumer Durable and Small Cap Sectors

    OpenAIRE

    SEN, Jaydip; DATTA CHAUDHURI, Tamal

    2016-01-01

    One of the challenging research problems in the domain of time series analysis and forecasting is making efficient and robust prediction of stock market prices. With rapid development and evolution of sophisticated algorithms and with the availability of extremely fast computing platforms, it has now become possible to effectively extract, store, process and analyze high volume stock market time series data. Complex algorithms for forecasting are now available for speedy execution o...

  12. Detecting nonlinear structure in time series

    International Nuclear Information System (INIS)

    Theiler, J.

    1991-01-01

    We describe an approach for evaluating the statistical significance of evidence for nonlinearity in a time series. The formal application of our method requires the careful statement of a null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. While some data sets very cleanly exhibit low-dimensional chaos, there are many cases where the evidence is sketchy and difficult to evaluate. We hope to provide a framework within which such claims of nonlinearity can be evaluated. 5 refs., 4 figs
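
    The testing recipe described here is short enough to sketch directly. Below is a minimal, illustrative version (not Theiler's code): phase-randomized surrogates preserve the power spectrum of the candidate linear process, and a simple discriminating statistic is compared against the surrogate ensemble.

        import numpy as np

        def phase_randomized_surrogate(x, rng):
            """Surrogate with the same power spectrum but randomized Fourier phases."""
            X = np.fft.rfft(x)
            phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
            phases[0] = 0.0       # keep the mean untouched
            phases[-1] = 0.0      # keep the Nyquist bin real for even-length series
            return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

        def discriminating_statistic(x):
            """Lag-1 autocorrelation of the squared, centered series."""
            y = (x - x.mean()) ** 2
            return np.corrcoef(y[:-1], y[1:])[0, 1]

        rng = np.random.default_rng(42)
        x = np.empty(2048)        # nonlinear test signal: a chaotic logistic map
        x[0] = 0.4
        for i in range(1, x.size):
            x[i] = 3.99 * x[i - 1] * (1.0 - x[i - 1])

        t0 = discriminating_statistic(x)
        surrogates = [discriminating_statistic(phase_randomized_surrogate(x, rng))
                      for _ in range(99)]
        # Reject the linear null hypothesis if t0 lies far outside the surrogate spread.
        print(t0, np.mean(surrogates), np.std(surrogates))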

  13. Introduction to time series and forecasting

    CERN Document Server

    Brockwell, Peter J

    2016-01-01

    This book is aimed at the reader who wishes to gain a working knowledge of time series and forecasting methods as applied to economics, engineering and the natural and social sciences. It assumes knowledge only of basic calculus, matrix algebra and elementary statistics. This third edition contains detailed instructions for the use of the professional version of the Windows-based computer package ITSM2000, now available as a free download from the Springer Extras website. The logic and tools of time series model-building are developed in detail. Numerous exercises are included and the software can be used to analyze and forecast data sets of the user's own choosing. The book can also be used in conjunction with other time series packages such as those included in R. The programs in ITSM2000 however are menu-driven and can be used with minimal investment of time in the computational details. The core of the book covers stationary processes, ARMA and ARIMA processes, multivariate time series and state-space mod...

  14. Detecting method for crude oil price fluctuation mechanism under different periodic time series

    International Nuclear Information System (INIS)

    Gao, Xiangyun; Fang, Wei; An, Feng; Wang, Yue

    2017-01-01

    Highlights: • We proposed the concept of autoregressive modes to indicate the fluctuation patterns. • We constructed transmission networks for studying the fluctuation mechanism. • There are different fluctuation mechanisms under different periodic time series. • Only a few types of autoregressive modes control the fluctuations in crude oil price. • There are cluster effects during the fluctuation mechanism of autoregressive modes. - Abstract: The existing literature can characterize the long-term fluctuation of crude oil price time series; however, it is difficult to detect the fluctuation mechanism specifically in the short term, because each fluctuation pattern for a short period contained in a long-term crude oil price time series has diverse dynamic characteristics. In other words, different short periods exhibit various fluctuation patterns that transmit to one another, which reflects the reputedly complicated and chaotic oil market. Thus, we proposed an integrated method to detect the fluctuation mechanism, namely the evolution of the different fluctuation patterns over time, from the complex network perspective. We divided the crude oil price time series into segments using sliding time windows, and defined autoregressive modes based on regression models to indicate the fluctuation patterns of each segment. The transmissions between different types of autoregressive modes over time then form a transmission network that contains rich dynamic information. We then capture the transmission characteristics of autoregressive modes under different periodic time series through the structural features of the transmission networks. The results indicate that there are various autoregressive modes with significantly different statistical characteristics under different periodic time series. However, only a few types of autoregressive modes and transmission patterns play a major role in the fluctuation mechanism of the crude oil price, and these
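
    A rough sketch of the windowing-and-modes idea follows; it is an illustrative reading of the approach, not the authors' implementation, and the crude sign-based mode label is an assumption made here for brevity.

        import numpy as np
        from collections import Counter

        def fit_ar2(segment):
            """Least-squares AR(2) coefficients for one window."""
            y = segment[2:]
            X = np.column_stack([segment[1:-1], segment[:-2]])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coef

        rng = np.random.default_rng(1)
        price = np.cumsum(rng.standard_normal(2000))  # stand-in for an oil price series

        window, step = 50, 10
        modes = []
        for start in range(0, price.size - window, step):
            a1, a2 = fit_ar2(price[start:start + window])
            modes.append((int(np.sign(a1)), int(np.sign(a2))))  # crude mode label

        # Transition counts between successive modes form a weighted network.
        transitions = Counter(zip(modes[:-1], modes[1:]))
        for (src, dst), n in transitions.most_common(5):
            print(src, "->", dst, n)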

  15. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

    The REFII model is an original mathematical model for time series data mining developed by the author. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach to time series analysis is that it links different methods for time series analysis, connecting traditional data mining tools for time series and enabling the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, meaning that it is not restricted to a finite set of methods. First of all, it is a model for transforming the values of a time series, preparing data that different sets of methods can use, based on the same transformation model, within the problem space. The REFII model gives a new approach to time series analysis based on a unique model of transformation, which serves as a basis for all kinds of time series analysis. The advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.

  16. Frontiers in Time Series and Financial Econometrics

    OpenAIRE

    Ling, S.; McAleer, M.J.; Tong, H.

    2015-01-01

    Two of the fastest growing frontiers in econometrics and quantitative finance are time series and financial econometrics. Significant theoretical contributions to financial econometrics have been made by experts in statistics, econometrics, mathematics, and time series analysis. The purpose of this special issue of the journal on “Frontiers in Time Series and Financial Econometrics” is to highlight several areas of research by leading academics in which novel methods have contrib...

  17. Modelling conditional heteroscedasticity in nonstationary series

    NARCIS (Netherlands)

    Cizek, P.; Cizek, P.; Härdle, W.K.; Weron, R.

    2011-01-01

    A vast amount of econometric and statistical research deals with modeling financial time series and their volatility, which measures the dispersion of a series at a point in time (i.e., its conditional variance). Although financial markets have been experiencing many shorter and longer periods of
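
    As a concrete example of the conditional-variance models this chapter surveys, a GARCH(1,1) fit can be sketched with the third-party Python arch package (assumed installed); the simulated return series is a stand-in, not data from the chapter.

        import numpy as np
        from arch import arch_model  # assumed third-party dependency (pip install arch)

        rng = np.random.default_rng(7)
        returns = rng.standard_normal(1000)   # stand-in for a (percent) return series

        am = arch_model(returns, vol="Garch", p=1, q=1)  # GARCH(1,1), constant mean
        res = am.fit(disp="off")
        print(res.params)   # mean term plus omega, alpha[1], beta[1]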

  18. Scale-dependent intrinsic entropies of complex time series.

    Science.gov (United States)

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractional Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).

  19. Elements of nonlinear time series analysis and forecasting

    CERN Document Server

    De Gooijer, Jan G

    2017-01-01

    This book provides an overview of the current state-of-the-art of nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications. Avoiding a “theorem-proof” format, it shows concrete applications on a variety of empirical time series. The book can be used in graduate courses in nonlinear time series and at the same time also includes interesting material for more advanced readers. Though it is largely self-contained, readers require an understanding of basic linear time series concepts, Markov chains and Monte Carlo simulation methods. The book covers time-domain and frequency-domain methods for the analysis of both univariate and multivariate (vector) time series. It makes a clear distinction between parametric models on the one hand, and semi- and nonparametric models/methods on the other. This offers the reader the option of concentrating exclusively on one of these nonlinear time series analysis methods. To make the book as user friendly as possible...

  20. An Energy-Based Similarity Measure for Time Series

    Directory of Open Access Journals (Sweden)

    Pierre Brunagel

    2007-11-01

    A new similarity measure for time series analysis, called SimilB, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to the Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series, particularly those presenting discontinuities. Some new properties of ΨB are presented. In particular, we show that ΨB as a similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and ED measures.

  1. Multifractals in Western Major Stock Markets Historical Volatilities in Times of Financial Crisis

    Science.gov (United States)

    Lahmiri, Salim

    In this paper, the generalized Hurst exponent is used to investigate multifractal properties of historical volatility (HV) in stock market price and return series before, during and after the 2008 financial crisis. Empirical results from NASDAQ, S&P500, TSE, CAC40, DAX, and FTSE stock market data show that there is strong evidence of multifractal patterns in the HV of both price and return series. In addition, the financial crisis deeply affected the behavior and degree of multifractality in the volatility of Western financial markets at both price and return levels.

  2. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
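
    The spectral step of the modified algorithm can be sketched with SciPy's Lomb-Scargle implementation (a minimal illustration of the periodogram computation only, not the full chaos-detection algorithm):

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(3)
        t = np.sort(rng.uniform(0, 100, 400))   # irregular sampling instants
        x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

        freqs = np.linspace(0.01, 2 * np.pi * 2.0, 1000)   # angular frequencies
        power = lombscargle(t, x - x.mean(), freqs)
        print(freqs[np.argmax(power)] / (2 * np.pi))       # ~0.5, the true frequency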

  3. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from the observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission may intermittently fail for various reasons. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sums of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values, generated by applying a particular random variable to the original (complete) time series, is utilized. There exist two main performance measures used in this work: (1) error measures between the actual
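
    The phase-space reconstruction and local-model prediction steps can be sketched as follows; this is a toy version with a fixed delay and embedding dimension, and it ignores the missing-value handling that is the paper's actual contribution.

        import numpy as np

        def embed(x, dim=3, tau=2):
            """Time-delay embedding of a scalar series into dim-dimensional vectors."""
            n = x.size - (dim - 1) * tau
            return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

        def local_predict(x, dim=3, tau=2, k=5):
            """One-step forecast from the k nearest dynamical neighbors."""
            vectors = embed(x, dim, tau)
            query, history = vectors[-1], vectors[:-1]
            dists = np.linalg.norm(history - query, axis=1)
            nbrs = np.argsort(dists)[:k]
            targets = x[nbrs + (dim - 1) * tau + 1]   # value following each neighbor
            return targets.mean()

        rng = np.random.default_rng(5)
        x = np.sin(0.3 * np.arange(500)) + 0.05 * rng.standard_normal(500)
        print(local_predict(x))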

  4. Multivariate Time Series Search

    Data.gov (United States)

    National Aeronautics and Space Administration — Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical...

  5. Analysing Stable Time Series

    National Research Council Canada - National Science Library

    Adler, Robert

    1997-01-01

    We describe how to take a stable, ARMA, time series through the various stages of model identification, parameter estimation, and diagnostic checking, and accompany the discussion with a goodly number...

  6. Year Ahead Demand Forecast of City Natural Gas Using Seasonal Time Series Methods

    Directory of Open Access Journals (Sweden)

    Mustafa Akpinar

    2016-09-01

    Consumption of natural gas, a major clean energy source, increases as energy demand increases. We studied the Turkish natural gas market specifically. Turkey's natural gas consumption increased in parallel with the world's over the last decade. This consumption growth in Turkey has led to the formation of a market structure for the natural gas industry, and the significant increase requires additional investment since a rise in consumption capacity is expected. One of the reasons for the consumption increase is user-based natural gas consumption behavior. This effect yields imbalances in demand forecasts, and if the error rates are out of bounds, penalties may occur. In this paper, three univariate statistical methods, which have not been previously investigated for mid-term year-ahead monthly natural gas forecasting, are used to forecast natural gas demand in Turkey's Sakarya province. Residential and low-consumption commercial data, which may contain seasonality, are used. The goal of this paper is to minimize gas imbalances in mid-term consumption while improving the accuracy of demand forecasting. In forecasting models, seasonality and single-variable impacts reinforce forecasts. This paper studies time series decomposition, Holt-Winters exponential smoothing and autoregressive integrated moving average (ARIMA) methods. Monthly data for 2011–2014 were prepared and divided into two series: the 2011–2013 monthly data, used for finding seasonal effects and model requirements, and the 2014 monthly data, used for forecasting. For the ARIMA method, a stationary series was prepared and a transformation process was carried out prior to forecasting. Forecasting results confirmed that as the computational complexity of the model increases, forecasting accuracy increases with lower error rates. Also, the forecasting errors and the coefficient of determination values give more consistent results. Consequently
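
    Two of the three methods compared above can be sketched with statsmodels; the series, model orders and train/test split below are made-up illustrations of the 2011-2013 fit / 2014 forecast design, not the paper's data or settings.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(11)
        idx = pd.date_range("2011-01", periods=48, freq="MS")   # 2011-2014 monthly
        season = 10 + 8 * np.cos(2 * np.pi * idx.month / 12)    # winter-heavy demand
        gas = pd.Series(season + rng.normal(0, 1, 48), index=idx)

        train, test = gas[:36], gas[36:]                        # fit on 2011-2013

        hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                                  seasonal_periods=12).fit()
        arima = ARIMA(train, order=(1, 0, 1),
                      seasonal_order=(1, 0, 0, 12)).fit()

        print(np.abs(hw.forecast(12) - test).mean())      # mean absolute error, 2014
        print(np.abs(arima.forecast(12) - test).mean())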

  7. Neural Network Models for Time Series Forecasts

    OpenAIRE

    Tim Hill; Marcus O'Connor; William Remus

    1996-01-01

    Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition (Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a ...

  8. Time Series Observations in the North Indian Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shenoy, D.M.; Naik, H.; Kurian, S.; Naqvi, S.W.A.; Khare, N.

    Ocean and the ongoing time series study (Candolim Time Series; CaTS) off Goa. In addition, this article also focuses on the new time series initiative in the Arabian Sea and the Bay of Bengal under Sustained Indian Ocean Biogeochemistry and Ecosystem...

  9. Time-Varying Market Integration and Expected Returns in Emerging Markets

    OpenAIRE

    de Jong, Frank; de Roon, Frans

    2001-01-01

    We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value of the assets that can be held by domestic investors only versus the market value of the assets that can be traded freely. Our empirical analysis for 30 emerging markets shows that there are stro...

  10. Time Varying Market Integration and Expected Returns in Emerging Markets

    OpenAIRE

    Jong, F.C.J.M. de; Roon, F.A. de

    2001-01-01

    We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value of the assets that can be held by domestic investors only versus the market value of the assets that can be traded freely. Our empirical analysis for 30 emerging markets shows that there are strong...

  11. Time Varying Market Integration and Expected Returns in Emerging Markets

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; de Roon, F.A.

    2001-01-01

    We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value

  12. Geometric noise reduction for multivariate time series.

    Science.gov (United States)

    Mera, M Eugenia; Morán, Manuel

    2006-03-01

    We propose an algorithm for the reduction of observational noise in chaotic multivariate time series. The algorithm is based on a maximum likelihood criterion, and its goal is to reduce the mean distance of the points of the cleaned time series to the attractor. We give evidence of the convergence of the empirical measure associated with the cleaned time series to the underlying invariant measure, implying the possibility to predict the long run behavior of the true dynamics.

  13. BRITS: Bidirectional Recurrent Imputation for Time Series

    OpenAIRE

    Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan

    2018-01-01

    Time series are widely used as signals in many classification/regression tasks. It is ubiquitous that time series contains many missing values. Given multiple correlated time series data, how to fill in missing values and to predict their class labels? Existing imputation methods often impose strong assumptions of the underlying data generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...

  14. Studies on time series applications in environmental sciences

    CERN Document Server

    Bărbulescu, Alina

    2016-01-01

    Time series analysis and modelling represent a large field of study, approached from the perspectives of both time and frequency, with applications in many domains. Modelling hydro-meteorological time series is difficult due to the characteristics of these series, such as long-range dependence, spatial dependence, and correlation with other series. Continuous spatial data play an important role in planning, risk assessment and decision making in environmental management. In this context, this book presents various statistical tests and modelling techniques used for time series analysis, as well as applications to hydro-meteorological series from Dobrogea, a region situated in the south-eastern part of Romania, less studied until now. Part of the results are accompanied by their R code.

  15. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  16. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  17. Global Population Density Grid Time Series Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...

  18. Prediction and Geometry of Chaotic Time Series

    National Research Council Canada - National Science Library

    Leonardi, Mary

    1997-01-01

    This thesis examines the topic of chaotic time series. An overview of chaos, dynamical systems, and traditional approaches to time series analysis is provided, followed by an examination of state space reconstruction...

  19. A novel model for Time-Series Data Clustering Based on piecewise SVD and BIRCH for Stock Data Analysis on Hadoop Platform

    Directory of Open Access Journals (Sweden)

    Ibgtc Bowala

    2017-06-01

    With the rapid growth of financial markets, analysts are paying more attention to predictions. Stock data are time series data of huge volume. A feasible solution for handling the increasing amount of data is to use a cluster for parallel processing, and the Hadoop parallel computing platform is a typical representative. There are various statistical models for forecasting time series data, but accurate clusters are a prerequisite. Clustering analysis of time series data is one of the main methods for mining time series data for many other analysis processes. However, general clustering algorithms cannot perform clustering for time series data because such data have a special structure, high dimensionality, and highly correlated values with a high noise level. A novel model for time series clustering is presented using BIRCH, based on piecewise SVD, leading to a novel dimension reduction approach. Highly correlated features are handled using SVD, with a novel approach for dimensionality reduction that keeps the correlated behavior optimal, and BIRCH is then used for clustering. The algorithm is a novel model that can handle massive time series data. Finally, this new model is successfully applied to real stock time series data from Yahoo Finance with satisfactory results.
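
    Outside Hadoop, the core pipeline (piecewise segmentation, SVD-based dimension reduction, then BIRCH) can be sketched with NumPy and scikit-learn. This is one plausible reading of the model, not the authors' implementation; the segment-wise singular-value features are an assumption made here.

        import numpy as np
        from sklearn.cluster import Birch

        rng = np.random.default_rng(2)
        # 60 toy "stocks": two volatility regimes of length-120 random walks
        series = np.vstack([np.cumsum(rng.normal(0, scale, 120))
                            for scale in (0.5, 2.0) for _ in range(30)])

        def piecewise_svd_features(x, n_segments=10, k=3):
            """Split a series into segments; keep the top-k singular values."""
            segs = np.array(np.array_split(x, n_segments))   # (n_segments, seg_len)
            return np.linalg.svd(segs, compute_uv=False)[:k]

        features = np.array([piecewise_svd_features(s) for s in series])
        labels = Birch(n_clusters=2).fit_predict(features)
        print(labels)   # the two volatility regimes separate into two clusters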

  20. Market timing and the debt-equity choice

    NARCIS (Netherlands)

    Elliot, W.B.; Koeter-Kant, J.; Warr, R.S.

    2008-01-01

    We test the market timing theory of capital structure using an earnings-based valuation model that allows us to separate equity mispricing from growth options and time-varying adverse selection; thus avoiding the multiple interpretations of book-to-market ratio. We find that equity market mispricing

  1. Sensor-Generated Time Series Events: A Definition Language

    Science.gov (United States)

    Anguera, Aurea; Lara, Juan A.; Lizcano, David; Martínez, Maria Aurora; Pazos, Juan

    2012-01-01

    There are now a great many domains where information is recorded by sensors over a limited time period or on a permanent basis. This data flow leads to sequences of data known as time series. In many domains, like seismography or medicine, time series analysis focuses on particular regions of interest, known as events, whereas the remainder of the time series contains hardly any useful information. In these domains, there is a need for mechanisms to identify and locate such events. In this paper, we propose an event definition language that is general enough to be used to easily and naturally define events in time series recorded by sensors in any domain. The proposed language has been applied to the definition of time series events generated within the branch of medicine dealing with balance-related functions in human beings. A device called a posturograph is used to study balance-related functions. The platform has four sensors that record the pressure intensity being exerted on the platform, generating four interrelated time series. As opposed to the existing ad hoc proposals, the results confirm that the proposed language is valid, that is, generally applicable and accurate, for identifying the events contained in the time series.

  2. Time Series Analysis of Wheat flour Price Shocks in Pakistan: A Case Analysis

    OpenAIRE

    Asad Raza Abdi; Ali Hassan Halepoto; Aisha Bashir Shah; Faiz M. Shaikh

    2013-01-01

    The current research investigates wheat flour price shocks in Pakistan: a case analysis. Data were collected from secondary sources and analyzed with time series methods using SPSS version 20. It was revealed that the price of wheat flour has increased over the last four decades, and the trend of price shocks shows that market variation and supply and demand shocks play a positive role in shocks to wheat prices. It was further revealed th...

  3. Can Chinese Mutual Fund Time Market Liquidity?

    OpenAIRE

    LI, Xiaoqing

    2012-01-01

    Extant research has focused on mutual fund managers’ ability to time market returns or volatility. In this paper, the author offers a new perspective on the traditional timing issue by examining Chinese fund managers’ liquidity timing ability. Using the Chinese mutual fund database, the author finds little evidence over the period from 2004 to 2012 that fund managers can time market liquidity in China, i.e., increase (reduce) market exposure in anticipation o...

  4. Correlation and multifractality in climatological time series

    International Nuclear Information System (INIS)

    Pedron, I T

    2010-01-01

    Climate can be described by statistical analysis of mean values of atmospheric variables over a period. It is possible to detect correlations in climatological time series and to classify their behavior. In this work the Hurst exponent, which can characterize correlation and persistence in time series, is obtained by using the Detrended Fluctuation Analysis (DFA) method. Data series of temperature, precipitation, humidity, solar radiation, wind speed, maximum squall, atmospheric pressure and random series are studied. Furthermore, the multifractality of such series is analyzed by applying the Multifractal Detrended Fluctuation Analysis (MF-DFA) method. The results indicate the presence of correlation (persistent character), as well as multifractality, in all climatological series. A larger and longer set of data could provide better results, indicating the universality of the exponents.
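
    The DFA step has a compact NumPy form (order-1 detrending shown; MF-DFA generalizes the squared fluctuation to a range of q moments):

        import numpy as np

        def dfa(x, scales):
            """Detrended fluctuation analysis; returns the estimated scaling exponent."""
            profile = np.cumsum(x - np.mean(x))          # integrated series
            flucts = []
            for s in scales:
                n_windows = profile.size // s
                f2 = []
                for w in range(n_windows):
                    seg = profile[w * s:(w + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
                    f2.append(np.mean((seg - trend) ** 2))
                flucts.append(np.sqrt(np.mean(f2)))
            # F(s) ~ s**alpha: slope of the log-log regression estimates the exponent
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        rng = np.random.default_rng(9)
        white = rng.standard_normal(10000)
        print(dfa(white, [16, 32, 64, 128, 256]))        # ~0.5 for uncorrelated noise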

  5. Time Series Forecasting with Missing Values

    Directory of Open Access Journals (Sweden)

    Shin-Fu Wu

    2015-11-01

    Time series prediction has become more popular in various kinds of applications such as weather prediction, control engineering, financial analysis, industrial monitoring, etc. To deal with real-world problems, we are often faced with missing values in the data due to sensor malfunctions or human errors. Traditionally, the missing values are simply omitted or replaced by means of imputation methods. However, omitting those missing values may cause temporal discontinuity. Imputation methods, on the other hand, may alter the original time series. In this study, we propose a novel forecasting method based on least squares support vector machine (LSSVM). We employ the input patterns with the temporal information which is defined as local time index (LTI). Time series data as well as local time indexes are fed to LSSVM for doing forecasting without imputation. We compare the forecasting performance of our method with other imputation methods. Experimental results show that the proposed method is promising and is worth further investigations.

  6. Reconstruction of ensembles of coupled time-delay systems from time series.

    Science.gov (United States)

    Sysoev, I V; Prokhorov, M D; Ponomarenko, V I; Bezruchko, B P

    2014-06-01

    We propose a method to recover from time series the parameters of coupled time-delay systems and the architecture of couplings between them. The method is based on a reconstruction of model delay-differential equations and estimation of statistical significance of couplings. It can be applied to networks composed of nonidentical nodes with an arbitrary number of unidirectional and bidirectional couplings. We test our method on chaotic and periodic time series produced by model equations of ensembles of diffusively coupled time-delay systems in the presence of noise, and apply it to experimental time series obtained from electronic oscillators with delayed feedback coupled by resistors.

  7. Investigation of market efficiency and Financial Stability between S&P 500 and London Stock Exchange: Monthly and yearly Forecasting of Time Series Stock Returns using ARMA model

    Science.gov (United States)

    Rounaghi, Mohammad Mahdi; Nassir Zadeh, Farzaneh

    2016-08-01

    We investigated the presence of, and changes in, long-memory features in the returns and volatility dynamics of the S&P 500 and the London Stock Exchange using an ARMA model. Recently, multifractal analysis has evolved as an important way to explain the complexity of financial markets, which can hardly be described by linear methods of efficient market theory. In financial markets, the weak form of the efficient market hypothesis implies that price returns are serially uncorrelated sequences. In other words, prices should follow a random walk behavior. The random walk hypothesis is evaluated against alternatives accommodating either unifractality or multifractality. Several studies find that the return volatility of stocks tends to exhibit long-range dependence, heavy tails, and clustering. Because stochastic processes with self-similarity possess long-range dependence and heavy tails, it has been suggested that self-similar processes be employed to capture these characteristics in return volatility modeling. The present study applies monthly and yearly forecasting of time series stock returns in the S&P 500 and the London Stock Exchange using the ARMA model. The statistical analysis shows that the ARMA model for the S&P 500 outperforms that for the London Stock Exchange and is capable of predicting medium or long horizons using real known values. The statistical analysis for the London Stock Exchange shows that the ARMA model for monthly stock returns outperforms the yearly one. A comparison between the S&P 500 and the London Stock Exchange shows that both markets are efficient and exhibited financial stability during periods of boom and bust.

  8. Eat, drink and gamble: marketing messages about 'risky' products in an Australian major sporting series.

    Science.gov (United States)

    Lindsay, Sophie; Thomas, Samantha; Lewis, Sophie; Westberg, Kate; Moodie, Rob; Jones, Sandra

    2013-08-05

    To investigate the alcohol, gambling, and unhealthy food marketing strategies during a nationally televised, free to air, sporting series in Australia. Using the Australian National Rugby League 2012 State of Origin three-game series, we conducted a mixed methods content analysis of the frequency, duration, placement and content of advertising strategies, comparing these strategies both within and across the three games. There were a total of 4445 episodes (mean = 1481.67, SD = 336.58), and 233.23 minutes (mean = 77.74, SD = 7.31) of marketing for alcoholic beverages, gambling products and unhealthy foods and non-alcoholic beverages during the 360 minutes of televised coverage of the three State of Origin 2012 games. This included an average per game of 1354 episodes (SD = 368.79) and 66.29 minutes (SD = 7.62) of alcohol marketing; 110.67 episodes (SD = 43.89), and 8.72 minutes (SD = 1.29) of gambling marketing; and 17 episodes (SD = 7.55), and 2.74 minutes (SD = 0.78) of unhealthy food and beverage marketing. Content analysis revealed that there was considerable embedding of product marketing within the match play, including within match commentary, sporting equipment, and special replays. Sport is increasingly used as a vehicle for the promotion of a range of 'risky consumption' products. This study raises important ethical and health policy questions about the extent and impact of saturation and incidental marketing strategies on health and wellbeing, the transparency of embedded marketing strategies, and how these strategies may influence product consumption.

  9. The analysis of time series: an introduction

    National Research Council Canada - National Science Library

    Chatfield, Christopher

    1989-01-01

    .... A variety of practical examples are given to support the theory. The book covers a wide range of time-series topics, including probability models for time series, Box-Jenkins forecasting, spectral analysis, linear systems and system identification...

  10. Allan deviation analysis of financial return series

    Science.gov (United States)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in time and frequency metrology to quantitatively characterize the stability of frequency standards, since it has been demonstrated to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening price series for assets from different markets during a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost each asset, after which the ADEV deviates from scaling, which suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drives the results for large observation intervals.
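
    The non-overlapping Allan deviation used in this analysis has a short NumPy form; the sketch below treats a simulated return series the way frequency metrology treats fractional-frequency data (illustrative only, not the author's code):

        import numpy as np

        def allan_deviation(y, m):
            """Non-overlapping ADEV of series y at averaging factor m."""
            n = y.size // m
            block_means = y[:n * m].reshape(n, m).mean(axis=1)
            diffs = np.diff(block_means)
            return np.sqrt(0.5 * np.mean(diffs ** 2))

        rng = np.random.default_rng(4)
        returns = rng.standard_normal(5000)          # stand-in for daily returns
        for m in (1, 2, 4, 8, 16, 32):
            print(m, allan_deviation(returns, m))    # ~m**-0.5 decay for white noise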

  11. Time series modeling in traffic safety research.

    Science.gov (United States)

    Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue

    2018-08-01

    The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Time series prediction: statistical and neural techniques

    Science.gov (United States)

    Zahirniak, Daniel R.; DeSimio, Martin P.

    1996-03-01

    In this paper we compare the performance of nonlinear neural network techniques to those of linear filtering techniques in the prediction of time series. Specifically, we compare the results of using the nonlinear systems, known as multilayer perceptron and radial basis function neural networks, with the results obtained using the conventional linear Wiener filter, Kalman filter and Widrow-Hoff adaptive filter in predicting future values of stationary and non-stationary time series. Our results indicate the performance of each type of system is heavily dependent upon the form of the time series being predicted and the size of the system used. In particular, the linear filters perform adequately for linear or near linear processes while the nonlinear systems perform better for nonlinear processes. Since the linear systems take much less time to be developed, they should be tried prior to using the nonlinear systems when the linearity properties of the time series process are unknown.
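
    Of the linear methods compared, the Widrow-Hoff (LMS) adaptive filter is the simplest to sketch: it nudges its weights toward the one-step prediction error at every sample. A minimal NumPy version, with made-up test data:

        import numpy as np

        def lms_predict(x, order=4, mu=0.01):
            """One-step-ahead LMS prediction; returns the prediction error series."""
            w = np.zeros(order)
            errors = []
            for n in range(order, x.size):
                u = x[n - order:n][::-1]      # most recent samples first
                e = x[n] - w @ u              # prediction error
                w += 2 * mu * e * u           # Widrow-Hoff weight update
                errors.append(e)
            return np.array(errors)

        rng = np.random.default_rng(6)
        x = np.sin(0.2 * np.arange(1000)) + 0.1 * rng.standard_normal(1000)
        e = lms_predict(x)
        print(np.mean(e[:100] ** 2), np.mean(e[-100:] ** 2))  # error shrinks as it adapts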

  13. Effectiveness of Multivariate Time Series Classification Using Shapelets

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    Typically, time series classifiers require signal pre-processing (filtering signals from noise, artifact removal, etc.), enhancement of signal features (amplitude, frequency, spectrum, etc.), and classification of signal features in space using classical techniques and classification algorithms for multivariate data. We consider a method of classifying time series which does not require enhancement of the signal features. The method uses shapelets of time series (time series shapelets), i.e., small fragments of a series that best reflect the properties of one of its classes. Despite the significant number of publications on the theory and applications of shapelets for the classification of time series, the task of evaluating the effectiveness of this technique remains relevant. The objective of this publication is to study the effectiveness of a number of modifications of the original shapelet method as applied to multivariate series classification, a little-studied problem. The paper presents the problem statement of multivariate time series classification using shapelets and describes the basic shapelet-based method of binary classification, as well as various generalizations and a proposed modification of the method. It also presents software that implements the modified method and the results of computational experiments confirming the effectiveness of the algorithmic and software solutions. The paper shows that the modified method and its software allow a classification accuracy of about 85% to be reached, at best. The shapelet search time increases in proportion to the input data dimension.
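
    The primitive underlying all the variants discussed is the distance from a series to a shapelet, i.e., the minimum distance over all alignments of the shapelet along the series. A minimal sketch with synthetic data:

        import numpy as np

        def shapelet_distance(x, shapelet):
            """Minimum Euclidean distance between the shapelet and any window of x."""
            m = shapelet.size
            return min(np.linalg.norm(x[i:i + m] - shapelet)
                       for i in range(x.size - m + 1))

        rng = np.random.default_rng(8)
        bump = np.exp(-np.linspace(-3, 3, 20) ** 2)     # candidate shapelet
        x_pos = rng.standard_normal(200); x_pos[90:110] += 3 * bump
        x_neg = rng.standard_normal(200)

        # Series containing the pattern sit much closer to the shapelet, so a
        # simple distance threshold separates the two classes.
        print(shapelet_distance(x_pos, 3 * bump), shapelet_distance(x_neg, 3 * bump))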

  14. Time-series-analysis techniques applied to nuclear-material accounting

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-05-01

    This document is designed to introduce the reader to the applications of Time Series Analysis techniques to Nuclear Material Accountability data. Time series analysis techniques are designed to extract information from a collection of random variables ordered by time by seeking to identify any trends, patterns, or other structure in the series. Since nuclear material accountability data is a time series, one can extract more information using time series analysis techniques than by using other statistical techniques. Specifically, the objective of this document is to examine the applicability of time series analysis techniques to enhance loss detection of special nuclear materials. An introductory section examines the current industry approach which utilizes inventory differences. The error structure of inventory differences is presented. Time series analysis techniques discussed include the Shewhart Control Chart, the Cumulative Summation of Inventory Differences Statistics (CUSUM) and the Kalman Filter and Linear Smoother
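
    Of the techniques listed, the CUSUM statistic is the most compact to illustrate: it accumulates deviations of the inventory differences from their in-control mean and alarms when the running sum drifts past a decision limit. A minimal one-sided sketch (parameters are illustrative, not from the report):

        import numpy as np

        def cusum(x, target=0.0, slack=0.5, limit=5.0):
            """One-sided upper CUSUM; returns the index of the first alarm, or None."""
            s = 0.0
            for i, v in enumerate(x):
                s = max(0.0, s + (v - target - slack))
                if s > limit:
                    return i
            return None

        rng = np.random.default_rng(10)
        idiffs = rng.standard_normal(200)      # in-control inventory differences
        idiffs[120:] += 1.0                    # small persistent loss from period 120
        print(cusum(idiffs))                   # alarms shortly after the shift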

  15. Clinical and epidemiological rounds. Time series

    Directory of Open Access Journals (Sweden)

    León-Álvarez, Alba Luz

    2016-07-01

    Analysis of time series is a technique that involves the study of individuals or groups observed at successive moments in time. This type of analysis allows the study of potential causal relationships between different variables that change over time and relate to each other. It is the most important technique for making inferences about the future, predicting on the basis of what has happened in the past, and it is applied in different disciplines of knowledge. Here we discuss different components of time series, the analysis technique and specific examples in health research.

  16. Market efficiency in foreign exchange markets

    Science.gov (United States)

    Oh, Gabjin; Kim, Seunghwan; Eom, Cheoljun

    2007-08-01

    We investigate the relative market efficiency in financial market data, using the approximate entropy (ApEn) method for a quantification of randomness in time series. We used the global foreign exchange market indices for 17 countries during two periods from 1984 to 1998 and from 1999 to 2004 in order to study the efficiency of various foreign exchange markets around the market crisis. We found that on average, the ApEn values for European and North American foreign exchange markets are larger than those for African and Asian ones except Japan. We also found that the ApEn for Asian markets increased significantly after the Asian currency crisis. Our results suggest that the markets with a larger liquidity such as European and North American foreign exchange markets have a higher market efficiency than those with a smaller liquidity such as the African and Asian markets except Japan.
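
    Approximate entropy has a standard short implementation; the sketch below follows Pincus' definition (m is the embedding dimension, r the tolerance) and is illustrative rather than the paper's code:

        import numpy as np

        def apen(x, m=2, r=None):
            """Approximate entropy of a 1-D series; higher means more random."""
            if r is None:
                r = 0.2 * np.std(x)

            def phi(m):
                n = x.size - m + 1
                windows = np.array([x[i:i + m] for i in range(n)])
                # Chebyshev distance between all pairs of windows
                d = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=2)
                c = (d <= r).mean(axis=1)      # match fractions, self-matches included
                return np.mean(np.log(c))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(12)
        print(apen(rng.standard_normal(500)))        # high: nearly random
        print(apen(np.sin(0.1 * np.arange(500))))    # low: regular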

  17. Integer-valued time series

    NARCIS (Netherlands)

    van den Akker, R.

    2007-01-01

    This thesis adresses statistical problems in econometrics. The first part contributes statistical methodology for nonnegative integer-valued time series. The second part of this thesis discusses semiparametric estimation in copula models and develops semiparametric lower bounds for a large class of

  18. Robust Forecasting of Non-Stationary Time Series

    NARCIS (Netherlands)

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable

  19. New Real-Time Market Facilitating Demand-Side Resources for System Balancing

    DEFF Research Database (Denmark)

    Feng, Donghan; Nyeng, Preben; Xie, Jun

    2011-01-01

    Many demand side resources have the potential to provide fast and low cost balancing services. Switching these devices on and off can be executed in seconds and have limited consequences for the customers if the duration is not long. With carefully designed market rules, tens of thousands...... the participation of demand-side resources. In light of the future environment of increasing intermittent renewable power and distributed energy/storage resources, stochastic time-series and Monte-Carlo simulation are used to analyze the relationship between balancing requirement and generation/demand uncertainties...

  20. Investigating the Nonlinear Dynamics of Emerging and Developed Stock Markets

    Directory of Open Access Journals (Sweden)

    K. Guhathakurta

    2015-01-01

    Financial time series have long been of interest to statisticians and financial experts. Understanding the characteristic features of a financial time series has posed some difficulties because of its quasi-periodic nature. Linear statistics can be applied to a periodic time series, but since financial time series are non-linear and non-stationary, analysis of their quasi-periodic characteristics is not entirely possible with linear statistics. Thus, the study of financial stock market series remains a complex task with its own specific requirements. In this paper, keeping in mind the recent trends and developments in financial time series studies, we want to establish whether any significant relationship exists between the trading behavior of developing and developed markets. The study is conducted to draw conclusions on similarities or differences between developing economies, developed economies, and developing-developed economy pairs. We take the leading stock market indices of those markets for the past 15 years to conduct the study. First, we draw the probability distribution of each dataset to see if any graphical similarity exists. Then we apply quantitative techniques to test certain hypotheses. We then implement the Ensemble Empirical Mode Decomposition technique to draw out the amplitude and phase of movement of the index values in each data set, for comparison at a granular level of detail. Our findings lead us to conclude that the nonlinear dynamics of emerging markets and developed markets are not significantly different. This could mean that increasing cross-market trading and the involvement of global investment have narrowed the gap between emerging and developed markets. From a nonlinear dynamics perspective, we find no reason to distinguish markets into emerging and developed any more.

  1. Characterizing time series via complexity-entropy curves

    Science.gov (United States)

    Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.

    2017-06-01

    The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy by a monoparametric entropy (Tsallis q-entropy) and after considering the proper generalization of the statistical complexity (q-complexity), we build up a parametric curve (the q-complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.

  2. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counter-intuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus these series cannot be the output of a random process unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.

  3. A Different Statistic for the Management of Portfolios - the Hurst Exponent: Persistent, Antipersistent or Random Time Series?

    Directory of Open Access Journals (Sweden)

    Ana-Maria CALOMFIR (METESCU)

    2015-12-01

    In recent years, research in the capital markets and the management of portfolios has been producing more questions than it has been answering: the need for a new paradigm, or a new way of looking at things, has become more and more evident. The existing, classical view of capital markets, based on the efficient market hypothesis, has had a settled theory for the last six decades, but it is still not capable of significantly increasing our understanding of how capital markets function. The purpose of this article is to theoretically describe a less-used statistical coefficient which has a vast area of applicability due to its robustness and can easily separate a random series from a non-random one, even if the random series is non-Gaussian: the Hurst exponent.
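
    The classical rescaled-range (R/S) estimate of the Hurst exponent described in the article can be sketched as follows; slopes near 0.5 indicate a random series, above 0.5 a persistent one, and below 0.5 an antipersistent one:

        import numpy as np

        def hurst_rs(x, window_sizes):
            """Hurst exponent via the rescaled-range (R/S) statistic."""
            rs = []
            for w in window_sizes:
                vals = []
                for start in range(0, x.size - w + 1, w):
                    seg = x[start:start + w]
                    dev = np.cumsum(seg - seg.mean())
                    r = dev.max() - dev.min()    # range of cumulative deviations
                    s = seg.std()
                    if s > 0:
                        vals.append(r / s)
                rs.append(np.mean(vals))
            # E[R/S] ~ c * w**H: the log-log slope estimates H
            return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

        rng = np.random.default_rng(13)
        print(hurst_rs(rng.standard_normal(8000), [32, 64, 128, 256, 512]))  # ~0.5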

  4. Complex network approach to fractional time series

    Energy Technology Data Exchange (ETDEWEB)

    Manshour, Pouya [Physics Department, Persian Gulf University, Bushehr 75169 (Iran, Islamic Republic of)]

    2015-10-15

    In order to extract correlation information inherited in stochastic time series, the visibility graph algorithm has recently been proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one for studying the correlation aspects of a time series. We then employ the horizontal visibility algorithm, a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with a Hurst-dependent fitting parameter. Further, we take into account other topological properties such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
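
    The horizontal visibility rule is simple to state: two observations i < j are linked when every intermediate value lies strictly below both. A direct O(n^2) sketch returning the degree sequence, from which a degree distribution can be fitted:

        import numpy as np

        def horizontal_visibility_degrees(x):
            """Node degrees of the horizontal visibility graph of series x."""
            n = x.size
            degree = np.zeros(n, dtype=int)
            for i in range(n):
                for j in range(i + 1, n):
                    if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                        degree[i] += 1
                        degree[j] += 1
            return degree

        rng = np.random.default_rng(14)
        deg = horizontal_visibility_degrees(rng.standard_normal(300))
        # For i.i.d. noise the HVG degree distribution decays exponentially.
        print(np.bincount(deg))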

  5. Introduction to time series analysis and forecasting

    CERN Document Server

    Montgomery, Douglas C; Kulahci, Murat

    2008-01-01

    An accessible introduction to the most current thinking in and practicality of forecasting techniques in the context of time-oriented data. Analyzing time-oriented data and forecasting are among the most important problems that analysts face across many fields, ranging from finance and economics to production operations and the natural sciences. As a result, there is a widespread need for large groups of people in a variety of fields to understand the basic concepts of time series analysis and forecasting. Introduction to Time Series Analysis and Forecasting presents the time series analysis branch of applied statistics as the underlying methodology for developing practical forecasts, and it also bridges the gap between theory and practice by equipping readers with the tools needed to analyze time-oriented data and construct useful, short- to medium-term, statistically based forecasts.

  6. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.

    Science.gov (United States)

    Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online processing patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. Moreover, our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW determines an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to several existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and are better than all the competitors in terms of clustering accuracy.
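
    The DTW distance at the core of both algorithms is a standard dynamic program; a minimal sketch follows (the incremental medoid updates and the chunked Single-Pass/Online logic of the paper are not reproduced here).

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic Time Warping distance between two 1-D series.

        Classic O(len(a) * len(b)) dynamic program; the cost is the
        squared pointwise difference accumulated along the optimal
        warping path (a common Euclidean-flavored variant).
        """
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return np.sqrt(D[n, m])

    # Shapes that match up to a shift yield a small distance:
    print(dtw_distance([0, 1, 2, 1], [0, 0, 1, 2, 1]))
    ```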

  8. The foundations of modern time series analysis

    CERN Document Server

    Mills, Terence C

    2011-01-01

    This book develops the analysis of time series from its formal beginnings in the 1890s through to Box and Jenkins' watershed publication in 1970, showing how these methods laid the foundations for the modern techniques of time series analysis that are in use today.

  9. Time series clustering in large data sets

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2011-01-01

    Full Text Available The clustering of time series is a widely researched area. There are many methods for dealing with this task. We are using the Self-Organizing Map (SOM) with an unsupervised learning algorithm for the clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009) it seems that the whole concept of the clustering algorithm is correct but that we have to perform time series clustering on a much larger dataset to obtain more accurate results and to find the correlation between configured parameters and results more precisely. The second requirement arose from the need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again. There are many recordings to use in digital libraries, and many interesting features and patterns can be found in this area. In this experiment we are searching for recordings with a similar development of information density. This can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results obtained with different parameters of the feature vectors and of the SOM itself. We describe the time series in a simplistic way, evaluating standard deviations for separate parts of the recordings. The resulting feature vectors are clustered with the SOM in batch training mode, with different topologies varying from a few neurons to large maps. Other algorithms usable for finding similarities between time series are discussed, and finally conclusions for further research are presented. We also present an overview of the related current literature and projects.

  10. Designing carbon markets. Part I: Carbon markets in time

    International Nuclear Information System (INIS)

    Fankhauser, Samuel; Hepburn, Cameron

    2010-01-01

    This paper analyses the design of carbon markets in time (i.e., intertemporally). It is part of a twin set of papers that ask, starting from first principles, what an optimal global carbon market would look like by around 2030. Our focus is on firm-level cap-and-trade systems, although much of what we say would also apply to government-level trading and carbon offset schemes. We examine the 'first principles' of temporal design that would help to maximise flexibility and to minimise costs, including banking and borrowing and other mechanisms to provide greater carbon price predictability and credibility over time.

  11. Time-zero efficiency of European power derivatives markets

    International Nuclear Information System (INIS)

    Peña, Juan Ignacio; Rodriguez, Rosa

    2016-01-01

    We study the time-zero efficiency of electricity derivatives markets. By time-zero efficiency we mean that a sequence of prices of derivatives contracts having the same underlying asset but different times to maturity complies with a set of efficiency conditions that prevent profitable time-zero arbitrage opportunities. We investigate whether statistical tests, based on the law of one price, and trading rules, based on price differentials and no-arbitrage violations, are useful for assessing time-zero efficiency. We apply the tests and trading rules to daily data from three European power markets: Germany, France and Spain. In the case of the German market, after considering liquidity availability and transaction costs, results are not inconsistent with time-zero efficiency. However, in the case of the French and Spanish markets, limitations in liquidity and representativeness are challenges that prevent definite conclusions. Liquidity in the French and Spanish markets should be improved by using pricing and marketing incentives. These incentives should attract more participants into the electricity derivatives exchanges and should encourage them to settle OTC trades in clearinghouses. Publication of statistics on prices, volumes and open interest per type of participant should be promoted. - Highlights: •We test time-zero efficiency of derivatives power markets in Germany, France and Spain. •Prices in Germany, considering liquidity and transaction costs, are time-zero efficient. •In France and Spain, limitations in liquidity and representativeness prevent conclusions. •Liquidity in France and Spain should improve by using pricing and marketing incentives. •Incentives attract participants to exchanges and encourage them to settle OTC trades in clearinghouses.

  12. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    Science.gov (United States)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period through a sliding window of a short period, the change of the linear regression parameters becomes a process of dynamic transmission over time. We present a simple and efficient computational scheme, a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped onto timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
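
    A minimal sketch of the pattern-transmission idea, assuming an illustrative discretization of patterns by slope sign and significance; the paper's exact parameter intervals and window sizes are not reproduced.

    ```python
    from collections import Counter

    from scipy import stats

    def pattern_transmission_network(x, y, window=30):
        """Slide a window over two series, classify each window's
        regression into a coarse pattern, and count transitions between
        consecutive patterns as weighted directed edges.

        The (slope sign, significance) labels are an illustrative
        discretization standing in for the paper's parameter intervals.
        """
        edges = Counter()
        prev = None
        for start in range(len(x) - window + 1):
            xs, ys = x[start:start + window], y[start:start + window]
            slope, _, _, pvalue, _ = stats.linregress(xs, ys)
            label = ("pos" if slope > 0 else "neg",
                     "significant" if pvalue < 0.05 else "ns")
            if prev is not None:
                edges[(prev, label)] += 1  # edge weight = transmission frequency
            prev = label
        return edges
    ```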

  13. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  14. Electricity market price volatility: The case of Ontario

    International Nuclear Information System (INIS)

    Zareipour, Hamidreza; Bhattacharya, Kankar; Canizares, Claudio A.

    2007-01-01

    Price volatility analysis has been reported in the literature for most competitive electricity markets around the world. However, no studies have yet been published that quantify price volatility in the Ontario electricity market, which is the focus of the present paper. In this paper, a comparative volatility analysis is conducted for the Ontario market and its neighboring electricity markets. Volatility indices are developed based on the concepts of historical volatility and price velocity, previously applied to other electricity market prices, and employed in the present work. The analysis is carried out in two scenarios: in the first scenario, the volatility indices are determined for the entire price time series. In the second scenario, the price time series is broken up into 24 time series, one for each of the 24 hours, and volatility indices are calculated for each specific hour separately. The volatility indices are also applied to the locational marginal prices of several pricing points in the New England, New York, and PJM electricity markets. The outcomes reveal that price volatility is significantly higher in Ontario than in the three studied neighboring electricity markets. Furthermore, comparison of the results of this study with similar findings previously published for 15 other electricity markets demonstrates that the Ontario electricity market is one of the most volatile electricity markets worldwide. This high volatility is argued to be associated with the fact that Ontario is a single-settlement, real-time market.
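
    The two volatility concepts named above can be sketched as simple rolling statistics; the paper's exact index definitions are not reproduced, so treat these as illustrative hourly proxies (and note that log returns assume strictly positive prices, which electricity markets do not always guarantee).

    ```python
    import numpy as np

    def historical_volatility(prices, window=24):
        """Rolling historical volatility: standard deviation of log
        returns over a window (here 24 hourly prices).

        Assumes strictly positive prices; negative or zero electricity
        prices would require a different return definition.
        """
        logret = np.diff(np.log(prices))
        return np.array([logret[i:i + window].std(ddof=1)
                         for i in range(len(logret) - window + 1)])

    def price_velocity(prices):
        """One simple 'price velocity' proxy: mean absolute price change
        per time step (the paper's exact index may differ)."""
        return np.mean(np.abs(np.diff(prices)))
    ```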

  15. Real-time Pricing in Power Markets

    DEFF Research Database (Denmark)

    Boom, Anette; Schwenen, Sebastian

    We examine welfare effects of real-time pricing in electricity markets. Before stochastic energy demand is known, competitive retailers contract with final consumers who exogenously do not have real-time meters. After demand is realized, two electricity generators compete in a uniform price auction... to satisfy demand from retailers acting on behalf of subscribed customers and from consumers with real-time meters. Increasing the number of consumers on real-time pricing does not always increase welfare since risk-averse consumers dislike uncertain and high prices arising through market power...

  17. Time-series prediction and applications a machine intelligence approach

    CERN Document Server

    Konar, Amit

    2017-01-01

    This book presents machine learning and type-2 fuzzy sets for the prediction of time series, with a particular focus on business forecasting applications. It also proposes new uncertainty management techniques for economic time series, using type-2 fuzzy sets to predict the time series at a given time point from its preceding value in fluctuating business environments. It employs machine learning to determine repetitively occurring similar structural patterns in the time series and uses a stochastic automaton to predict the most probable structure at a given partition of the time series. Such predictions help in determining probabilistic moves in a stock index time series. Primarily written for graduate students and researchers in computer science, the book is equally useful for researchers/professionals in business intelligence and stock index prediction. A background of undergraduate-level mathematics is presumed, although not mandatory, for most of the sections. Exercises with tips are provided at...

  18. A Time Series Forecasting Method

    Directory of Open Access Journals (Sweden)

    Wang Zhao-Yu

    2017-01-01

    Full Text Available This paper proposes a novel time series forecasting method based on a weighted self-constructing clustering technique. The weighted self-constructing clustering processes all the data patterns incrementally. If a data pattern is not similar enough to any existing cluster, it forms a new cluster of its own. However, if a data pattern is similar enough to an existing cluster, it is removed from the cluster it currently belongs to and added to the most similar cluster. During the clustering process, weights are learned for each cluster. Given a series of time-stamped data up to time t, we divide it into a set of training patterns. Using the weighted self-constructing clustering, the training patterns are grouped into a set of clusters. To estimate the value at time t + 1, we find the k nearest neighbors of the input pattern and use these k neighbors to form the estimate. Experimental results demonstrate the effectiveness of the proposed approach.
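
    A compact sketch of the final estimation step, with plain lagged windows standing in for the paper's weighted self-constructing clusters; the function name and parameters are illustrative.

    ```python
    import numpy as np

    def knn_forecast(series, lag=5, k=3):
        """Forecast the next value of a series by k-nearest-neighbour
        matching on lagged windows (a simple stand-in for the paper's
        cluster-based estimation)."""
        X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
        y = np.array(series[lag:])
        query = np.asarray(series[-lag:])          # most recent window
        dist = np.linalg.norm(X - query, axis=1)   # distance to history
        nearest = np.argsort(dist)[:k]
        return y[nearest].mean()                   # average the k successors
    ```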

  19. Multifractals embedded in short time series: An unbiased estimation of probability moment

    Science.gov (United States)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method, called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it provides an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series of the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors over a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance that is the focus of the present paper, the proposed factorial moment of continuous order can find various other uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.

  20. Stochastic nature of series of waiting times

    Science.gov (United States)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

    Although fluctuations in waiting time series have been studied for a long time, some important issues, such as their long-range memory and their stochastic features in the presence of nonstationarity, have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations, with Hurst exponents belonging to the interval 1/2 < H < 1. We find that the logarithmic difference of the waiting times series has short-range correlation, and we then study its stochastic nature using the Markovian method and determine the corresponding Kramers-Moyal coefficients. As an example, we analyze the velocity fluctuations in high Reynolds number turbulence and determine the level dependence of the Markov time scales, as well as the drift and diffusion coefficients. We show that the waiting time distributions exhibit power-law tails, and we were able to model the distribution with a continuous time random walk.

  1. Efficient Approximate OLAP Querying Over Time Series

    DEFF Research Database (Denmark)

    Perera, Kasun Baruhupolage Don Kasun Sanjeewa; Hahmann, Martin; Lehner, Wolfgang

    2016-01-01

    The ongoing trend for data gathering not only produces larger volumes of data, but also increases the variety of recorded data types. Out of these, especially time series, e.g. various sensor readings, have attracted attention in the domains of business intelligence and decision making. As OLAP...... queries play a major role in these domains, it is desirable to also execute them on time series data. While this is not a problem on the conceptual level, it can become a bottleneck with regards to query run-time. In general, processing OLAP queries gets more computationally intensive as the volume...... of data grows. This is a particular problem when querying time series data, which generally contains multiple measures recorded at fine time granularities. Usually, this issue is addressed either by scaling up hardware or by employing workload based query optimization techniques. However, these solutions...

  2. Timing Foreign Exchange Markets

    Directory of Open Access Journals (Sweden)

    Samuel W. Malone

    2016-03-01

    Full Text Available To improve short-horizon exchange rate forecasts, we employ foreign exchange market risk factors as fundamentals, and Bayesian treed Gaussian process (BTGP models to handle non-linear, time-varying relationships between these fundamentals and exchange rates. Forecasts from the BTGP model conditional on the carry and dollar factors dominate random walk forecasts on accuracy and economic criteria in the Meese-Rogoff setting. Superior market timing ability for large moves, more than directional accuracy, drives the BTGP’s success. We explain how, through a model averaging Monte Carlo scheme, the BTGP is able to simultaneously exploit smoothness and rough breaks in between-variable dynamics. Either feature in isolation is unable to consistently outperform benchmarks throughout the full span of time in our forecasting exercises. Trading strategies based on ex ante BTGP forecasts deliver the highest out-of-sample risk-adjusted returns for the median currency, as well as for both predictable, traded risk factors.

  3. A Dynamic Fuzzy Cluster Algorithm for Time Series

    Directory of Open Access Journals (Sweden)

    Min Ji

    2013-01-01

    Full Text Available This paper proposes a dynamic fuzzy cluster algorithm for clustering time series, introducing the definition of key points and improving the FCM algorithm. The proposed algorithm works by determining those time series whose class labels are vague and further partitioning them into different clusters over time. The main advantage of this approach compared with other existing algorithms is that the property of some time series belonging to different clusters over time can be partially revealed. Results from simulation-based experiments on geographical data demonstrate excellent performance, and the desired results have been obtained. The proposed algorithm can be applied to solve other clustering problems in data mining.

  4. Unraveling chaotic attractors by complex networks and measurements of stock market complexity

    International Nuclear Information System (INIS)

    Cao, Hongduo; Li, Ying

    2014-01-01

    We present a novel method for measuring the complexity of a time series by unraveling a chaotic attractor modeled on complex networks. The complexity index R, which can potentially be exploited for prediction, has a similar meaning to the Kolmogorov complexity (calculated from the Lempel–Ziv complexity), and is an appropriate measure of a series' complexity. The proposed method is used to research the complexity of the world's major capital markets. None of these markets are completely random, and they have different degrees of complexity, both over the entire length of their time series and at a level of detail. However, developing markets differ significantly from mature markets. Specifically, the complexity of mature stock markets is stronger and more stable over time, whereas developing markets exhibit relatively low and unstable complexity over certain time periods, implying a stronger long-term price memory process.

  6. A novel weight determination method for time series data aggregation

    Science.gov (United States)

    Xu, Paiheng; Zhang, Rong; Deng, Yong

    2017-09-01

    Aggregation in time series is of great importance in time series smoothing, prediction and other time series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced into the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
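
    A rough sketch of how degree-based (VGA-style) and time-decay (IOWA-style) weights might be linearly combined; the mixing coefficient and the exponential decay schedule are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def visibility_degrees(x):
        """Degree of each point in the (natural) visibility graph."""
        n = len(x)
        deg = np.zeros(n, dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                # natural visibility: the line from (i, x[i]) to (j, x[j])
                # must pass above every intermediate point
                if all(x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                       for k in range(i + 1, j)):
                    deg[i] += 1
                    deg[j] += 1
        return deg

    def combined_weights(x, alpha=0.5, decay=0.9):
        """Linear combination of degree-based (VGA-style) weights and
        exponential time-decay (IOWA-style) weights; alpha and decay
        are illustrative, not the paper's learned values."""
        n = len(x)
        w_vga = visibility_degrees(x).astype(float)
        w_vga /= w_vga.sum()
        w_iowa = decay ** np.arange(n - 1, -1, -1)  # newer points weigh more
        w_iowa /= w_iowa.sum()
        return alpha * w_iowa + (1 - alpha) * w_vga

    # Aggregated value of a series x: np.dot(combined_weights(x), x)
    ```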

  7. Foundations of Sequence-to-Sequence Modeling for Time Series

    OpenAIRE

    Kuznetsov, Vitaly; Mariet, Zelda

    2018-01-01

    The availability of large amounts of time series data, paired with the performance of deep-learning algorithms on a broad class of problems, has recently led to significant interest in the use of sequence-to-sequence models for time series forecasting. We provide the first theoretical analysis of this time series forecasting framework. We include a comparison of sequence-to-sequence modeling to classical time series models, and as such our theory can serve as a quantitative guide for practiti...

  8. Climate Prediction Center (CPC) Global Precipitation Time Series

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The global precipitation time series provides time series charts showing observations of daily precipitation as well as accumulated precipitation compared to normal...

  9. Climate Prediction Center (CPC) Global Temperature Time Series

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The global temperature time series provides time series charts using station based observations of daily temperature. These charts provide information about the...

  10. Relating interesting quantitative time series patterns with text events and text features

    Science.gov (United States)

    Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.

    2013-12-01

    In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features like the degree of sentence nesting, noun phrase complexity, the vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies, showing the effectiveness of our combined quantitative and textual analysis work flow. The workflow can also be generalized to other

  11. Recurrent Neural Network Applications for Astronomical Time Series

    Science.gov (United States)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations, using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set hyperparameters correctly for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.

  12. Transition Icons for Time-Series Visualization and Exploratory Analysis.

    Science.gov (United States)

    Nickerson, Paul V; Baharloo, Raheleh; Wanigatunga, Amal A; Manini, Todd M; Tighe, Patrick J; Rashidi, Parisa

    2018-03-01

    The modern healthcare landscape has seen the rapid emergence of techniques and devices that temporally monitor and record physiological signals. The prevalence of time-series data within the healthcare field necessitates the development of methods that can analyze the data in order to draw meaningful conclusions. Time-series behavior is notoriously difficult to intuitively understand due to its intrinsic high-dimensionality, which is compounded in the case of analyzing groups of time series collected from different patients. Our framework, which we call transition icons, renders common patterns in a visual format useful for understanding the shared behavior within groups of time series. Transition icons are adept at detecting and displaying subtle differences and similarities, e.g., between measurements taken from patients receiving different treatment strategies or stratified by demographics. We introduce various methods that collectively allow for exploratory analysis of groups of time series, while being free of distribution assumptions and including simple heuristics for parameter determination. Our technique extracts discrete transition patterns from symbolic aggregate approXimation representations, and compiles transition frequencies into a bag of patterns constructed for each group. These transition frequencies are normalized and aligned in icon form to intuitively display the underlying patterns. We demonstrate the transition icon technique for two time-series datasets: postoperative pain scores, and hip-worn accelerometer activity counts. We believe transition icons can be an important tool for researchers approaching time-series data, as they give rich and intuitive information about collective time-series behaviors.
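
    A minimal sketch of the underlying pipeline, from SAX symbols to raw transition counts; icon rendering, normalization and alignment are omitted, and the parameter choices are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def sax_symbols(x, segments=8, alphabet=4):
        """Symbolic Aggregate approXimation: z-normalize, reduce with a
        piecewise aggregate approximation (PAA), then discretize using
        equiprobable Gaussian breakpoints."""
        x = np.asarray(x, float)
        x = (x - x.mean()) / x.std()
        paa = x[: len(x) // segments * segments].reshape(segments, -1).mean(axis=1)
        breakpoints = norm.ppf(np.linspace(0, 1, alphabet + 1)[1:-1])
        return np.searchsorted(breakpoints, paa)

    def transition_counts(symbols, alphabet=4):
        """Bag of one-step transition frequencies: the raw material of a
        transition icon (normalization and icon layout are omitted)."""
        counts = np.zeros((alphabet, alphabet))
        for a, b in zip(symbols[:-1], symbols[1:]):
            counts[a, b] += 1
        return counts
    ```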

  13. Multifractal analysis of visibility graph-based Ito-related connectivity time series.

    Science.gov (United States)

    Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano

    2016-02-01

    In this study, we investigate the multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by Ito equations with multiplicative power-law noise. We show that the multifractality of the connectivity time series (i.e., the series of the numbers of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure which, in connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in input time series.

  14. Estimating serial correlation and self-similarity in financial time series-A diversification approach with applications to high frequency data

    Science.gov (United States)

    Gerlich, Nikolas; Rostek, Stefan

    2015-09-01

    We derive a heuristic method to estimate the degree of self-similarity and serial correlation in financial time series. In particular, we advocate the use of a tailor-made selection of different estimation techniques that are used in various fields of time series analysis but until now have not consistently found their way into the finance literature. Following the idea of portfolio diversification, we show that considerable improvements with respect to robustness and unbiasedness can be achieved by using a basket of estimation methods. With this methodological toolbox at hand, we investigate real market data and show that noticeable deviations from the assumptions of constant self-similarity and absence of serial correlation occur during certain periods. On the one hand, this may shed new light on seemingly ambiguous scientific findings concerning the serial correlation of financial time series. On the other hand, a proven time-changing degree of self-similarity may help to explain high-volatility clusters in stock price indices.
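
    Two estimators such a basket might contain, sketched in Python; the paper's actual selection is larger, and the block sizes and plain averaging here are illustrative. Both assume a reasonably long series (a few hundred points or more).

    ```python
    import numpy as np

    def hurst_rs(x):
        """Rescaled-range (R/S) estimate of the Hurst exponent."""
        x = np.asarray(x, float)
        sizes = np.unique(np.logspace(3, np.log2(len(x) // 4), 8, base=2).astype(int))
        rs = []
        for s in sizes:
            blocks = x[: len(x) // s * s].reshape(-1, s)
            z = np.cumsum(blocks - blocks.mean(axis=1, keepdims=True), axis=1)
            rng = z.max(axis=1) - z.min(axis=1)
            rs.append(np.mean(rng / blocks.std(axis=1, ddof=1)))
        return np.polyfit(np.log(sizes), np.log(rs), 1)[0]  # log-log slope ~ H

    def hurst_aggvar(x):
        """Aggregated-variance estimate: Var(block means) ~ m**(2H - 2)."""
        x = np.asarray(x, float)
        sizes = np.unique(np.logspace(2, np.log2(len(x) // 4), 8, base=2).astype(int))
        v = [x[: len(x) // m * m].reshape(-1, m).mean(axis=1).var(ddof=1)
             for m in sizes]
        slope = np.polyfit(np.log(sizes), np.log(v), 1)[0]
        return 1 + slope / 2

    def hurst_basket(x):
        """'Diversified' estimate: average several estimators."""
        return np.mean([hurst_rs(x), hurst_aggvar(x)])
    ```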

  15. Mathematical foundations of time series analysis a concise introduction

    CERN Document Server

    Beran, Jan

    2017-01-01

    This book provides a concise introduction to the mathematical foundations of time series analysis, with an emphasis on mathematical clarity. The text is reduced to the essential logical core, mostly using the symbolic language of mathematics, thus enabling readers to very quickly grasp the essential reasoning behind time series analysis. It appeals to anybody wanting to understand time series in a precise, mathematical manner. It is suitable for graduate courses in time series analysis but is equally useful as a reference work for students and researchers alike.

  16. Time series analysis in the social sciences the fundamentals

    CERN Document Server

    Shin, Youseop

    2017-01-01

    Times Series Analysis in the Social Sciences is a practical and highly readable introduction written exclusively for students and researchers whose mathematical background is limited to basic algebra. The book focuses on fundamental elements of time series analysis that social scientists need to understand so they can employ time series analysis for their research and practice. Through step-by-step explanations and using monthly violent crime rates as case studies, this book explains univariate time series from the preliminary visual analysis through the modeling of seasonality, trends, and re

  17. Data imputation analysis for Cosmic Rays time series

    Science.gov (United States)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since loss of data is due to mechanical and human failure or technical problems and the different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to depict the observational dataset. The study used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% missing data compared to the observed ROME series, with 50 replicates. The CLMX station was then used as a proxy for allocating these scenarios. Three different methods for monthly dataset imputation were selected: AMELIA II, which runs the bootstrap Expectation Maximization algorithm; MICE, which runs an algorithm via Multivariate Imputation by Chained Equations; and MTSDI, an Expectation Maximization algorithm-based method for imputation of missing values in multivariate normal time series. The synthetic time series compared with the observed ROME series were also evaluated using several skill measures, such as RMSE, NRMSE, the Agreement Index, R, R2, the F-test and the t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. It was observed that increases in the number of gaps generate loss of quality in the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest a limit of about 60% missing data for imputation of monthly averages, but no more than this. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed the reconstruction of 43 time series.

  18. Linking market interaction intensity of 3D Ising type financial model with market volatility

    Science.gov (United States)

    Fang, Wen; Ke, Jinchuan; Wang, Jun; Feng, Ling

    2016-11-01

    Microscopic interaction models in physics have been used to investigate the complex phenomena of economic systems. The simple interactions involved can lead to complex behaviors and help the understanding of mechanisms in the financial market at a systemic level. This article aims to develop a financial time series model through a 3D (three-dimensional) Ising dynamic system, which is widely used as an interacting-spins model to explain ferromagnetism in physics. Through Monte Carlo simulations of the financial model and numerical analysis of both the simulated return time series and the historical return data of the Hushen 300 (HS300) index in the Chinese stock market, we show that despite its simplicity, this model displays stylized facts similar to those seen in real financial markets. We demonstrate a possible underlying link between the volatility fluctuations of the real stock market and the change in interaction strengths of market participants in the financial model. In particular, the stochastic interaction strength in our model suggests that the real market may be consistently operating near the critical point of the system.

  19. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
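
    A sketch of the block-wise idea using NumPy's Chebyshev utilities; note that `chebfit` is a least-squares fit, so the near-min-max behavior described above would come from the flight algorithm's own fitting procedure, which is not reproduced here.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def compress_block(block, degree=8):
        """Fit a Chebyshev series of the given degree to one block
        (fitting interval); the coefficients are the compressed form."""
        t = np.linspace(-1.0, 1.0, len(block))  # map the block onto [-1, 1]
        return C.chebfit(t, block, degree)

    def decompress_block(coeffs, n):
        """Evaluate the stored Chebyshev series on n points."""
        return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

    # e.g. 256 samples -> 9 coefficients, a large (lossy) reduction
    block = np.sin(np.linspace(0, 4, 256)) + 0.01 * np.random.randn(256)
    coeffs = compress_block(block)
    recon = decompress_block(coeffs, len(block))
    ```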

  20. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.

  1. Forecasting electric vehicles sales with univariate and multivariate time series models: The case of China.

    Science.gov (United States)

    Zhang, Yong; Zhong, Miner; Geng, Nana; Jiang, Yunjian

    2017-01-01

    The market demand for electric vehicles (EVs) has increased in recent years. Suitable models are necessary to understand and forecast EV sales. This study presents singular spectrum analysis (SSA) as a univariate time-series model and the vector autoregressive model (VAR) as a multivariate model. Empirical results suggest that SSA satisfactorily indicates the evolving trend and provides reasonable results. The VAR model, which comprises exogenous parameters related to the market on a monthly basis, can significantly improve the prediction accuracy. EV sales in China, categorized into battery EVs and plug-in EVs, are predicted in both the short term (up to December 2017) and the long term (up to 2020), as statistical proof of the growth of the Chinese EV industry.

  2. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble, which indicates LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 competitions. It has also been tested on several standard benchmark time series. In terms of forecasting accuracy, our experimental results clearly reveal that LEA is better than other ensemble and non-ensemble methods.

  3. Eat, drink and gamble: marketing messages about ‘risky’ products in an Australian major sporting series

    Science.gov (United States)

    2013-01-01

    Background To investigate the alcohol, gambling, and unhealthy food marketing strategies during a nationally televised, free to air, sporting series in Australia. Methods/approach Using the Australian National Rugby League 2012 State of Origin three-game series, we conducted a mixed methods content analysis of the frequency, duration, placement and content of advertising strategies, comparing these strategies both within and across the three games. Results There were a total of 4445 episodes (mean = 1481.67, SD = 336.58), and 233.23 minutes (mean = 77.74, SD = 7.31) of marketing for alcoholic beverages, gambling products and unhealthy foods and non-alcoholic beverages during the 360 minutes of televised coverage of the three State of Origin 2012 games. This included an average per game of 1354 episodes (SD = 368.79) and 66.29 minutes (SD = 7.62) of alcohol marketing; 110.67 episodes (SD = 43.89), and 8.72 minutes (SD = 1.29) of gambling marketing; and 17 episodes (SD = 7.55), and 2.74 minutes (SD = 0.78) of unhealthy food and beverage marketing. Content analysis revealed that there was a considerable embedding of product marketing within the match play, including within match commentary, sporting equipment, and special replays. Conclusions Sport is increasingly used as a vehicle for the promotion of range of ‘risky consumption’ products. This study raises important ethical and health policy questions about the extent and impact of saturation and incidental marketing strategies on health and wellbeing, the transparency of embedded marketing strategies, and how these strategies may influence product consumption. PMID:23914917

  4. Serious adverse events after HPV vaccination: a critical review of randomized trials and post-marketing case series.

    Science.gov (United States)

    Martínez-Lavín, Manuel; Amezcua-Guerra, Luis

    2017-10-01

    This article critically reviews HPV vaccine serious adverse events described in pre-licensure randomized trials and in post-marketing case series. HPV vaccine randomized trials were identified in PubMed, and safety data were extracted. Post-marketing case series describing HPV immunization adverse events were reviewed. Most HPV vaccine randomized trials did not use an inert placebo in the control group. Two of the largest randomized trials found significantly more severe adverse events in the tested HPV vaccine arm of the study. Compared to 2871 women receiving aluminum placebo, the group of 2881 women injected with the bivalent HPV vaccine had more deaths on follow-up (14 vs. 3, p = 0.012). Compared to 7078 girls injected with the 4-valent HPV vaccine, 7071 girls receiving the 9-valent dose had more serious systemic adverse events (3.3 vs. 2.6%, p = 0.01). For the 9-valent dose, our calculated number needed to seriously harm is 140 (95% CI, 79–653) [DOSAGE ERROR CORRECTED]. The number needed to vaccinate is 1757 (95% CI, 131 to infinity). Practically none of the serious adverse events occurring in any arm of either study were judged to be vaccine-related. Pre-clinical trials, post-marketing case series, and the global drug adverse reaction database (VigiBase) describe similar post-HPV immunization symptom clusters. Two of the largest randomized HPV vaccine trials unveiled more severe adverse events in the tested HPV vaccine arm of the study. The 9-valent HPV vaccine has a worrisome number needed to vaccinate/number needed to harm quotient. Pre-clinical trials and post-marketing case series describe similar post-HPV immunization symptoms.

  5. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity-score-based time series feature extraction method, termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. WTC is then compared, in terms of predictive accuracy and computational complexity, with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves slightly higher classification performance with significantly lower execution time than its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets.

  7. Dependency structure and scaling properties of financial time series are related.

    Science.gov (United States)

    Morales, Raffaello; Di Matteo, T; Aste, Tomaso

    2014-04-04

    We report evidence of a deep interplay between the hierarchical properties of cross-correlations and the multifractality of New York Stock Exchange daily stock returns. The degree of multifractality displayed by different stocks is found to be positively correlated with their depth in the hierarchy of cross-correlations. We propose a dynamical model that reproduces this observation along with an array of other empirical properties. The structure of this model is such that the hierarchical structure of heterogeneous risks plays a crucial role in the time evolution of the correlation matrix, providing an interpretation of the mechanism behind the interplay between cross-correlation and multifractality in financial markets, where the degree of multifractality of stocks is associated with their hierarchical positioning in the cross-correlation structure. The empirical observations reported in this paper present a new perspective on the merging of univariate multi-scaling and multivariate cross-correlation properties of financial time series.

  8. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series analysis of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses including: trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
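
    For reference, the most common prewhitening step discussed in this literature is lag-1 autoregressive prewhitening; a minimal sketch follows (illustrative, and subject to exactly the across-scale distortions the paper warns about).

    ```python
    import numpy as np

    def prewhiten_ar1(x):
        """Remove lag-1 autocorrelation, the usual prewhitening step
        before trend tests such as Mann-Kendall.

        Returns the residual series x[t] - r1 * x[t-1] and the estimated
        lag-1 autocorrelation r1. As argued above, this can distort or
        remove variability structure across time scales.
        """
        x = np.asarray(x, float)
        z = x - x.mean()
        r1 = np.sum(z[1:] * z[:-1]) / np.sum(z * z)  # lag-1 autocorrelation
        return x[1:] - r1 * x[:-1], r1
    ```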

  9. DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION

    OpenAIRE

    Phan , Thi-Thu-Hong; Poisson Caillault , Emilie; Bigand , André; Lefebvre , Alain

    2017-01-01

    International audience; Missing data are inevitable in almost all domains of applied science. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...

  10. Price dynamics among U.S. electricity spot markets

    International Nuclear Information System (INIS)

    Park, Haesun; Mjelde, James W.; Bessler, David A.

    2006-01-01

    Combining recent advances in causal flows with time series analysis, the relationships among 11 U.S. spot market electricity prices are examined. Results suggest that the relationships among the markets vary by time frame. In contemporaneous time, the western markets are separated from the eastern markets and the Electricity Reliability Council of Texas. At longer time frames these separations disappear, even though electricity transmission between the regions is limited. It appears that the relationships among markets are a function not only of physical assets (such as transmission lines between markets) but also of similar and dissimilar institutional arrangements among the markets. (Author)

  11. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Full Text Available Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series, with the aim of suggesting an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of RF is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect of achieving higher predictive accuracy.
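
    A sketch of one-step random forest forecasting with a small number of recent lags, per the finding above; scikit-learn is assumed, and the lag count and tree count are illustrative.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def rf_one_step_forecast(series, n_lags=3, n_trees=300):
        """One-step RF forecast using a low number of recent lagged
        values as predictor variables."""
        X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
        y = np.array(series[n_lags:])
        model = RandomForestRegressor(n_estimators=n_trees, random_state=0)
        model.fit(X, y)
        query = np.asarray(series[-n_lags:]).reshape(1, -1)  # latest lags
        return model.predict(query)[0]
    ```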

  12. Trend time-series modeling and forecasting with neural networks.

    Science.gov (United States)

    Qi, Min; Zhang, G Peter

    2008-05-01

    Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how best to model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs, differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
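
    The differencing strategy reduces to modeling changes rather than levels; a minimal sketch of the transform and its inverse (the NN that would model the differenced series is omitted).

    ```python
    import numpy as np

    def difference(series):
        """Differencing strategy for trend series: model the changes,
        not the levels."""
        return np.diff(series)

    def undifference(last_level, diff_forecasts):
        """Integrate forecasts made on the differenced scale back to
        the original level scale."""
        return last_level + np.cumsum(diff_forecasts)
    ```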

  13. Fractal markets: Liquidity and investors on different time horizons

    Science.gov (United States)

    Li, Da-Ye; Nishimura, Yusaku; Men, Ming

    2014-08-01

    In this paper, we propose a new agent-based model to study the source of liquidity and the "emergent" phenomenon in a financial market with fractal structure. The model rests on the fractal market hypothesis and on agents with different investment time horizons. Interestingly, although the agent-based model reveals that the interaction between these heterogeneous agents affects the stability and liquidity of the financial market, the real-world market lacks the detailed data to bring this to light, since it is difficult to identify and distinguish investors with different time horizons empirically. Results show that over a relatively short period of time the fractal market provides liquidity from investors with different horizons, and the market gains stability when the market structure changes from uniformity to diversification. In the real world, the fractal structure with a finite range of horizons can only stabilize the market within limits. With finite maximum horizons, greater diversity of investors and the fractal structure will not necessarily bring more stability to the market, which might come with greater fluctuation on large time scales.

  14. Price determinants of the European carbon market and interactions with energy markets

    Energy Technology Data Exchange (ETDEWEB)

    Schumacher, Katja; Cludius, Johanna; Matthes, Felix [Oeko Institut e.V., Berlin (Germany); Diekmann, Jochen; Zaklan, Aleksandar [Deutsches Institut fuer Wirtschaftsforschung, Berlin (Germany); Schleich, Joachim [Fraunhofer-Institut fuer Systemtechnik und Innovationsforschung (ISI), Karlsruhe (Germany)

    2012-06-15

    This report explores the determinants of short-run price movements in the carbon market and their interaction with energy markets, in particular the electricity market. Focusing on Phase 2 of the EU ETS, we conduct econometric time series analysis based on continental EU and UK market data. Our findings suggest that market fundamentals have a dominant effect on the EUA price, but that non-fundamental factors may also play a role. We further find that the electricity price has a significant positive impact on the carbon price in the short run.

  15. Segmentation of Nonstationary Time Series with Geometric Clustering

    DEFF Research Database (Denmark)

    Bocharov, Alexei; Thiesson, Bo

    2013-01-01

    We introduce a non-parametric method for segmentation in regime-switching time-series models. The approach is based on spectral clustering of target-regressor tuples and derives a switching regression tree, where regime switches are modeled by oblique splits. Such models can be learned efficiently from data, where clustering is used to propose one single split candidate at each split level. We use the class of ART time series models to serve as illustration, but because of the non-parametric nature of our segmentation approach, it readily generalizes to a wide range of time-series models.

  16. Non-parametric characterization of long-term rainfall time series

    Science.gov (United States)

    Tiwari, Harinarayan; Pandey, Brij Kishor

    2018-03-01

    The statistical study of rainfall time series is one of the approaches to efficient hydrological system design. Identifying and characterizing long-term rainfall time series could aid in improving hydrological systems forecasting. In the present study, eventual statistics were applied to the long-term (1851-2006) rainfall time series of seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test for the observed rainfall series. The observed trend was then ascertained using the innovative trend analysis method, which has been found to be a strong tool to detect the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends in the series, and the partial sum of cumulative deviation test is also found to be suitable for detecting nonlinear trends. Innovative trend analysis, the sequential Mann-Kendall test and the partial cumulative deviation test have the potential to detect the general as well as the nonlinear trend of rainfall time series. Annual rainfall analysis suggests that the maximum rise in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous Indian region. The innovative trend analysis method is also capable of finding the number of change points present in the time series. Additionally, we performed the von Neumann ratio test and the cumulative deviation test to estimate the departure from homogeneity. Singular spectrum analysis has been applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones has a higher departure from homogeneity, and singular spectrum analysis shows results coherent with this finding.

  17. Time Series Decomposition into Oscillation Components and Phase Estimation.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished with this model in a manner similar to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
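
    A loosely related decomposition can be sketched with an off-the-shelf state-space model. The snippet below (statsmodels assumed) fits a single stochastic, damped cycle to a toy series; it is a simplified analogue, not the authors' multi-oscillator model or their AIC-based selection of the number of components.

        import numpy as np
        from statsmodels.tsa.statespace.structural import UnobservedComponents

        rng = np.random.default_rng(2)
        t = np.arange(400)
        y = (np.cumsum(rng.normal(scale=0.05, size=400))   # slowly drifting level
             + np.sin(2 * np.pi * t / 8)                   # one oscillation component
             + 0.3 * rng.normal(size=400))                 # observation noise

        # local level plus one stochastic, damped cycle (the oscillation component)
        model = UnobservedComponents(y, level='llevel', cycle=True,
                                     stochastic_cycle=True, damped_cycle=True)
        res = model.fit(disp=False)
        oscillation = res.cycle.smoothed   # smoothed estimate of the cycle component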

  18. Introduction to time series analysis and forecasting

    CERN Document Server

    Montgomery, Douglas C; Kulahci, Murat

    2015-01-01

    Praise for the First Edition: "…[t]he book is great for readers who need to apply the methods and models presented but have little background in mathematics and statistics." -MAA Reviews. Thoroughly updated throughout, Introduction to Time Series Analysis and Forecasting, Second Edition presents the underlying theories of time series analysis that are needed to analyze time-oriented data and construct real-world short- to medium-term statistical forecasts. Authored by highly experienced academics and professionals in engineering statistics, the Second Edition features discussions on both

  19. Multi-Scale Dissemination of Time Series Data

    DEFF Research Database (Denmark)

    Guo, Qingsong; Zhou, Yongluan; Su, Li

    2013-01-01

    In this paper, we consider the problem of continuous dissemination of time series data, such as sensor measurements, to a large number of subscribers. These subscribers fall into multiple subscription levels, where each subscription level is specified by the bandwidth constraint of a subscriber, which is an abstract indicator for both the physical limits and the amount of data that the subscriber would like to handle. To handle this problem, we propose a system framework for multi-scale time series data dissemination that employs a typical tree-based dissemination network and existing time...

  20. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    Science.gov (United States)

    Stránský, V; Thinová, L

    2017-11-01

    In the year 2010 a continual radon measurement was established at Mladeč Caves in the Czech Republic using a continual radon monitor RADIM3A. In order to model the radon time series in the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurements. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor.
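
    A SARIMAX of the quoted order can be fit with statsmodels along the following lines. This is a hedged sketch, not the authors' regARIMA code: the hourly toy data, the exogenous stand-in for the delayed atmospheric parameters and the daily seasonal order are all assumptions; only the (5,1,3) order is taken from the abstract.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(3)
        n = 24 * 30                                       # a month of hourly values
        temp = rng.normal(size=n).cumsum() / 10           # stand-in atmospheric parameter
        radon = (50 + 10 * np.sin(2 * np.pi * np.arange(n) / 24)   # daily cycle
                 - 2 * temp + rng.normal(scale=2, size=n))

        model = SARIMAX(radon, exog=temp,
                        order=(5, 1, 3),                  # regARIMA(5,1,3) order
                        seasonal_order=(1, 0, 1, 24))     # assumed daily seasonality
        res = model.fit(disp=False)
        residuals = res.resid                  # to compare against seismic records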

  1. Similarity estimators for irregular and age uncertain time series

    Science.gov (United States)

    Rehfeld, K.; Kurths, J.

    2013-09-01

    Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many datasets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age uncertain time series. We compare the Gaussian-kernel based cross correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity
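
    The Gaussian-kernel idea can be sketched compactly: instead of interpolating, each product of observation pairs is weighted by how close its (irregular) time separation is to the lag of interest. A minimal Python version follows (numpy assumed); it is an illustrative reduction of gXCF, not the authors' reference implementation, and the bandwidth and toy sampling are assumptions.

        import numpy as np

        def gaussian_kernel_xcf(tx, x, ty, y, lag, h):
            """Cross-correlation of irregularly sampled x, y at a given lag.

            tx, ty: observation times; x, y: standardized values;
            h: kernel bandwidth controlling how strictly pairs must match the lag.
            """
            dt = ty[None, :] - tx[:, None] - lag   # deviation of each pair from the lag
            w = np.exp(-0.5 * (dt / h) ** 2)       # Gaussian weight per pair
            return np.sum(w * np.outer(x, y)) / np.sum(w)

        rng = np.random.default_rng(4)
        tx = np.sort(rng.uniform(0, 100, 80))      # irregular sampling times
        ty = np.sort(rng.uniform(0, 100, 90))
        x = np.sin(tx / 5); x = (x - x.mean()) / x.std()
        y = np.sin((ty - 3) / 5); y = (y - y.mean()) / y.std()   # y lags x by ~3

        lags = np.linspace(-10, 10, 41)
        xcf = [gaussian_kernel_xcf(tx, x, ty, y, lag, h=1.0) for lag in lags]
        best_lag = lags[int(np.argmax(xcf))]       # should recover a lag near 3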

  2. Similarity estimators for irregular and age-uncertain time series

    Science.gov (United States)

    Rehfeld, K.; Kurths, J.

    2014-01-01

    Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many data sets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age-uncertain time series. We compare the Gaussian-kernel-based cross-correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case, coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity

  3. Insurance market penetration and economic growth in Eurozone countries: Time series evidence on causality

    Directory of Open Access Journals (Sweden)

    Saurav Dash

    2018-06-01

    Full Text Available This paper examines the causal relationship between insurance market penetration and per capita economic growth in 19 Eurozone countries for the period 1980–2014. We use three different indicators of insurance market penetration (IMP), namely life insurance penetration, non-life insurance penetration, and total (both life and non-life) insurance penetration. We particularly emphasize whether Granger causality exists between these variables both ways, one way, or not at all. Our empirical results reveal both unidirectional and bidirectional causality between IMP and per capita economic growth. However, these results are mostly non-uniform across the Eurozone countries during the selected period. The policy implication is that economic policies should recognize the differences in the insurance market and per capita economic growth in order to maintain sustainable growth in the Eurozone. Keywords: IMP, Per capita economic growth, Granger causality, Eurozone countries, JEL codes: L96, O32, O33, O43

  4. Robust Forecasting of Non-Stationary Time Series

    OpenAIRE

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized Least Squares estima...

  5. Direct Mail Marketing for Cooperative Education. Cooperative Education Marketing Digest Series 5.

    Science.gov (United States)

    McGookey, Kathy

    Seven guidelines for enhancing direct mail marketing are as follows: target the most promising audience; frame the right message for the audience; state the benefits of making a positive response; send the message at an appropriate time; tell the reader what response is desired; plan follow-up mailings or other contact; and measure results.…

  6. Winter Holts Oscillatory Method: A New Method of Resampling in Time Series.

    Directory of Open Access Journals (Sweden)

    Muhammad Imtiaz Subhani

    2016-12-01

    Full Text Available The core proposition behind this research is to create innovative methods of bootstrapping that can be applied to time series data. In order to find new methods of bootstrapping, various methods were reviewed. The data on automotive sales, market shares and net exports of the top 10 countries, which include China, Europe, the United States of America (USA), Japan, Germany, South Korea, India, Mexico, Brazil, Spain and Canada, from 2002 to 2014 were collected through various sources, including UN Comtrade, Index Mundi and the World Bank. The findings of this paper confirm that bootstrapping for resampling through Winters forecasting by the Oscillation and Average methods gives more robust results than Winters forecasting by general methods.
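
    For reference, a plain Holt-Winters (triple exponential smoothing) forecast, the baseline on which such resampling variants build, can be produced as follows (statsmodels assumed; the monthly toy data and the naive residual bootstrap are illustrative, not the paper's Oscillation or Average schemes).

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(5)
        months = np.arange(12 * 13)                       # 2002-2014, monthly
        sales = (100 + 0.5 * months                       # trend
                 + 15 * np.sin(2 * np.pi * months / 12)   # yearly seasonality
                 + rng.normal(scale=5, size=len(months)))

        fit = ExponentialSmoothing(sales, trend='add', seasonal='add',
                                   seasonal_periods=12).fit()
        forecast = fit.forecast(12)                       # one further year ahead

        # a naive bootstrap: resample residuals to gauge forecast spread
        resid = sales - fit.fittedvalues
        resampled = fit.fittedvalues + rng.choice(resid, size=len(resid), replace=True)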

  7. Time Series Econometrics for the 21st Century

    Science.gov (United States)

    Hansen, Bruce E.

    2017-01-01

    The field of econometrics largely started with time series analysis because many early datasets were time-series macroeconomic data. As the field developed, more cross-sectional and longitudinal datasets were collected, which today dominate academic empirical research. In nonacademic (private sector, central bank, and governmental)…

  8. Effectiveness of firefly algorithm based neural network in time series ...

    African Journals Online (AJOL)

    Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...

  9. Time Series Analysis of Insar Data: Methods and Trends

    Science.gov (United States)

    Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique

    2015-01-01

    Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.

  10. Visibility graph network analysis of natural gas price: The case of North American market

    Science.gov (United States)

    Sun, Mei; Wang, Yaqi; Gao, Cuixia

    2016-11-01

    Fluctuations in the prices of natural gas significantly affect the global economy. Research on the characteristics of natural gas price fluctuations, their turning points, and the cycle over which these influence the subsequent price series is therefore of great significance. Global natural gas trade concentrates on three regional markets: the North American market, the European market and the Asia-Pacific market, with North America having the most developed natural gas financial market. In addition, sound legal supervision and coordinated regulations make the North American market more open and more competitive. This paper focuses on the North American natural gas market specifically. The Henry Hub natural gas spot price time series is converted into a visibility graph network, which provides a new direction for macro analysis of time series, and several indicators are investigated: degree and degree distribution, the average shortest path length, and community structure. The internal mechanisms underlying price fluctuations are explored through these indicators. The results show that the natural gas price visibility graph network (NGP-VGN) exhibits small-world and scale-free properties simultaneously. After random rearrangement of the original price time series, the degree distribution of the network becomes exponential, different from that of the original. This means that the original price time series possesses the fractal characteristic of long-range negative correlation. In addition, nodes with large degree correspond to significant geopolitical or economic events, and communities correspond to time cycles in the visibility graph network. The cycles of the time series and the scope of influence of hubs can be found by community structure partitioning.
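
    The construction underlying such studies is simple to state: two observations are linked if every sample between them lies below the straight line connecting them. A direct O(n^2) sketch (numpy and networkx assumed) is shown below; it is a generic natural visibility graph builder with random stand-in data, not the authors' code.

        import numpy as np
        import networkx as nx

        def natural_visibility_graph(y):
            """Map a series to a graph: nodes are samples, edges are visibility lines."""
            n = len(y)
            g = nx.Graph()
            g.add_nodes_from(range(n))
            for a in range(n):
                for b in range(a + 1, n):
                    # (a, b) are mutually visible if all intermediate points fall
                    # strictly below the straight line joining them
                    line = y[a] + (y[b] - y[a]) * (np.arange(a + 1, b) - a) / (b - a)
                    if np.all(y[a + 1:b] < line):
                        g.add_edge(a, b)
            return g

        prices = np.random.default_rng(6).lognormal(size=200)   # stand-in spot prices
        g = natural_visibility_graph(prices)
        degrees = np.array([d for _, d in g.degree()])          # degree distribution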

  11. Interpretation of a compositional time series

    Science.gov (United States)

    Tolosana-Delgado, R.; van den Boogaart, K. G.

    2012-04-01

    Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), then linear operations are afflicted by several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on a raw scale, but after a log-ratio transformation (Aitchison, 1986: The Statistical Analysis of Compositional Data. Chapman and Hall). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates allows one to apply any sort of multivariate analysis to a log-ratio transformed composition, as long as this transformation is invertible. This principle applies fully to time series analysis. We discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which allows one to express the results as D(D - 1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA
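
    As a small illustration of the principle of working in coordinates, the centered log-ratio (clr) transform below (numpy assumed; the three-part toy series is an assumption) moves a compositional series out of the simplex before any linear time series machinery is applied, and maps results back by the inverse transform.

        import numpy as np

        def clr(parts):
            """Centered log-ratio: rows are compositions (positive, summing to 1)."""
            logp = np.log(parts)
            return logp - logp.mean(axis=1, keepdims=True)

        def clr_inverse(z):
            """Back-transform clr coordinates to compositions on the simplex."""
            e = np.exp(z)
            return e / e.sum(axis=1, keepdims=True)

        # toy compositional time series: 3 parts over 100 time steps
        rng = np.random.default_rng(7)
        raw = np.abs(rng.normal(loc=[5, 3, 2], scale=0.5, size=(100, 3)))
        comp = raw / raw.sum(axis=1, keepdims=True)

        z = clr(comp)               # analyze (correlate, forecast) in clr space
        restored = clr_inverse(z)   # predictions map back to valid compositions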

  12. The persistence of marketing effects on sales

    OpenAIRE

    Dekimpe, Marnik; Hanssens, DM

    1993-01-01

    Are marketing efforts able to affect long-term trends in sales or other performance measures? Answering this question is essential for the creation of marketing strategies that deliver a sustainable competitive advantage. This paper introduces persistence modeling to derive long-term marketing effectiveness from time-series observations on sales and marketing expenditures. First, we use unit-root tests to determine whether sales are stable or evolving (trending) over time. If they are evolvin...

  13. Capturing Structure Implicitly from Time-Series having Limited Data

    OpenAIRE

    Emaasit, Daniel; Johnson, Matthew

    2018-01-01

    Scientific fields such as insider-threat detection and highway-safety planning often lack sufficient amounts of time-series data to estimate statistical models for the purpose of scientific discovery. Moreover, the available limited data are quite noisy. This presents a major challenge when estimating time-series models that are robust to overfitting and have well-calibrated uncertainty estimates. Most of the current literature in these fields involve visualizing the time-series for noticeabl...

  14. On Transaction-Cost Models in Continuous-Time Markets

    Directory of Open Access Journals (Sweden)

    Thomas Poufinas

    2015-04-01

    Full Text Available Transaction-cost models in continuous-time markets are considered. Given that investors decide to buy or sell at certain time instants, we study the existence of trading strategies that reach a certain final wealth level in continuous-time markets, under the assumption that transaction costs, built in certain recommended ways, have to be paid. Markets prove to behave in manners that resemble those of complete ones for a wide variety of transaction-cost types. The results are important, but not exclusively, for the pricing of options with transaction costs.

  15. Time-dependent correlations in electricity markets

    International Nuclear Information System (INIS)

    Alvarez-Ramirez, Jose; Escarela-Perez, Rafael

    2010-01-01

    In the last years, many electricity markets were subjected to deregulated operation, where prices are set by the actions of market participants. In this setting, producers and consumers rely on demand and price forecasts to decide their bidding strategies, allocate assets, negotiate bilateral contracts, hedge risks, and plan facility investments. A basic feature of the efficient market hypothesis is the absence of correlations between price increments over any time scale, leading to random walk-type behavior of prices, so that arbitrage is not possible. However, recent studies have suggested that this is not the case and that correlations are present in the behavior of diverse electricity markets. In this paper, a temporal quantification of electricity market correlations is made by means of detrended fluctuation and Allan analyses. The approach is applied to two Canadian electricity markets, Ontario and Alberta. The results show the existence of correlations in both demand and prices, exhibiting complex time-dependent behavior, with lower correlations in winter and higher in summer. Relatively steady annual cycles in demand but unstable cycles in prices are detected. On the other hand, the more significant nonlinear effects (measured in terms of a multifractality index) are found for winter months, while the converse behavior is displayed during the summer period. In terms of forecasting models, our results suggest that nonlinear recursive models (e.g., feedback NNs) should be used for accurate day-ahead price estimation. In contrast, linear models can suffice for demand forecasting purposes. (author)

  16. Self-affinity in the dengue fever time series

    Science.gov (United States)

    Azevedo, S. M.; Saba, H.; Miranda, J. G. V.; Filho, A. S. Nascimento; Moret, M. A.

    2016-06-01

    Dengue is a complex public health problem that is common in tropical and subtropical regions. This disease has risen substantially in the last three decades, and the physical symptoms depict the self-affine behavior of the occurrences of reported dengue cases in Bahia, Brazil. This study uses detrended fluctuation analysis (DFA) to verify the scale behavior in a time series of dengue cases and to evaluate the long-range correlations that are characterized by the power law α exponent for different cities in Bahia, Brazil. The scaling exponent (α) presents different long-range correlations, i.e. uncorrelated, anti-persistent, persistent and diffusive behaviors. The long-range correlations highlight the complex behavior of the time series of this disease. The findings show that there are two distinct types of scale behavior. In the first behavior, the time series presents a persistent α exponent for a one-month period. For large periods, the time series signal approaches subdiffusive behavior. The hypothesis of the long-range correlations in the time series of the occurrences of reported dengue cases was validated. The observed self-affinity is useful as a forecasting tool for future periods through extrapolation of the α exponent behavior. This complex system has a higher predictability in a relatively short time (approximately one month), and it suggests a new tool in epidemiological control strategies. However, predictions for large periods using DFA are hidden by the subdiffusive behavior.
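
    The DFA scaling exponent referred to here can be estimated in a few lines of numpy. The sketch below is a textbook first-order DFA; the window range and white-noise toy input are assumptions, not the study's configuration.

        import numpy as np

        def dfa_alpha(x, scales):
            """First-order detrended fluctuation analysis; returns the exponent."""
            profile = np.cumsum(x - np.mean(x))   # integrated, mean-centered series
            fluct = []
            for s in scales:
                n_seg = len(profile) // s
                ms = []
                for k in range(n_seg):
                    seg = profile[k * s:(k + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # local detrend
                    ms.append(np.mean((seg - trend) ** 2))
                fluct.append(np.sqrt(np.mean(ms)))
            # slope of log F(s) vs log s is the scaling exponent alpha
            return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

        rng = np.random.default_rng(8)
        white = rng.normal(size=4000)
        scales = np.unique(np.logspace(1, 3, 20).astype(int))
        alpha = dfa_alpha(white, scales)   # ~0.5 uncorrelated; >0.5 persistent; <0.5 anti-persistent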

  17. On the plurality of times: disunified time and the A-series | Nefdt ...

    African Journals Online (AJOL)

    Then, I attempt to show that disunified time is a problem for a semantics based on the A-series since A-truthmakers are hard to come by in a universe of temporally disconnected time-series. Finally, I provide a novel argument showing that presentists should be particularly fearful of such a universe. South African Journal of ...

  18. Time-series modeling of long-term weight self-monitoring data.

    Science.gov (United States)

    Helander, Elina; Pavel, Misha; Jimison, Holly; Korhonen, Ilkka

    2015-08-01

    Long-term self-monitoring of weight is beneficial for weight maintenance, especially after weight loss. Connected weight scales accumulate time series information over the long term and hence enable time series analysis of the data. The analysis can reveal individual patterns, provide more sensitive detection of significant weight trends, and enable more accurate and timely prediction of weight outcomes. However, long-term self-weighing data pose several challenges that complicate the analysis; in particular, irregular sampling, missing data, and periodic (e.g. diurnal and weekly) patterns are common. In this study, we apply a time series modeling approach to daily weight time series from two individuals and describe the information that can be extracted from this kind of data. We study the properties of weight time series data, missing data and its link to individuals' behavior, periodic patterns and weight series segmentation. Being able to understand behavior through weight data and give relevant feedback is desirable, as it can lead to positive interventions on health behaviors.

  19. Time series prediction of apple scab using meteorological ...

    African Journals Online (AJOL)

    A new prediction model for the early warning of apple scab is proposed in this study. The method is based on artificial intelligence and time series prediction. The infection period of apple scab was evaluated as the time series prediction model instead of summation of wetness duration. Also, the relations of different ...

  20. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences among stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
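
    The final embedding step is standard: given any dissimilarity matrix (here a random symmetric stand-in for the cross-sample-entropy matrices the paper computes), classical MDS produces the perceptual map (scikit-learn assumed).

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(9)
        n = 18                                    # e.g. 18 stock indices
        d = rng.uniform(0.1, 1.0, size=(n, n))    # stand-in dissimilarities
        d = (d + d.T) / 2                         # symmetrize
        np.fill_diagonal(d, 0.0)                  # zero self-dissimilarity

        mds = MDS(n_components=3, dissimilarity='precomputed', random_state=0)
        coords = mds.fit_transform(d)             # 3-D map, one point per index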

  1. Characterization of time series via Rényi complexity-entropy curves

    Science.gov (United States)

    Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.

    2018-05-01

    One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
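
    These curves are typically built on the Bandt-Pompe ordinal-pattern distribution of the series; the sketch below (numpy assumed; that representation, the embedding dimension and the grid of Rényi orders are assumptions) extracts the ordinal distribution and traces the entropy half of the curve, omitting the statistical-complexity half for brevity.

        import numpy as np
        from itertools import permutations
        from math import factorial

        def ordinal_distribution(x, m):
            """Relative frequencies of ordinal patterns of embedding dimension m."""
            patterns = {p: 0 for p in permutations(range(m))}
            for i in range(len(x) - m + 1):
                patterns[tuple(np.argsort(x[i:i + m]))] += 1
            p = np.array(list(patterns.values()), dtype=float)
            return p / p.sum()

        def renyi_entropy(p, q, m):
            """Rényi entropy of order q (q != 1), normalized by its maximum log(m!)."""
            nz = p[p > 0]
            return np.log(np.sum(nz ** q)) / (1.0 - q) / np.log(factorial(m))

        m = 4
        x = np.random.default_rng(10).normal(size=5000)   # stochastic toy series
        p = ordinal_distribution(x, m)
        curve = [(q, renyi_entropy(p, q, m))
                 for q in np.linspace(0.2, 3.0, 15) if abs(q - 1) > 1e-9]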

  2. SHRINKING THE UNCERTAINTY IN ONLINE SALES PREDICTION WITH TIME SERIES ANALYSIS

    Directory of Open Access Journals (Sweden)

    Rashmi Ranjan Dhal

    2014-10-01

    Full Text Available In any production environment, processing is centered on the manufacture of products, and it is important to get adequate volumes of orders for those products. However, merely getting orders is not enough for the long-term sustainability of multinationals. They need to know the demand for their products well in advance in order to compete and win in a highly competitive market. To assess the demand for a product we need to track its order behavior and predict the future response of customers based on present as well as historical datasets. In this paper we propose a systematic, time-series-based scheme to perform this task, using the Hadoop framework and the Holt-Winters prediction function in the R environment to produce the sales forecast for forthcoming years.

  3. Quantifying Selection with Pool-Seq Time Series Data.

    Science.gov (United States)

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

    Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of selection targets. No such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates for selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies.
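
    A crude flavor of the estimation problem can be given in a few lines: under simple haploid selection, the logit of the allele frequency drifts linearly over generations at a rate of roughly s, so regressing logit frequencies on time recovers s. This is a deterministic textbook approximation (numpy assumed), far simpler than the paper's replicate-aware, drift-aware method.

        import numpy as np

        def estimate_s(freqs, gens):
            """Slope of logit(p) over generations approximates the coefficient s."""
            logit = np.log(freqs / (1.0 - freqs))
            return np.polyfit(gens, logit, 1)[0]

        # simulate a selected allele under haploid selection with s = 0.05
        s_true, p = 0.05, 0.1
        gens = np.arange(0, 60, 10)            # sparse time points, as in E&R designs
        traj = []
        for g in range(gens.max() + 1):
            if g in gens:
                traj.append(p)
            p = p * (1 + s_true) / (p * (1 + s_true) + (1 - p))
        s_hat = estimate_s(np.array(traj), gens)   # close to log(1.05) ~ 0.049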

  4. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations-with associated costs-to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  6. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time. When the time intervals are equally spaced, one speaks of the sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them using a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, providing the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
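
    The standardization idea (heterogeneous sources funneled into one relational layout keyed by a common time scale) can be sketched with the standard-library sqlite3 module. The schema below is an assumption for illustration, not the TSDSystem schema.

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.executescript("""
            CREATE TABLE series (
                series_id  INTEGER PRIMARY KEY,
                name       TEXT NOT NULL,       -- e.g. 'tremor_rms'
                source     TEXT NOT NULL,       -- e.g. 'ASCII', 'ODBC', 'XML'
                unit       TEXT
            );
            CREATE TABLE sample (
                series_id  INTEGER REFERENCES series(series_id),
                t_utc      TEXT NOT NULL,       -- common time scale for all sources
                value      REAL,
                PRIMARY KEY (series_id, t_utc)
            );
        """)
        conn.execute("INSERT INTO series VALUES (1, 'tremor_rms', 'ASCII', 'um/s')")
        conn.execute("INSERT INTO sample VALUES (1, '2013-01-01T00:00:00Z', 0.42)")

        # query a series over a specified time range, synchronized on t_utc
        rows = conn.execute("""
            SELECT t_utc, value FROM sample
            WHERE series_id = 1 AND t_utc BETWEEN '2013-01-01T00:00:00Z'
                                              AND '2013-01-02T00:00:00Z'
        """).fetchall()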

  7. Modeling financial time series with S-plus

    CERN Document Server

    Zivot, Eric

    2003-01-01

    The field of financial econometrics has exploded over the last decade. This book represents an integration of theory, methods, and examples using the S-PLUS statistical modeling language and the S+FinMetrics module to facilitate the practice of financial econometrics. This is the first book to show the power of S-PLUS for the analysis of time series data. It is written for researchers and practitioners in the finance industry, academic researchers in economics and finance, and advanced MBA and graduate students in economics and finance. Readers are assumed to have a basic knowledge of S-PLUS and a solid grounding in basic statistics and time series concepts. Eric Zivot is an associate professor and Gary Waterman Distinguished Scholar in the Economics Department at the University of Washington, and is co-director of the nascent Professional Master's Program in Computational Finance. He regularly teaches courses on econometric theory, financial econometrics and time series econometrics, and is the recipient of the He...

  8. Nonlinear analysis and dynamic structure in the energy market

    Science.gov (United States)

    Aghababa, Hajar

    This research assesses the dynamic structure of the energy sector of the aggregate economy in the context of nonlinear mechanisms. Earlier studies have focused mainly on the prices of energy products when detecting nonlinearities in time series data of the energy market, with little mention of the production side of the market. Moreover, there is a lack of exploration of the implications of high dimensionality and time aggregation when analyzing the market's fundamentals. This research addresses these gaps by including the quantity side of the market in addition to the price side, and by systematically incorporating various frequencies and sample sizes, in three essays. The goal of this research is to provide an inclusive and exhaustive examination of the dynamics in the energy markets. The first essay begins with the application of statistical techniques, incorporating the most well-known univariate tests for nonlinearity, which have distinct power functions over alternatives and test different null hypotheses. It utilizes daily spot price observations on five major products in the energy market. The results suggest that the daily spot price time series of the energy products are highly nonlinear in nature. They show apparent evidence of general nonlinear serial dependence in each individual series, as well as nonlinearity in the first, second, and third moments of the series. The second essay examines the underlying mechanism of crude oil production and identifies the nonlinear structure of the production market by utilizing various monthly time series observations of crude oil production: the U.S. field, the Organization of the Petroleum Exporting Countries (OPEC), non-OPEC, and world production of crude oil. The findings imply that the time series data of U.S. field, OPEC, and world production of crude oil exhibit deep nonlinearity in their structure and are generated by nonlinear mechanisms. However, the dynamics of the non

  9. Methods for Detecting Early Warnings of Critical Transitions in Time Series Illustrated Using Simulated Ecological Data

    Science.gov (United States)

    Dakos, Vasilis; Carpenter, Stephen R.; Brock, William A.; Ellison, Aaron M.; Guttal, Vishwesha; Ives, Anthony R.; Kéfi, Sonia; Livina, Valerie; Seekell, David A.; van Nes, Egbert H.; Scheffer, Marten

    2012-01-01

    Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called ‘early warning signals’, and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data. PMID:22815897
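
    Two of the most common early warning indicators in this literature, rising lag-1 autocorrelation and rising variance ahead of a transition, reduce to rolling-window statistics. The pandas sketch below is a generic illustration; the window length and the toy series with slowly increasing memory are assumptions.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(11)
        # toy series drifting toward a transition: AR(1) with increasing memory
        n = 1000
        x = np.zeros(n)
        for t in range(1, n):
            phi = 0.2 + 0.7 * t / n           # critical slowing down: phi -> ~0.9
            x[t] = phi * x[t - 1] + rng.normal()

        s = pd.Series(x)
        w = 200                               # rolling window length
        ar1 = s.rolling(w).apply(lambda v: v.autocorr(lag=1), raw=False)
        var = s.rolling(w).var()
        # upward trends in ar1 and var toward the end are the warning signals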

  10. Application of Time Series Analysis in Determination of Lag Time in Jahanbin Basin

    Directory of Open Access Journals (Sweden)

    Seied Yahya Mirzaee

    2005-11-01

    One of the important issues that has a significant role in the study of basin hydrology is the determination of lag time. Lag time has a significant role in hydrological studies. The rainfall-related lag time depends on several factors, such as permeability, vegetation cover, catchment slope, rainfall intensity, storm duration and type of rain. Lag time is an important parameter in many projects, such as dam design, and also in water resource studies. The lag time of a basin can be calculated using various methods. One of these methods is time series analysis of the spectral density. The analysis is based on Fourier series: the time series is approximated with sine and cosine functions, and harmonically significant quantities with individual frequencies are presented. The spectral density of multiple time series can be used to obtain the basin lag time for annual runoff and short-term rainfall fluctuations. A long lag time could be due to snowmelt, as well as to melting ice caused by rainfall on freezing days. In this research the lag time of the Jahanbin basin has been determined using the spectral density method. The catchment is subject to both rainfall and snowfall. For short-term rainfall fluctuations with return periods of 2, 3 and 4 months, the lag times were found to be 0.18, 0.5 and 0.083 months, respectively.
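
    One way to read a lag out of spectral quantities, in the spirit of this record, is from the phase of the cross-spectral density between input (rainfall) and output (runoff): at frequency f, a phase of phi radians corresponds to a delay of phi / (2 pi f). The sketch below uses scipy.signal.csd on toy data; it is a generic illustration, not the study's procedure, and the sign convention assumes scipy's conj(X)*Y definition of the cross-spectrum.

        import numpy as np
        from scipy.signal import csd

        rng = np.random.default_rng(12)
        fs = 1.0                               # one sample per day
        n = 2048
        rain = rng.normal(size=n)
        lag_days = 5
        runoff = np.roll(rain, lag_days) + 0.3 * rng.normal(size=n)  # delayed response

        f, pxy = csd(rain, runoff, fs=fs, nperseg=256)
        phase = np.angle(pxy)                  # ~ -2*pi*f*delay under conj(X)*Y
        band = (f > 0.01) & (f < 0.05)         # low-frequency band, no phase wrapping
        est = -np.median(phase[band] / (2 * np.pi * f[band]))   # ~5 days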

  11. Modeling Time Series Data for Supervised Learning

    Science.gov (United States)

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provide a high-dimensional data vector that challenges the learning…

  12. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    Science.gov (United States)

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile regression (LLQ). We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting stock closing prices.
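
    The first stage of such hybrids decomposes the price series into intrinsic mode functions (IMFs). A minimal sketch follows, assuming the third-party PyEMD package (pip name EMD-signal) is available; the second stage here is reduced to a naive per-IMF persistence forecast rather than the paper's local linear quantile regression.

        import numpy as np
        from PyEMD import EMD   # assumed dependency: pip install EMD-signal

        rng = np.random.default_rng(13)
        t = np.linspace(0, 1, 500)
        price = np.cumsum(rng.normal(size=500)) + 5 * np.sin(2 * np.pi * 12 * t)

        imfs = EMD().emd(price)        # rows: IMFs from fast to slow, plus residue
        # stage 2 (placeholder): forecast each IMF separately, then sum the pieces;
        # the paper fits a local linear quantile model per component instead
        forecast = sum(imf[-1] for imf in imfs)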

  13. Dynamics of electricity market correlations

    Science.gov (United States)

    Alvarez-Ramirez, J.; Escarela-Perez, R.; Espinosa-Perez, G.; Urrea, R.

    2009-06-01

    Electricity market participants rely on demand and price forecasts to decide their bidding strategies, allocate assets, negotiate bilateral contracts, hedge risks, and plan facility investments. However, forecasting is hampered by the non-linear and stochastic nature of price time series. Diverse modeling strategies, from neural networks to traditional transfer functions, have been explored. These approaches are based on the assumption that price series contain correlations that can be exploited for model-based prediction purposes. While many works have been devoted to the demand and price modeling, a limited number of reports on the nature and dynamics of electricity market correlations are available. This paper uses detrended fluctuation analysis to study correlations in the demand and price time series and takes the Australian market as a case study. The results show the existence of correlations in both demand and prices over three orders of magnitude in time ranging from hours to months. However, the Hurst exponent is not constant over time, and its time evolution was computed over a subsample moving window of 250 observations. The computations, also made for two Canadian markets, show that the correlations present important fluctuations over a seasonal one-year cycle. Interestingly, non-linearities (measured in terms of a multifractality index) and reduced price predictability are found for the June-July periods, while the converse behavior is displayed during the December-January period. In terms of forecasting models, our results suggest that non-linear recursive models should be considered for accurate day-ahead price estimation. On the other hand, linear models seem to suffice for demand forecasting purposes.

  14. Marketing time-of-day rates to the residential market

    International Nuclear Information System (INIS)

    Spangler, W.E.

    1990-01-01

    The Metropolitan Edison Company in Pennsylvania began to promote load management and conservation in the early 1970s. In 1976, time-of-day rates were introduced as a strategy to aid in providing adequate supply at a price which could sustain demand. At first, it was offered on a trial basis to a limited number of customers. The on-peak period was 8 a.m. to 8 p.m. weekdays, and if the customer used 60% or more of his electric energy during the off-peak period, costs could be saved on the new rate. Marketing of the new rates was conducted at a modest level and the marketing program emphasized changes in lifestyle such as the deferring of energy consuming tasks to the off-peak period. After the Three Mile Island incident in 1979, the utility lost some of its supply; this and other factors prompted new marketing strategies including more extensive publicity and targeted mailings to users of electric water heaters. Customers were required to take service under the time-of-day rate if they selected specific end-use applications or consumed over 1,000 kWh for 2 consecutive months. Special programs were initiated to aid customers in modifying water heaters to shift consumption to off-peak hours. These and other measures have led to the present situation in which 16.2% of the total residential rate class takes service under the time-of-day rate

  15. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance.
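
    The Gaussian-process layer of such a framework is what absorbs the irregular sampling: a GP regressor conditions directly on the observation times, with no resampling needed. A minimal scikit-learn sketch follows (the toy lab values and the kernel choice are assumptions, not the paper's configuration, and the linear-dynamical-system layer is omitted).

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(14)
        t_obs = np.sort(rng.uniform(0, 30, 25))        # irregular observation days
        y_obs = 10 + 2 * np.sin(t_obs / 4) + rng.normal(scale=0.3, size=25)

        kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(t_obs.reshape(-1, 1), y_obs)

        t_new = np.linspace(0, 35, 200).reshape(-1, 1)  # includes short extrapolation
        mean, std = gp.predict(t_new, return_std=True)  # prediction with uncertainty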

  16. Clinical time series prediction: towards a hierarchical dynamical system framework

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patient in the training set, and then applying the model in order to predict future time series values on the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive

  17. Turbulencelike Behavior of Seismic Time Series

    International Nuclear Information System (INIS)

    Manshour, P.; Saberi, S.; Sahimi, Muhammad; Peinke, J.; Pacheco, Amalio F.; Rahimi Tabar, M. Reza

    2009-01-01

    We report on a stochastic analysis of Earth's vertical velocity time series by using methods originally developed for complex hierarchical systems and, in particular, for turbulent flows. Analysis of the fluctuations of the detrended increments of the series reveals a pronounced transition in their probability density function from Gaussian to non-Gaussian. The transition occurs 5-10 hours prior to a moderate or large earthquake, hence representing a new and reliable precursor for detecting such earthquakes.

  18. Characterizing time series: when Granger causality triggers complex networks

    Science.gov (United States)

    Ge, Tian; Cui, Yindong; Lin, Wei; Kurths, Jürgen; Liu, Chong

    2012-08-01

    In this paper, we propose a new approach to characterize time series with noise perturbations in both the time and frequency domains by combining Granger causality and complex networks. We construct directed and weighted complex networks from time series and use representative network measures to describe their physical and topological properties. Through analyzing the typical dynamical behaviors of some physical models and the MIT-BIH (Massachusetts Institute of Technology-Beth Israel Hospital) human electrocardiogram data sets, we show that the proposed approach is able to capture and characterize various dynamics and has much potential for analyzing real-world time series of rather short length.

  19. Characterizing time series: when Granger causality triggers complex networks

    International Nuclear Information System (INIS)

    Ge Tian; Cui Yindong; Lin Wei; Liu Chong; Kurths, Jürgen

    2012-01-01

    In this paper, we propose a new approach to characterize time series with noise perturbations in both the time and frequency domains by combining Granger causality and complex networks. We construct directed and weighted complex networks from time series and use representative network measures to describe their physical and topological properties. Through analyzing the typical dynamical behaviors of some physical models and the MIT-BIH human electrocardiogram data sets, we show that the proposed approach is able to capture and characterize various dynamics and has much potential for analyzing real-world time series of rather short length. (paper)
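
    As a minimal sketch of the general construction these two records describe, pairwise Granger-causality tests can be turned into a directed, weighted network. The choice of maxlag, the use of the ssr-based F-test, and the 1-p edge-weight convention below are illustrative assumptions, not the authors' exact recipe.

        import numpy as np
        import networkx as nx
        from statsmodels.tsa.stattools import grangercausalitytests

        def granger_network(series, maxlag=2, alpha=0.05):
            """series: dict of name -> 1-D array (equal lengths). Returns a DiGraph."""
            g = nx.DiGraph()
            g.add_nodes_from(series)
            for src in series:
                for dst in series:
                    if src == dst:
                        continue
                    # Column order matters: test whether `src` Granger-causes `dst`.
                    data = np.column_stack([series[dst], series[src]])
                    res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
                    # Smallest p-value of the ssr-based F-test over the tested lags.
                    p = min(res[lag][0]["ssr_ftest"][1] for lag in res)
                    if p < alpha:
                        g.add_edge(src, dst, weight=1.0 - p)  # heavier = stronger evidence
            return g

        # Toy usage: x drives y with a one-step lag, so an edge x -> y is expected.
        rng = np.random.default_rng(0)
        x = rng.normal(size=500)
        y = np.roll(x, 1) + 0.5 * rng.normal(size=500)
        print(granger_network({"x": x, "y": y}).edges(data=True))

    Network measures (degree, strength, clustering) can then be read off the resulting graph with networkx, in the spirit of the records above.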

  20. Measuring efficiency of international crude oil markets: A multifractality approach

    Science.gov (United States)

    Niere, H. M.

    2015-01-01

    The three major international crude oil markets are treated as complex systems and their multifractal properties are explored. The study covers daily prices of Brent crude, OPEC reference basket and West Texas Intermediate (WTI) crude from January 2, 2003 to January 2, 2014. A multifractal detrended fluctuation analysis (MFDFA) is employed to extract the generalized Hurst exponents in each of the time series. The generalized Hurst exponent is used to measure the degree of multifractality, which in turn is used to quantify the efficiency of the three international crude oil markets. To identify whether the source of multifractality is long-range correlations or broad fat-tail distributions, shuffled data and surrogated data corresponding to each of the time series are generated. Shuffled data are obtained by randomizing the order of the price returns data; this destroys any long-range correlation of the time series. Surrogated data are produced using the Fourier-detrended fluctuation analysis (F-DFA), by randomizing the phases of the price returns data in Fourier space; this normalizes the distribution of the time series. The study found that for the three crude oil markets, there is a strong dependence of the generalized Hurst exponents on the order of fluctuations. This shows that the daily price time series of the markets under study have signs of multifractality. Using the degree of multifractality as a measure of efficiency, the results show that WTI is the most efficient while OPEC is the least efficient market. This implies that OPEC has the highest likelihood to be manipulated among the three markets, and reflects the fact that Brent and WTI are very competitive markets with a higher level of complexity compared with OPEC, which has large monopoly power. Comparing with shuffled data and surrogated data, the findings suggest that for all three crude oil markets, the multifractality is mainly due to long-range correlations.
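
    A compact sketch of the MFDFA estimator of generalized Hurst exponents h(q) used in the record above, together with the shuffling check that destroys long-range correlations. Forward-only segmentation, linear detrending, and the synthetic fat-tailed returns standing in for the oil data are simplifying assumptions.

        import numpy as np

        def mfdfa_hurst(x, scales, qs, order=1):
            y = np.cumsum(x - np.mean(x))                # profile of the series
            logF = np.empty((len(qs), len(scales)))
            for j, s in enumerate(scales):
                n = len(y) // s
                segs = y[: n * s].reshape(n, s)
                t = np.arange(s)
                f2 = np.empty(n)
                for v in range(n):                       # detrend each window
                    coef = np.polyfit(t, segs[v], order)
                    f2[v] = np.mean((segs[v] - np.polyval(coef, t)) ** 2)
                for i, q in enumerate(qs):
                    if q == 0:
                        logF[i, j] = 0.5 * np.mean(np.log(f2))
                    else:
                        logF[i, j] = np.log(np.mean(f2 ** (q / 2.0))) / q
            # h(q) is the slope of log F_q(s) against log s.
            return np.array([np.polyfit(np.log(scales), logF[i], 1)[0]
                             for i in range(len(qs))])

        rng = np.random.default_rng(1)
        returns = rng.standard_t(df=3, size=4000)        # fat-tailed stand-in for oil returns
        scales = np.arange(16, 512, 16)
        qs = np.array([-4, -2, 0, 2, 4])
        print("h(q):         ", mfdfa_hurst(returns, scales, qs).round(3))
        print("shuffled h(q):", mfdfa_hurst(rng.permutation(returns), scales, qs).round(3))
        # The spread of h(q) across q measures the degree of multifractality.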

  1. Multivariate time series analysis with R and financial applications

    CERN Document Server

    Tsay, Ruey S

    2013-01-01

    Since the publication of his first book, Analysis of Financial Time Series, Ruey Tsay has become one of the most influential and prominent experts on the topic of time series. Different from the traditional and oftentimes complex approach to multivariate (MV) time series, this sequel book emphasizes structural specification, which results in simplified parsimonious VARMA modeling and, hence, eases comprehension. Through a fundamental balance between theory and applications, the book supplies readers with an accessible approach to financial econometric models and their applications to real-world

  2. Determinants Of Foreign Direct Investment In Mauritius Evidence From Time Series Data

    Directory of Open Access Journals (Sweden)

    Medha Kisto

    2017-08-01

    Full Text Available Over the last two decades Foreign Direct Investment (FDI) has claimed an impressive economic record, as it enables an economy to transit from an agrarian to a knowledge-based economy. This paper focuses on the determinants and impact of FDI in Mauritius using annual time series data from 1975 through 2015. The Vector Error Correction Model (VECM) analysis reveals that macroeconomic variables, namely inflation rates and exchange rates, are among the major factors that affect FDI in Mauritius over this period. The exchange rate exhibited a significant negative influence on FDI, while the interest rate affects FDI positively. The study therefore recommends that the government should continue to diversify the export and tourism markets, ensure stable macroeconomic policies, implement reforms on doing business, increase its expenditure in the area of infrastructural development, and redirect FDI to productive sectors of the economy as ways to accelerate the growth of the Mauritian economy.

  3. Measurements of spatial population synchrony: influence of time series transformations.

    Science.gov (United States)

    Chevalier, Mathieu; Laffaille, Pascal; Ferdy, Jean-Baptiste; Grenouillet, Gaël

    2015-09-01

    Two mechanisms have been proposed to explain spatial population synchrony: dispersal among populations, and the spatial correlation of density-independent factors (the "Moran effect"). To identify which of these two mechanisms is driving spatial population synchrony, time series transformations (TSTs) of abundance data have been used to remove the signature of one mechanism, and highlight the effect of the other. However, several issues with TSTs remain, and to date no consensus has emerged about how population time series should be handled in synchrony studies. Here, by using 3131 time series involving 34 fish species found in French rivers, we computed several metrics commonly used in synchrony studies to determine whether a large-scale climatic factor (temperature) influenced fish population dynamics at the regional scale, and to test the effect of three commonly used TSTs (detrending, prewhitening and a combination of both) on these metrics. We also tested whether the influence of TSTs on time series and population synchrony levels was related to the features of the time series using both empirical and simulated time series. For several species, and regardless of the TST used, we evidenced a Moran effect on freshwater fish populations. However, these results were globally biased downward by TSTs which reduced our ability to detect significant signals. Depending on the species and the features of the time series, we found that TSTs could lead to contradictory results, regardless of the metric considered. Finally, we suggest guidelines on how population time series should be processed in synchrony studies.

  4. Stochastic time series analysis of hydrology data for water resources

    Science.gov (United States)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

    This paper addresses stochastic time series analysis in hydrology, with a focus on seasonal stages, and applies different statistical tests for predicting hydrologic time series with the Thomas-Fiering model. Hydrologic time series of flood flows have received a great deal of consideration worldwide, and the application of stochastic processes in time series analysis is expanding with growing concerns about seasonal periods and global warming. A recent trend among researchers is to test seasonal periods in hydrologic flow series using stochastic processes based on the Thomas-Fiering model. The present article proposes to predict seasonal periods in hydrology using the Thomas-Fiering model.
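
    A minimal sketch of the classical Thomas-Fiering model referenced above: a season-dependent lag-1 autoregression used to generate synthetic monthly flows. The monthly parameter values here are hypothetical; in practice they are estimated from a historical record.

        import numpy as np

        def thomas_fiering(means, stds, r, n_years, rng):
            """means, stds, r: length-12 arrays of monthly means, standard deviations,
            and lag-1 correlations between month j and month j+1."""
            flows = []
            q = means[0]
            for _ in range(n_years):
                for j in range(12):
                    k = (j + 1) % 12                      # next month, wrapping Dec -> Jan
                    b = r[j] * stds[k] / stds[j]          # regression coefficient
                    noise = rng.normal() * stds[k] * np.sqrt(1.0 - r[j] ** 2)
                    q = means[k] + b * (q - means[j]) + noise
                    flows.append(q)
            return np.asarray(flows)

        rng = np.random.default_rng(7)
        means = 50 + 30 * np.sin(2 * np.pi * np.arange(12) / 12)  # hypothetical monthly means
        stds = np.full(12, 10.0)                                  # hypothetical monthly std devs
        r = np.full(12, 0.6)                                      # hypothetical lag-1 correlations
        synthetic = thomas_fiering(means, stds, r, n_years=20, rng=rng)
        print(synthetic[:12].round(1))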

  5. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because ANN is a data-driven approach which can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
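
    A condensed sketch of the kind of comparison described above: Box-Cox-transform the series, train a neural network on lagged values, and score it against a seasonal ARIMA fit. The simulated price series, lag count, network size, and model orders are illustrative assumptions, and errors are compared on the transformed scale for simplicity.

        import numpy as np
        from scipy.stats import boxcox
        from sklearn.neural_network import MLPRegressor
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        t = np.arange(400)
        price = 100 + 0.1 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 400)

        y, lam = boxcox(price)                       # Box-Cox preprocessing
        p = 12                                       # number of lagged inputs
        X = np.array([y[i : i + p] for i in range(len(y) - p)])
        target = y[p:]
        split = 350

        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        mlp.fit(X[:split], target[:split])
        mape_ann = np.mean(np.abs(mlp.predict(X[split:]) - target[split:])
                           / np.abs(target[split:]))

        sarima = SARIMAX(y[: split + p], order=(1, 1, 1),
                         seasonal_order=(1, 0, 0, 12)).fit(disp=False)
        fc = sarima.forecast(steps=len(y) - split - p)
        mape_sar = np.mean(np.abs(fc - target[split:]) / np.abs(target[split:]))
        print(f"ANN MAPE {mape_ann:.4f}  SARIMA MAPE {mape_sar:.4f}")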

  6. Nonlinear time series analysis of the human electrocardiogram

    International Nuclear Information System (INIS)

    Perc, Matjaz

    2005-01-01

    We analyse the human electrocardiogram with simple nonlinear time series analysis methods that are appropriate for graduate as well as undergraduate courses. In particular, attention is devoted to the notions of determinism and stationarity in physiological data. We emphasize that methods of nonlinear time series analysis can be successfully applied only if the studied data set originates from a deterministic stationary system. After positively establishing the presence of determinism and stationarity in the studied electrocardiogram, we calculate the maximal Lyapunov exponent, thus providing interesting insights into the dynamics of the human heart. Moreover, to facilitate interest and enable the integration of nonlinear time series analysis methods into the curriculum at an early stage of the educational process, we also provide user-friendly programs for each implemented method

  7. Multichannel biomedical time series clustering via hierarchical probabilistic latent semantic analysis.

    Science.gov (United States)

    Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary

    2014-11-01

    Biomedical time series clustering that automatically groups a collection of time series according to their internal similarity is of importance for medical record management and inspection such as bio-signals archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, i.e., the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), which was originally developed for visual motion analysis to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to the variations of parameters including length of local segments and dictionary size. Although the experimental evaluation used the multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for multichannel biomedical time series clustering according to their structural similarity, which has many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Hidden Markov Models for Time Series An Introduction Using R

    CERN Document Server

    Zucchini, Walter

    2009-01-01

    Illustrates the flexibility of HMMs as general-purpose models for time series data. This work presents an overview of HMMs for analyzing time series data, from continuous-valued, circular, and multivariate series to binary data, bounded and unbounded counts and categorical observations.

  9. Constructing ordinal partition transition networks from multivariate time series.

    Science.gov (United States)

    Zhang, Jiayang; Zhou, Jie; Tang, Ming; Guo, Heng; Small, Michael; Zou, Yong

    2017-08-10

    A growing number of algorithms have been proposed to map a scalar time series into ordinal partition transition networks. However, most observable phenomena in the empirical sciences are of a multivariate nature. We construct ordinal partition transition networks for multivariate time series. This approach yields weighted directed networks representing the pattern transition properties of time series in velocity space, which hence provides dynamic insights into the underlying system. Furthermore, we propose a measure of entropy to characterize ordinal partition transition dynamics, which is sensitive to capturing the possible local geometric changes of phase space trajectories. We demonstrate the applicability of pattern transition networks to capture phase coherence to non-coherence transitions, and to characterize paths to phase synchronizations. Therefore, we conclude that the ordinal partition transition network approach provides complementary insight to the traditional symbolic analysis of nonlinear multivariate time series.
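
    A sketch of an ordinal partition transition network for a scalar series; the multivariate construction in the record above generalizes this idea. The embedding parameters and the entropy over edge weights are illustrative choices.

        import numpy as np
        from collections import Counter

        def optn(x, dim=3, lag=1):
            n = len(x) - (dim - 1) * lag
            # Map each delay vector to its ordinal pattern (rank order).
            patterns = [tuple(np.argsort(x[i : i + dim * lag : lag])) for i in range(n)]
            edges = Counter(zip(patterns[:-1], patterns[1:]))   # weighted directed edges
            total = sum(edges.values())
            w = np.array([c / total for c in edges.values()])
            entropy = -np.sum(w * np.log(w))                    # transition entropy
            return edges, entropy

        rng = np.random.default_rng(3)
        t = np.arange(2000)
        noisy_sine = np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)
        edges, h = optn(noisy_sine)
        print(f"{len(edges)} distinct pattern transitions, entropy = {h:.3f}")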

  10. Permutation entropy of finite-length white-noise time series.

    Science.gov (United States)

    Little, Douglas J; Kane, Deb M

    2016-08-01

    Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ^{2} distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
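
    A small numerical check in the spirit of the record above: the permutation entropy of finite white-noise series scatters below its asymptotic value ln(D!), and the likelihood-ratio-style statistic 2N(ln(D!) - PE) can be histogrammed against a chi-squared law with D!-1 degrees of freedom. The overlapping ordinal windows used here make this only a rough check; the paper treats the trial statistics exactly.

        import math
        import numpy as np

        def permutation_entropy(x, dim):
            n = len(x) - dim + 1
            pats = np.array([np.argsort(x[i : i + dim]) for i in range(n)])
            _, counts = np.unique(pats, axis=0, return_counts=True)
            p = counts / n
            return -np.sum(p * np.log(p))

        rng = np.random.default_rng(0)
        D, N = 3, 500                    # N ordinal-pattern trials per realization
        stat = []
        for _ in range(2000):
            x = rng.normal(size=N + D - 1)
            pe = permutation_entropy(x, D)
            stat.append(2 * N * (math.log(math.factorial(D)) - pe))
        # Roughly chi^2 with D! - 1 = 5 dof: mean near 5, variance near 10.
        print(np.mean(stat), np.var(stat))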

  11. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)

  12. An Interrupted Time-Series Analysis of Durkheim's Social Deregulation Thesis: The Case of the Russian Federation.

    Science.gov (United States)

    Pridemore, William Alex; Chamlin, Mitchell B; Cochran, John K

    2007-06-01

    The dissolution of the Soviet Union resulted in sudden, widespread, and fundamental changes to Russian society. The former social welfare system, with its broad guarantees of employment, healthcare, education, and other forms of social support, was dismantled in the shift toward democracy, rule of law, and a free-market economy. This unique natural experiment provides a rare opportunity to examine the potentially disintegrative effects of rapid social change on deviance, and thus to evaluate one of Durkheim's core tenets. We took advantage of this opportunity by performing interrupted time-series analyses of annual age-adjusted homicide, suicide, and alcohol-related mortality rates for the Russian Federation using data from 1956 to 2002, with 1992-2002 as the postintervention time frame. The ARIMA models indicate that, controlling for the long-term processes that generated these three time series, the breakup of the Soviet Union was associated with an appreciable increase in each of the cause-of-death rates. We interpret these findings as being consistent with the Durkheimian hypothesis that rapid social change disrupts social order, thereby increasing the level of crime and deviance.
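
    A hedged sketch of the interrupted time-series design described above: an ARIMA model with a step-intervention regressor at the breakup year. The mortality series here is simulated and the model order is illustrative; only the design (annual data, step at 1992) mirrors the study.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        years = pd.period_range("1956", "2002", freq="Y")
        rng = np.random.default_rng(5)
        baseline = 10 + np.cumsum(rng.normal(0, 0.3, len(years)))   # drifting pre-trend
        step = (years.year >= 1992).astype(float)                   # intervention dummy
        rate = baseline + 4.0 * step + rng.normal(0, 0.5, len(years))

        model = SARIMAX(rate, exog=step, order=(1, 1, 0), trend="n")
        fit = model.fit(disp=False)
        # The exog coefficient (first element) estimates the post-1992 level shift.
        print(fit.params)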

  13. Timing calibration and spectral cleaning of LOFAR time series data

    NARCIS (Netherlands)

    Corstanje, A.; Buitink, S.; Enriquez, J. E.; Falcke, H.; Horandel, J. R.; Krause, M.; Nelles, A.; Rachen, J. P.; Schellart, P.; Scholten, O.; ter Veen, S.; Thoudam, S.; Trinh, T. N. G.

    We describe a method for spectral cleaning and timing calibration of short time series data of the voltage in individual radio interferometer receivers. It makes use of phase differences in fast Fourier transform (FFT) spectra across antenna pairs. For strong, localized terrestrial sources these are

  14. Time-Series Analysis: A Cautionary Tale

    Science.gov (United States)

    Damadeo, Robert

    2015-01-01

    Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.

  15. Characterizing interdependencies of multiple time series theory and applications

    CERN Document Server

    Hosoya, Yuzo; Takimoto, Taro; Kinoshita, Ryo

    2017-01-01

    This book introduces academic researchers and professionals to the basic concepts and methods for characterizing interdependencies of multiple time series in the frequency domain. Detecting causal directions between a pair of time series and the extent of their effects, as well as testing the non existence of a feedback relation between them, have constituted major focal points in multiple time series analysis since Granger introduced the celebrated definition of causality in view of prediction improvement. Causality analysis has since been widely applied in many disciplines. Although most analyses are conducted from the perspective of the time domain, a frequency domain method introduced in this book sheds new light on another aspect that disentangles the interdependencies between multiple time series in terms of long-term or short-term effects, quantitatively characterizing them. The frequency domain method includes the Granger noncausality test as a special case. Chapters 2 and 3 of the book introduce an i...

  16. A perturbative approach for enhancing the performance of time series forecasting.

    Science.gov (United States)

    de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C

    2017-04-01

    This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecast for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, then it can be used to improve the accuracy of the initial model, and a new predictive model is adjusted using the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods tested here and with ten other results found in the literature. Results show that not only is the performance of the initial model significantly improved, but also that the proposed method outperforms the other results previously published. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Drunk driving detection based on classification of multivariate time series.

    Science.gov (United States)

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent the multivariate time series, and a bottom-up algorithm was then employed to segment them. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective, and the approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.

  18. Evaluation of scaling invariance embedded in short time series.

    Directory of Open Access Journals (Sweden)

    Xue Pan

    Full Text Available Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2,0.3,...,0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records.

  19. Evaluation of scaling invariance embedded in short time series.

    Science.gov (United States)

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2,0.3,...,0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records.

  20. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.

  1. Insurance-markets Equilibrium with Sequential Non-convex Straight-time and Over-time Labor Supply

    OpenAIRE

    Vasilev, Aleksandar

    2016-01-01

    This note describes the lottery- and insurance-market equilibrium in an economy with non-convex straight-time and overtime employment. In contrast to Hansen and Sargent (1988), the overtime decision is a sequential one. This requires two separate insurance markets to operate, one for straight-time work, and one for overtime. In addition, given that the labor choice for regular and overtime hours is made in succession, the insurance market for overtime needs to open once the insurance market ...

  2. Geomechanical time series and its singularity spectrum analysis

    Czech Academy of Sciences Publication Activity Database

    Lyubushin, Alexei A.; Kaláb, Zdeněk; Lednická, Markéta

    2012-01-01

    Roč. 47, č. 1 (2012), s. 69-77 ISSN 1217-8977 R&D Projects: GA ČR GA105/09/0089 Institutional research plan: CEZ:AV0Z30860518 Keywords: geomechanical time series * singularity spectrum * time series segmentation * laser distance meter Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 0.347, year: 2012 http://www.akademiai.com/content/88v4027758382225/fulltext.pdf

  3. Pseudo-random bit generator based on lag time series

    Science.gov (United States)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have used a delay in the generation of the time series. When these new series are mapped, xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
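
    A minimal sketch of the core idea in the record above: iterate the logistic map, keep only every k-th value (the lag that hides the map), and threshold to bits. This is simplified to a single lagged series; the paper combines two lag series with positive and negative bifurcation parameters, and the seed, lag, and thresholding rule below are illustrative assumptions.

        def logistic_prbg(n_bits, x0=0.31, r=3.99, lag=7):
            x = x0
            bits = []
            for _ in range(n_bits):
                for _ in range(lag):              # skip `lag` iterates between outputs
                    x = r * x * (1.0 - x)
                bits.append(1 if x >= 0.5 else 0)
            return bits

        stream = logistic_prbg(64)
        print("".join(map(str, stream)))

    Plotting the kept values x_n against x_{n+1} would show the "cloud of points" effect the abstract describes, since consecutive outputs are separated by several hidden iterations.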

  4. Analysis of JET ELMy time series

    International Nuclear Information System (INIS)

    Zvejnieks, G.; Kuzovkov, V.N.

    2005-01-01

    Full text: Achievement of the planned operational regime in the next generation of tokamaks (such as ITER) still faces principal problems. One of the main challenges is obtaining control of edge localized modes (ELMs), which should lead to both long plasma pulse times and reasonable divertor life time. In order to control ELMs, the hypothesis was proposed by Degeling [1] that ELMs exhibit features of chaotic dynamics and thus standard chaos control methods might be applicable. However, our findings, which are based on the nonlinear autoregressive (NAR) model, contradict this hypothesis for JET ELMy time-series. In turn, it means that ELM behavior is of a relaxation or random type. These conclusions coincide with our previous results obtained for ASDEX Upgrade time series [2]. [1] A.W. Degeling, Y.R. Martin, P.E. Bak, J. B.Lister, and X. Llobet, Plasma Phys. Control. Fusion 43, 1671 (2001). [2] G. Zvejnieks, V.N. Kuzovkov, O. Dumbrajs, A.W. Degeling, W. Suttrop, H. Urano, and H. Zohm, Physics of Plasmas 11, 5658 (2004)

  5. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George

  6. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify in the data series sub periods with a similar structure. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first one corresponding to seven simulated series with autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions

  7. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in reducing the prediction error of time series data within polynomial time.

  8. Two Machine Learning Approaches for Short-Term Wind Speed Time-Series Prediction.

    Science.gov (United States)

    Ak, Ronay; Fink, Olga; Zio, Enrico

    2016-08-01

    The increasing liberalization of European electricity markets, the growing proportion of intermittent renewable energy being fed into the energy grids, and also new challenges in the patterns of energy consumption (such as electric mobility) require flexible and intelligent power grids capable of providing efficient, reliable, economical, and sustainable energy production and distribution. From the supplier side, particularly, the integration of renewable energy sources (e.g., wind and solar) into the grid imposes an engineering and economic challenge because of the limited ability to control and dispatch these energy sources due to their intermittent characteristics. Time-series prediction of wind speed for wind power production is a particularly important and challenging task, wherein prediction intervals (PIs) are preferable results of the prediction, rather than point estimates, because they provide information on the confidence in the prediction. In this paper, two different machine learning approaches to assess PIs of time-series predictions are considered and compared: 1) multilayer perceptron neural networks trained with a multiobjective genetic algorithm and 2) extreme learning machines combined with the nearest neighbors approach. The proposed approaches are applied for short-term wind speed prediction from a real data set of hourly wind speed measurements for the region of Regina in Saskatchewan, Canada. Both approaches demonstrate good prediction precision and provide complementary advantages with respect to different evaluation criteria.

  9. Jumps and stochastic volatility in oil prices: Time series evidence

    International Nuclear Information System (INIS)

    Larsson, Karl; Nossman, Marcus

    2011-01-01

    In this paper we examine the empirical performance of affine jump diffusion models with stochastic volatility in a time series study of crude oil prices. We compare four different models and estimate them using the Markov Chain Monte Carlo method. The support for a stochastic volatility model including jumps in both prices and volatility is strong and the model clearly outperforms the others in terms of a superior fit to data. Our estimation method allows us to obtain a detailed study of oil prices during two periods of extreme market stress included in our sample; the Gulf war and the recent financial crisis. We also address the economic significance of model choice in two option pricing applications. The implied volatilities generated by the different estimated models are compared and we price a real option to develop an oil field. Our findings indicate that model choice can have a material effect on the option values.

  10. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a particular...

  11. Multiscale sample entropy and cross-sample entropy based on symbolic representation and similarity of stock markets

    Science.gov (United States)

    Wu, Yue; Shang, Pengjian; Li, Yilong

    2018-03-01

    A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, like financial time series. We apply MSEBSS to financial markets, and results show American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets having relatively high synchrony, the entropy values will decrease with the increasing scale factor, while for stock markets having high asynchrony, the entropy values will not decrease with the increasing scale factor; sometimes they tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
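
    For reference, a sketch of the standard multiscale sample entropy (MSE) pipeline that the MSEBSS measure above modifies: coarse-grain the series at each scale and compute sample entropy on the result. The m and r parameters are the usual SampEn choices, not values from the paper, and the white-noise input is a stand-in for return series.

        import numpy as np

        def sample_entropy(x, m=2, r=0.15):
            r *= np.std(x)
            def count(mm):
                templ = np.array([x[i : i + mm] for i in range(len(x) - mm)])
                c = 0
                for i in range(len(templ)):
                    d = np.max(np.abs(templ - templ[i]), axis=1)  # Chebyshev distance
                    c += np.sum(d <= r) - 1                        # exclude self-match
                return c
            b, a = count(m), count(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf   # undefined -> inf

        def mse(x, scales):
            out = []
            for s in scales:
                n = len(x) // s
                coarse = x[: n * s].reshape(n, s).mean(axis=1)     # coarse-graining
                out.append(sample_entropy(coarse))
            return out

        rng = np.random.default_rng(2)
        returns = rng.normal(size=2000)
        print([round(v, 3) for v in mse(returns, scales=[1, 2, 4, 8])])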

  12. Nonparametric factor analysis of time series

    OpenAIRE

    Rodríguez-Poo, Juan M.; Linton, Oliver Bruce

    1998-01-01

    We introduce a nonparametric smoothing procedure for nonparametric factor analaysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.

  13. Time Series Outlier Detection Based on Sliding Window Prediction

    Directory of Open Access Journals (Sweden)

    Yufeng Yu

    2014-01-01

    Full Text Available In order to detect outliers in hydrological time series data for improving data quality and decision-making quality related to design, operation, and management of water resources, this research develops a time series outlier detection method for hydrologic data that can be used to identify data that deviate from historical patterns. The method first builds a forecasting model on the historical data and then uses it to predict future values. Anomalies are assumed to take place if the observed values fall outside a given prediction confidence interval (PCI), which can be calculated from the predicted value and confidence coefficient. The use of the PCI as threshold is mainly based on the fact that it considers the uncertainty in the data series parameters in the forecasting model, addressing the problem of suitable threshold selection. The method performs fast, incremental evaluation of data as it becomes available, scales to large quantities of data, and requires no preclassification of anomalies. Experiments with different real-world hydrologic time series showed that the proposed method is fast, correctly identifies abnormal data, and can be used for hydrologic time series analysis.
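
    A sketch of the sliding-window scheme described above: fit a forecaster on the trailing window, predict the next value, and flag observations falling outside the prediction confidence interval (PCI). A lag-1 linear predictor and a z-score interval stand in for the paper's forecasting model and confidence coefficient.

        import numpy as np

        def detect_outliers(x, window=48, z=3.0):
            flags = []
            for t in range(window, len(x)):
                hist = x[t - window : t]
                # One-step lag-1 linear prediction fitted on the window.
                a, b = np.polyfit(hist[:-1], hist[1:], 1)
                pred = a * hist[-1] + b
                resid = hist[1:] - (a * hist[:-1] + b)
                half_width = z * np.std(resid)               # PCI half-width
                flags.append(abs(x[t] - pred) > half_width)
            return np.flatnonzero(flags) + window            # absolute indices

        rng = np.random.default_rng(9)
        flow = np.sin(np.linspace(0, 20, 600)) + 0.1 * rng.normal(size=600)
        flow[300] += 2.5                                     # inject an anomaly
        print(detect_outliers(flow))                         # should include index 300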

  14. Metagenomics meets time series analysis: unraveling microbial community dynamics

    NARCIS (Netherlands)

    Faust, K.; Lahti, L.M.; Gonze, D.; Vos, de W.M.; Raes, J.

    2015-01-01

    The recent increase in the number of microbial time series studies offers new insights into the stability and dynamics of microbial communities, from the world's oceans to human microbiota. Dedicated time series analysis tools allow taking full advantage of these data. Such tools can reveal periodic

  15. Time series forecasting based on deep extreme learning machine

    NARCIS (Netherlands)

    Guo, Xuqi; Pang, Y.; Yan, Gaowei; Qiao, Tiezhu; Yang, Guang-Hong; Yang, Dan

    2017-01-01

    Multi-layer Artificial Neural Networks (ANN) have caught widespread attention as a new method for time series forecasting due to their ability to approximate any nonlinear function. In this paper, a new local time series prediction model is established with the nearest neighbor domain theory, in

  16. False-nearest-neighbors algorithm and noise-corrupted time series

    International Nuclear Information System (INIS)

    Rhodes, C.; Morari, M.

    1997-01-01

    The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented. copyright 1997 The American Physical Society
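
    A compact version of the false-nearest-neighbors test discussed above: for each embedding dimension, find each point's nearest neighbor and ask whether adding the next delay coordinate pushes it far away. The threshold is the usual heuristic value and, as the record warns, the verdict degrades once noise is added; the noise-free Henon map below is just a clean illustration.

        import numpy as np
        from scipy.spatial import cKDTree

        def fnn_fraction(x, dim, lag=1, rtol=15.0):
            n = len(x) - dim * lag
            emb = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
            nxt = x[dim * lag : dim * lag + n]        # the would-be (dim+1)-th coordinate
            dist, idx = cKDTree(emb).query(emb, k=2)  # k=2 so the self-match is skipped
            d, j = dist[:, 1], idx[:, 1]
            false = np.abs(nxt - nxt[j]) / np.maximum(d, 1e-12) > rtol
            return false.mean()

        # Henon map: a scalar observable that needs two delay coordinates.
        xs = np.empty(3000)
        x, y = 0.1, 0.1
        for i in range(xs.size):
            xs[i] = x
            x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
        for dim in (1, 2, 3, 4):
            print(dim, round(fnn_fraction(xs, dim), 3))  # drops sharply at dim = 2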

  17. CauseMap: fast inference of causality from complex time series.

    Science.gov (United States)

    Maher, M Cyrus; Hernandez, Ryan D

    2015-01-01

    Background. Establishing health-related causal relationships is a central pursuit in biomedical research. Yet, the interdependent non-linearity of biological systems renders causal dynamics laborious and at times impractical to disentangle. This pursuit is further impeded by the dearth of time series that are sufficiently long to observe and understand recurrent patterns of flux. However, as data generation costs plummet and technologies like wearable devices democratize data collection, we anticipate a coming surge in the availability of biomedically-relevant time series data. Given the life-saving potential of these burgeoning resources, it is critical to invest in the development of open source software tools that are capable of drawing meaningful insight from vast amounts of time series data. Results. Here we present CauseMap, the first open source implementation of convergent cross mapping (CCM), a method for establishing causality from long time series data (≳25 observations). Compared to existing time series methods, CCM has the advantage of being model-free and robust to unmeasured confounding that could otherwise induce spurious associations. CCM builds on Takens' Theorem, a well-established result from dynamical systems theory that requires only mild assumptions. This theorem allows us to reconstruct high dimensional system dynamics using a time series of only a single variable. These reconstructions can be thought of as shadows of the true causal system. If reconstructed shadows can predict points from opposing time series, we can infer that the corresponding variables are providing views of the same causal system, and so are causally related. Unlike traditional metrics, this test can establish the directionality of causation, even in the presence of feedback loops. Furthermore, since CCM can extract causal relationships from time series of, e.g., a single individual, it may be a valuable tool for personalized medicine. We implement CCM in Julia, a

  18. CauseMap: fast inference of causality from complex time series

    Directory of Open Access Journals (Sweden)

    M. Cyrus Maher

    2015-03-01

    Full Text Available Background. Establishing health-related causal relationships is a central pursuit in biomedical research. Yet, the interdependent non-linearity of biological systems renders causal dynamics laborious and at times impractical to disentangle. This pursuit is further impeded by the dearth of time series that are sufficiently long to observe and understand recurrent patterns of flux. However, as data generation costs plummet and technologies like wearable devices democratize data collection, we anticipate a coming surge in the availability of biomedically-relevant time series data. Given the life-saving potential of these burgeoning resources, it is critical to invest in the development of open source software tools that are capable of drawing meaningful insight from vast amounts of time series data. Results. Here we present CauseMap, the first open source implementation of convergent cross mapping (CCM), a method for establishing causality from long time series data (≳25 observations). Compared to existing time series methods, CCM has the advantage of being model-free and robust to unmeasured confounding that could otherwise induce spurious associations. CCM builds on Takens' Theorem, a well-established result from dynamical systems theory that requires only mild assumptions. This theorem allows us to reconstruct high dimensional system dynamics using a time series of only a single variable. These reconstructions can be thought of as shadows of the true causal system. If reconstructed shadows can predict points from opposing time series, we can infer that the corresponding variables are providing views of the same causal system, and so are causally related. Unlike traditional metrics, this test can establish the directionality of causation, even in the presence of feedback loops. Furthermore, since CCM can extract causal relationships from time series of, e.g., a single individual, it may be a valuable tool for personalized medicine. We implement
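
    A bare-bones convergent cross mapping sketch following the two records above (the CauseMap package itself is implemented in Julia; this Python outline only illustrates the mechanics). Delay-embed the putative effect Y, use its nearest neighbors to predict the putative cause X, and check that the skill rises with library size. The embedding parameters and coupled logistic maps are the standard illustrative choices, not the package's defaults.

        import numpy as np
        from scipy.spatial import cKDTree

        def ccm_skill(x, y, lib, E=2, tau=1):
            n = lib - (E - 1) * tau
            manifold = np.column_stack([y[i * tau : i * tau + n] for i in range(E)])
            targets = x[(E - 1) * tau : (E - 1) * tau + n]
            dist, idx = cKDTree(manifold).query(manifold, k=E + 2)
            dist, idx = dist[:, 1:], idx[:, 1:]              # drop the self-neighbor
            w = np.exp(-dist / np.maximum(dist[:, :1], 1e-12))
            w /= w.sum(axis=1, keepdims=True)                # simplex weights
            pred = (w * targets[idx]).sum(axis=1)
            return np.corrcoef(pred, targets)[0, 1]

        # Coupled logistic maps: x drives y, so y's shadow manifold should map x.
        N = 1200
        x = np.empty(N); y = np.empty(N); x[0], y[0] = 0.4, 0.2
        for t in range(N - 1):
            x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
            y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
        for lib in (100, 400, 1100):
            print(lib, round(ccm_skill(x, y, lib), 3))       # skill grows with lib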

  19. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  20. Modeling category-level purchase timing with brand-level marketing variables

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard)

    2003-01-01

    textabstractPurchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from

  1. Track Irregularity Time Series Analysis and Trend Forecasting

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2012-01-01

    Full Text Available The combination of linear and nonlinear methods is widely used in the prediction of time series data. This paper analyzes track irregularity time series data by using gray incidence degree models and methods of data transformation, trying to find the connotative relationship between the time series data. In this paper, GM(1,1), which is based on first-order, single-variable linear differential equations, is used, after an adaptive improvement and error correction, to predict the long-term changing trend of track irregularity at a fixed measuring point; the stochastic linear AR, Kalman filtering model, and artificial neural network model are applied to predict the short-term changing trend of track irregularity at unit section. Both long-term and short-term changes prove that the model is effective and can achieve the expected accuracy.
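
    A minimal GM(1,1) grey-model sketch matching the description above: accumulate the series, fit the first-order differential equation by least squares, and predict via the discrete solution. The track-irregularity values below are hypothetical placeholders.

        import numpy as np

        def gm11_forecast(x, steps=1):
            x1 = np.cumsum(x)                                # accumulated series
            z = 0.5 * (x1[1:] + x1[:-1])                     # background values
            B = np.column_stack([-z, np.ones(z.size)])
            a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # dx1/dt + a*x1 = b
            k = np.arange(len(x) + steps)
            x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a # solution of the ODE
            return np.diff(x1_hat)[-steps:]                  # de-accumulate

        track_irreg = np.array([5.2, 5.4, 5.9, 6.1, 6.8, 7.1, 7.8])  # hypothetical data
        print(gm11_forecast(track_irreg, steps=2).round(3))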

  2. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
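
    A sketch of the AR-surrogate idea underlying the record above, in its simpler time-invariant form (the paper's contribution is the time-varying extension): fit an AR model, then rebuild the series with shuffled residuals so that linear structure is preserved while any nonlinearity is destroyed. The toy series and lag order are illustrative.

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        def ar_surrogate(x, lags, rng):
            fit = AutoReg(x, lags=lags).fit()
            intercept, coefs = fit.params[0], fit.params[1:]
            surr = list(x[:lags])                         # seed with the first values
            for e in rng.permutation(fit.resid):          # shuffled residual draws
                past = surr[-lags:][::-1]                 # most recent lag first
                surr.append(intercept + np.dot(coefs, past) + e)
            return np.asarray(surr)

        rng = np.random.default_rng(1)
        hr = np.sin(np.linspace(0, 60, 500)) + 0.3 * rng.normal(size=500)  # toy HRV-like series
        s = ar_surrogate(hr, lags=5, rng=rng)
        print(len(s), s[:5].round(2))

    A discriminating statistic (such as the sample entropy used in the record) computed on the original series is then compared against its distribution over an ensemble of such surrogates.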

  3. Local normalization: Uncovering correlations in non-stationary financial time series

    Science.gov (United States)

    Schäfer, Rudi; Guhr, Thomas

    2010-09-01

    The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
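
    A direct sketch of the local normalization described above: subtract a rolling mean and divide by a rolling standard deviation so that local trends and time-varying volatility are removed before estimating cross-correlations. The window length and the simulated price paths are illustrative assumptions.

        import numpy as np
        import pandas as pd

        def local_normalize(prices, window=13):
            r = np.log(prices).diff()                    # log returns
            mu = r.rolling(window).mean()
            sigma = r.rolling(window).std()
            return ((r - mu) / sigma).dropna()           # locally normalized returns

        rng = np.random.default_rng(8)
        p1 = pd.Series(np.exp(np.cumsum(0.01 * rng.normal(size=1000))))
        p2 = pd.Series(np.exp(np.cumsum(0.01 * rng.normal(size=1000))))
        n1, n2 = local_normalize(p1), local_normalize(p2)
        print(n1.corr(n2))                               # cross-correlation estimate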

  4. Fuzzy time-series based on Fibonacci sequence for stock price forecasting

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia

    2007-07-01

    Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. To promote the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs a 5-year period of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performances with Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.

  5. Parameterizing unconditional skewness in models for financial time series

    DEFF Research Database (Denmark)

    He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate...

  6. Mean reversion in the US stock market

    International Nuclear Information System (INIS)

    Serletis, Apostolos; Rosenberg, Aryeh Adam

    2009-01-01

    This paper revisits the evidence for the weaker form of the efficient market hypothesis, building on recent work by Serletis and Shintani [Serletis A, Shintani M. No evidence of chaos but some evidence of dependence in the US stock market. Chaos, Solitons and Fractals 2003;17:449-54], Elder and Serletis [Elder J, Serletis A. On fractional integrating dynamics in the US stock market. Chaos, Solitons and Fractals 2007;34:777-81], Koustas et al. [Koustas Z, Lamarche J-F, Serletis A. Threshold random walks in the US stock market. Chaos, Solitons and Fractals, forthcoming], Hinich and Serletis [Hinich M, Serletis A. Randomly modulated periodicity in the US stock market. Chaos, Solitons and Fractals, forthcoming], and Serletis et al. [Serletis A, Uritskaya OY, Uritsky VM. Detrended fluctuation analysis of the US stock market. Int J Bifurc Chaos, forthcoming]. In doing so, we use daily data over the period from 5 February 1971 to 1 December 2006 (a total of 9045 observations) on four US stock market indexes - the Dow Jones Industrial Average, the Standard and Poor's 500 Index, the NASDAQ Composite Index, and the NYSE Composite Index - and a new statistical physics approach, namely the 'detrending moving average (DMA)' technique, recently introduced by Alessio et al. [Alessio E, Carbone A, Castelli G, Frappietro V. Second-order moving average and scaling of stochastic time series. Euro Phys J B 2002;27:197-200] and further developed by Carbone et al. [Carbone A, Castelli G, Stanley HE. Time-dependent Hurst exponent in financial time series. Physica A 2004;344:267-71; Carbone A, Castelli G, Stanley HE. Analysis of clusters formed by the moving average of a long-range correlated time series. Phys Rev E 2004;69:026105]. The robustness of the results to the use of alternative testing methodologies is also investigated, by using Lo's [Lo AW. Long-term memory in stock market prices. Econometrica 1991;59:1279-313] modified rescaled range analysis. We conclude that US stock market returns display mean reversion.
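
    The DMA technique itself is compact: the fluctuation of the series around a backward moving average of window n scales as sigma_DMA(n) ~ n^H, and the Hurst exponent H is read off a log-log fit. A minimal sketch with arbitrary window choices:

```python
import numpy as np

def dma_hurst(y, windows=(4, 8, 16, 32, 64, 128)):
    """Hurst exponent via the detrending moving average:
    sigma_DMA(n) ~ n**H, using a trailing moving average as the trend."""
    y = np.asarray(y, dtype=float)
    sigmas = []
    for n in windows:
        ma = np.convolve(y, np.ones(n) / n, mode="valid")   # trailing MA
        sigmas.append(np.sqrt(np.mean((y[n - 1:] - ma) ** 2)))
    H, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
    return H

# Sanity check: an uncorrelated random walk should give H close to 0.5
print(dma_hurst(np.cumsum(np.random.default_rng(0).standard_normal(20000))))
```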

  8. Impact of pharmaceutical policy interventions on utilization of antipsychotic medicines in Finland and Portugal in times of economic recession: interrupted time series analyses.

    Science.gov (United States)

    Leopold, Christine; Zhang, Fang; Mantel-Teeuwisse, Aukje K; Vogler, Sabine; Valkova, Silvia; Ross-Degnan, Dennis; Wagner, Anita K

    2014-07-25

    To analyze the impacts of pharmaceutical sector policies implemented to contain spending during the economic recession (a reference price system in Finland; in Portugal, a mix of policies including changes in reimbursement rates, a generic promotion campaign, and discounts granted to the public payer) on utilization of antipsychotic medicines, as a proxy for access to them. We obtained monthly IMS Health sales data in standard units of antipsychotic medicines in Portugal and Finland for the period January 2007 to December 2011. We used an interrupted time series design to estimate changes in overall use and generic market shares by comparing pre-policy and post-policy levels and trends. Both countries' policy approaches were associated with slight, likely unintended, decreases in overall use of antipsychotic medicines and with increases in generic market shares of major antipsychotic products. In Finland, quetiapine and risperidone generic market shares increased substantially (estimates one year post-policy compared to before: quetiapine, 6.80% [3.92%, 9.68%]; risperidone, 11.13% [6.79%, 15.48%]). The policy interventions in Portugal resulted in a substantially increased generic market share for amisulpride (estimate one year post-policy compared to before: 22.95% [21.01%, 24.90%]); generic risperidone already dominated the market prior to the policy interventions. The different policy approaches to containing pharmaceutical expenditures in times of economic recession in Finland and Portugal had intended impacts (increased use of generics) and likely unintended impacts (slightly decreased overall sales, possibly consistent with decreased access to needed medicines). These findings highlight the importance of monitoring and evaluating the effects of pharmaceutical policy interventions on use of medicines and health outcomes.
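
    The segmented-regression form of such an interrupted time series design can be sketched with simulated data: the coefficient on the post-policy indicator estimates the immediate level change, and the coefficient on the post-policy trend term estimates the change in slope. The sketch omits the seasonality and autocorrelation adjustments a full analysis would include, and all numbers are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly utilization series; the policy starts at month 36.
rng = np.random.default_rng(0)
n, t0 = 60, 36
df = pd.DataFrame({"t": np.arange(n)})
df["post"] = (df["t"] >= t0).astype(int)        # level-change indicator
df["t_post"] = np.maximum(df["t"] - t0, 0)      # post-policy trend term
df["use"] = (100 + 0.5 * df["t"] - 8 * df["post"]
             - 0.4 * df["t_post"] + rng.normal(0, 2, n))

# Segmented regression: pre-policy level/trend plus post-policy changes
fit = smf.ols("use ~ t + post + t_post", data=df).fit()
print(fit.params)  # 'post' = immediate level change; 't_post' = slope change
```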

  9. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks a time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a globally non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used to give the mixture model a more flexible structure. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  10. The study of Thai stock market across the 2008 financial crisis

    Science.gov (United States)

    Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik

    2016-11-01

    Cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced by the coupling of behavior fields among traders during a financial market crash acts like a gravitational field in financial-market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash through the degree of the cohomology group of the sphere over a tensor field in the correlation matrix of all possible dominated stocks underlying Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time scale decomposition of the correlation matrix, and the calculation of the closeness centrality of a planar graph.

  11. Time-varying economic dominance in financial markets: A bistable dynamics approach

    Science.gov (United States)

    He, Xue-Zhong; Li, Kai; Wang, Chuncheng

    2018-05-01

    By developing a continuous-time heterogeneous agent financial market model of multiple assets traded by fundamental and momentum investors, we provide a potential mechanism for generating time-varying dominance between fundamental and non-fundamental trading in financial markets. We show that investment constraints lead to the coexistence of a locally stable fundamental steady state and a locally stable limit cycle around the fundamental, characterized by a Bautin bifurcation. This provides a mechanism for market prices to switch stochastically between two persistent but very different market states, leading to the coexistence and time-varying dominance of the seemingly contradictory efficient-market and price-momentum behaviors over different time periods. The model also generates other financial market stylized facts, such as spillover effects in both momentum and volatility, market booms, crashes, and correlation reduction due to cross-sectional momentum trading. Empirical evidence based on the U.S. market supports the main findings. The mechanism developed in this paper can be used to characterize time-varying economic dominance in economics and finance in general.

  12. The Prediction of Teacher Turnover Employing Time Series Analysis.

    Science.gov (United States)

    Costa, Crist H.

    The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…
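
    Of the two methods named, exponential smoothing is the simpler to sketch: the forecast is a geometrically weighted average of past observations. The smoothing constant and the turnover counts below are purely illustrative.

```python
import numpy as np

def exp_smooth_forecast(y, alpha=0.3):
    """Simple exponential smoothing: a geometrically weighted average of
    past observations; the next-period forecast is the final level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

turnover = np.array([120., 132., 118., 141., 129., 137.])  # hypothetical counts
print(exp_smooth_forecast(turnover))   # forecast for the next period
```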

  13. Record statistics of financial time series and geometric random walks.

    Science.gov (United States)

    Sabir, Behlool; Santhanam, M S

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of the geometric random walk series are in good agreement with those obtained from empirical stock data.
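
    Record ages of the kind studied here are straightforward to extract numerically: a record occurs whenever the series exceeds all previous values, and the age is the waiting time until the next record. A sketch on a geometric random walk with illustrative parameters:

```python
import numpy as np

def record_ages(x):
    """Waiting times between successive upper records of a series."""
    rec = [0]
    for t in range(1, len(x)):
        if x[t] > x[rec[-1]]:
            rec.append(t)
    return np.diff(rec)

# Geometric random walk as a stand-in for a price series
rng = np.random.default_rng(1)
price = np.exp(np.cumsum(rng.normal(0.0, 0.01, 100_000)))
ages = record_ages(price)
# The paper reports power-law distributed ages (exponent ~1.5-1.8 for
# stock data); inspect e.g. a log-log histogram of `ages` to compare.
```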

  14. Stacked Heterogeneous Neural Networks for Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Florin Leon

    2010-01-01

    A hybrid model for time series forecasting is proposed. It is a stacked neural network containing one standard multilayer perceptron with bipolar sigmoid activation functions and another with an exponential activation function in the output layer. As the case studies show, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.

  15. Chaotic time series prediction: From one to another

    International Nuclear Information System (INIS)

    Zhao Pengfei; Xing Lei; Yu Jun

    2009-01-01

    In this Letter, a new local linear prediction model is proposed to predict the chaotic time series of a component x(t) by using the chaotic time series of another component y(t) of the same system. Our approach is based on the phase space reconstruction coming from the Takens embedding theorem. To illustrate our results, we present an example based on the Lorenz system and compare with the performance of the original local linear prediction model.
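
    The idea can be sketched with a zeroth-order variant of such a predictor: reconstruct the phase space from delayed copies of y, find the past states closest to the current one, and average the x values that followed them (the Letter fits a local linear map instead, e.g. with x and y two coordinates of the Lorenz system; the embedding parameters below are illustrative):

```python
import numpy as np

def embed(y, dim=3, tau=5):
    """Delay embedding: row t is [y(t), y(t-tau), ..., y(t-(dim-1)*tau)]."""
    m = (dim - 1) * tau
    return np.column_stack([y[m - k * tau:len(y) - k * tau] for k in range(dim)])

def cross_predict(x, y, dim=3, tau=5, k=10):
    """Predict the next x from the reconstructed dynamics of y: average the
    x values that followed the k past y-states nearest to the current one."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Y = embed(y, dim, tau)
    m = (dim - 1) * tau                  # time index of the first state
    d = np.linalg.norm(Y[:-1] - Y[-1], axis=1)
    nn = np.argsort(d)[:k]               # nearest past states
    return x[m + nn + 1].mean()          # x one step after each neighbour
```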

  16. Forecasting autoregressive time series under changing persistence

    DEFF Research Database (Denmark)

    Kruse, Robinson

    Changing persistence in time series models means that a structural change from nonstationarity to stationarity, or vice versa, occurs over time. Such a change has important implications for forecasting, as neglecting it may lead to inaccurate model predictions. This paper derives generally applicable...

  17. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    Science.gov (United States)

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
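
    The input-decay mechanism at the heart of GRU-D can be sketched outside any deep learning framework: a missing entry is imputed by a value that decays from the last observation toward the empirical mean as the time gap grows. In this simplified rendering the decay weights are fixed rather than learned, and all numbers are illustrative.

```python
import numpy as np

def decay_impute(x, mask, delta, x_mean, w=1.0, b=0.0):
    """GRU-D style input decay for one feature: gamma = exp(-max(0, w*delta+b))
    pulls a missing value from the last observation toward the mean."""
    x_hat = np.empty_like(x, dtype=float)
    last = x_mean
    for t in range(len(x)):
        if mask[t]:                      # observed: keep and remember it
            x_hat[t] = x[t]
            last = x[t]
        else:                            # missing: decayed interpolation
            gamma = np.exp(-max(0.0, w * delta[t] + b))
            x_hat[t] = gamma * last + (1 - gamma) * x_mean
    return x_hat

x     = np.array([0.8, 0.0, 0.0, 1.2, 0.0])   # illustrative values
mask  = np.array([1, 0, 0, 1, 0], dtype=bool) # 1 = observed
delta = np.array([0., 1., 2., 0., 1.])        # time since last observation
print(decay_impute(x, mask, delta, x_mean=1.0))
```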

  18. Early experience in centralized real time energy market

    International Nuclear Information System (INIS)

    Alaywan, Z.; Hernandez, L.; Martin, M.

    2005-01-01

    The current structure of the California Independent System Operator (ISO) was described. The study outlined California's transition from a decentralized pool operation to a forward bilateral market through the implementation of a centralized real-time market. Details of the institutional, economic and technological history of the power system were provided. Although the California real-time market was implemented in order to simplify the power system, a number of operational challenges were observed. Discontinuities in the energy curve led to the implementation of a target price process, which aimed to resolve the overlap in energy bids. The design of the ISO's real-time market did not provide a mechanism for bidders to execute real-time energy trades, and regulation bidders internalized energy in their regulation capacity bids. The real-time market application (RTMA) provided the ISO with a substantial computer program to determine and account for nearly all aspects of generation unit scheduling and physical characteristics, including multiple ramp rates. The program combined optimal power flow (OPF) logic for energy flows with mixed-integer nonlinear optimization of trading schedules and of system and security constraints. The RTMA used a multi-period security constrained economic dispatch (SCED) function to optimize energy dispatch schedules. Other features of the RTMA included security constrained unit commitment, security constrained economic dispatch, and dispatch schedule post-processes. It was concluded that implementation of the RTMA has increased the efficiency of the ISO. A case study of the RTMA during an outage in November 2004 was provided. 5 refs., 1 tab., 2 figs
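
    Stripped of network security constraints and ramp limits, the economic dispatch at the core of a SCED function reduces to a small linear program: meet demand at minimum bid cost subject to unit limits. A toy sketch with hypothetical bids:

```python
from scipy.optimize import linprog

cost = [20.0, 35.0, 50.0]        # $/MWh bids of three units (hypothetical)
p_min, p_max = [50, 0, 0], [200, 150, 100]
demand = 280                     # MW to balance in this interval

# minimize cost @ p  s.t.  sum(p) == demand,  p_min <= p <= p_max
res = linprog(c=cost,
              A_eq=[[1, 1, 1]], b_eq=[demand],
              bounds=list(zip(p_min, p_max)))
print(res.x)   # cheapest units loaded first: [200, 80, 0]
```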

  19. Conditional time series forecasting with convolutional neural networks

    NARCIS (Netherlands)

    A. Borovykh (Anastasia); S.M. Bohte (Sander); C.W. Oosterlee (Cornelis)

    2017-01-01

    textabstractForecasting financial time series using past observations has been a significant topic of interest. While temporal relationships in the data exist, they are difficult to analyze and predict accurately due to the non-linear trends and noise present in the series. We propose to learn these

  20. Testing the evolution of crude oil market efficiency: Data have the conn

    International Nuclear Information System (INIS)

    Zhang, Bing; Li, Xiao-Ming; He, Fei

    2014-01-01

    Utilising a time-varying GAR(1)-TGARCH(1,1) model with different frequency data, we investigate the weak-form efficiency of major global crude oil spot markets in Europe, the US, the UAE and China for the period from December 2001 to August 2013. Our empirical results with weekly data indicate that all four markets have reached efficiency with few brief inefficient periods during the past decade, whereas the daily crude oil returns series suggest intermittent and inconsistent efficiency. We argue that the weekly Friday series fit the data better than the average series in autocorrelation tests. The evidence suggests that all four markets exhibit asymmetries in return-volatility reactions to different information shocks and that they react more strongly to bad news than to good news. The 2008 financial crisis has significantly affected the efficiency of oil markets. Furthermore, a comovement phenomenon and volatility spillover effects exist among the oil markets. Policy recommendations consistent with our empirical results are proposed, which address three issues: implementing prudential regulations, establishing an Asian pricing centre and improving transparency in crude oil spot markets. - Highlights: • We adopt a time-varying model to test the weak-form efficiency of crude oil markets. • Weekly oil returns series have been extremely efficient during the past decade. • Daily oil returns series have presented intermittent and inconsistent efficiency. • Oil markets react asymmetrically to different information shocks. • Policy recommendations are proposed according to the degree of efficiency
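
    A time-invariant analogue of this specification can be fitted with the Python arch package, in which the asymmetric ('o') term lets negative shocks raise volatility more than positive ones; the paper's time-varying GAR coefficients are beyond this sketch, and the data below are simulated:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=1000) * 0.8   # stand-in for oil returns

# AR(1) mean with GJR/threshold-GARCH(1,1) volatility
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1)
res = am.fit(disp="off")
print(res.params)   # the gamma[1] term captures the bad-news asymmetry
```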

  1. Analysis of distributed-generation photovoltaic deployment, installation time and cost, market barriers, and policies in China

    International Nuclear Information System (INIS)

    Zhang, Fang; Deng, Hao; Margolis, Robert; Su, Jun

    2015-01-01

    Beginning in 2013, China's photovoltaic (PV) market-development strategy witnessed a series of policy changes aimed at making distributed-generation PV (DG PV) development an equal priority with large-scale PV development. This article reviews the DG PV policy changes since 2013 and examines their effect on China's domestic DG PV market. Based on a 2014 survey of DG PV market and policy participants, we present cost and time breakdowns for installing DG PV projects in China, and we identify the main barriers to DG PV installation. We also use a cash flow model to determine the relative economic attractiveness of DG PV in several eastern provinces in China. The main factors constraining DG PV deployment in China include financial barriers resulting from the structure of the self-consumption feed-in tariff (FIT), ambivalence about DG PV within grid companies, complicated ownership structures for buildings/rooftops/businesses, and the inherent time lag in policy implementation from the central government to provincial and local governments. We conclude with policy implications and suggestions in the context of DG PV policy changes the Chinese government implemented in September 2014. -- Highlights: •We review China's distributed PV market development and policy changes since 2013. •We present cost and time requirements for installing distributed PV in China. •We conduct IRR analysis of distributed PV under different policy frameworks. •We identify barriers to China's distributed PV, especially feed-in tariff barriers

  2. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    …generated recursively up to any step greater than one. For a nonlinear time series model, a point forecast for step one can be done easily, as in the linear case, but forecasting a step greater than or equal to ..... London. Franses, P. H. (1998). Time series models for business and economic forecasting. Cambridge University Press.

  3. Time series analysis of temporal networks

    Science.gov (United States)

    Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh

    2016-01-01

    A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies, only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; an estimate of some of the properties is sufficient to launch the attack. We show in this paper that even if the network structure at a future time point is not available, one can still estimate its properties. We propose a novel method to map a temporal network to a set of time series instances, analyze them, and, using a standard time series forecast model, predict the properties of the temporal network at a later time instance. To this aim, we consider eight properties, such as the number of active nodes, average degree, and clustering coefficient, and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as an Auto-Regressive Integrated Moving Average (ARIMA) process. We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency domain properties of the time series obtained from spectrogram analysis could be used to refine the prediction framework by identifying beforehand the cases where the error in prediction is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application, we show how such a prediction scheme can be used to launch targeted attacks on temporal networks.
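
    The forecasting step can be sketched directly: treat each network property as a univariate series and fit an ARIMA model to it. The property series and the model order below are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily counts of active nodes in a temporal contact network
rng = np.random.default_rng(2)
active = 200 + np.cumsum(rng.normal(0, 3, 120))   # memory-bearing series

model = ARIMA(active, order=(1, 1, 1)).fit()      # order is an assumption
print(model.forecast(steps=7))                    # property 7 steps ahead
```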

  4. The Hierarchical Spectral Merger Algorithm: A New Time Series Clustering Procedure

    KAUST Repository

    Euán, Carolina; Ombao, Hernando; Ortega, Joaquín

    2018-01-01

    We present a new method for time series clustering which we call the Hierarchical Spectral Merger (HSM) method. This procedure is based on the spectral theory of time series and identifies series that share similar oscillations or waveforms.

  5. Notes on economic time series analysis system theoretic perspectives

    CERN Document Server

    Aoki, Masanao

    1983-01-01

    In seminars and graduate level courses I have had several opportunities to discuss modeling and analysis of time series with economists and economic graduate students during the past several years. These experiences made me aware of a gap between what economic graduate students are taught about vector-valued time series and what is available in recent system literature. Wishing to fill or narrow the gap that I suspect is more widely spread than my personal experiences indicate, I have written these notes to augment and reor­ ganize materials I have given in these courses and seminars. I have endeavored to present, in as much a self-contained way as practicable, a body of results and techniques in system theory that I judge to be relevant and useful to economists interested in using time series in their research. I have essentially acted as an intermediary and interpreter of system theoretic results and perspectives in time series by filtering out non-essential details, and presenting coherent accounts of wha...

  6. Use of a Principal Components Analysis for the Generation of Daily Time Series.

    Science.gov (United States)

    Dreveton, Christine; Guillou, Yann

    2004-07-01

    A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges of a generator when used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent the present climate correctly. The results show that the generator correctly represents the interannual variance of the observed climate; this is the main result of the work, because one of the main discrepancies of other generators is their inability to represent the observed interannual climate variance accurately, a discrepancy that is not acceptable for an application to weather derivatives. The generator was also tested on conditional probabilities: for example, knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
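
    The generator's logic can be sketched as PCA on a years-by-days matrix: principal component scores are mutually uncorrelated, so each can be generated independently and mapped back into a synthetic year. This simplified sketch uses synthetic data and skips the homogenization and trend removal described above.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in: 30 years x 365 days of detrended daily temperatures
temps = (12 + 8 * np.sin(2 * np.pi * np.arange(365) / 365)
         + rng.normal(0, 3, (30, 365)))

mean = temps.mean(axis=0)
U, s, Vt = np.linalg.svd(temps - mean, full_matrices=False)
scores = U * s                           # mutually uncorrelated PC scores

# One synthetic year: draw each score independently with its observed
# standard deviation, then map back through the principal components.
new_scores = rng.normal(0.0, scores.std(axis=0))
synthetic_year = mean + new_scores @ Vt
```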

  7. Market integration among electricity markets and their major fuel source markets

    International Nuclear Information System (INIS)

    Mjelde, James W.; Bessler, David A.

    2009-01-01

    Dynamic price information flows among U.S. electricity wholesale spot prices and the prices of the major electricity generation fuel sources, natural gas, uranium, coal, and crude oil, are studied. Multivariate time series methods applied to weekly price data show that, in contemporaneous time, peak electricity prices move natural gas prices, which in turn influence crude oil prices. In the long run, price is discovered in the fuel source markets (except uranium), as these prices are weakly exogenous in a reduced rank regression representation of these energy prices.

  9. Dynamical analysis and visualization of tornadoes time series.

    Science.gov (United States)

    Lopes, António M; Tenreiro Machado, J A

    2015-01-01

    In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.

  10. "Observation Obscurer" - Time Series Viewer, Editor and Processor

    Science.gov (United States)

    Andronov, I. L.

    The program is described, which contains a set of subroutines suitable for fast viewing and interactive filtering and processing of regularly and irregularly spaced time series. Being a 32-bit DOS application, it may be used as a default fast viewer/editor of time series in any computer shell ("commander") or in Windows. It allows one to view the data in the "time" or "phase" mode, to remove ("obscure") or filter outstanding bad points, to make scale transformations and smoothing using a few methods (e.g. mean with phase binning, determination of the statistically optimal number of phase bins, "running parabola" (Andronov, 1997, As. Ap. Suppl., 125, 207) fit), and to make time series analysis using some methods, e.g. correlation, autocorrelation and histogram analysis, and determination of extrema. Some features have been developed specially for variable star observers, e.g. the barycentric correction and the creation and fast analysis of "O-C" diagrams. The manual for "hot keys" is presented. The computer code was compiled with a 32-bit Free Pascal (www.freepascal.org).

  11. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.

  12. Multiscale Poincaré plots for visualizing the structure of heartbeat time series.

    Science.gov (United States)

    Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L

    2016-02-09

    Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
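
    Constructing an MSP plot is simple to sketch: coarse-grain the series at several scales and draw the lag-one return map at each scale. The sketch below uses a white-noise stand-in for an RR series, for which the occupied area should visibly shrink with scale.

```python
import numpy as np
import matplotlib.pyplot as plt

def coarse_grain(rr, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(rr) // scale
    return rr[:n * scale].reshape(n, scale).mean(axis=1)

rr = np.random.default_rng(4).normal(0.8, 0.05, 3000)  # stand-in RR series

fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, scale in zip(axes, (1, 2, 4, 8)):
    c = coarse_grain(rr, scale)
    ax.plot(c[:-1], c[1:], ".", ms=2)      # Poincaré plot: RR(i) vs RR(i+1)
    ax.set_title(f"scale {scale}")
plt.show()
```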

  13. Time series patterns and language support in DBMS

    Science.gov (United States)

    Telnarova, Zdenka

    2017-07-01

    This contribution is focused on the Time Series pattern type as a semantically rich representation of data. Some examples of implementations of this pattern type in traditional database management systems are briefly presented. There are many approaches to manipulating and querying patterns. A crucial issue is a systematic approach to pattern management and a specific pattern query language that takes the semantics of patterns into consideration. The query language SQL-TS for manipulating patterns is demonstrated on Time Series data.

  14. An Interrupted Time-Series Analysis of Durkheim's Social Deregulation Thesis: The Case of the Russian Federation

    Science.gov (United States)

    Pridemore, William Alex; Chamlin, Mitchell B.; Cochran, John K.

    2009-01-01

    The dissolution of the Soviet Union resulted in sudden, widespread, and fundamental changes to Russian society. The former social welfare system-with its broad guarantees of employment, healthcare, education, and other forms of social support-was dismantled in the shift toward democracy, rule of law, and a free-market economy. This unique natural experiment provides a rare opportunity to examine the potentially disintegrative effects of rapid social change on deviance, and thus to evaluate one of Durkheim's core tenets. We took advantage of this opportunity by performing interrupted time-series analyses of annual age-adjusted homicide, suicide, and alcohol-related mortality rates for the Russian Federation using data from 1956 to 2002, with 1992-2002 as the postintervention time-frame. The ARIMA models indicate that, controlling for the long-term processes that generated these three time series, the breakup of the Soviet Union was associated with an appreciable increase in each of the cause-of-death rates. We interpret these findings as being consistent with the Durkheimian hypothesis that rapid social change disrupts social order, thereby increasing the level of crime and deviance. PMID:20165565

  15. Nonlinear time series analysis with R

    CERN Document Server

    Huffaker, Ray; Rosa, Rodolfo

    2017-01-01

    In the process of data analysis, the investigator is often facing highly-volatile and random-appearing observed data. A vast body of literature shows that the assumption of underlying stochastic processes was not necessarily representing the nature of the processes under investigation and, when other tools were used, deterministic features emerged. Non Linear Time Series Analysis (NLTS) allows researchers to test whether observed volatility conceals systematic non linear behavior, and to rigorously characterize governing dynamics. Behavioral patterns detected by non linear time series analysis, along with scientific principles and other expert information, guide the specification of mechanistic models that serve to explain real-world behavior rather than merely reproducing it. Often there is a misconception regarding the complexity of the level of mathematics needed to understand and utilize the tools of NLTS (for instance Chaos theory). However, mathematics used in NLTS is much simpler than many other subjec...

  16. InSAR Deformation Time Series Processed On-Demand in the Cloud

    Science.gov (United States)

    Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing algorithm is the generation of a deformation time series product, a series of images representing ground displacements over time, computed from a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product is difficult to install, configure, and use. Moreover, for a long time series with many images, the processing of just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time-consuming portion is the interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data are acquired in the stack, an updated time series product can be generated with minimal additional processing. This presentation will focus on the development techniques and enabling technologies that were used in developing the time…

  17. Modeling category-level purchase timing with brand-level marketing variables

    OpenAIRE

    Fok, D.; Paap, R.

    2003-01-01

    Purchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from the marketing mix of individual brands. In this paper we discuss two standard approaches suggested in the literature to solve this problem, that is, using individual choice shares as weights to aver...

  18. Using Social Marketing Theory as a Framework for Understanding and Increasing HPV Vaccine Series Completion Among Hispanic Adolescents: A Qualitative Study.

    Science.gov (United States)

    Roncancio, Angelica M; Ward, Kristy K; Carmack, Chakema C; Muñoz, Becky T; Cano, Miguel A; Cribbs, Felicity

    2017-02-01

    HPV vaccine series completion rates among adolescent Hispanic females and males (~39% and 21%, respectively) are far below the Healthy People 80% coverage goal. Completion of the 3-dose vaccine series is critical to reducing the incidence of HPV-associated cancers. This formative study applies social marketing theory to assess the needs and preferences of Hispanic mothers in order to guide the development of interventions to increase HPV vaccine completion. We conducted 51 in-depth interviews with Hispanic mothers of adolescents to identify the key concepts of social marketing theory (i.e., the four P's: product, price, place and promotion). Results suggest that a desire to complete the vaccine series, vaccine reminders, and preventing illness and protecting their children against HPV all influence vaccination (product). The majority of Completed mothers did not experience barriers that prevented vaccine series completion, while Initiated mothers perceived a lack of health insurance and the cost of the vaccine as potential barriers. Informational barriers were prevalent across both market segments (price). Clinics are important locations for deciding to complete the vaccine series (place). They are the preferred sources for obtaining information about the HPV vaccine, thus making them ideal locations to deliver intervention messages, followed by television, the child's school and brochures (promotion). Increasing HPV vaccine coverage among Hispanic adolescents will reduce the rates of HPV-associated cancers and the cervical cancer health disparity among Hispanic women. This research can inform the development of an intervention to increase HPV vaccine series completion in this population.

  19. A New Methodology Based on Imbalanced Classification for Predicting Outliers in Electricity Demand Time Series

    Directory of Open Access Journals (Sweden)

    Francisco Javier Duque-Pintor

    2016-09-01

    The occurrence of outliers in real-world phenomena is quite usual. If these anomalous data are not properly treated, unreliable models can be generated. Many approaches in the literature are focused on a posteriori detection of outliers. However, a new methodology to a priori predict the occurrence of such data is proposed here. Thus, the main goal of this work is to predict the occurrence of outliers in time series by using, for the first time, imbalanced classification techniques. In this sense, the problem of forecasting outlying data has been transformed into a binary classification problem, in which the positive class represents the occurrence of outliers. Given that the number of outliers is much lower than the number of common values, the resultant classification problem is imbalanced. To create training and test sets, robust statistical methods have been used to detect outliers in both sets. Once the outliers have been detected, the instances of the dataset are labeled accordingly. Namely, if any of the samples composing the next instance are detected as an outlier, the label is set to one. As a case study, the methodology has been tested on electricity demand time series in the Spanish electricity market, in which most of the outliers were properly forecast.
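
    The labeling step that turns outlier forecasting into an imbalanced classification problem can be sketched as follows, with a robust median/MAD rule standing in for the statistical outlier detection of the paper (window length and threshold are assumptions):

```python
import numpy as np

def label_outlier_instances(demand, k=24):
    """Features = the previous k hourly values; label = 1 if the next
    value is an outlier under a robust median/MAD rule (threshold assumed)."""
    demand = np.asarray(demand, dtype=float)
    med = np.median(demand)
    mad = np.median(np.abs(demand - med))
    is_out = np.abs(demand - med) > 3 * 1.4826 * mad
    X = np.array([demand[t - k:t] for t in range(k, len(demand))])
    y = is_out[k:].astype(int)      # very few ones: an imbalanced problem
    return X, y
```

    The resulting X and y can then be fed to any imbalanced-aware classifier, for instance one using class weights or resampling.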

  20. Vector bilinear autoregressive time series model and its superiority ...

    African Journals Online (AJOL)

    In this research, a vector bilinear autoregressive time series model was proposed and used to model three revenue series (X1, X2, X3). The “orders” of the three series were identified on the basis of the distribution of autocorrelation and partial autocorrelation functions and were used to construct the vector bilinear models.

  1. Market and Style Timing: German Equity and Bond Funds

    OpenAIRE

    Hayley, S.; Nitzsche, D.; Cuthbertson, K.

    2016-01-01

    We apply parametric and non-parametric estimates to test market and style timing ability of individual German equity and bond mutual funds using a sample of over 500 equity and 350 bond funds, over the period 1990-2009. For equity funds, both approaches indicate no successful market timers in the 1990-1999 or 2000-2009 periods, but in 2000-2009 the non-parametric approach gives fewer unsuccessful market timers than the parametric approach. There is evidence of successful style timing using th...

  2. A study of market efficiency in the stock market, forex market and bullion market in India

    OpenAIRE

    Sarker, Debnarayan; Ghosh, Bikash Kumar

    2007-01-01

    This study suggests that run tests, which are based on the signs of indices/rates, do not reject the efficient market hypothesis for any of the three markets, whereas VR tests, which capture the variation in the permanent component of the series as a ratio of the total variation, reject the efficient market hypothesis for the gold market. The efficient market hypothesis for the stock, forex and silver markets cannot be rejected based on VR tests. Since VR tests a...

  3. 25 years of time series forecasting

    NARCIS (Netherlands)

    de Gooijer, J.G.; Hyndman, R.J.

    2006-01-01

    We review the past 25 years of research into time series forecasting. In this silver jubilee issue, we naturally highlight results published in journals managed by the International Institute of Forecasters (Journal of Forecasting 1982-1985 and International Journal of Forecasting 1985-2005). During

  4. Markov Trends in Macroeconomic Time Series

    NARCIS (Netherlands)

    R. Paap (Richard)

    1997-01-01

    Many macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the

  5. On clustering fMRI time series

    DEFF Research Database (Denmark)

    Goutte, Cyril; Toft, Peter Aundal; Rostrup, E.

    1999-01-01

    Analysis of fMRI time series is often performed by extracting one or more parameters for the individual voxels. Methods based, e.g., on various statistical tests are then used to yield parameters corresponding to probability of activation or activation strength. However, these methods do...

  6. FALSE DETERMINATIONS OF CHAOS IN SHORT NOISY TIME SERIES. (R828745)

    Science.gov (United States)

    A method (NEMG) proposed in 1992 for diagnosing chaos in noisy time series with 50 or fewer observations entails fitting the time series with an empirical function which predicts an observation in the series from previous observations, and then estimating the rate of divergenc...

  7. Impact of Stock Market Structure on Intertrade Time and Price Dynamics

    Science.gov (United States)

    Ivanov, Plamen Ch.; Yuen, Ainslie; Perakakis, Pandelis

    2014-01-01

    We analyse times between consecutive transactions for a diverse group of stocks registered on the NYSE and NASDAQ markets, and we relate the dynamical properties of the intertrade times with those of the corresponding price fluctuations. We report that market structure strongly impacts the scale-invariant temporal organisation in the transaction timing of stocks, which we have observed to have long-range power-law correlations. Specifically, we find that, compared to NYSE stocks, stocks registered on the NASDAQ exhibit significantly stronger correlations in their transaction timing on scales within a trading day. Further, we find that companies that transfer from the NASDAQ to the NYSE show a reduction in the correlation strength of transaction timing on scales within a trading day, indicating influences of market structure. We also report a persistent decrease in correlation strength of intertrade times with increasing average intertrade time and with corresponding decrease in companies' market capitalization–a trend which is less pronounced for NASDAQ stocks. Surprisingly, we observe that stronger power-law correlations in intertrade times are coupled with stronger power-law correlations in absolute price returns and higher price volatility, suggesting a strong link between the dynamical properties of intertrade times and the corresponding price fluctuations over a broad range of time scales. Comparing the NYSE and NASDAQ markets, we demonstrate that the stronger correlations we find in intertrade times for NASDAQ stocks are associated with stronger correlations in absolute price returns and with higher volatility, suggesting that market structure may affect price behavior through information contained in transaction timing. These findings do not support the hypothesis of universal scaling behavior in stock dynamics that is independent of company characteristics and stock market structure. Further, our results have implications for utilising transaction timing

  9. A Literature Survey of Early Time Series Classification and Deep Learning

    OpenAIRE

    Santos, Tiago; Kern, Roman

    2017-01-01

    This paper provides an overview of current literature on time series classification approaches, in particular early time series classification. A very common and effective time series classification approach is the 1-Nearest Neighbor classifier, with different distance measures such as the Euclidean or dynamic time warping distances. This paper starts by reviewing these baseline methods. More recently, with the gain in popularity in the application of deep neural networks to the field of...

  10. Signal Processing for Time-Series Functions on a Graph

    Science.gov (United States)

    2018-02-01

    [Only front-matter fragments of this report were captured: report ARL-TR-8276, US Army Research Laboratory, February 2018, "Signal Processing for Time-Series Functions on a Graph", by Humberto Muñoz-Barona, Jean Vettel, and co-authors; a figure caption, "Time-series function on a fixed graph"; and an equation fragment noting that the reconstruction recovers only the average of f over time.]

  11. Non-linear time series extreme events and integer value problems

    CERN Document Server

    Turkman, Kamil Feridun; Zea Bermudez, Patrícia

    2014-01-01

    This book offers a useful combination of probabilistic and statistical tools for analyzing nonlinear time series. Key features of the book include a study of the extremal behavior of nonlinear time series and a comprehensive list of nonlinear models that address different aspects of nonlinearity. Several inferential methods, including quasi likelihood methods, sequential Markov Chain Monte Carlo Methods and particle filters, are also included so as to provide an overall view of the available tools for parameter estimation for nonlinear models. A chapter on integer time series models based on several thinning operations, which brings together all recent advances made in this area, is also included. Readers should have attended a prior course on linear time series, and a good grasp of simulation-based inferential methods is recommended. This book offers a valuable resource for second-year graduate students and researchers in statistics and other scientific areas who need a basic understanding of nonlinear time ...

  12. Learning of time series through neuron-to-neuron instruction

    Energy Technology Data Exchange (ETDEWEB)

    Miyazaki, Y [Department of Physics, Kyoto University, Kyoto 606-8502, (Japan); Kinzel, W [Institut fuer Theoretische Physik, Universitaet Wurzburg, 97074 Wurzburg (Germany); Shinomoto, S [Department of Physics, Kyoto University, Kyoto (Japan)

    2003-02-07

    A model neuron with delayline feedback connections can learn a time series generated by another model neuron. It has been known that some student neurons that have completed such learning under the instruction of a teacher's quasi-periodic sequence mimic the teacher's time series over a long interval, even after instruction has ceased. We found that in addition to such faithful students, there are unfaithful students whose time series eventually diverge exponentially from that of the teacher. In order to understand the circumstances that allow for such a variety of students, the orbit dimension was estimated numerically. The quasi-periodic orbits in question were found to be confined in spaces with dimensions significantly smaller than that of the full phase space.

  14. Quirky patterns in time-series of estimates of recruitment could be artefacts

    DEFF Research Database (Denmark)

    Dickey-Collas, M.; Hinzen, N.T.; Nash, R.D.M.

    2015-01-01

    The accessibility of databases of global or regional stock assessment outputs is leading to an increase in meta-analysis of the dynamics of fish stocks. In most of these analyses, each of the time-series is generally assumed to be directly comparable. However, the approach to stock assessment employed, and the associated modelling assumptions, can have an important influence on the characteristics of each time-series. The treatment of recruitment time-series in databases is therefore not consistent across or within species and stocks. Caution is therefore required, as perhaps the characteristics of the time-series of stock dynamics may be determined by the model used to generate them, rather than underlying ecological phenomena. We explore this idea by investigating recruitment time-series with three different recruitment parameterizations: a stock–recruitment model, a random-walk time-series model...

  15. The Hierarchical Spectral Merger Algorithm: A New Time Series Clustering Procedure

    KAUST Repository

    Euán, Carolina

    2018-04-12

    We present a new method for time series clustering which we call the Hierarchical Spectral Merger (HSM) method. This procedure is based on the spectral theory of time series and identifies series that share similar oscillations or waveforms. The extent of similarity between a pair of time series is measured using the total variation distance between their estimated spectral densities. At each step of the algorithm, when two clusters merge, a new spectral density is estimated using all the information present in both clusters, so that it is representative of all the series in the new cluster. The method is implemented in the R package HSMClust. We present two applications of the HSM method, one to data coming from wave-height measurements in oceanography and the other to electroencephalogram (EEG) data.
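
    A minimal R sketch of the core idea, total variation distance between normalized spectral estimates fed into hierarchical clustering, is given below. Unlike the actual HSM procedure (and the HSMClust package), it does not re-estimate the spectrum when clusters merge, and the AR series are made up for illustration.

```r
# Normalized smoothed-periodogram estimate of the spectral density.
spec_est <- function(x) {
  s <- spec.pgram(x, spans = c(7, 7), plot = FALSE, taper = 0.1)
  s$spec / sum(s$spec)                 # normalize to unit mass
}
tv_dist <- function(f, g) 0.5 * sum(abs(f - g))   # total variation distance

set.seed(1)
n <- 512
xs <- list(
  a1 = arima.sim(list(ar =  0.9),  n),  # slow oscillations
  a2 = arima.sim(list(ar =  0.85), n),
  b1 = arima.sim(list(ar = -0.9),  n),  # fast oscillations
  b2 = arima.sim(list(ar = -0.85), n)
)
specs <- lapply(xs, spec_est)
k <- length(specs)
D <- matrix(0, k, k, dimnames = list(names(xs), names(xs)))
for (i in 1:k) for (j in 1:k) D[i, j] <- tv_dist(specs[[i]], specs[[j]])
plot(hclust(as.dist(D), method = "average"))  # a1/a2 and b1/b2 pair up
```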

  16. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF-estimator-independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual information calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
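
    The following R sketch illustrates the flavor of the approach under simplifying assumptions: a histogram-based lagged mutual information estimate, with the MI at a very large lag taken as a rough empirical bias floor. The bin count, lags and AR(1) test signal are hypothetical choices, not the estimator used in the paper.

```r
# Histogram-based mutual information between x_t and x_{t+lag} (nats).
mi_lagged <- function(x, lag, nbins = 16) {
  a <- x[1:(length(x) - lag)]
  b <- x[(1 + lag):length(x)]
  pxy <- table(cut(a, nbins), cut(b, nbins)) / length(a)
  px <- rowSums(pxy); py <- colSums(pxy)
  nz <- pxy > 0
  sum(pxy[nz] * log(pxy[nz] / outer(px, py)[nz]))
}

set.seed(1)
x <- as.numeric(arima.sim(list(ar = 0.95), 5000))
tdmi <- sapply(1:100, function(k) mi_lagged(x, k))
bias <- mi_lagged(x, 2000)      # lag far beyond the correlation time
plot(tdmi, type = "l", xlab = "lag", ylab = "MI (nats)")
abline(h = bias, lty = 2)       # approximate bias floor
```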

  17. Inferring interdependencies from short time series

    Indian Academy of Sciences (India)

    Abstract. Complex networks provide an invaluable framework for the study of interlinked dynamical systems. In many cases, such networks are constructed from observed time series by first estimating the ... does not quantify causal relations (unlike IOTA) ...

  18. Stochastic modeling of hourly rainfall times series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

    Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlations between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which would allow gaining longer lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX and ARMAX (e.g. Salas [1992]). Such models give the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil
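
    As a rough illustration of an alternating renewal rainfall model, the R sketch below alternates exponentially distributed dry spells with rectangular intensity pulses. The distributions and parameter values are hypothetical and not calibrated to the Campania data; the actual DRIP model is more elaborate.

```r
# Toy alternating renewal process: dry spell -> storm (rectangular
# pulse of constant intensity) -> dry spell -> ...  All parameters in
# hours and mm/h are made up for illustration.
set.seed(1)
sim_rain <- function(hours, mean_dry = 30, mean_wet = 6, mean_int = 2) {
  rain <- numeric(hours); t <- 1
  while (t <= hours) {
    t <- t + ceiling(rexp(1, 1 / mean_dry))          # dry interval
    if (t > hours) break
    dur <- ceiling(rexp(1, 1 / mean_wet))            # storm duration
    int <- rexp(1, 1 / mean_int)                     # pulse intensity
    rain[t:min(t + dur - 1, hours)] <- int
    t <- t + dur
  }
  rain
}
r <- sim_rain(24 * 90)                               # ~3 months, hourly
plot(r, type = "h", xlab = "hour", ylab = "rainfall (mm/h)")
```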

  19. A Time Series Analysis Using R for Understanding Car Sales On The Romanian Market

    Directory of Open Access Journals (Sweden)

    Mihaela Cornelia Sandu

    2015-09-01

    Full Text Available The size of the Romanian automobile industry is relatively small compared to the main car producers in Europe and the world, but an analysis of its structure and dynamics appears to be most relevant given the strong linkages with the main macroeconomic indicators and important microeconomic variables at the level of the household. The paper presents a time series analysis of car sales in Romania in the period 2007-2014, focusing on the sales dynamics of the main national producer, Dacia Pitesti. The aim of the investigation is twofold: to test the impact of macroeconomic variables on this important and underexplored segment of the economy, and to emphasize potential differences between the factors influencing the buying decision for domestic versus foreign cars (observed in three regimes: new, registered and re-enrolled). While the major influence of the global economic crisis cannot be ignored for the analyzed interval, we believe that it may also help to illustrate the real behaviors of individuals, by setting the line between the immediate period after the crisis, as treatment under scarcity conditions, and the re-installment of normality towards the second half of the time interval. The results confirm the general findings of the literature for the main indicators, but they are not entirely consistent with rational economic models, especially with regard to the nature of the investigated goods (the cars being normal or positional).

  20. Using time-series intervention analysis to understand U.S. Medicaid expenditures on antidepressant agents.

    Science.gov (United States)

    Ferrand, Yann; Kelton, Christina M L; Guo, Jeff J; Levy, Martin S; Yu, Yan

    2011-03-01

    Medicaid programs' spending on antidepressants increased from $159 million in 1991 to $2 billion in 2005. The National Institute for Health Care Management attributed this expenditure growth to increases in drug utilization, entry of newer higher-priced antidepressants, and greater prescription drug insurance coverage. Rising enrollment in Medicaid has also contributed to this expenditure growth. This research examines the impact of specific events, including branded-drug and generic entry, a black box warning, direct-to-consumer advertising (DTCA), and new indication approval, on Medicaid spending on antidepressants. Using quarterly expenditure data for 1991-2005 from the national Medicaid pharmacy claims database maintained by the Centers for Medicare and Medicaid Services, a time-series autoregressive integrated moving average (ARIMA) intervention analysis was performed on 6 specific antidepressant drugs and on overall antidepressant spending. Twenty-nine potentially relevant interventions and their dates of occurrence were identified from the literature. Each was tested for an impact on the time series. Forecasts from the models were compared with a holdout sample of actual expenditure data. Interventions with significant impacts on Medicaid expenditures included the patent expiration of Prozac® (P < 0.05), ... implying that the expanding market for antidepressants overwhelmed the effect of generic competition. Copyright © 2011 Elsevier Inc. All rights reserved.
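
    A minimal sketch of the kind of intervention analysis described, a step regressor inside an ARIMA fit, might look as follows in R; the series, event date and model orders are simulated stand-ins, not the Medicaid data or the study's models.

```r
# ARIMA intervention analysis with a 0/1 step input at a known event
# date; the coefficient on the step measures the level shift.
set.seed(1)
n <- 60                                    # quarterly series, 15 years
event <- 40                                # hypothetical event quarter
step <- as.numeric(seq_len(n) >= event)    # step regressor
y <- 100 + cumsum(rnorm(n, 1)) - 15 * step + rnorm(n, 0, 2)

fit <- arima(y, order = c(1, 1, 0), xreg = cbind(step = step))
se <- sqrt(fit$var.coef["step", "step"])
cat("step effect:", round(fit$coef["step"], 2),
    " s.e.:", round(se, 2), "\n")          # significant level shift
```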

  1. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns called forbidden patterns that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular-sampling, missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
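
    A small R illustration of the underlying idea (not the paper's irregular-sampling experiments): Bandt-Pompe ordinal patterns of order D = 3 are counted for a chaotic map and for noise; patterns with zero count are forbidden.

```r
# Count the six order-3 ordinal patterns of a series; zero counts are
# forbidden patterns, a signature of deterministic dynamics.
ordinal_counts <- function(x, D = 3) {
  pats <- apply(embed(x, D)[, D:1], 1,           # restore time order
                function(w) paste(order(w), collapse = ""))
  all_perms <- c("123", "132", "213", "231", "312", "321")
  table(factor(pats, levels = all_perms))
}

set.seed(1)
logistic <- Reduce(function(x, i) 4 * x * (1 - x), 1:2000,
                   accumulate = TRUE, init = 0.4)  # chaotic logistic map
noise <- runif(2000)
ordinal_counts(logistic)   # at least one count is zero: forbidden pattern
ordinal_counts(noise)      # all six patterns occur for white noise
```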

  2. Forecasting the Reference Evapotranspiration Using Time Series Model

    Directory of Open Access Journals (Sweden)

    H. Zare Abyaneh

    2016-10-01

    Full Text Available Introduction: Reference evapotranspiration is one of the most important factors in irrigation timing and field management. Moreover, reference evapotranspiration forecasting can play a vital role in future developments. Therefore, in this study the seasonal autoregressive integrated moving average (ARIMA) model was used to forecast the reference evapotranspiration time series at the Esfahan, Semnan, Shiraz, Kerman, and Yazd synoptic stations. Materials and Methods: At all stations (characteristics of the synoptic stations are given in Table 1), the meteorological data, including mean, maximum and minimum air temperature, relative humidity, dry- and wet-bulb temperature, dew-point temperature, wind speed, precipitation, air vapor pressure and sunshine hours, were collected from the Islamic Republic of Iran Meteorological Organization (IRIMO) for the 41 years from 1965 to 2005. The FAO Penman-Monteith equation was used to calculate the monthly reference evapotranspiration at the five synoptic stations, and the evapotranspiration time series were formed. The unit root test was used to identify whether each time series was stationary; then, using the Box-Jenkins method, seasonal ARIMA models were applied to the sample data.

    Table 1. The geographical location and climate conditions of the synoptic stations

    | Station | Longitude (E) | Latitude (N) | Altitude (m) | Mean annual air temp. (°C) | Min.-max. air temp. (°C) | Mean precipitation (mm) | Climate (De Martonne index) |
    |---------|---------------|--------------|--------------|----------------------------|--------------------------|-------------------------|-----------------------------|
    | Esfahan | 51° 40' | 32° 37' | 1550.4 | 16.36 | 9.4-23.3  | 122 | Arid |
    | Semnan  | 53° 33' | 35° 35' | 1130.8 | 18.0  | 12.4-23.8 | 140 | Arid |
    | Shiraz  | 52° 36' | 29° 32' | 1484   | 18.0  | 10.2-25.9 | 324 | Semi-arid |
    | Kerman  | 56° 58' | 30° 15' | 1753.8 | 15.6  | 6.7-24.6  | 142 | Arid |
    | Yazd    | 54° 17' | 31° 54' | 1237.2 | 19.2  | 11.8-26.0 | 61  | Arid |

    Results and Discussion: The monthly meteorological data were used as input for the Ref-ET software and monthly reference
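
    For readers unfamiliar with the model class, a minimal seasonal ARIMA fit and forecast in R might look as follows; the built-in nottem series and the (1,0,1)(1,1,1)12 orders are placeholders, not the evapotranspiration data or the orders selected in the study.

```r
# Seasonal ARIMA on a monthly series, forecast 24 months ahead.
fit <- arima(nottem, order = c(1, 0, 1),
             seasonal = list(order = c(1, 1, 1), period = 12))
fc <- predict(fit, n.ahead = 24)
ts.plot(nottem, fc$pred, col = c("black", "red"))
```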

  3. Complexity testing techniques for time series data: A comprehensive literature review

    International Nuclear Information System (INIS)

    Tang, Ling; Lv, Huiling; Yang, Fengmei; Yu, Lean

    2015-01-01

    Highlights: • A literature review of complexity testing techniques for time series data is provided. • Complexity measurements can generally fall into fractality, methods derived from nonlinear dynamics and entropy. • Different types investigate time series data from different perspectives. • Measures, applications and future studies for each type are presented. - Abstract: Complexity may be one of the most important measurements for analysing time series data; it covers or is at least closely related to different data characteristics within nonlinear system theory. This paper provides a comprehensive literature review examining the complexity testing techniques for time series data. According to different features, the complexity measurements for time series data can be divided into three primary groups, i.e., fractality (mono- or multi-fractality) for self-similarity (or system memorability or long-term persistence), methods derived from nonlinear dynamics (via attractor invariants or diagram descriptions) for attractor properties in phase-space, and entropy (structural or dynamical entropy) for the disorder state of a nonlinear system. These estimations analyse time series dynamics from different perspectives but are closely related to or even dependent on each other at the same time. In particular, a weaker self-similarity, a more complex structure of attractor, and a higher-level disorder state of a system consistently indicate that the observed time series data are at a higher level of complexity. Accordingly, this paper presents a historical tour of the important measures and works for each group, as well as ground-breaking and recent applications and future research directions.
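
    As one concrete example from the fractality group, a basic rescaled-range (R/S) estimate of the Hurst exponent can be sketched in a few lines of R (a textbook version with illustrative window sizes, not any particular method surveyed in the review):

```r
# Rescaled-range (R/S) Hurst estimate: average R/S over segments of
# several window sizes, then regress log(R/S) on log(window).
hurst_rs <- function(x, win = 2^(4:9)) {
  rs <- sapply(win, function(w) {
    k <- floor(length(x) / w)
    mean(sapply(1:k, function(i) {
      seg <- x[((i - 1) * w + 1):(i * w)]
      y <- cumsum(seg - mean(seg))
      (max(y) - min(y)) / sd(seg)          # rescaled range of the segment
    }))
  })
  unname(coef(lm(log(rs) ~ log(win)))[2])  # slope ~ Hurst exponent
}

set.seed(1)
hurst_rs(rnorm(4096))          # ~0.5 for white noise
hurst_rs(cumsum(rnorm(4096)))  # near 1 for a strongly persistent series
```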

  4. Complex dynamics in ecological time series

    Science.gov (United States)

    Peter Turchin; Andrew D. Taylor

    1992-01-01

    Although the possibility of complex dynamical behaviors (limit cycles, quasiperiodic oscillations, and aperiodic chaos) has been recognized theoretically, most ecologists are skeptical of their importance in nature. In this paper we develop a methodology for reconstructing endogenous (or deterministic) dynamics from ecological time series. Our method consists of fitting...

  5. Time Series Modelling using Proc Varmax

    DEFF Research Database (Denmark)

    Milhøj, Anders

    2007-01-01

    In this paper it is demonstrated how various time series problems can be addressed using Proc Varmax. The procedure is rather new, and hence new features, such as cointegration and testing for Granger causality, are included; but it also means that more traditional ARIMA modelling as outlined by Box...

  6. SensL B-Series and C-Series silicon photomultipliers for time-of-flight positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    O'Neill, K., E-mail: koneill@sensl.com; Jackson, C., E-mail: cjackson@sensl.com

    2015-07-01

    Silicon photomultipliers from SensL are designed for high performance, uniformity and low cost. They demonstrate a peak photon detection efficiency of 41% at 420 nm, which is matched to the output spectrum of cerium-doped lutetium orthosilicate. A coincidence resolving time of less than 220 ps is demonstrated. New process improvements have led to the development of the C-Series SiPM, which reduces the dark noise by over an order of magnitude. In this paper we will show characterization test results which include photon detection efficiency, dark count rate, crosstalk probability, afterpulse probability and coincidence resolving time, comparing B-Series to the newest pre-production C-Series. Additionally, we will discuss the effect of silicon photomultiplier microcell size on coincidence resolving time, allowing the optimal microcell size choice to be made for time-of-flight positron emission tomography systems.

  7. Kriging Methodology and Its Development in Forecasting Econometric Time Series

    Directory of Open Access Journals (Sweden)

    Andrej Gajdoš

    2017-03-01

    Full Text Available One of the approaches for forecasting future values of a time series or unknown spatial data is kriging. The main objective of the paper is to introduce a general scheme of kriging in forecasting econometric time series using a family of linear regression time series models (abbreviated as FDSLRM) which apply regression not only to a trend but also to a random component of the observed time series. Simultaneously performing a Monte Carlo simulation study with a real electricity consumption dataset in the R computational language and environment, we investigate the well-known problem of "negative" estimates of variance components, when kriging predictions fail. Our subsequent theoretical analysis, which also includes the modern apparatus of advanced multivariate statistics, gives us the formulation and proof of a general theorem about the explicit form of moments (up to sixth order) for a Gaussian time series observation. This result provides a basis for further theoretical and computational research in the kriging methodology development.
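
    For orientation, the ordinary-kriging predictor itself reduces to a small linear system; the R sketch below is a generic textbook version with a hypothetical covariance function, not the FDSLRM machinery of the paper.

```r
# Ordinary kriging at a single new point: solve for weights subject to
# an unbiasedness constraint (Lagrange multiplier in the last row).
krige1 <- function(t_obs, y, t_new, cov_fn) {
  S <- outer(t_obs, t_obs, cov_fn)        # covariances among observations
  c0 <- cov_fn(t_obs, t_new)              # covariances to the new point
  ones <- rep(1, length(t_obs))
  A <- rbind(cbind(S, ones), c(ones, 0))
  w <- solve(A, c(c0, 1))[seq_along(t_obs)]
  sum(w * y)                              # kriging prediction
}

cov_exp <- function(s, t) exp(-abs(s - t) / 5)   # hypothetical covariance
set.seed(1)
t_obs <- sort(runif(30, 0, 24))
y <- sin(t_obs / 3) + rnorm(30, 0, 0.1)
krige1(t_obs, y, 12, cov_exp)                    # prediction at t = 12
```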

  8. Enabling demand response by extending the European electricity markets with a real-time market

    NARCIS (Netherlands)

    Nyeng, P.; Kok, K.; Pineda, S.; Grande, O.; Sprooten, J.; Hebb, B.; Nieuwenhout, F.

    2013-01-01

    The EcoGrid concept proposes to extend the current wholesale electricity market to allow participation of Distributed Energy Resources (DERs) and domestic end-consumers in system balancing. Taking advantage of the smart grid technology, the EcoGrid market publishes the real-time prices that entail

  9. Use of Time-Series, ARIMA Designs to Assess Program Efficacy.

    Science.gov (United States)

    Braden, Jeffery P.; And Others

    1990-01-01

    Illustrates use of time-series designs for determining efficacy of interventions with fictitious data describing a drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using autoregressive integrated moving average (ARIMA) models. Example illustrates application of ARIMA analysis for…

  10. An algorithm of Saxena-Easo on fuzzy time series forecasting

    Science.gov (United States)

    Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.

    2018-04-01

    This paper presents a Saxena-Easo fuzzy time series forecasting model to study the prediction of the Indonesian inflation rate in 1970-2016. We use MATLAB software to compute this method. The Saxena-Easo fuzzy time series algorithm does not need stationarity, unlike conventional forecasting methods; it is capable of dealing with time series values that are linguistic, and it has the advantage of reducing and simplifying the calculation process. Generally it focuses on percentage change as the universe of discourse, interval partitioning and defuzzification. The results indicate that the actual data and the forecast data are close, with a Root Mean Square Error (RMSE) of 1.5289.

  11. Transportation Energy Futures Series. Projected Biomass Utilization for Fuels and Power in a Mature Market

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Newes, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Aden, A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Warner, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Uriarte, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Inman, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Simpkins, T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Argo, A. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-03-01

    The viability of biomass as transportation fuel depends upon the allocation of limited resources for fuel, power, and products. By focusing on mature markets, this report identifies how biomass is projected to be most economically used in the long term and the implications for greenhouse gas (GHG) emissions and petroleum use. In order to better understand competition for biomass between these markets and the potential for biofuel as a market-scale alternative to petroleum-based fuels, this report presents results of a micro-economic analysis conducted using the Biomass Allocation and Supply Equilibrium (BASE) modeling tool. The findings indicate that biofuels can outcompete biopower for feedstocks in mature markets if research and development targets are met. The BASE tool was developed for this project to analyze the impact of multiple biomass demand areas on mature energy markets. The model includes domestic supply curves for lignocellulosic biomass resources, corn for ethanol and butanol production, soybeans for biodiesel, and algae for diesel. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency project initiated to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  13. Evolutionary Algorithms for the Detection of Structural Breaks in Time Series

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Fischer, Paul; Hilbert, Astrid

    2013-01-01

    Detecting structural breaks is an essential task for the statistical analysis of time series, for example, for fitting parametric models to it. In short, structural breaks are points in time at which the behavior of the time series changes. Typically, no solid background knowledge of the time...

  14. ANALYSIS OF MARKET TIMING TOWARD LEVERAGE OF NON-FINANCIAL COMPANIES IN INDONESIA

    OpenAIRE

    Wulandari, Vera Pipin; Setiawan, Kusdhianto

    2015-01-01

    This study aimed to examine the effect of market timing on leverage in non-financial companies in Indonesia. Market timing was tested under hot and cold market conditions. Hot and cold markets are determined by the monthly market-to-book ratio. A hot (cold) market occurs when the average market-to-book ratio of a particular month is above (below) the moving average of the monthly market-to-book ratio. This study also aimed to test whether non-financial companies in Indo...

  15. On modeling panels of time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    This paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a

  16. Unsupervised Symbolization of Signal Time Series for Extraction of the Embedded Information

    Directory of Open Access Journals (Sweden)

    Yue Li

    2017-03-01

    Full Text Available This paper formulates an unsupervised algorithm for symbolization of signal time series to capture the embedded dynamic behavior. The key idea is to convert time series of the digital signal into a string of (spatially) discrete symbols from which the embedded dynamic information can be extracted in an unsupervised manner (i.e., no requirement for labeling of time series). The main challenges here are: (1) definition of the symbol assignment for the time series; (2) identification of the partitioning segment locations in the signal space of time series; and (3) construction of probabilistic finite-state automata (PFSA) from the symbol strings that contain temporal patterns. The reported work addresses these challenges by maximizing the mutual information measures between symbol strings and PFSA states. The proposed symbolization method has been validated by numerical simulation as well as by experimentation in a laboratory environment. Performance of the proposed algorithm has been compared to that of two commonly used algorithms of time series partitioning.
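
    A much-simplified R sketch of the pipeline is given below: quantile-based partitioning as a stand-in for the paper's mutual-information-maximizing partition, and an empirical symbol-transition matrix as a crude stand-in for the PFSA.

```r
# Symbolize a signal by quantile partitioning, then estimate the
# symbol-transition probabilities (a first-order Markov summary).
set.seed(1)
x <- as.numeric(arima.sim(list(ar = 0.8), 5000))
nsym <- 4
breaks <- quantile(x, probs = seq(0, 1, length.out = nsym + 1))
sym <- cut(x, breaks, labels = letters[1:nsym], include.lowest = TRUE)

trans <- table(head(sym, -1), tail(sym, -1))
trans <- trans / rowSums(trans)      # row-stochastic transition matrix
round(trans, 2)                      # persistence shows on the diagonal
```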

  17. Classification of time-series images using deep convolutional neural networks

    Science.gov (United States)

    Hatami, Nima; Gavet, Yann; Debayle, Johan

    2018-04-01

    Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time-series into 2D texture images and then takes advantage of a deep CNN classifier. Image representation of time-series introduces different feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representations together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to the existing deep architectures, but also to the state-of-the-art TSC algorithms.
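
    The recurrence-plot transformation itself is compact; a minimal R sketch (with an arbitrary threshold, and without the CNN stage) is:

```r
# Recurrence plot: threshold the matrix of pairwise distances.
rp <- function(x, eps = 0.2 * sd(x)) {
  D <- abs(outer(x, x, "-"))        # pairwise distances
  (D <= eps) * 1L                   # binary recurrence matrix
}
x <- sin(seq(0, 8 * pi, length.out = 200)) + rnorm(200, 0, 0.05)
image(rp(x), col = c("white", "black"), axes = FALSE)  # texture image
```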

  18. Long Range Dependence Prognostics for Bearing Vibration Intensity Chaotic Time Series

    Directory of Open Access Journals (Sweden)

    Qing Li

    2016-01-01

    Full Text Available According to the chaotic features and typical fractional-order characteristics of bearing vibration intensity time series, a forecasting approach based on long range dependence (LRD) is proposed. In order to reveal the internal chaotic properties, the vibration intensity time series are reconstructed in phase space based on chaos theory: the delay time is computed with the C-C method, and the optimal embedding dimension and saturated correlation dimension are calculated via the Grassberger-Procaccia (G-P) method, respectively, so that the chaotic characteristics of the vibration intensity time series can be jointly determined by the largest Lyapunov exponent and the phase plane trajectory; the largest Lyapunov exponent is calculated by the Wolf method, and the phase plane trajectory is illustrated using the Duffing-Holmes Oscillator (DHO). The Hurst exponent and a long range dependence prediction method are used to verify the typical fractional-order features and to improve the prediction accuracy of the bearing vibration intensity time series, respectively. Experiments show that the vibration intensity time series have chaotic properties and that the LRD prediction method is better than the other prediction methods (largest Lyapunov exponent, autoregressive moving average (ARMA) and BP neural network (BPNN) models) in prediction accuracy and prediction performance, which provides a new approach for running-tendency prediction for rotating machinery and some guidance for engineering practice.
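
    The first step, phase-space reconstruction by time-delay embedding, can be sketched in R as follows; the delay and dimension are fixed by hand here rather than chosen by the C-C and G-P methods used in the paper.

```r
# Time-delay embedding: map x(t) to vectors (x(t), x(t+tau), ...).
delay_embed <- function(x, dim, tau) {
  n <- length(x) - (dim - 1) * tau
  sapply(0:(dim - 1), function(k) x[(1 + k * tau):(n + k * tau)])
}

x <- sin(seq(0, 40 * pi, length.out = 2000))        # toy signal
E <- delay_embed(x, dim = 2, tau = 25)
plot(E, type = "l", xlab = "x(t)", ylab = "x(t + tau)")  # phase portrait
```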

  19. Critical values for unit root tests in seasonal time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); B. Hobijn (Bart)

    1997-01-01

    In this paper, we present tables with critical values for a variety of tests for seasonal and non-seasonal unit roots in seasonal time series. We consider (extensions of) the Hylleberg et al. and Osborn et al. test procedures. These extensions concern time series with increasing seasonal

  20. Market Timing : A Decomposition of Mutual Fund Returns

    NARCIS (Netherlands)

    Swinkels, L.A.P.; van der Sluis, P.J.; Verbeek, M.J.C.M.

    2003-01-01

    We decompose the conditional expected mutual fund return in five parts. Two parts, selectivity and expert market timing, can be attributed to manager skill, and three to variation in market exposure that can be achieved by private investors as well. The dynamic model that we use to estimate the relative

  1. Classification of time series patterns from complex dynamic systems

    Energy Technology Data Exchange (ETDEWEB)

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.

  2. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.

  3. Fractal analysis and nonlinear forecasting of indoor 222Rn time series

    International Nuclear Information System (INIS)

    Pausch, G.; Bossew, P.; Hofmann, W.; Steger, F.

    1998-01-01

    Fractal analyses of indoor 222Rn time series were performed using different chaos-theory-based measures such as the time delay method, Hurst's rescaled range analysis, capacity (fractal) dimension, and the Lyapunov exponent. For all time series we calculated only positive Lyapunov exponents, which is an indication of chaos, while the Hurst exponents were well below 0.5, indicating antipersistent behaviour (past trends tend to reverse in the future). These time series were also analyzed with a nonlinear prediction method which allowed an estimation of the embedding dimensions with some restrictions, limiting the prediction to about three relative time steps. (orig.)

  4. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
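
    As a point of reference, dynamic mode decomposition (DMD) is a standard data-driven route to approximate Koopman spectral properties; the R sketch below is this generic construction applied to a toy rotation, not the specific model forms proposed in the paper.

```r
# Exact-style DMD: from snapshot pairs x_k -> x_{k+1}, build a reduced
# linear operator via the SVD and read off its eigenvalues.
dmd_eigs <- function(X, r = 2) {
  X1 <- X[, -ncol(X)]; X2 <- X[, -1]      # consecutive snapshot pairs
  s <- svd(X1)
  U <- s$u[, 1:r, drop = FALSE]
  V <- s$v[, 1:r, drop = FALSE]
  Dinv <- diag(1 / s$d[1:r], r)
  Atil <- t(U) %*% X2 %*% V %*% Dinv      # reduced linear operator
  eigen(Atil)$values                      # DMD/Koopman eigenvalues
}

# Toy data: a noisy planar rotation; the eigenvalues should sit near
# the unit circle at exp(+/- i * 0.1).
set.seed(1)
th <- 0.1
R <- matrix(c(cos(th), sin(th), -sin(th), cos(th)), 2)
X <- matrix(0, 2, 200); X[, 1] <- c(1, 0)
for (k in 2:200) X[, k] <- R %*% X[, k - 1] + rnorm(2, 0, 0.01)
dmd_eigs(X)
```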

  5. Testing for intracycle determinism in pseudoperiodic time series.

    Science.gov (United States)

    Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A

    2008-06-01

    A determinism test is proposed based on the well-known method of surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series, and the results show the applicability of the proposed test.
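
    For context, the standard surrogate-generation step that such tests build on (phase randomization, which preserves the power spectrum while destroying nonlinear structure) can be written in a few lines of R; the paper's preprocessing of pseudoperiodic data is not reproduced here.

```r
# Phase-randomized surrogate: randomize Fourier phases while keeping
# conjugate symmetry, so the output is real with the same spectrum.
phase_surrogate <- function(x) {
  n <- length(x)
  z <- fft(x)
  half <- floor((n - 1) / 2)
  idx <- 2:(half + 1)                    # positive, non-Nyquist bins
  z[idx] <- Mod(z[idx]) * exp(1i * runif(half, 0, 2 * pi))
  z[n:(n - half + 1)] <- Conj(z[idx])    # mirror to keep x real
  Re(fft(z, inverse = TRUE) / n)
}

set.seed(1)
x <- sin(seq(0, 20 * pi, length.out = 1024)) + rnorm(1024, 0, 0.2)
s <- phase_surrogate(x)   # same power spectrum, randomized phases
```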

  6. Correlations and clustering in wholesale electricity markets

    Science.gov (United States)

    Cui, Tianyu; Caravelli, Francesco; Ududec, Cozmin

    2018-02-01

    We study the structure of locational marginal prices in day-ahead and real-time wholesale electricity markets. In particular, we consider the case of two North American markets and show that the price correlations contain information on the locational structure of the grid. We study various clustering methods and introduce a type of correlation function based on event synchronization for spiky time series, and another based on string correlations of location names provided by the markets. This allows us to reconstruct aspects of the locational structure of the grid.

  8. Time series analysis and its applications with R examples

    CERN Document Server

    Shumway, Robert H

    2017-01-01

    The fourth edition of this popular graduate textbook, like its predecessors, presents a balanced and comprehensive treatment of both time and frequency domain methods with accompanying theory. Numerous examples using nontrivial data illustrate solutions to problems such as discovering natural and anthropogenic climate change, evaluating pain perception experiments using functional magnetic resonance imaging, and monitoring a nuclear test ban treaty. The book is designed as a textbook for graduate level students in the physical, biological, and social sciences and as a graduate level text in statistics. Some parts may also serve as an undergraduate introductory course. Theory and methodology are separated to allow presentations on different levels. In addition to coverage of classical methods of time series regression, ARIMA models, spectral analysis and state-space models, the text includes modern developments including categorical time series analysis, multivariate spectral methods, long memory series, nonli...

  9. A KST framework for correlation network construction from time series signals

    Science.gov (United States)

    Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping

    2018-04-01

    A KST (Kolmogorov-Smirnov test and T statistic) method is used for construction of a correlation network based on the fluctuation of each time series within the multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change detection procedure. Connections between the time series are derived from the data fluctuation matrix and are used for construction of the fluctuation correlation network (FCN). The method was tested with synthetic simulations, and the results were compared with those from using KS or T alone for detection of data fluctuation. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original time signals, which is more meaningful for many real-world applications and for the analysis of large-scale time signals where prior knowledge is uncertain.
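
    A simplified R sketch in the spirit of this construction follows: per-segment fluctuation scores (here just a two-sample KS statistic between segment halves, standing in for the KST change statistic) are correlated across series and thresholded into an adjacency matrix. All sizes and thresholds are illustrative.

```r
# Per-segment fluctuation score: KS statistic between the two halves
# of each segment (a crude change-detection proxy).
seg_fluct <- function(x, nseg = 20) {
  segs <- split(x, cut(seq_along(x), nseg, labels = FALSE))
  sapply(segs, function(s) {
    h <- length(s) %/% 2
    unname(ks.test(s[1:h], s[(h + 1):length(s)])$statistic)
  })
}

set.seed(1)
common <- as.numeric(arima.sim(list(ar = 0.9), 2000))
series <- replicate(5, common + rnorm(2000, 0, 0.5))  # 5 related series
fl <- apply(series, 2, seg_fluct)       # nseg x 5 fluctuation matrix
adj <- (cor(fl) > 0.5) * 1L; diag(adj) <- 0
adj                                     # adjacency of the fluctuation network
```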

  10. Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin

    Science.gov (United States)

    Zhang, L.

    2011-12-01

    Copulas have become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas (e.g. the Gumbel-Hougaard, Cook-Johnson and Frank copulas) and the meta-elliptical copulas (e.g. the Gaussian and Student-t copulas) have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems, by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals, in which the observations are assumed to be independent identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered as i.i.d. random variables due to the periodicity present in the data structure. The stationarity assumption is also in question due to climate change and land use and land cover (LULC) change in recent years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption in favour of a nonstationary approach. Likewise, for the study of the dependence structure of hydrological time series, the assumption of a common type of univariate distribution also needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series will be studied through the nonstationary time series analysis approach. The dependence structure of the multivariate monthly hydrological time series will be

  11. Forecasting daily meteorological time series using ARIMA and regression models

    Science.gov (United States)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving average methods, the autoregressive integrated moving average with external regressors in the form of Fourier terms, and the time series regression methodology including trend and seasonality components, with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
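
    A minimal version of the Fourier-regressor variant, assuming the forecast package (auto.arima, fourier and forecast are that package's functions), might look like this; the simulated daily series and K = 2 harmonics are placeholders, not the study's data or settings.

```r
# ARIMA errors with Fourier terms capturing annual seasonality.
library(forecast)
set.seed(1)
t <- 1:(3 * 365)
y <- ts(10 + 8 * sin(2 * pi * t / 365) + rnorm(length(t)),
        frequency = 365)
K <- 2                                   # number of Fourier harmonics
fit <- auto.arima(y, xreg = fourier(y, K = K), seasonal = FALSE)
fc  <- forecast(fit, xreg = fourier(y, K = K, h = 60))  # 60-day forecast
plot(fc)
```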

  12. Analysis of complex time series using refined composite multiscale entropy

    International Nuclear Information System (INIS)

    Wu, Shuen-De; Wu, Chiu-Wen; Lin, Shiou-Gwo; Lee, Kung-Yen; Peng, Chung-Kang

    2014-01-01

    Multiscale entropy (MSE) is an effective algorithm for measuring the complexity of a time series that has been applied in many fields successfully. However, MSE may yield an inaccurate estimation of entropy or induce undefined entropy because the coarse-graining procedure reduces the length of a time series considerably at large scales. Composite multiscale entropy (CMSE) was recently proposed to improve the accuracy of MSE, but it does not resolve undefined entropy. Here we propose a refined composite multiscale entropy (RCMSE) to improve CMSE. For short time series analyses, we demonstrate that RCMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy.
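
    To fix ideas, plain multiscale entropy (coarse-graining followed by sample entropy) can be sketched in R as below; RCMSE differs in that it accumulates the template and match counts over all coarse-graining offsets before taking the logarithm, which this sketch does not do.

```r
# Sample entropy: -log of the ratio of (m+1)- to m-dimensional
# template matches within tolerance r (Chebyshev distance).
sampen <- function(x, m = 2, r = 0.15 * sd(x)) {
  match_count <- function(dim) {
    X <- embed(x, dim)
    nr <- nrow(X); total <- 0
    for (i in 1:(nr - 1)) {
      d <- apply(abs(sweep(X[(i + 1):nr, , drop = FALSE], 2, X[i, ])),
                 1, max)
      total <- total + sum(d <= r)
    }
    total
  }
  -log(match_count(m + 1) / match_count(m))
}

# Coarse-grain at scale s by averaging non-overlapping blocks.
coarse_grain <- function(x, s)
  sapply(seq(1, length(x) - s + 1, by = s), function(i) mean(x[i:(i + s - 1)]))

set.seed(1)
x <- rnorm(3000)
# Entropy per scale (r is set per scale here, for simplicity).
sapply(1:5, function(s) sampen(coarse_grain(x, s)))
```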

  13. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  14. Tools for Generating Useful Time-series Data from PhenoCam Images

    Science.gov (United States)

    Milliman, T. E.; Friedl, M. A.; Frolking, S.; Hufkens, K.; Klosterman, S.; Richardson, A. D.; Toomey, M. P.

    2012-12-01

    The PhenoCam project (http://phenocam.unh.edu/) is tasked with acquiring, processing, and archiving digital repeat photography to be used for scientific studies of vegetation phenological processes. Over the past 5 years the PhenoCam project has collected over 2 million time series images, totaling over 700 GB of image data. Several papers have been published describing derived "vegetation indices" (such as the green chromatic coordinate, or gcc) which can be compared to standard measures such as NDVI or EVI. Imagery from our archive is available for download, but converting series of images for a particular camera into useful scientific data, while simple in principle, is complicated by a variety of factors. Cameras are often exposed to harsh weather conditions (high wind, rain, ice, snow pile-up), which result in images where the field of view (FOV) is partially obscured or completely blocked for periods of time. The FOV can also change for other reasons (mount failures, tower maintenance, etc.). Some of the relatively inexpensive cameras that are being used can also temporarily lose color balance or exposure control, resulting in loss of imagery. All these factors negatively influence the automated analysis of the image time series, making this a non-trivial task. Here we discuss the challenges of processing PhenoCam image time series for vegetation monitoring and the associated data management tasks. We describe our current processing framework and a simple standardized output format for the resulting time-series data. The time-series data in this format will be generated for specific "regions of interest" (ROIs) for each of the cameras in the PhenoCam network. This standardized output (which will be updated daily) can be considered 'the pulse' of a particular camera and will provide a default phenological dynamic for said camera. The time-series data can also be viewed as a higher-level product which can be used to generate "vegetation indices", like gcc, for

  15. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for heath planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates have increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance than the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.

  16. Driving factors of interactions between the exchange rate market and the commodity market: A wavelet-based complex network perspective

    Science.gov (United States)

    Wen, Shaobo; An, Haizhong; Chen, Zhihua; Liu, Xueyong

    2017-08-01

    In traditional econometrics, a time series must be a stationary sequence. However, it usually shows time-varying fluctuations, and it remains a challenge to perform a multiscale analysis of the data and discover the topological characteristics of conduction at different scales. Wavelet analysis and complex networks from statistical physics have special advantages in solving these problems. We select the exchange rate variable from the Chinese market and the commodity price index variable from the world market as the time series of our study. We explore the driving factors behind the behavior of the two markets and their topological characteristics in three steps. First, we use the Kalman filter to find the optimal estimate of the relationship between the two markets. Second, wavelet analysis is used to extract the scales of the relationship that are driven by different frequency wavelets. Meanwhile, we search for the actual economic variables corresponding to different frequency wavelets. Finally, a complex network is used to search for the transfer characteristics of the combinations of states driven by different frequency wavelets. The results show that statistical physics has a unique advantage over traditional econometrics. The Chinese market has time-varying impacts on the world market: it has greater influence when the world economy is stable and less influence in times of turmoil. The process of forming the state combinations is random. Transitions between state combinations have a clustering feature. Based on these characteristics, we can effectively reduce the information burden on investors and correctly respond to the government's policy mix.

  17. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n x log(N; the visualization itself can be done with a complexity of O(1 and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms ms. The current 64-bit implementation theoretically supports datasets with up to 2(64 bytes, on the x86_64 architecture currently up to 2(48 bytes are supported, and benchmarks have been conducted with 2(40 bytes/1 TiB or 1.3 x 10(11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.

  18. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry and multiple-electrode aggregometry, measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and by viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the most suitable approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation, as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
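
    The two normalization directions discussed can be illustrated in R with a synthetic assays-by-time matrix and z-transformation; the numbers are made up and carry no clinical meaning.

```r
# Synthetic assays-by-time matrix: 5 assays with very different scales
# measured at 12 time points.
set.seed(1)
M <- matrix(rnorm(5 * 12,
                  mean = rep(c(50, 200, 8, 70, 30), 12),
                  sd   = rep(c(5, 20, 1, 7, 3), 12)),
            nrow = 5, dimnames = list(paste0("assay", 1:5), NULL))

z_per_assay <- t(scale(t(M)))   # each assay scaled over its time points
z_per_time  <- scale(M)         # each time point scaled over the assays
round(z_per_assay[, 1:4], 2)    # comparable across assays, per row
```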

  19. Performance Evaluation and Market Timing: the Skill Index

    Directory of Open Access Journals (Sweden)

    Ney Roberto Otoni de Brito

    2003-01-01

    Full Text Available MERTON (1981) examines the creation of value by fund managers selecting between stocks and fixed income instruments through market timing. HENRIKSON and MERTON (1981) proceed to propose empirical tests of fund and manager performance in market timing. BRITO, BONA and TACIRO (2003) generalize the results of MERTON (1981) and HENRIKSON and MERTON (1981) for actively managed funds with a clearly defined benchmark portfolio. In the generalized context of active portfolio management, this paper proposes a new index – the Skill Index of Brito (SIB) – to measure the performance and efficiency in market timing of actively managed funds. The paper proceeds to test the performance and skill of hedge funds in Brazil using the SIB. A representative sample of 32 hedge funds with a window of 90 trading days on October 31, 1999 was obtained. The empirical tests of performance and skill use the interbank borrowing and lending rate as the passive benchmark. The results indicate the significance at the 5% level of the SIB for ten hedge funds in the sample. Among them, seven funds also showed significance at the 1% level. In sum, the results indicate that a majority of hedge funds showed no significant skill in the Brazilian market in the examined period.

  20. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    Science.gov (United States)

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
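
    For reference, the classic DTW recursion that DTW-S builds on is compact. The following minimal textbook version is not the DTW-S implementation itself, which adds interpolated time points, relaxed end-point alignment, and the significance simulation:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessors
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```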

  1. Automated Bayesian model development for frequency detection in biological time series

    Directory of Open Access Journals (Sweden)

    Oldroyd Giles ED

    2011-06-01

    Full Text Available Abstract Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data.

  2. Automated Bayesian model development for frequency detection in biological time series.

    Science.gov (United States)

    Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J

    2011-06-24

    A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data.
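
    As a rough illustration of Bayesian frequency detection, the posterior over a single frequency can be evaluated on a grid after analytically marginalizing amplitudes and noise, in the spirit of Bretthorst's Bayesian Spectrum Analysis; the code below is a simplified single-sinusoid sketch, not the authors' implementation:

```python
import numpy as np

def log_posterior_freq(t, d, omegas):
    """log p(omega | data) for one sinusoid in Gaussian noise, with the
    amplitudes and noise variance marginalised analytically (Bretthorst)."""
    d = d - d.mean()                              # remove constant background
    N, msq = len(d), np.mean(d ** 2)
    logp = np.empty_like(omegas)
    for k, w in enumerate(omegas):
        # Schuster periodogram C(w) = |sum d_j exp(i w t_j)|^2 / N
        C = np.abs(np.sum(d * np.exp(1j * w * t))) ** 2 / N
        logp[k] = (2 - N) / 2 * np.log(max(1 - 2 * C / (N * msq), 1e-12))
    return logp - logp.max()

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)                       # short, noisy record
d = np.sin(2 * np.pi * 0.7 * t) + 0.5 * rng.normal(size=t.size)
omegas = 2 * np.pi * np.linspace(0.1, 2.0, 1000)
f_best = omegas[np.argmax(log_posterior_freq(t, d, omegas))] / (2 * np.pi)  # ~0.7
```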

  3. hctsa: A Computational Framework for Automated Time-Series Phenotyping Using Massive Feature Extraction.

    Science.gov (United States)

    Fulcher, Ben D; Jones, Nick S

    2017-11-22

    Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Causality as a Rigorous Notion and Quantitative Causality Analysis with Time Series

    Science.gov (United States)

    Liang, X. S.

    2017-12-01

    Given two time series, can one faithfully tell, in a rigorous and quantitative way, the cause and effect between them? Here we show that this important and challenging question (one of the major challenges in the science of big data), which is of interest in a wide variety of disciplines, has a positive answer. Particularly, for linear systems, the maximal likelihood estimator of the causality from a series X2 to another series X1, written T2→1, turns out to be concise in form: T2→1 = [C11 C12 C2,d1 − C12^2 C1,d1] / [C11^2 C22 − C11 C12^2], where Cij (i,j = 1,2) is the sample covariance between Xi and Xj, and Ci,dj the covariance between Xi and ΔXj/Δt, the difference approximation of dXj/dt using the Euler forward scheme. An immediate corollary is that causation implies correlation, but not vice versa, resolving the long-standing debate over causation versus correlation. The above formula has been validated with touchstone series purportedly generated with one-way causality that evades the classical approaches such as the Granger causality test and transfer entropy analysis. It has also been applied successfully to the investigation of many real problems. Through a simple analysis with the stock series of IBM and GE, an unusually strong one-way causality is identified from the former to the latter in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a "Giant" for the computer market. Another example presented here regards the cause-effect relation between the two climate modes, El Niño and the Indian Ocean Dipole (IOD). In general, these modes are mutually causal, but the causality is asymmetric. To El Niño, the information flowing from IOD manifests itself as a propagation of uncertainty from the Indian Ocean. In the third example, an unambiguous one-way causality is found between CO2 and the global mean temperature anomaly, confirming that CO2 indeed drives the recent global warming.
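
    The estimator can be computed directly from sample covariances; a minimal sketch (our own naming and code):

```python
import numpy as np

def liang_T21(x1, x2, dt=1.0):
    """Maximum-likelihood information flow T_{2->1} from series x2 to x1,
    using the linear-system formula quoted above."""
    dx1 = (x1[1:] - x1[:-1]) / dt        # Euler forward difference of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(x1, x2)
    C11, C12, C22 = C[0, 0], C[0, 1], C[1, 1]
    C1d1 = np.cov(x1, dx1)[0, 1]         # cov(x1, dx1/dt)
    C2d1 = np.cov(x2, dx1)[0, 1]         # cov(x2, dx1/dt)
    return (C11 * C12 * C2d1 - C12**2 * C1d1) / (C11**2 * C22 - C11 * C12**2)
```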

  5. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  6. Evidence of the overconfidence bias in the Egyptian stock market in different market states

    Directory of Open Access Journals (Sweden)

    Ayman H. Metwally

    2015-11-01

    Full Text Available Traditional finance theories fail to explain several anomalies observed in security markets. High levels of market turnover are among the most challenging market puzzles documented in many security markets. Several studies assert a correlation between past market return and current market turnover, and behavioral finance theories assume that overconfidence bias is the reason behind this relation. Hence, this paper aims to study the impact of overconfidence, a behavioral bias stemming from the second building block of behavioral finance, "cognitive psychology", which affects traders' beliefs and thereby their trading behavior in the form of excessive trading (De Bondt and Thaler, 1995). The study tests the overconfidence bias in the Egyptian stock market during the period from 2002 to 2012 at the aggregate market level, through examining the relation between market returns and market turnover in different market states, seeking to document or deny whether overconfidence bias encourages investors to trade. The whole period is divided into four sub-periods: two tranquil, upward-trending periods (2002-2005 and 2005-2008) and two volatile, downward-trending periods (the financial crisis, 2008-2010, and the Egyptian Revolution period, 2010-2012). A quantitative research design using secondary data and applying time series statistical techniques follows the methodology of Statman et al. (2006). The time series analysis is based on four statistical techniques: vector autoregression, optimal lag selection, impulse response functions and Granger causality tests. Market turnover ratios are used as proxies for overconfidence. The research finds a significant impact of past market return on current turnover at lag 1, which turns negative at lag 2, returns positive at lag 3, and then remains positive and significant until lag 5. This is in line with the overconfidence and self-attribution theory of Daniel et al.
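
    The four time series techniques named above map directly onto standard tooling; the sketch below uses statsmodels with simulated stand-in data (the variables and numbers are hypothetical, not the study's dataset):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical weekly data: market return and a turnover ratio that
# responds to the previous week's return (toy overconfidence channel)
rng = np.random.default_rng(7)
n = 520
ret = rng.normal(0, 0.02, n)
turn = 0.5 + 0.3 * np.concatenate([[0], ret[:-1]]) + rng.normal(0, 0.05, n)
data = pd.DataFrame({"ret": ret, "turnover": turn})

model = VAR(data)
order = model.select_order(maxlags=10)         # optimal lag selection (AIC etc.)
res = model.fit(order.aic)                     # vector autoregression
irf = res.irf(10)                              # impulse response functions
gc = res.test_causality("turnover", ["ret"])   # Granger causality: ret -> turnover
print(order.selected_orders, gc.summary())
```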

  7. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  8. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adheres to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
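
    A minimal sketch of the combined strategy, a Fisher transformation followed by a Box-Cox step (our own function built on SciPy, not the authors' pipeline):

```python
import numpy as np
from scipy import stats

def stabilized_connectivity(r, eps=1e-6):
    """Fisher z-transform a sliding-window correlation series, then add a
    Box-Cox step to pull the result closer to a Gaussian distribution."""
    z = np.arctanh(np.clip(r, -1 + eps, 1 - eps))   # Fisher transformation
    shifted = z - z.min() + eps                     # Box-Cox needs positive input
    bc, lam = stats.boxcox(shifted)                 # ML estimate of lambda
    return bc, lam

r = np.tanh(np.random.default_rng(0).normal(0.3, 0.2, 300))  # toy r(t) in (-1, 1)
bc, lam = stabilized_connectivity(r)
```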

  9. Three-Factor Market-Timing Models with Fama and French's Spread Variables

    Directory of Open Access Journals (Sweden)

    Joanna Olbryś

    2010-01-01

    Full Text Available The traditional performance measurement literature has attempted to distinguish security selection, or stock-picking ability, from market-timing, or the ability to predict overall market returns. However, the literature finds that it is not easy to separate ability into such dichotomous categories. Some researchers have developed models that allow the decomposition of manager performance into market-timing and selectivity skills. The main goal of this paper is to present modified versions of classical market-timing models with Fama and French’s spread variables SMB and HML, in the case of Polish equity mutual funds. (original abstract)
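
    One classical model that such modifications extend is the Treynor-Mazuy regression; with the SMB and HML spread variables added, it can be estimated by ordinary least squares. The sketch below uses simulated stand-in returns; the Henriksson-Merton variant would replace the squared market term with max(market excess return, 0):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 120                                   # hypothetical monthly observations
rmrf = rng.normal(0.005, 0.04, n)         # market excess return
smb, hml = rng.normal(0, 0.02, (2, n))    # Fama-French spread variables
fund = (0.001 + 0.9 * rmrf + 0.2 * smb - 0.1 * hml
        + 0.5 * rmrf**2 + rng.normal(0, 0.01, n))   # fund excess return

# gamma on the squared market term measures market-timing ability
X = sm.add_constant(np.column_stack([rmrf, smb, hml, rmrf**2]))
res = sm.OLS(fund, X).fit()
alpha, beta, b_smb, b_hml, gamma = res.params
```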

  10. A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.

    Science.gov (United States)

    Araújo, Ricardo de A

    2012-04-01

    Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation arises from all these techniques and is known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in the reconstruction of stock market phenomena. In this paper, we propose a suitable model inspired by concepts from mathematical morphology (MM) and lattice theory (LT). This model is generically called the increasing morphological perceptron (IMP). Also, we present a gradient steepest descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the problem of non-differentiability of morphological operations. Into the learning process we have included a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear problems of time series forecasting from the Brazilian stock market. Additionally, two natural-phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial time series. Finally, the obtained results are discussed and compared to results found using models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. A Review of Some Aspects of Robust Inference for Time Series.

    Science.gov (United States)

    1984-09-01

    A Review of Some Aspects of Robust Inference for Time Series, by R. D. Martin. Technical Report, Department of Statistics, University of Washington, Seattle, September 1984. One cannot hope to have a good method for dealing with outliers in time series by using only an instantaneous nonlinear transformation of the data.

  12. Investigation of multifractality in the Brazilian stock market

    Science.gov (United States)

    Maganini, Natália Diniz; Da Silva Filho, Antônio Carlos; Lima, Fabiano Guasti

    2018-05-01

    Many studies point to a possible new stylized fact for financial time series: multifractality. Several authors have already detected this characteristic in multiple time series in several countries. With that in mind and based on the Multifractal Detrended Fluctuation Analysis (MFDFA) method, this paper analyzes multifractality in the Brazilian market. This analysis is performed with daily data from the IBOVESPA index (the Brazilian stock exchange's main index) and four other highly marketable stocks in the Brazilian market (VALE5, ITUB4, BBDC4 and CIEL3), which represent more than 25% of the index composition, making up 1961 observations for each asset in the period from June 26, 2009 to May 31, 2017. We found that the studied stock prices and the Brazilian index are multifractal, but that the degree of multifractality is not the same for all assets. The use of shuffled and surrogate series indicates that, for the period and the stocks considered, long-range correlations do not strongly influence the multifractality, but the distribution (fat tails) exerts a possible influence on IBOVESPA and CIEL3.
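
    A compact sketch of the MFDFA computation, returning the generalized Hurst exponents h(q); a q-dependent h(q) signals multifractality. This is a simplified version (non-overlapping forward windows only), not the authors' code:

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Minimal MFDFA: generalized Hurst exponents h(q) from the slopes of
    log F_q(s) versus log s."""
    y = np.cumsum(x - np.mean(x))                    # profile
    Fq = np.zeros((len(qs), len(scales)))
    for si, s in enumerate(scales):
        n_seg = len(y) // s
        F2 = np.empty(n_seg)
        t = np.arange(s)
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, order), t)
            F2[v] = np.mean((seg - trend) ** 2)      # detrended variance
        for qi, q in enumerate(qs):
            if q == 0:                               # logarithmic average for q = 0
                Fq[qi, si] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[qi, si] = np.mean(F2 ** (q / 2)) ** (1 / q)
    return np.array([np.polyfit(np.log(scales), np.log(Fq[qi]), 1)[0]
                     for qi in range(len(qs))])

h = mfdfa(np.random.default_rng(0).normal(size=4096),
          scales=[16, 32, 64, 128, 256], qs=[-4, -2, 0, 2, 4])  # ~0.5 for noise
```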

  13. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    Science.gov (United States)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and if rejected suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  14. Clustering Multivariate Time Series Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Shima Ghassempour

    2014-03-01

    Full Text Available In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and are therefore accessible to a wide range of researchers.
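
    The trajectory-to-HMM-to-distance-matrix pipeline can be sketched with hmmlearn and SciPy. For brevity, this toy version handles continuous values only and uses a symmetrized per-observation log-likelihood distance; the names and data are illustrative, not the authors' implementation:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.cluster.hierarchy import linkage, fcluster

def fit_hmm(series, n_states=3, seed=0):
    """Map one (continuous-only, for simplicity) trajectory to an HMM."""
    m = GaussianHMM(n_components=n_states, random_state=seed)
    m.fit(series.reshape(-1, 1))
    return m

def hmm_distance(ma, mb, sa, sb):
    """Symmetrized log-likelihood (Juang-Rabiner style) distance."""
    return 0.5 * ((ma.score(sa.reshape(-1, 1)) - mb.score(sa.reshape(-1, 1))) / len(sa)
                  + (mb.score(sb.reshape(-1, 1)) - ma.score(sb.reshape(-1, 1))) / len(sb))

rng = np.random.default_rng(0)
series = [rng.normal(i % 2, 1, 200).cumsum() for i in range(6)]  # toy trajectories
models = [fit_hmm(s) for s in series]
n = len(series)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = hmm_distance(models[i], models[j], series[i], series[j])
labels = fcluster(linkage(D[np.triu_indices(n, 1)]), t=2, criterion="maxclust")
```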

  15. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to search gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting the three-dimensional data, i.e. gene-time-condition. Computational complexity for analyzing such data is very high, compared to the already difficult NP-hard two dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression pattern in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector detected clusters with differential expression patterns across conditions successfully. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at

  16. Stochastic generation of hourly wind speed time series

    International Nuclear Information System (INIS)

    Shamshad, A.; Wan Mohd Ali Wan Hussin; Bawadi, M.A.; Mohd Sanusi, S.A.

    2006-01-01

    In the present study, hourly wind speed data of Kuala Terengganu in Peninsular Malaysia are simulated by using the transition matrix approach of a Markovian process. The wind speed time series is divided into various states based on certain criteria. The next wind speed states are selected based on the previous states. The cumulative probability transition matrix has been formed, in which each row ends with 1. Using uniform random numbers between 0 and 1, a series of future states is generated. These states have been converted to the corresponding wind speed values using another uniform random number generator. The accuracy of the model has been determined by comparing statistical characteristics such as the average, standard deviation, root mean square error, probability density function and autocorrelation function of the generated data to those of the original data. The generated wind speed time series is capable of preserving the wind speed characteristics of the observed data.
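
    The generation scheme translates almost line by line into code; a sketch with illustrative parameter choices (our own, not the study's implementation):

```python
import numpy as np

def synthetic_wind(speeds, n_states=12, n_out=8760, seed=0):
    """First-order Markov chain generator for hourly wind speeds."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(speeds.min(), speeds.max(), n_states + 1)
    states = np.clip(np.digitize(speeds, edges) - 1, 0, n_states - 1)

    # Transition counts, row-normalised, then cumulated: each row ends with 1
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P[P.sum(axis=1) == 0] = 1.0 / n_states          # unvisited states: uniform
    P /= P.sum(axis=1, keepdims=True)
    cumP = np.cumsum(P, axis=1)

    out, s = np.empty(n_out), states[0]
    for t in range(n_out):
        # first uniform draw picks the next state from the cumulative row
        s = min(np.searchsorted(cumP[s], rng.random()), n_states - 1)
        # second uniform draw maps the state back to a speed value
        out[t] = edges[s] + rng.random() * (edges[s + 1] - edges[s])
    return out

obs = np.abs(np.random.default_rng(1).normal(6, 3, 2000))  # stand-in observations
sim = synthetic_wind(obs)
```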

  17. Causal strength induction from time series data.

    Science.gov (United States)

    Soo, Kevin W; Rottman, Benjamin M

    2018-04-01

    One challenge when inferring the strength of cause-effect relations from time series data is that the cause and/or effect can exhibit temporal trends. If temporal trends are not accounted for, a learner could infer that a causal relation exists when it does not, or even infer that there is a positive causal relation when the relation is negative, or vice versa. We propose that learners use a simple heuristic to control for temporal trends: they focus not on the states of the cause and effect at a given instant, but on how the cause and effect change from one observation to the next, which we call transitions. Six experiments were conducted to understand how people infer causal strength from time series data. We found that participants indeed use transitions in addition to states, which helps them to reach more accurate causal judgments (Experiments 1A and 1B). Participants use transitions more when the stimuli are presented in a naturalistic visual format than a numerical format (Experiment 2), and the effect of transitions is not driven by primacy or recency effects (Experiment 3). Finally, we found that participants primarily use the direction in which variables change rather than the magnitude of the change for estimating causal strength (Experiments 4 and 5). Collectively, these studies provide evidence that people often use a simple yet effective heuristic for inferring causal strength from time series data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Interpretable Categorization of Heterogeneous Time Series Data

    Science.gov (United States)

    Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Silbermann, Joshua

    2017-01-01

    We analyze data from simulated aircraft encounters to validate and inform the development of a prototype aircraft collision avoidance system. The high-dimensional and heterogeneous time series dataset is analyzed to discover properties of near mid-air collisions (NMACs) and categorize the NMAC encounters. Domain experts use these properties to better organize and understand NMAC occurrences. Existing solutions either are not capable of handling high-dimensional and heterogeneous time series datasets or do not provide explanations that are interpretable by a domain expert. The latter is critical to the acceptance and deployment of safety-critical systems. To address this gap, we propose grammar-based decision trees along with a learning algorithm. Our approach extends decision trees with a grammar framework for classifying heterogeneous time series data. A context-free grammar is used to derive decision expressions that are interpretable, application-specific, and support heterogeneous data types. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to a simulated aircraft encounter dataset and evaluate the performance of four variants of our learning algorithm. The best algorithm is used to analyze and categorize near mid-air collisions in the aircraft encounter dataset. We describe each discovered category in detail and discuss its relevance to aircraft collision avoidance.

  19. From market games to real-world markets

    Science.gov (United States)

    Jefferies, P.; Hart, M. L.; Hui, P. M.; Johnson, N. F.

    2001-04-01

    This paper uses the development of multi-agent market models to present a unified approach to the joint questions of how financial market movements may be simulated, predicted, and hedged against. We first present the results of agent-based market simulations in which traders equipped with simple buy/sell strategies and limited information compete in speculatory trading. We examine the effect of different market clearing mechanisms and show that implementation of a simple Walrasian auction leads to unstable market dynamics. We then show that a more realistic out-of-equilibrium clearing process leads to dynamics that closely resemble real financial movements, with fat-tailed price increments, clustered volatility and high volume autocorrelation. We then show that replacing the 'synthetic' price history used by these simulations with data taken from real financial time-series leads to the remarkable result that the agents can collectively learn to identify moments in the market where profit is attainable. Hence on real financial data, the system as a whole can perform better than random. We then employ the formalism of Bouchaud in conjunction with agent-based models to show that in general risk cannot be eliminated from trading with these models. We also show that, in the presence of transaction costs, the risk of option writing is greatly increased. This risk, and the costs, can however be reduced through the use of a delta-hedging strategy with modified, time-dependent volatility structure.

  20. A cluster merging method for time series microarray with production values.

    Science.gov (United States)

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully, combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and merging them by taking into account the frequency by which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.

  1. Constructing networks from a dynamical system perspective for multivariate nonlinear time series.

    Science.gov (United States)

    Nakamura, Tomomichi; Tanizawa, Toshihiro; Small, Michael

    2016-03-01

    We describe a method for constructing networks for multivariate nonlinear time series. We approach the interaction between the various scalar time series from a deterministic dynamical system perspective and provide a generic and algorithmic test for whether the interaction between two measured time series is statistically significant. The method can be applied even when the data exhibit no obvious qualitative similarity: a situation in which the naive method utilizing the cross correlation function directly cannot correctly identify connectivity. To establish the connectivity between nodes we apply the previously proposed small-shuffle surrogate (SSS) method, which can investigate whether there are correlation structures in short-term variabilities (irregular fluctuations) between two data sets from the viewpoint of deterministic dynamical systems. The procedure to construct networks based on this idea is composed of three steps: (i) each time series is considered as a basic node of a network, (ii) the SSS method is applied to verify the connectivity between each pair of time series taken from the whole multivariate time series, and (iii) the pair of nodes is connected with an undirected edge when the null hypothesis cannot be rejected. The network constructed by the proposed method indicates the intrinsic (essential) connectivity of the elements included in the system or the underlying (assumed) system. The method is demonstrated for numerical data sets generated by known systems and applied to several experimental time series.

  2. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multivariable time series model for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682
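
    Fitting the reported ARIMA(0,0,1)×(0,1,1) specification is straightforward in statsmodels; the series below is simulated stand-in data, not the Chinese surveillance data, and exogenous regressors for the ARIMAX variant would be passed via the exog argument:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated monthly incidence with trend and yearly seasonality
rng = np.random.default_rng(4)
idx = pd.date_range("2005-01-01", periods=96, freq="MS")
y = pd.Series(5 + 0.05 * np.arange(96)
              + 2 * np.sin(np.arange(96) * 2 * np.pi / 12)
              + rng.normal(0, 0.5, 96), index=idx)

# ARIMA(0,0,1)x(0,1,1) with a 12-month season; exog= would give ARIMAX
res = SARIMAX(y, order=(0, 0, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
forecast = res.forecast(steps=12)        # one-year-ahead incidence forecast
```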

  3. Time-varying long term memory in the European Union stock markets

    Science.gov (United States)

    Sensoy, Ahmet; Tabak, Benjamin M.

    2015-10-01

    This paper proposes a new efficiency index to model time-varying inefficiency in stock markets. We focus on European stock markets and show that they have different degrees of time-varying efficiency. We observe that the 2008 global financial crisis had an adverse effect on almost all EU stock markets. However, the Eurozone sovereign debt crisis had a significant adverse effect only on the markets in France, Spain and Greece. For the late members, joining the EU does not have a uniform effect on stock market efficiency. Our results have important implications for policy makers, investors, risk managers and academics.

  4. Reconstruction of tritium time series in precipitation

    International Nuclear Information System (INIS)

    Celle-Jeanton, H.; Gourcy, L.; Aggarwal, P.K.

    2002-01-01

    Tritium is commonly used in groundwater studies to calculate the recharge rate and to identify the presence of modern recharge. Knowledge of the 3H precipitation time series is therefore very important for the study of groundwater recharge. Rozanski and Araguas provided good information on the precipitation tritium content at 180 stations of the GNIP network up to the end of 1987, but the record shows gaps in measurements, either within one series or within one region (the Southern Hemisphere, for instance). It therefore seems essential to find a method to reconstruct data for a region where no measurement is available. To solve this problem, we propose a method based on triangulation. It needs the 3H time series of three stations geographically surrounding a fourth station, for which the tritium input curve is then reconstructed.

  5. Time Series, Stochastic Processes and Completeness of Quantum Theory

    International Nuclear Information System (INIS)

    Kupczynski, Marian

    2011-01-01

    Most physical experiments are described as repeated measurements of some random variables. Experimental data registered by on-line computers form time series of outcomes. The frequencies of different outcomes are compared with the probabilities provided by the algorithms of quantum theory (QT). In spite of the statistical character of its predictions, a claim was made that QT provides the most complete description of the data and of the underlying physical phenomena. This claim could be easily rejected if some fine structures, averaged out in the standard descriptive statistical analysis, were found in time series of experimental data. To search for these structures one has to use more subtle statistical tools developed to study time series produced by various stochastic processes. In this talk we review some of these tools. As an example, we show how the standard descriptive statistical analysis of the data is unable to reveal a fine structure in a simulated sample of an AR(2) stochastic process. We emphasize once again that the violation of Bell inequalities gives no information on the completeness or the non-locality of QT. The appropriate way to test the completeness of quantum theory is to search for fine structures in time series of experimental data by means of purity tests or by studying the autocorrelation and partial autocorrelation functions.

  6. Efficient use of correlation entropy for analysing time series data

    Indian Academy of Sciences (India)

    Abstract. The correlation dimension D2 and correlation entropy K2 are both important quantifiers in nonlinear time series analysis. However, use of D2 has been more common compared to K2 as a discriminating measure. One reason for this is that D2 is a static measure and can be easily evaluated from a time series.

  7. European wood pellet market integration - A study of the residential sector

    International Nuclear Information System (INIS)

    Olsson, Olle; Hillring, Bengt; Vinterbaeck, Johan

    2011-01-01

    The integration of European energy markets is a key goal of EU energy policy, and has also been the focal point of many scientific studies in recent years. International markets for coal, oil, natural gas and electricity have previously been investigated in order to determine the extent of the respective markets. This study extends this field of research to bioenergy markets. Price series data and time series econometrics are used to determine whether the residential sector wood pellet markets of Austria, Germany and Sweden are integrated. The results of the econometric tests show that the German and Austrian markets can be considered to be integrated, whereas the Swedish market is separate from the other two countries. Although increased internationalization of wood pellet markets is likely to contribute to European price convergence and market integration, this process is far from completed. (author)
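
    Market integration of this kind is commonly examined by testing (log) price series for cointegration; a minimal Engle-Granger-style sketch on simulated stand-in prices (the country labels are illustrative only):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Toy log price series: two markets sharing a common trend, one separate
rng = np.random.default_rng(5)
common = np.cumsum(rng.normal(0, 1, 400))
p_de = common + rng.normal(0, 0.5, 400)          # "Germany" (illustrative)
p_at = 0.9 * common + rng.normal(0, 0.5, 400)    # "Austria" (illustrative)
p_se = np.cumsum(rng.normal(0, 1, 400))          # "Sweden", an independent walk

t_stat, p_value, _ = coint(p_de, p_at)           # small p -> integrated pair
print(p_value, coint(p_de, p_se)[1])             # second p should stay large
```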

  8. Classification of biosensor time series using dynamic time warping: applications in screening cancer cells with characteristic biomarkers.

    Science.gov (United States)

    Rai, Shesh N; Trainor, Patrick J; Khosravi, Farhad; Kloecker, Goetz; Panchapakesan, Balaji

    2016-01-01

    The development of biosensors that produce time series data will facilitate improvements in biomedical diagnostics and in personalized medicine. The time series produced by these devices often contain characteristic features arising from biochemical interactions between the sample and the sensor. To use such characteristic features for determining sample class, similarity-based classifiers can be utilized. However, the construction of such classifiers is complicated by the variability in the time domains of such series, which renders traditional distance metrics such as Euclidean distance ineffective in distinguishing between biological variance and time domain variance. The dynamic time warping (DTW) algorithm is a sequence alignment algorithm that can be used to align two or more series to facilitate quantifying similarity. In this article, we evaluated the performance of DTW distance-based similarity classifiers for classifying time series that mimic electrical signals produced by nanotube biosensors. Simulation studies demonstrated the positive performance of such classifiers in discriminating between time series containing characteristic features that are obscured by noise in the intensity and time domains. We then applied a DTW distance-based k-nearest neighbors classifier to distinguish the presence/absence of a mesenchymal biomarker in cancer cells in buffy coats in a blinded test. Using a train-test approach, we find that the classifier had high sensitivity (90.9%) and specificity (81.8%) in differentiating between EpCAM-positive MCF7 cells spiked in buffy coats and those in plain buffy coats.

  9. Marketing communication expenditures and financial capital—the impact of marketing as an option

    NARCIS (Netherlands)

    Hodgson, V.L.; Hodgson, A.

    2008-01-01

    This paper examines the financial effectiveness of marketing communication expenditure (MCE) as an instrument to increase risk-weighted capital. We nest a cross-sectional time-series panel model within the risk-adjusted earnings principles of Ohlson (1995), and apply the model to a dataset of NSW

  10. Planning the Marketing Strategy. PACE Revised. Level 1. Unit 6. Research & Development Series No. 240AB6.

    Science.gov (United States)

    Ashmore, M. Catherine; Pritz, Sandra G.

    This lesson on planning a marketing strategy, the sixth in a series of 18 units, is part of the first level of a comprehensive entrepreneurship curriculum entitled: A Program for Acquiring Competence in Entrepreneurship (PACE). (Designed for use with secondary students, the first level of PACE introduces students to the concepts involved in…

  11. Planning the Marketing Strategy. PACE Revised. Level 2. Unit 6. Research & Development Series No. 240BB6.

    Science.gov (United States)

    Ashmore, M. Catherine; Pritz, Sandra G.

    This unit on planning marketing strategy for a small business, the sixth in a series of 18 modules, is on the second level of the revised PACE (Program for Acquiring Competence in Entrepreneurship) comprehensive curriculum. Geared to advanced secondary and beginning postsecondary or adult students, the modules provide an opportunity to learn about…

  12. A novel water quality data analysis framework based on time-series data mining.

    Science.gov (United States)

    Deng, Weihui; Wang, Guoyin

    2017-07-01

    The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on the time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate the similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks in the water quality time-series instance dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly dissolved oxygen (DO) time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. It discovered the relationship between water quality in the mainstream and its tributaries, as well as the main changing patterns of DO. The experimental results show that the proposed analysis framework is a feasible and efficient method to mine the hidden and valuable knowledge from historical water quality time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    Directory of Open Access Journals (Sweden)

    Vingron Martin

    2011-08-01

    Full Text Available Abstract Background Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.

  14. PhilDB: the time series database with built-in change logging

    Directory of Open Access Journals (Sweden)

    Andrew MacDonald

    2016-03-01

    Full Text Available PhilDB is an open-source time series database that supports storage of time series datasets that are dynamic; that is, it records updates to existing values in a log as they occur. PhilDB eases loading of data for the user by utilising an intelligent data write method. It preserves existing values during updates and abstracts the update complexity required to achieve logging of data value changes. It implements fast reads to make it practical to select data for analysis. Recent open-source systems have been developed to indefinitely store long-period high-resolution time series data without change logging. Unfortunately, such systems generally require a large initial installation investment before use because they are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they have a ‘big data’ approach to storage and access. Other open-source projects for handling time series data that avoid the ‘big data’ approach are also relatively new and are complex or incomplete. None of these systems gracefully handle revision of existing data while tracking values that change. Unlike ‘big data’ solutions, PhilDB has been designed for single machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB takes a unique approach to meta-data tracking: optional attribute attachment. This facilitates scaling the complexities of storing a wide variety of data. That is, it allows time series data to be loaded as time series instances with minimal initial meta-data, yet additional attributes can be created and attached to differentiate the time series instances when a wider variety of data is needed. PhilDB was written in Python, leveraging existing libraries. While some existing systems come close to meeting the needs PhilDB addresses, none cover all the needs at once. PhilDB was written to fill this gap in existing solutions.

  15. Carbon-dioxide emissions trading and hierarchical structure in worldwide finance and commodities markets.

    Science.gov (United States)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel N; Stanley, H Eugene

    2013-01-01

    In a highly interdependent economic world, the nature of relationships between financial entities is becoming an increasingly important area of study. Recently, many studies have shown the usefulness of minimal spanning trees (MST) in extracting interactions between financial entities. Here, we propose a modified MST network whose metric distance is defined in terms of cross-correlation coefficient absolute values, enabling the connections between anticorrelated entities to manifest properly. We investigate 69 daily time series, comprising three types of financial assets: 28 stock market indicators, 21 currency futures, and 20 commodity futures. We show that though the resulting MST network evolves over time, the financial assets of similar type tend to have connections which are stable over time. In addition, we find a characteristic time lag between the volatility time series of the stock market indicators and those of the EU CO2 emission allowance (EUA) and crude oil futures (WTI). This time lag is given by the peak of the cross-correlation function of the volatility time series of EUA (or WTI) with that of the stock market indicators, and is markedly different from 0 (>20 days), showing that the volatility of stock market indicators today can predict the volatility of EU emission allowances and of crude oil in the near future.
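
    The modified-metric MST is straightforward to construct: map absolute correlations to distances via d = sqrt(2(1 - |rho|)) and extract a minimum spanning tree. A sketch on random stand-in returns (our own code):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_from_returns(returns):
    """Modified-metric MST: d = sqrt(2(1 - |rho|)), so strongly
    anticorrelated assets are also treated as close."""
    rho = np.corrcoef(returns)                            # assets in rows
    d = np.sqrt(np.maximum(2.0 * (1.0 - np.abs(rho)), 0.0))
    np.fill_diagonal(d, 0.0)
    return minimum_spanning_tree(d).toarray()             # nonzero = tree edges

rng = np.random.default_rng(2)
returns = rng.normal(0, 1, (10, 500))    # 10 assets, 500 daily returns
tree = mst_from_returns(returns)
edges = np.transpose(np.nonzero(tree))   # (asset_i, asset_j) pairs in the tree
```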

  16. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    Energy Technology Data Exchange (ETDEWEB)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.; Goldstein, Richard

    2015-10-01

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since high-frequency medical sensors produce huge amounts of data, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in a time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data, enhancing time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
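
    The brute-force variant described above fits in a few lines; this in-memory NumPy sketch conveys the idea, whereas the study's contribution is executing it at scale inside a parallel DBMS:

```python
import numpy as np

def brute_force_discord(x, w):
    """Find the time series discord: the length-w subsequence whose distance
    to its nearest non-self match is largest (quadratic-time version)."""
    subs = np.lib.stride_tricks.sliding_window_view(x, w)
    n = len(subs)
    best_dist, best_idx = -1.0, -1
    for i in range(n):
        # nearest neighbour among non-overlapping ("non-self") subsequences
        nn = np.inf
        for j in range(n):
            if abs(i - j) >= w:
                nn = min(nn, np.linalg.norm(subs[i] - subs[j]))
        if nn > best_dist:
            best_dist, best_idx = nn, i
    return best_idx, best_dist

x = np.sin(np.linspace(0, 20 * np.pi, 2000))
x[700:750] += 1.5                        # injected anomaly
idx, dist = brute_force_discord(x, 50)   # idx should fall near 700
```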

  17. Estimation of system parameters in discrete dynamical systems from time series

    International Nuclear Information System (INIS)

    Palaniyandi, P.; Lakshmanan, M.

    2005-01-01

    We propose a simple method to estimate the parameters involved in discrete dynamical systems from time series. The method is based on the concept of controlling chaos by constant feedback. The major advantages of the method are that it needs a minimal number of time series data (either vector or scalar) and is applicable to dynamical systems of any dimension. The method also works extremely well even in the presence of noise in the time series. The method is specifically illustrated by means of logistic and Henon maps

  18. Evaluation of nonlinearity and validity of nonlinear modeling for complex time series.

    Science.gov (United States)

    Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo

    2007-10-01

    Even if an original time series exhibits nonlinearity, it is not always effective to approximate the time series by a nonlinear model because such nonlinear models have high complexity from the viewpoint of information criteria. Therefore, we propose two measures to evaluate both the nonlinearity of a time series and validity of nonlinear modeling applied to it by nonlinear predictability and information criteria. Through numerical simulations, we confirm that the proposed measures effectively detect the nonlinearity of an observed time series and evaluate the validity of the nonlinear model. The measures are also robust against observational noises. We also analyze some real time series: the difference of the number of chickenpox and measles patients, the number of sunspots, five Japanese vowels, and the chaotic laser. We can confirm that the nonlinear model is effective for the Japanese vowel /a/, the difference of the number of measles patients, and the chaotic laser.

  19. Scaling Exponents in Financial Markets

    Science.gov (United States)

    Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong

    2007-03-01

    We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. In our case, we find that there exists a persistent long-range correlation in volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.

  20. Using Computer Techniques To Predict OPEC Oil Prices For Period 2000 To 2015 By Time-Series Methods

    Directory of Open Access Journals (Sweden)

    Mohammad Esmail Ahmad

    2015-08-01

    Full Text Available The instability in world and OPEC oil prices results from many factors acting over a long time. The problem can be summarized as follows: oil exports not only constitute a large share of the national income, but also make up most of the savings of the oil states. Oil prices affect their market through the interaction of the supply and demand forces of oil. The research hypothesis states that the movement of oil prices has caused shocks, crises and economic problems. Because these shocks happen due to changes in oil prices, predictions need to be made within the framework of short-run economic planning, using computer techniques and time series models, in order to avoid such shocks.

  1. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    Science.gov (United States)

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  2. To Market, To Market--Careers in the Online Industry. . .Fifth in a Series.

    Science.gov (United States)

    Kremin, Michael C.

    1985-01-01

    Reviews demand for marketing personnel in online industry and provides brief descriptions of generic positions which include information on background and experience needed: vice president of marketing, sales manager, sales representative, advertising manager, product manager, marketing research manager, distribution manager, service manager,…

  3. (IAM Series No 005) Are “Market Neutral” Hedge Funds Really Market Neutral?

    OpenAIRE

    Andrew Patton

    2004-01-01

    One can consider the concept of market neutrality for hedge funds as having breadth and depth: breadth reflects the number of market risks to which a fund is neutral, while depth reflects the completeness of the neutrality of the fund to market risks. We focus on market neutrality depth, and propose five different neutrality concepts. Mean neutrality nests the standard correlation-based definition of neutrality. Variance neutrality, Value-at-Risk neutrality and tail neutrality all relate to t...
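
    A toy illustration of the correlation-based (mean) neutrality the abstract starts from: estimate a fund's correlation and beta against market returns (data are synthetic assumptions; the variance, Value-at-Risk and tail concepts mentioned above require separate tests):

        import numpy as np

        rng = np.random.default_rng(5)
        market = 0.01 * rng.standard_normal(1000)
        fund = 0.002 * rng.standard_normal(1000) + 0.05 * market  # slight exposure

        corr = np.corrcoef(fund, market)[0, 1]
        cov = np.cov(fund, market)
        beta = cov[0, 1] / cov[1, 1]
        print(f"correlation {corr:.3f}, beta {beta:.3f}")  # mean neutrality: corr ~ 0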

  4. Incomplete Continuous-time Securities Markets with Stochastic Income Volatility

    DEFF Research Database (Denmark)

    Christensen, Peter Ove; Larsen, Kasper

    2014-01-01

    We derive closed-form solutions for the equilibrium interest rate and market price of risk processes in an incomplete continuous-time market with uncertainty generated by Brownian motions. The economy has a finite number of heterogeneous exponential utility investors, who receive partially...

  5. Modeling vector nonlinear time series using POLYMARS

    NARCIS (Netherlands)

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector
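
    As a toy illustration of the self-exciting threshold behavior mentioned above (not the POLYMARS method itself), a univariate threshold autoregression whose coefficient switches with its own lagged value:

        import numpy as np

        rng = np.random.default_rng(6)
        n = 500
        x = np.zeros(n)
        for t in range(1, n):
            phi = 0.8 if x[t - 1] <= 0 else -0.5   # regime set by the own lag
            x[t] = phi * x[t - 1] + rng.standard_normal()
        print(x[:10])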

  6. Forecasting with periodic autoregressive time series models

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

    This paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption
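
    A minimal periodic AR(1) sketch: one autoregressive coefficient per quarter, estimated by least squares within each season (the unit-root and deterministic-term handling the paper emphasizes is omitted):

        import numpy as np

        def fit_par1(y, period=4):
            coefs = np.zeros(period)
            for s in range(period):
                idx = np.arange(s, len(y), period)  # observations in season s
                idx = idx[idx >= 1]                 # need a lagged value
                coefs[s] = np.dot(y[idx - 1], y[idx]) / np.dot(y[idx - 1], y[idx - 1])
            return coefs

        rng = np.random.default_rng(7)
        true = np.array([0.9, 0.5, -0.3, 0.7])      # season-specific coefficients
        y = np.zeros(2000)
        for t in range(1, 2000):
            y[t] = true[t % 4] * y[t - 1] + rng.standard_normal()
        print(fit_par1(y).round(2))                 # roughly recovers [0.9 0.5 -0.3 0.7]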

  7. vector bilinear autoregressive time series model and its superiority

    African Journals Online (AJOL)

    KEYWORDS: Linear time series, Autoregressive process, Autocorrelation function, Partial autocorrelation function, Vector time .... important result on matrix algebra with respect to the spectral ..... application to covariance analysis of super-.

  8. Correlation measure to detect time series distances, whence economy globalization

    Science.gov (United States)

    Miśkiewicz, Janusz; Ausloos, Marcel

    2008-11-01

    An instantaneous time series distance is defined through the equal-time correlation coefficient. The idea is applied to the Gross Domestic Product (GDP) yearly increments of 21 rich countries between 1950 and 2005 in order to test the process of economic globalization. Some data discussion is first presented to decide which (EKS, GK, or derived) GDP series should be studied. Distances are then calculated from the correlation coefficient values between pairs of series. The role of time averaging the distances over finite-size windows is discussed. Three network structures are next constructed based on the hierarchy of distances. It is shown that the mean distance between the most developed countries on several networks actually decreases in time, which we consider evidence of globalization. An empirical law is found for the evolution after 1990, similar to that found in flux creep. The optimal observation time window size is found to be ≃15 years.
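
    The record defines the distance through the equal-time correlation coefficient; one standard functional form consistent with that (an assumption here) maps a correlation r to d = sqrt(2(1 - r)):

        import numpy as np

        def corr_distance(x, y):
            r = np.corrcoef(x, y)[0, 1]
            return np.sqrt(2.0 * (1.0 - r))         # 0 if r = 1, 2 if r = -1

        rng = np.random.default_rng(8)
        common = rng.standard_normal(56)            # shared component, 56 yearly increments
        gdp_a = common + 0.3 * rng.standard_normal(56)
        gdp_b = common + 0.3 * rng.standard_normal(56)
        print(corr_distance(gdp_a, gdp_b))          # small distance: a "globalized" pair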

  9. On Chaotic Nature of the Emerging European Forex Markets

    Directory of Open Access Journals (Sweden)

    Anoop S Kumar

    2014-06-01

    Full Text Available This study attempts to analyze the presence of deterministic chaos in the forex markets of selected European countries, namely Bulgaria, Croatia, the Czech Republic, Hungary, Poland, Romania, Russia, Slovakia and Slovenia. Monthly NEER data ranging from Jan-1994 to Dec-2013 are used for the analysis. A two-step methodology is employed: in the first step, the non-linear dependence structure in the underlying time series is verified using the BDS test. The results show that all the markets under study exhibit non-linear dependence. In the next step, it is examined whether this non-linear behavior is due to the presence of chaotic dynamics in the markets, by estimating Lyapunov exponents for the time series under analysis. An EGARCH(1,1) filter is applied to see whether the non-linearity could be explained by a GARCH process. From the Lyapunov exponent values, it is found that a GARCH process cannot satisfactorily explain the behavior of these forex markets. It is concluded that the forex markets under study exhibit deterministic chaotic behavior.
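
    A rough sketch of estimating the largest Lyapunov exponent from a scalar series by tracking nearest-neighbor divergence (in the spirit of Rosenstein-type estimators; the embedding and horizon choices are illustrative, and saturation biases the estimate downward):

        import numpy as np

        def largest_lyapunov(y, dim=2, tau=1, horizon=15, theiler=10):
            n = len(y) - (dim - 1) * tau
            states = np.column_stack([y[i * tau:i * tau + n] for i in range(dim)])
            usable = n - horizon
            div, counts = np.zeros(horizon), 0
            for i in range(usable):
                d = np.linalg.norm(states[:usable] - states[i], axis=1)
                d[max(0, i - theiler):i + theiler + 1] = np.inf  # Theiler window
                j = int(np.argmin(d))
                sep = np.linalg.norm(states[i:i + horizon] - states[j:j + horizon], axis=1)
                if np.all(sep > 0):
                    div += np.log(sep)              # log-divergence of the pair
                    counts += 1
            # Slope of the mean log-divergence curve ~ largest Lyapunov exponent
            return np.polyfit(np.arange(horizon), div / counts, 1)[0]

        x = np.empty(2000); x[0] = 0.3
        for t in range(1999):
            x[t + 1] = 3.9 * x[t] * (1 - x[t])      # logistic map, lambda ~ 0.5
        print(largest_lyapunov(x))                  # positive: sensitive dependence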

  10. Time Series Analysis Using Geometric Template Matching.

    Science.gov (United States)

    Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina

    2013-03-01

    We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data.
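
    GeTeM itself is specific to the paper; as a generic stand-in, the nearest-neighbor classification step it feeds can be sketched with a pluggable similarity (plain Euclidean distance here, where GeTeM would supply its geometric score):

        import numpy as np

        def nn_classify(train_segs, train_labels, query):
            """1-NN over time-series segments; swap in any similarity measure."""
            dists = [np.linalg.norm(seg - query) for seg in train_segs]
            return train_labels[int(np.argmin(dists))]

        rng = np.random.default_rng(9)
        t = np.linspace(0, 2 * np.pi, 50)
        segs = [np.sin(t) + 0.1 * rng.standard_normal(50) for _ in range(5)] + \
               [np.sign(np.sin(t)) + 0.1 * rng.standard_normal(50) for _ in range(5)]
        labels = ["smooth"] * 5 + ["square"] * 5
        query = np.sin(t + 0.1) + 0.1 * rng.standard_normal(50)
        print(nn_classify(segs, labels, query))     # "smooth"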

  11. Exploring heterogeneous market hypothesis using realized volatility

    Science.gov (United States)

    Chin, Wen Cheong; Isa, Zaidi; Mohd Nor, Abu Hassan Shaari

    2013-04-01

    This study investigates the heterogeneous market hypothesis using high frequency data. The cascaded heterogeneous trading activities with different time durations are modelled by the heterogeneous autoregressive framework. The empirical study indicates the presence of long memory behaviour and predictability elements in the financial time series, which supports the heterogeneous market hypothesis. Besides the common sum-of-squares intraday realized volatility, we also advocate two power-variation realized volatilities for forecast evaluation and risk measurement, in order to overcome possible abrupt jumps during the credit crisis. Finally, the empirical results are used to determine the market risk using the value-at-risk approach. The findings of this study have implications for informational market efficiency analysis, portfolio strategies and risk management.
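
    A minimal HAR-style regression sketch: next-day realized volatility on daily, weekly and monthly RV averages, mirroring the cascade of heterogeneous horizons (synthetic data; the paper's power-variation estimators are not reproduced):

        import numpy as np

        rng = np.random.default_rng(10)
        rv = np.abs(rng.standard_normal(1000)) + 0.1    # stand-in realized volatility

        def back_mean(x, t, k):
            return x[t - k + 1:t + 1].mean()            # average of the last k days

        rows, target = [], []
        for t in range(21, len(rv) - 1):
            rows.append([1.0, rv[t], back_mean(rv, t, 5), back_mean(rv, t, 22)])
            target.append(rv[t + 1])
        X, y = np.array(rows), np.array(target)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(dict(zip(["const", "daily", "weekly", "monthly"], beta.round(3))))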

  12. On-line analysis of reactor noise using time-series analysis

    International Nuclear Information System (INIS)

    McGevna, V.G.

    1981-10-01

    A method to allow use of time series analysis for on-line noise analysis has been developed. On-line analysis of noise in nuclear power reactors has been limited primarily to spectral analysis and related frequency domain techniques. Time series analysis has many distinct advantages over spectral analysis in the automated processing of reactor noise. However, fitting an autoregressive-moving average (ARMA) model to time series data involves non-linear least squares estimation. Unless a high speed, general purpose computer is available, the calculations become too time consuming for on-line applications. To eliminate this problem, a special purpose algorithm was developed for fitting ARMA models. While it is based on a combination of steepest descent and Taylor series linearization, properties of the ARMA model are used so that the auto- and cross-correlation functions can be used to eliminate the need for estimating derivatives. The number of calculations per iteration varies linearly.
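
    For orientation only (the special-purpose on-line algorithm based on correlation functions is not reproduced here), a standard off-line ARMA fit with statsmodels:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(11)
        e = rng.standard_normal(2001)
        x = np.zeros(2001)
        for t in range(1, 2001):
            x[t] = 0.7 * x[t - 1] + e[t] + 0.4 * e[t - 1]   # ARMA(1,1) noise

        fit = ARIMA(x, order=(1, 0, 1)).fit()
        print(fit.params)                # estimated AR and MA coefficients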

  13. Improving GNSS time series for volcano monitoring: application to Canary Islands (Spain)

    Science.gov (United States)

    García-Cañada, Laura; Sevilla, Miguel J.; Pereda de Pablo, Jorge; Domínguez Cerdeña, Itahiza

    2017-04-01

    The number of permanent GNSS stations has increased significantly in recent years for different geodetic applications, such as volcano monitoring, that require high precision. Coordinate time series are now long enough that different analyses and filters can be applied to improve the GNSS coordinate results. Following this idea, we have processed data from the GNSS permanent stations used by the Spanish Instituto Geográfico Nacional (IGN) for volcano monitoring in the Canary Islands, obtaining time series by the double-difference processing method with Bernese v5.0 for the period 2007-2014. We have characterized these time series and obtained models to estimate velocities with greater accuracy and more realistic uncertainties. To improve the results, we have applied two kinds of filters to the time series. The first, a spatial filter, was computed from the detrended residual series of all Canary Islands stations that show no anomalous behaviour; applying it to the coordinate sets of all permanent stations reduces their dispersion. The second filter accounts for the temporal correlation in the coordinate time series of each station individually. A study of how the velocity estimate evolves with series length demonstrated the need for time series of at least four years. Therefore, for stations with more than four years of data, we calculated the velocity and the characteristic parameters in order to obtain residual time series. This methodology has been applied to the GNSS network on El Hierro (Canary Islands) during the 2011-2012 eruption and the subsequent magmatic intrusions (2012-2014). The results show that anomalous behaviour in the coordinates is easier to detect in the new series, making them more useful for detecting crustal deformation in volcano monitoring.
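
    A minimal sketch of the spatial (common-mode) filtering idea: detrend each station's series, average the residuals of the reference stations epoch by epoch, and subtract that common mode from every station (synthetic data; the Bernese processing itself is out of scope):

        import numpy as np

        def detrend(series):
            t = np.arange(len(series))
            return series - np.polyval(np.polyfit(t, series, 1), t)

        rng = np.random.default_rng(12)
        epochs, n_sta = 1000, 6
        common = np.cumsum(0.05 * rng.standard_normal(epochs))   # shared noise
        coords = np.array([common + 0.5 * rng.standard_normal(epochs)
                           for _ in range(n_sta)])               # station series

        residuals = np.array([detrend(c) for c in coords])
        common_mode = residuals.mean(axis=0)                     # stacked residuals
        filtered = residuals - common_mode
        print(f"dispersion {residuals.std():.3f} -> {filtered.std():.3f}")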

  14. Complexity analysis of the turbulent environmental fluid flow time series

    Science.gov (United States)

    Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.

    2014-02-01

    We have used Kolmogorov complexity, sample entropy and permutation entropy to quantify the degree of randomness in the river flow time series of two mountain rivers in Bosnia and Herzegovina, representing a turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow, and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower, KLL, and upper, KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values of the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of the considered measures in dependence on the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals, (a) 1926-1945, (b) 1946-1965 and (c) 1966-1990, and calculated the KLL, KLU, SE and PE values for the time series in these subintervals. It is found that during the period 1946-1965 there is a decrease in complexity, with corresponding changes in the SE and PE, in comparison with the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions on these two rivers after the Second World War, because of their use for water consumption, and (ii) climate change in recent times.
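
    A small sketch of a Lempel-Ziv-based complexity estimate: binarize the flow series at its median and count phrases in an LZ78-style incremental parsing (one common recipe; the paper's lower/upper KL variants are not reproduced):

        import numpy as np

        def lz_phrase_count(bits):
            """Phrases in an LZ78-style parse: each new phrase extends a seen one."""
            s, n = "".join(map(str, bits)), len(bits)
            phrases, i, k, count = set(), 0, 1, 0
            while i + k <= n:
                phrase = s[i:i + k]
                if phrase in phrases:
                    k += 1                     # extend until the phrase is new
                else:
                    phrases.add(phrase)
                    count += 1
                    i, k = i + k, 1
            return count

        rng = np.random.default_rng(13)
        flow = rng.standard_normal(4096)                   # stand-in monthly flows
        bits = (flow > np.median(flow)).astype(int)        # binarize at the median
        c = lz_phrase_count(bits)
        print(c * np.log2(len(bits)) / len(bits))          # ~1 for a random series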

  15. Mapping Crop Cycles in China Using MODIS-EVI Time Series

    Directory of Open Access Journals (Sweden)

    Le Li

    2014-03-01

    Full Text Available As the Earth’s population continues to grow and demand for food increases, the need for improved and timely information related to the properties and dynamics of global agricultural systems is becoming increasingly important. Global land cover maps derived from satellite data provide indispensable information regarding the geographic distribution and areal extent of global croplands. However, land use information, such as cropping intensity (defined here as the number of cropping cycles per year), is not routinely available over large areas because mapping this information from remote sensing is challenging. In this study, we present a simple but efficient algorithm for automated mapping of cropping intensity based on data from NASA’s MODerate Resolution Imaging Spectroradiometer (MODIS). The proposed algorithm first applies an adaptive Savitzky-Golay filter to smooth Enhanced Vegetation Index (EVI) time series derived from MODIS surface reflectance data. It then uses an iterative moving-window methodology to identify cropping cycles from the smoothed EVI time series. Comparison of results from our algorithm with national survey data at both the provincial and prefectural level in China shows that the algorithm provides estimates of gross sown area that agree well with inventory data. Accuracy assessment, comparing visually interpreted time series with algorithm results for a random sample of agricultural areas in China, indicates an overall accuracy of 91.0% for three classes defined based on the number of cycles observed in the EVI time series. The algorithm therefore appears to provide a straightforward and efficient method for mapping cropping intensity from MODIS time series data.
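
    A condensed sketch of the two steps described above: smooth an EVI series with a Savitzky-Golay filter, then count cropping cycles as prominent peaks (scipy's find_peaks stands in for the paper's iterative moving-window rule; all settings are illustrative):

        import numpy as np
        from scipy.signal import savgol_filter, find_peaks

        rng = np.random.default_rng(14)
        t = np.arange(46)                                  # ~1 year of 8-day composites
        evi = 0.3 + 0.25 * np.abs(np.sin(np.pi * t / 23))  # double-cropping shape
        evi += 0.03 * rng.standard_normal(46)

        smooth = savgol_filter(evi, window_length=7, polyorder=2)
        peaks, _ = find_peaks(smooth, prominence=0.1)
        print(f"cropping cycles detected: {len(peaks)}")   # 2 for this pattern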

  16. Spectral Estimation of UV-Vis Absorbance Time Series for Water Quality Monitoring

    Directory of Open Access Journals (Sweden)

    Leonardo Plazas-Nossa

    2017-05-01

    Full Text Available Context: Signals recorded as multivariate time series by UV-Vis absorbance captors installed in urban sewer systems can be non-stationary, complicating the analysis of water quality monitoring data. This work proposes to perform spectral estimation using the Box-Cox transformation and differentiation in order to obtain multivariate time series that are stationary in the wide sense. Additionally, Principal Component Analysis (PCA) is applied to reduce their dimensionality. Method: Three different UV-Vis absorbance time series for different Colombian locations were studied: (i) El-Salitre Wastewater Treatment Plant (WWTP) in Bogotá; (ii) Gibraltar Pumping Station (GPS) in Bogotá; and (iii) San-Fernando WWTP in Itagüí. Each UV-Vis absorbance time series had an equal number of samples (5705). The estimate of the spectral power density is obtained using the average of modified periodograms with a rectangular window and an overlap of 50%, with the 20 most important harmonics from the Discrete Fourier Transform (DFT) and the Inverse Fast Fourier Transform (IFFT). Results: Absorbance time series dimensionality reduction using PCA resulted in 6, 8 and 7 principal components for the respective study sites, altogether explaining more than 97% of their variability. Differences below 30% were obtained for the UV range at all three study sites, while for the visible range the maximum differences obtained were: (i) 35% for El-Salitre WWTP; (ii) 61% for GPS; and (iii) 75% for San-Fernando WWTP. Conclusions: The Box-Cox transformation and the differentiation process applied to the UV-Vis absorbance time series for the study sites (El-Salitre, GPS and San-Fernando) reduced the variance and removed the trend of the time series. Pre-processing of UV-Vis absorbance time series is recommended to detect and remove outliers before applying the proposed spectral estimation process. Language: Spanish.
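
    A minimal sketch of the spectral-estimation recipe quoted above: averaged modified periodograms with a rectangular window and 50% overlap, then keep the 20 strongest harmonics (scipy's welch does the averaging; the segment length is an assumption):

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(15)
        n = 5705                                           # sample count as in the paper
        t = np.arange(n)
        absorbance = (np.sin(2 * np.pi * t / 96) + 0.5 * np.sin(2 * np.pi * t / 24)
                      + 0.3 * rng.standard_normal(n))      # stand-in absorbance signal

        freqs, psd = welch(absorbance, window="boxcar", nperseg=512, noverlap=256)
        top20 = np.argsort(psd)[::-1][:20]                 # 20 dominant harmonics
        print(np.sort(freqs[top20]))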

  17. Toward automatic time-series forecasting using neural networks.

    Science.gov (United States)

    Yan, Weizhong

    2012-07-01

    Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition and was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
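
    A compact sketch of why a GRNN has a "single design parameter": prediction reduces to kernel-weighted averaging of training targets with one smoothing width sigma (lag structure and data are illustrative):

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, sigma=0.3):
            preds = []
            for q in X_query:
                d2 = np.sum((X_train - q) ** 2, axis=1)
                w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
                preds.append(np.dot(w, y_train) / w.sum())
            return np.array(preds)

        # One-step-ahead forecasting with lagged values as inputs
        rng = np.random.default_rng(16)
        y = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.05 * rng.standard_normal(500)
        lags = 4
        X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
        target = y[lags:]
        pred = grnn_predict(X[:-100], target[:-100], X[-100:])
        print(np.mean((pred - target[-100:]) ** 2))    # out-of-sample MSE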

  18. US stock market efficiency over weekly, monthly, quarterly and yearly time scales

    Science.gov (United States)

    Rodriguez, E.; Aguilar-Cornejo, M.; Femat, R.; Alvarez-Ramirez, J.

    2014-11-01

    In financial markets, the weak form of the efficient market hypothesis implies that price returns are serially uncorrelated sequences. In other words, prices should follow a random walk. Recent developments in evolutionary economic theory (Lo, 2004) have tailored the concept of the adaptive market hypothesis (AMH) by proposing that market efficiency is not an all-or-none concept, but rather a characteristic that varies continuously over time and across markets. Within the AMH framework, this work considers the Dow Jones Industrial Average (DJIA) for studying deviations from random walk behavior over time. It is found that market efficiency also varies over different time scales, from weeks to years. The well-known detrended fluctuation analysis was used for the characterization of the serial correlations of the return sequences. The empirical results showed that interday and intraday returns are more serially correlated than overnight returns. Also, some insights into the presence of business cycles (e.g., Juglar and Kuznets) are provided in terms of time variations of the scaling exponent.
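
    One simple way to see efficiency varying continuously over time, sketched here as rolling lag-1 autocorrelation of returns (the paper uses DFA; this cruder statistic should also hover near zero under the random walk):

        import numpy as np

        def rolling_lag1_autocorr(returns, window=250):
            out = []
            for start in range(len(returns) - window):
                w = returns[start:start + window]
                out.append(np.corrcoef(w[:-1], w[1:])[0, 1])
            return np.array(out)

        rng = np.random.default_rng(17)
        returns = 0.01 * rng.standard_normal(3000)     # stand-in daily returns
        ac = rolling_lag1_autocorr(returns)
        print(ac.min(), ac.max())                      # near zero: random-walk-like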

  19. Quantifying and modeling long-range cross correlations in multiple time series with applications to world stock indices.

    Science.gov (United States)

    Wang, Duan; Podobnik, Boris; Horvatić, Davor; Stanley, H Eugene

    2011-04-01

    We propose a modified time lag random matrix theory in order to study time-lag cross correlations in multiple time series. We apply the method to 48 world indices, one for each of 48 different countries. We find long-range power-law cross correlations in the absolute values of returns that quantify risk, and find that they decay much more slowly than cross correlations between the returns. The magnitude of the cross correlations constitutes "bad news" for international investment managers who may believe that risk is reduced by diversifying across countries. We find that when a market shock is transmitted around the world, the risk decays very slowly. We explain these time-lag cross correlations by introducing a global factor model (GFM) in which all index returns fluctuate in response to a single global factor. For each pair of individual time series of returns, the cross correlations between returns (or magnitudes) can be modeled with the autocorrelations of the global factor returns (or magnitudes). We estimate the global factor using principal component analysis, which minimizes the variance of the residuals after removing the global trend. Using random matrix theory, a significant fraction of the world index cross correlations can be explained by the global factor, which supports the utility of the GFM. We demonstrate applications of the GFM in forecasting risks at the world level, and in finding uncorrelated individual indices. We find ten indices that are practically uncorrelated with the global factor and with the remainder of the world indices, which is relevant information for world managers in reducing their portfolio risk. Finally, we argue that this general method can be applied to a wide range of phenomena in which time series are measured, ranging from seismology and physiology to atmospheric geophysics.
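
    A minimal global factor model sketch: estimate the factor as the first principal component of standardized index returns, regress each index on it, and check how much cross-correlation the factor absorbs:

        import numpy as np

        rng = np.random.default_rng(18)
        n_obs, n_idx = 2000, 10
        g = rng.standard_normal(n_obs)                     # latent global factor
        loadings = 0.5 + 0.5 * rng.random(n_idx)
        returns = np.outer(g, loadings) + rng.standard_normal((n_obs, n_idx))

        z = (returns - returns.mean(0)) / returns.std(0)
        corr = np.corrcoef(z, rowvar=False)
        eigval, eigvec = np.linalg.eigh(corr)
        factor = z @ eigvec[:, -1]                         # first principal component

        beta = z.T @ factor / (factor @ factor)            # index loadings on factor
        residuals = z - np.outer(factor, beta)
        off = ~np.eye(n_idx, dtype=bool)
        print(np.abs(corr[off]).mean(), "->",
              np.abs(np.corrcoef(residuals, rowvar=False)[off]).mean())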
